In any organization, finance teams are in a unique position. They hold the keys to a vast amount of data, and they have an overarching view of the organization as a whole. They use these two assets to analyze the financial health of the organization and plan for its financial needs. However, the Office of Finance has the potential to contribute much more. By leveraging intelligent planning – combining business budgeting, financial forecasting, and financial reporting and data analysis to drive strategic decisions – the Office of Finance can elevate its role. With modern tools and tactics, finance teams can play a crucial role in business growth and powerful decision-making. This roadmap offers the key steps the CFO and their team need in order to effectively adopt intelligent planning in 2019.

Ask any finance team what they need more of in the day, and they are likely to tell you "time." Many tasks, like gathering, analyzing, interpreting and reporting on critical financial data that can change daily, are highly manual and sometimes tedious, leaving the Office of Finance with little time to deep-dive into financial analysis. And because the data takes time to compile and review, it is rarely up to date. Finance, however, is beginning to reap the benefits of organizational digital transformation. The automated integration of broad data sets, along with tools that can quickly analyze the most recent financial data, means that finance teams are free to engage in more strategic and value-added work.

Modern businesses face pressures from all sides. To stay competitive, companies must find ways to cut costs, drive efficiency, and adapt to the changing needs and demands of their customers. Disruptors may enter the same space. CFOs need information available now to guide the organization to swiftly capitalize on new opportunities.
Combined data from across the organization, visibility into the organization's financial health, and deep analysis within a single view make it easier for the CFO to answer tough questions from the rest of the C-suite, and to do so at a moment's notice. With the right tools that enable intelligent planning, finance teams can quickly run scenarios to uncover paths to growth.

The CFO is now at the heart of digital transformation, with eight in ten CFOs now spearheading transformation efforts. That should come as no surprise – transformation initiatives are meant to adopt new technologies and modernize business processes, and few departments can benefit as much from modernized business tools as the Office of Finance. As finance teams move away from cumbersome spreadsheets and embrace the technical capabilities of today's tools, their visibility into the company's financial position and the impact of risky decisions becomes clearer. The right tools allow teams to test and analyze multiple potential outcomes and understand the risks and benefits of decisions, based on a broad base of cross-functional data.

There was a time when, in some companies, the budget cycle was a battle between departments for a limited set of resources. Departmental budgets became land grabs, and siloed teams were pitted against one another, with the finance function stuck in the middle. But in today's fast-paced business world, companies must move together with a unified set of goals if they are to survive and prosper. The finance team, once again, finds itself at the epicenter of the business. But instead of being mediators, they are the facilitators of collaboration. With intelligent planning tools and initiatives, corporate goals are linked to operational activities. At the same time, enterprise-wide data is brought together for analysis, and the resulting insights can be shared back out to leaders in customized dashboards.
This 360-degree view of the organization, whittled down to relevant information by department or group, enables line-of-business leaders to make valuable, data-driven decisions.

The ability to pull together massive amounts of cross-functional data and analyze it quickly has long been enjoyed only in the halls of big enterprises. Only the largest organizations could afford the technical, computing and finance resources to effectively and accurately analyze large, disparate data sets. Technical advancements, brought together in digital transformation initiatives, have changed that. Sophisticated analytics and reporting tools are readily available to organizations of all sizes, while cloud-based applications mean companies have access to affordable processing and the latest financial analysis features. This alleviates concerns over storage, processing power, server costs and even IT overhead, making the benefits of business intelligence available to small and medium-sized businesses.

When you set out on a journey, there are usually many different paths you can take to get to your destination. You could take highways or back roads, fly or ride a bike, take lots of detours or head in a straight line. Your plan depends on the goals of your trip. Business financial planning is similar. If you know where you're going, and what your goals are in getting there, you can lay out a plan to get to your destination. Of course, financial planning for your business can be a bit more involved than planning a vacation. In truth, there are five crucial steps needed to create a financial planning strategy for your business.

The essential first step to creating your strategic financial plan is to know your business goals. Just as each company's goals are unique, so too is the financial plan it must lay out to reach those goals.
For instance, a company that has a goal of 30% revenue growth in the next three years will want a completely different financial strategy than one that is looking to sell the company in five years. An acquisition strategy is much different than a consolidation strategy. Your company goals are the driver of the financial planning strategy for your business.

While your budget outlines your financial position and cash flow against your goals, your forecast helps you see your company's future potential outcomes based on your past performance and your expectations for the future. For instance, if sales have gone up 20% every year, and you see no reason for that to change, you can add that to the forecast. Plus, you'll add in the associated costs that go with that assumption, including the cost of product materials, warehouse space to handle orders, team members to build products or additions to the sales team. These forecasts should be realistic based on what you've seen in the past and expect in the near future, and they should point you in the direction of your goals. The forecast should be updated frequently as new information comes in or as the business climate shifts.

If you aren't measuring your progress against your goals, how do you know you're making progress? Setting Key Performance Indicators (KPIs) for your financial strategy will let you identify which parts of your financial plan are working, and which parts may be straying from the path. If, for instance, your goal is to pay off debt, you might set a KPI for a certain percentage of debt to be paid off every quarter. If you fall behind on that KPI, you can look at your budget and forecast to see where you fell short. Or, if you're paying off debt faster than expected, you might revisit your budget to ensure you aren't being overly optimistic and risking coming up short on your cash flow by the end of the year.

Once you've completed the four previous steps, you can set your financial plan in motion.
Reaching your goals is not automatic, however, and isn't a "set-and-forget" proposition. Revisit your plan frequently and review it against your KPIs. This will let you make adjustments as needed to stay on the right path to your destination.
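The quarterly KPI check described above is simple arithmetic, and can be sketched in a few lines of Python. All numbers here are invented for illustration, and the function name is ours, not a standard tool:

```python
def kpi_status(starting_debt, target_pct_per_quarter, quarterly_payments):
    """Compare cumulative debt paydown against a linear quarterly target."""
    paid = 0.0
    statuses = []
    for quarter, payment in enumerate(quarterly_payments, start=1):
        paid += payment
        # Target after this quarter: the KPI percentage times quarters elapsed.
        target = starting_debt * target_pct_per_quarter * quarter
        statuses.append("behind" if paid < target else "on track")
    return statuses

# e.g. $100k of debt with a KPI of 10% paid down per quarter
print(kpi_status(100_000, 0.10, [12_000, 8_000, 9_000]))
# -> ['on track', 'on track', 'behind']
```

A "behind" flag in the third quarter is the cue to revisit the budget and forecast, as described above.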
\section{Introduction} \label{intro} In the current post-digital era, quantum cryptography has generated significant interest in the information security domain. Security of quantum cryptographic protocols mainly depends on the ``no-cloning theorem''~\cite{wootters1982single} and the fact that, without disturbance, two non-orthogonal states cannot be distinguished with a finite number of samples. The first-ever quantum cryptographic protocol was the BB84 quantum key distribution (QKD) protocol, proposed by Bennett and Brassard in 1984~\cite{tcs/BennettB14}. QKD allows two or more remote users to establish a shared secret key between themselves. In the BB84 protocol, two users, namely, Alice and Bob, exchange single-qubit states to generate a secret key. In 2000, Shor and Preskill gave a simple proof of security of the BB84 protocol~\cite{shor2000simple}. In 1991, Ekert proposed another QKD protocol using entangled states~\cite{ekert1991quantum}. Since then, many variants of QKD have been proposed, for example, BBM92~\cite{Brassard1992quantum}, B92~\cite{bennett1992quantum} and many others~\cite{long2002theoretically,xue2002conditional,deng2004bidirectional,hwang2003quantum,lo2005decoy,lo2012measurement,barrett2005no,grosshans2003quantum}. Quantum secure direct communication (QSDC) is another important branch of quantum cryptography, whose purpose is to securely send a secret message from one party (Alice) to another party (Bob), without using any shared key. The famous ping-pong protocol~\cite{bostrom2002ping} is an example of a QSDC protocol, where the receiver Bob prepares two-qubit entangled states and sends one qubit to the sender Alice. Then Alice performs some unitary operations on that qubit to encode her information and sends it back to Bob. By measuring the joint state, Bob gets the message.
Recently, other QSDC protocols with different approaches have also been explored~\cite{deng2003two,deng2004secure,wang2005quantum,wang2005multi,wang2006quantum,long2007quantum,xi2007quantum,das2020improving,das2020cryptanalysis}. A two-way QSDC protocol, called quantum dialogue (QD), in which Alice and Bob can simultaneously exchange their messages over a single channel, was proposed by BA Nguyen in 2004~\cite{nguyen2004quantum}. Since then, many QD protocols were proposed~\cite{zhang2004deterministic,zhong2005quantum,xia2006quantum,xin2006secure,yan2007controlled,tan2008classical,gao2008revisiting,gao2010two,qip/Maitra17,das2020two}. In~\cite{qip/Maitra17}, the authors proposed a measurement-device-independent QD (MDI-QD) protocol with the help of an untrusted third party (UTP) and showed that this protocol is secure against information leakage. QSDC protocols for more than two parties are discussed in~\cite{gao2005deterministic,jin2006three,ting2005simultaneous,tan2014multi,zhang2005multiparty,banerjee2018quantum}. In~\cite{banerjee2018quantum}, the authors proposed the concept of quantum conference or $N$-party communication, $N\geq 2$, where each party sends its message to the other $(N-1)$ parties. In this protocol, to communicate $m$-bit classical messages, they need at least $(N-1)$ pairwise disjoint subgroups of unitary operators, where the cardinality of each subgroup is at least $2^m$. For large $m$, finding these subgroups is quite difficult. All the above primitives are multi-party protocols, but not multi-party computation. In a multi-party protocol, two or more parties exchange messages over a public channel and perform some local computation to achieve a communication task. On the other hand, in multi-party computation, two or more parties exchange messages over a public channel and perform some local computation to jointly compute the value of a function on their private data as inputs.
The requirement is that, at the end of the computation, each party will have the output of the function, but no party will have access to the input of any other party. Quantum multi-party computation (QMPC) is an interesting research area in quantum cryptography, where the parties possess some quantum states as inputs. Quantum secret sharing (QSS)~\cite{hillery1999quantum,zhang2005multiparty,zhang2005multiparty_qss,gottesman2000theory,guo2003quantum}, QMPC protocols for summation and multiplication~\cite{shi2016secure,chen2010}, and quantum private comparison~\cite{Liu2013,Zhang2013,liu2015} are some examples of QMPC protocols. \subsection*{Our Contribution} In this paper, we make four distinct contributions. First, we revisit the two-party MDI-QD protocol~\cite{qip/Maitra17} and show that it is not secure against the intercept-and-resend attack. Then we modify the two-party MDI-QD protocol to make it secure against this attack. Second, using a similar approach, we propose a three-party quantum conference protocol with the help of an untrusted fourth party. Next, we generalize our three-party quantum conference protocol to a multi-party version. We show that both these conference protocols are correct and secure against the intercept-and-resend attack, entangle-and-measure attack, Denial-of-Service (DoS) attack and man-in-the-middle attack. As the fourth and final contribution, we show how to use part of our multi-party quantum conference protocol to compute the multi-party XOR function, and establish its correctness and security. \subsection*{Outline} The rest of this paper is organized as follows: in Section~\ref{sec2}, we revisit the MDI-QD protocol proposed in~\cite{qip/Maitra17}. Then in the next section, we discuss the intercept-and-resend attack on the MDI-QD protocol~\cite{qip/Maitra17} and propose its modified version. Section~\ref{sec3} describes our proposed protocol for a three-party quantum conference and its correctness and security analysis.
We generalize our three-party quantum conference to $N$-party quantum conference in Section~\ref{sec4}. Next, we present a protocol for multi-party XOR computation, by using tools of $N$-party quantum conference in Section~\ref{sec5}. Section~\ref{sec6} concludes our results. \subsection*{Notations} Here we describe the common notations that will be used throughout the paper. \begin{itemize}[label=$\bullet$] \item $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+ \ket{1})$, $\ket{-}=\frac{1}{\sqrt{2}}(\ket{0}- \ket{1})$; \item $Z$ basis $=\{\ket{0},\ket{1}\}$; \item $X$ basis $=\{\ket{+},\ket{-}\}$; \item $\{S[i]\}_{i=1}^{m}=S$ is a finite sequence of length $m$; \item $S[i]=S_i=i$-th element of $S$ ; \item $\bar{b}$= bit complement of $b$; \item $i_1i_2\ldots i_N = N$ bit binary representation of $i$; \item $\ket{i}=\ket{i_1}\ket{i_2}\ldots \ket{i_N}$ is an $N$-qubit state; \item $\ket{\Phi^{+}}=\frac{1}{\sqrt{2}}(\ket{00}+ \ket{11})$, $\ket{\Phi^{-}}=\frac{1}{\sqrt{2}}(\ket{00}- \ket{11})$; \item $\ket{\Psi^{+}}=\frac{1}{\sqrt{2}}(\ket{01}+ \ket{10})$, $\ket{\Psi^{-}}=\frac{1}{\sqrt{2}}(\ket{01}- \ket{10})$; \item $\mathcal{B}_N=\{\ket{\Phi_{0}^{+}}, \ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}}, \ldots, \ket{\Phi_{2^{(N-1)}-1}^{+}}, \ket{\Phi_{2^{(N-1)}-1}^{-}}\}$ basis,\\ where $\ket{\Phi_{i}^{\pm}}=\frac{1}{\sqrt{2}}(\ket{i}\pm \ket{2^N-1-i})$ for $i \in \{0,1,\ldots ,2^{(N-1)}-1\}$.\\ For example : \begin{enumerate} \item $\mathcal{B}_2=\{\ket{\Phi_{0}^{+}}, \ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}}\}$ is called Bell basis; where \begin{itemize} \item $\ket{\Phi_{0}^{+}}=\frac{1}{\sqrt{2}}(\ket{00}+ \ket{11})=\ket{\Phi^{+}}$, $\ket{\Phi_{0}^{-}}=\frac{1}{\sqrt{2}}(\ket{00}- \ket{11})=\ket{\Phi^{-}}$ \item $\ket{\Phi_{1}^{+}}=\frac{1}{\sqrt{2}}(\ket{01}+ \ket{10})=\ket{\Psi^{+}}$, $\ket{\Phi_{1}^{-}}=\frac{1}{\sqrt{2}}(\ket{01}- \ket{10})=\ket{\Psi^{-}}$ \end{itemize} \item $\mathcal{B}_3=\{\ket{\Phi_{0}^{+}}, 
\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}}, \ket{\Phi_{2}^{+}}, \ket{\Phi_{2}^{-}}, \ket{\Phi_{3}^{+}}, \ket{\Phi_{3}^{-}}\}$ basis; where \begin{itemize} \item $\ket{\Phi_{0}^{+}}=\frac{1}{\sqrt{2}}(\ket{000}+ \ket{111})$, $\ket{\Phi_{0}^{-}}=\frac{1}{\sqrt{2}}(\ket{000}- \ket{111})$ \item $\ket{\Phi_{1}^{+}}=\frac{1}{\sqrt{2}}(\ket{001}+ \ket{110})$, $\ket{\Phi_{1}^{-}}=\frac{1}{\sqrt{2}}(\ket{001}- \ket{110})$ \item $\ket{\Phi_{2}^{+}}=\frac{1}{\sqrt{2}}(\ket{010}+ \ket{101})$, $\ket{\Phi_{2}^{-}}=\frac{1}{\sqrt{2}}(\ket{010}- \ket{101})$ \item $\ket{\Phi_{3}^{+}}=\frac{1}{\sqrt{2}}(\ket{011}+ \ket{100})$, $\ket{\Phi_{3}^{-}}=\frac{1}{\sqrt{2}}(\ket{011}- \ket{100})$; \end{itemize} \end{enumerate} \item $\Pr(A)=$ Probability of occurrence of an event $A$; \item $\Pr(A|B)=$ Probability of occurrence of an event $A$ given that the event $B$ has already occurred; \item $wt(v)=$ number of 1's in a binary vector $v$. \end{itemize} \section{Revisiting the Measurement Device Independent \\Quantum Dialogue (MDI-QD) Protocol of~\cite{qip/Maitra17} } \label{sec2} In this section, we briefly describe the MDI-QD protocol proposed in~\cite{qip/Maitra17}, where two legitimate parties, namely Alice and Bob, can simultaneously exchange their messages. The proposal in~\cite{qip/Maitra17} combined two different protocols from~\cite{tcs/BennettB14} and~\cite{lo2012measurement}. Alice and Bob first perform a QKD protocol, namely BB84~\cite{tcs/BennettB14}, to generate a shared key $k$ between themselves. Then they prepare their sets of qubits ${Q_A}$ and $Q_B$, corresponding to $k$ and their respective messages $a$ and $b$. Alice and Bob send ${Q_A}$ and $Q_B$ to an untrusted third party or UTP (who may be an eavesdropper). Then the UTP measures the two-qubit states in the Bell basis (i.e., $\mathcal{B}_2$) and announces the result. From the result, Alice and Bob decode each other's messages (see Table~\ref{table_qd}). Details are given in Figure~\ref{algo:1qd}.
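The basis $\mathcal{B}_N$ from the Notations (and in particular the Bell basis $\mathcal{B}_2$ used by the UTP below) can be constructed and checked for orthonormality numerically. The following NumPy sketch mirrors the definition $\ket{\Phi_{i}^{\pm}}=\frac{1}{\sqrt{2}}(\ket{i}\pm \ket{2^N-1-i})$; the function name is ours, for illustration only:

```python
import numpy as np

def conference_basis(N):
    """B_N: the 2^N states (|i> +/- |2^N - 1 - i>)/sqrt(2), i = 0..2^(N-1)-1."""
    dim = 2 ** N
    states = []
    for i in range(dim // 2):
        for sign in (1.0, -1.0):        # the "+" state, then the "-" state
            v = np.zeros(dim)
            v[i] = 1.0 / np.sqrt(2.0)
            v[dim - 1 - i] = sign / np.sqrt(2.0)
            states.append(v)
    return np.array(states)

B3 = conference_basis(3)                      # the 8-state basis B_3 above
assert np.allclose(B3 @ B3.T, np.eye(8))      # orthonormal

# B_2 reproduces the Bell basis: |Phi_1^+> = (|01> + |10>)/sqrt(2) = |Psi^+>
B2 = conference_basis(2)
assert np.allclose(B2[2], np.array([0, 1, 1, 0]) / np.sqrt(2))
```

Since each row pairs $\ket{i}$ with its bitwise complement $\ket{2^N-1-i}$, no two rows overlap on the same computational-basis states except as $\pm$ partners, which is why orthonormality holds for every $N$.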
\begin{table}[h] \centering \caption{Different cases in MDI QD.} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Bits to communicate by} & \multicolumn{2}{|c|}{Qubits prepared by} & \multicolumn{4}{|c|}{Probabilities of measurement}\\ \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} & \multicolumn{4}{|c|}{results at UTP's end}\\ \hline {} Alice & Bob & Alice (${Q_A}_i$) & Bob (${Q_B}_i$) & $\ket{\Phi^+}$ & $\ket{\Phi^-}$ & $\ket{\Psi^+}$ & $\ket{\Psi^-}$ \\ \hline $0$ & $0$ & $\ket{0}$ & $\ket{0}$ & $1/2$ & $1/2$ & $0$ & $0$ \\ $0$ & $1$ & $\ket{0}$ & $\ket{1}$ & $0$ & $0$ & $1/2$ & $1/2$ \\ $1$ & $0$ & $\ket{1}$ & $\ket{0}$ & $0$ & $0$ & $1/2$ & $1/2$ \\ $1$ & $1$ & $\ket{1}$ & $\ket{1}$ & $1/2$ & $1/2$ & $0$ & $0$ \\ \hline $0$ & $0$ & $\ket{+}$ & $\ket{+}$ & $1/2$ & $0$ & $1/2$ & $0$ \\ $0$ & $1$ & $\ket{+}$ & $\ket{-}$ & $0$ & $1/2$ & $0$ & $1/2$ \\ $1$ & $0$ & $\ket{-}$ & $\ket{+}$ & $0$ & $1/2$ & $0$ & $1/2$ \\ $1$ & $1$ & $\ket{-}$ & $\ket{-}$ & $1/2$ & $0$ & $1/2$ & $0$\\ \hline \end{tabular} \label{table_qd} \end{table} \begin{algorithm}[!tb] \begin{enumerate} \item Alice and Bob share an $n$-bit key stream ($k=k_1k_2\ldots k_n$) between themselves using BB84 protocol. \item Let the $n$-bit message of Alice (Bob) be $a=a_1a_2\ldots a_n$ ($b=b_1b_2\ldots b_n$). \item For $1\leq i \leq n$, Alice (Bob) prepares the qubits $Q_A={Q_A}_1{Q_A}_2\ldots {Q_A}_n ~(Q_B={Q_B}_1{Q_B}_2\ldots {Q_B}_n)$ at her (his) end according to the following strategy: \begin{enumerate} \item if $a_i$ ($b_i $)$ = 0$ and $k_i = 0 \Rightarrow {Q_A}_i~({Q_B}_i)=\ket{0}$; \item if $a_i$ ($b_i $)$ = 1$ and $k_i = 0 \Rightarrow {Q_A}_i~({Q_B}_i)=\ket{1}$; \item if $a_i$ ($b_i $)$ = 0$ and $k_i = 1 \Rightarrow {Q_A}_i~({Q_B}_i)=\ket{+}$; \item if $a_i$ ($b_i $)$ = 1$ and $k_i =1 \Rightarrow {Q_A}_i~({Q_B}_i)=\ket{-}$. \end{enumerate} \item Alice (Bob) sends her (his) prepared qubits $Q_A~(Q_B)$ to an untrusted third party (UTP). 
\item For $1\leq i \leq n$, the UTP measures each pair of qubits ${Q_A}_i$ and ${Q_B}_i$ in the Bell basis (i.e., $\mathcal{B}_2=\{\ket{\Phi^+},\ket{\Phi^-},\ket{\Psi^+},\ket{\Psi^-}\}$) and announces the measurement result $\mathcal{M}_i \in \{\ket{\Phi^+},\ket{\Phi^-},\ket{\Psi^+},\ket{\Psi^-}\}$ publicly. Table~\ref{table_qd} shows the possible measurement results with their probabilities of occurrence. \item For $1\leq i \leq n$, Alice and Bob consider the $i$-th measurement result $\mathcal{M}_i$ if $\mathcal{M}_i=\ket{\Phi^-}$ or $\ket{\Psi^+}$, and discard the other cases. \item They randomly choose $\delta n$ of the measurement results to estimate the error,\\ where $\delta \ll 1$ is a small fraction. \item Alice and Bob guess the message bits of each other, corresponding to their chosen $\delta n$ measurement results, using Table~\ref{tab:Alice' guess about $b_i$ } and Table~\ref{tab: Bob's guess about $a_i$ }. \item For the above-mentioned $\delta n$ rounds, they disclose their respective guesses. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \item For the remaining measurement results, Alice and Bob guess the message bits of \\each other, using Table~\ref{tab:Alice' guess about $b_i$ } and Table~\ref{tab: Bob's guess about $a_i$ }.
\end{enumerate} \captionof{figure}{MDI-QD Protocol of~\cite{qip/Maitra17}} \label{algo:1qd} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \end{algorithm} It is clear from Table~\ref{table_qd} that, for $1 \leq i \leq n$, \begin{itemize} \item if Alice prepares ${Q_A}_i=\ket{0}$($\ket{1})$, then she guesses $b_i$ with probability $1$ as follows: \begin{equation*} \mathcal{M}_i= \begin{cases} \ket{\Phi^+}$ or $\ket{\Phi^-} \Rightarrow &\text{$b_i=$ $0$ ($1$)};\\ \ket{\Psi^+}$ or $\ket{\Psi^-} \Rightarrow &\text{$b_i=$ $1$ ($0$)}, \end{cases} \end{equation*} \item if Alice prepares ${Q_A}_i=\ket{+}$($\ket{-})$, she guesses $b_i$ with probability $1$ as follows: \begin{equation*} \mathcal{M}_i= \begin{cases} \ket{\Phi^+}$ or $\ket{\Psi^+} \Rightarrow &\text{$b_i=$ $0$ ($1$)};\\ \ket{\Phi^-}$ or $\ket{\Psi^-} \Rightarrow &\text{$b_i=$ $1$ ($0$)}. \end{cases} \end{equation*} \end{itemize} From the above discussion and Table~\ref{table_qd}, let us construct two more tables, namely Table~\ref{tab:Alice' guess about $b_i$ } and Table~\ref{tab: Bob's guess about $a_i$ }, containing the information of Alice's guess and Bob's guess about other's message bits for different cases. 
\begin{table}[h] \centering \caption{Alice's guess about Bob's message bit for different cases.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Key & Alice's & Alice's & \multicolumn{4}{|c|}{Alice's guess about $b_i$ when $\mathcal{M}_i$}\\ \cline{4-7} bit $k_i$ & bit $a_i$ & qubit ${Q_A}_i$ & $\ket{\Phi^+}$ &$\ket{\Phi^-}$ &$\ket{\Psi^+}$ &$\ket{\Psi^-}$\\ \hline 0 & 0 & $\ket{0}$ & 0 & 0 & 1 & 1 \\ 0 & 1 & $\ket{1}$ & 1 & 1 & 0 & 0 \\ 1 & 0 & $\ket{+}$ & 0 & 1 & 0 & 1 \\ 1 & 1 & $\ket{-}$ & 1 & 0 & 1 & 0 \\ \hline \end{tabular} \label{tab:Alice' guess about $b_i$ } \end{table} \begin{table}[h] \centering \caption{Bob's guess about Alice's message bit for different cases.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Key & Bob's & Bob's & \multicolumn{4}{|c|}{Bob's guess about $a_i$ when $\mathcal{M}_i$}\\ \cline{4-7} bit $k_i$ & bit $b_i$ & qubit ${Q_B}_i$ & $\ket{\Phi^+}$ & $\ket{\Phi^-}$ & $\ket{\Psi^+}$ & $\ket{\Psi^-}$\\ \hline 0 & 0 & $\ket{0}$ & 0 & 0 & 1 & 1\\ 0 & 1 & $\ket{1}$ & 1 & 1 & 0 & 0\\ 1 & 0 & $\ket{+}$ & 0 & 1 & 0 & 1\\ 1 & 1 & $\ket{-}$ & 1 & 0 & 1 & 0\\ \hline \end{tabular} \label{tab: Bob's guess about $a_i$ } \end{table} Hence from Table~\ref{tab:Alice' guess about $b_i$ } and Table~\ref{tab: Bob's guess about $a_i$ }, we can say that both Alice and Bob can exchange their messages simultaneously. Now, we can see from Table~\ref{table_qd} that, for $1 \leq i \leq n$, if $\mathcal{M}_i= \ket{\Phi^+}$ or $\ket{\Psi^-}$, then Eve learns whether $a_i=b_i$ or not. That is, Eve knows $a_i \oplus b_i$ ($1$ bit of information out of 2 bits) for those $\mathcal{M}_i$ with $\mathcal{M}_i= \ket{\Phi^+}$ or $\ket{\Psi^-}$. To avoid this information leakage, Alice and Bob discard these cases. Then they estimate the error, and if the error exceeds some predefined threshold, they abort the protocol. Otherwise, they continue and guess each other's messages.
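The decoding rules of Tables~2 and~3 can be exercised with a short simulation: encode both messages per the shared key, sample the UTP's Bell outcome from the Born probabilities of Table~1, discard the $\ket{\Phi^+}$/$\ket{\Psi^-}$ rounds, and decode. This is a NumPy sketch with an assumed round count and seed, not part of the protocol itself:

```python
import numpy as np

rng = np.random.default_rng(7)   # assumed seed, for reproducibility

s2 = np.sqrt(2.0)
KET = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
KET['+'] = (KET['0'] + KET['1']) / s2
KET['-'] = (KET['0'] - KET['1']) / s2
BASIS = {0: ('0', '1'), 1: ('+', '-')}   # key bit -> Z or X encoding

# Bell basis B_2, indexed 0..3 = Phi+, Phi-, Psi+, Psi-
BELL = np.array([
    (np.kron(KET['0'], KET['0']) + np.kron(KET['1'], KET['1'])) / s2,
    (np.kron(KET['0'], KET['0']) - np.kron(KET['1'], KET['1'])) / s2,
    (np.kron(KET['0'], KET['1']) + np.kron(KET['1'], KET['0'])) / s2,
    (np.kron(KET['0'], KET['1']) - np.kron(KET['1'], KET['0'])) / s2,
])

def encode(bit, key):
    return KET[BASIS[key][bit]]

n = 2000
key = rng.integers(0, 2, n)
a = rng.integers(0, 2, n)   # Alice's message bits
b = rng.integers(0, 2, n)   # Bob's message bits

kept = 0
for i in range(n):
    state = np.kron(encode(a[i], key[i]), encode(b[i], key[i]))
    probs = (BELL @ state) ** 2
    m = rng.choice(4, p=probs / probs.sum())   # UTP's announced outcome
    if m in (0, 3):            # Phi+ / Psi- rounds leak a_i XOR b_i: discard
        continue
    kept += 1
    # Tables 2/3: for a Z-basis key a Psi outcome flips the bit;
    # for an X-basis key a minus-sign outcome flips the bit.
    flip = (m >= 2) if key[i] == 0 else (m in (1, 3))
    assert (a[i] ^ flip) == b[i]   # Alice's guess of b_i is correct
    assert (b[i] ^ flip) == a[i]   # Bob's guess of a_i is correct

print(kept, "of", n, "rounds kept")   # roughly half, as expected
```

Every kept round decodes correctly, and about half the rounds are discarded, matching the probabilities in Table~1.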
\section{Intercept-and-Resend Attack on the MDI-QD Protocol of~\cite{qip/Maitra17} and Our Proposed Remedy} We now show that the above MDI-QD protocol~\cite{qip/Maitra17} is not secure against the intercept-and-resend attack, and that an adversary can get hold of some amount of information about the messages. So we propose a modified version of this protocol, which is secure against this attack. Let us consider the intercept-and-resend attack by an adversary $\mathcal{A}$ (other than the UTP). For the $i$-th message bit pair $(a_i,b_i)$ of Alice and Bob, they prepare the qubit pair $(Q_{A_i},Q_{B_i})$ depending upon the key bit $k_i$, and send those qubits $Q_{A_i},Q_{B_i}$ to the UTP over separate channels. Now $\mathcal{A}$ intercepts the qubits $Q_{A_i},~Q_{B_i}$ from the channel and guesses the corresponding key bit ${k'}_i$ to choose the measurement basis for the qubits. $\mathcal{A}$ measures $Q_{A_i}$ and $Q_{B_i}$ in the same basis and resends those qubits to the UTP. Note that, if $\mathcal{A}$ guesses the correct key bit, then she chooses the correct basis to measure $Q_{A_i},~Q_{B_i}$, and due to this measurement, the states of the qubits remain unchanged. In this case, $\mathcal{A}$ gets the correct message bit-pair of Alice and Bob, without introducing any error in the channel. Now, if $\mathcal{A}$ chooses the wrong key bit, then she can still get the correct message bit-pair $(a_i,b_i)$ with probability $1/4$, and in this case $\mathcal{A}$ can be detected with probability $1/2$. As an illustrative example, consider $k_i=0$, ${k'}_i=1$, $a_i=0$, $b_i=0$; then $Q_{A_i}=\ket{0}$, $Q_{B_i}=\ket{0}$. Since ${k'}_i=1$, $\mathcal{A}$ measures $Q_{A_i},~Q_{B_i}$ in the $X$-basis. After the measurement, let the qubits be ${Q'}_{A_i},~{Q'}_{B_i}$. If ${Q'}_{A_i}=\ket{+},~{Q'}_{B_i}=\ket{+}$, then $\mathcal{A}$ still gets the correct message bit-pair; this case arises with probability $1/4$.
In that case, if the joint measurement result is $\ket{\Phi^+}$, then $\mathcal{A}$ cannot be detected, but if the joint measurement result is $\ket{\Psi^+}$, then they can detect $\mathcal{A}$. The details are given in Table~\ref{table_attack}. \begin{table}[!htbp] \centering \caption{Different cases of intercept-and-resend attack on MDI-QD.} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $k_i$ & ${k'}_i$ & $a_i$ & $b_i$ & $Q_{A_i}$ & $Q_{B_i}$ & ${Q'}_{A_i}$ & ${Q'}_{B_i}$ & \multicolumn{4}{|c|}{Prob. of joint measurement result} & Remark\\ & & & & & & & & $\ket{\Phi^+}$ & $\ket{\Phi^-}$ & $\ket{\Psi^+}$ & $\ket{\Psi^-}$ &\\ \hline $0$ & $1$ & $0$ & $0$ & $\ket{0}$ & $\ket{0}$ & $\ket{+}$ & $\ket{+}$ & $1/2$ & $0$ & $\mathbf{1/2}$ & $0$ & with probability\\ & & &&&& $\ket{+}$ & $\ket{-}$ & $0$ & $1/2$ & $0$ & $\mathbf{1/2}$ &$1/2$ cheating \\ & & &&&& $\ket{-}$ & $\ket{+}$ & $0$& $1/2$ & $0$ & $\mathbf{1/2}$ &can be \\ & & &&&& $\ket{-}$ & $\ket{-}$ & $1/2$ & $0$ & $\mathbf{1/2}$ & $0$ & detected \\ \hline \end{tabular} } \\ \begin{flushleft} \textit{\begin{tiny} *Bold numbers denote the probabilities that errors have occurred. \end{tiny}} \end{flushleft} \label{table_attack} \end{table} Thus, in the case of the intercept-and-resend attack, \\ $\Pr($cheating detected in $i$-th bit $ )$ = $\Pr($cheating detected in $i$-th bit $|k_i={k'}_i ) \Pr(k_i={k'}_i)+ \Pr($cheating detected in $i$-th bit $|k_i\neq {k'}_i ) \Pr(k_i \neq {k'}_i)= 0+ 1/2\times 1/2 =1/4$. Therefore, with probability $3/4$, $\mathcal{A}$ can carry out the attack without being detected. $\Pr(\mathcal{A}$ gets the exact $i$-th bit message pair $)= 1/2+1/2\times1/4= 5/8$, whereas $\Pr(\mathcal{A}$ guesses the exact $i$-th bit message pair randomly$)=1/4$. To avoid this attack, we have modified the MDI-QD protocol by introducing an extra error estimation phase before the UTP jointly measures the qubits.
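The figures derived above ($1/4$ detection probability, $5/8$ chance of obtaining the exact bit pair) can be verified exactly by enumerating the adversary's measurement branches with statevectors. The following NumPy sketch averages over all key and message bits, conditioning on a wrong key guess; the variable names are ours:

```python
import numpy as np
from itertools import product

s2 = np.sqrt(2.0)
KET = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
KET['+'] = (KET['0'] + KET['1']) / s2
KET['-'] = (KET['0'] - KET['1']) / s2
BASIS = {0: ('0', '1'), 1: ('+', '-')}   # key bit -> Z or X encoding

BELL = np.array([
    (np.kron(KET['0'], KET['0']) + np.kron(KET['1'], KET['1'])) / s2,  # Phi+
    (np.kron(KET['0'], KET['0']) - np.kron(KET['1'], KET['1'])) / s2,  # Phi-
    (np.kron(KET['0'], KET['1']) + np.kron(KET['1'], KET['0'])) / s2,  # Psi+
    (np.kron(KET['0'], KET['1']) - np.kron(KET['1'], KET['0'])) / s2,  # Psi-
])

def encode(bit, key):
    return KET[BASIS[key][bit]]

def bell_probs(state):
    return (BELL @ state) ** 2

p_detect_wrong = 0.0    # an outcome impossible for the honest state occurs
p_correct_wrong = 0.0   # A's measured bits equal (a_i, b_i)
cases = 0
for k, a, b in product((0, 1), repeat=3):
    honest = bell_probs(np.kron(encode(a, k), encode(b, k)))
    impossible = honest < 1e-12      # Bell outcomes that reveal tampering
    kw = 1 - k                        # A's (wrong) key guess
    for ja, jb in product((0, 1), repeat=2):   # A's measurement outcomes
        pa = (KET[BASIS[kw][ja]] @ encode(a, k)) ** 2
        pb = (KET[BASIS[kw][jb]] @ encode(b, k)) ** 2
        branch = pa * pb
        resent = np.kron(KET[BASIS[kw][ja]], KET[BASIS[kw][jb]])
        p_detect_wrong += branch * bell_probs(resent)[impossible].sum()
        p_correct_wrong += branch * ((ja == a) and (jb == b))
    cases += 1
p_detect_wrong /= cases
p_correct_wrong /= cases

# The key guess is wrong with probability 1/2; a right guess gives full
# information and causes no disturbance.
p_detect_total = 0.5 * p_detect_wrong                 # -> 1/4
p_guess_total = 0.5 * 1.0 + 0.5 * p_correct_wrong     # -> 5/8
print(p_detect_total, p_guess_total)
```

The enumeration confirms that the single-case analysis of Table~4 holds on average over all keys and messages.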
\subsection{Our Proposed Modification} \label{mod qd} Steps 1, 2, 3 are the same as before in the MDI-QD protocol of Figure~\ref{algo:1qd}. \begin{enumerate} \setcounter{enumi}{3} \item Alice and Bob choose random permutations and apply them to their respective sequences of qubits $Q_A$ and $Q_B$ to get new sequences of qubits $q_A$ and $q_B$. \item They send the prepared qubits $q_A $ and $q_B$ to a UTP. \item Alice and Bob randomly choose $\delta n$ common positions on the sequences $Q_A$ and $Q_B$ to estimate the error in the channel, where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they do the following: \label{2party_error1} \begin{enumerate} \item Each participant tells the positions and preparation bases of the qubits for those rounds to the UTP. \item The UTP measures each single-qubit state in the proper basis and announces the results. \item They reveal their respective qubits for these rounds and compare them with the results announced by the UTP. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \end{enumerate} \item The UTP asks Alice and Bob for the permutations they applied to their sequences. \item The UTP applies the inverse permutations, corresponding to the permutations chosen by Alice and Bob, on $q_A$ and $q_B$ to get $Q_A$ and $Q_B$ respectively. \item They discard the qubits corresponding to the above $\delta n$ positions. Their remaining sequences of prepared qubits are relabeled as $Q_A=\{Q_A[i]\}_{i=1}^{m}$ and $Q_B=\{Q_B[i]\}_{i=1}^{m}$, where $m=(1-\delta) n$. \item They update their $n$-bit key to an $m$-bit key by discarding the $\delta n$ key bits corresponding to the above $\delta n$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{m}$. \end{enumerate} Then they follow Step 5 to Step 11 of the MDI-QD protocol in Figure~\ref{algo:1qd}.
In this modified protocol, since Alice and Bob apply random permutations to their respective sequences of qubits before sending them to the UTP, and since those permutations are announced only after the error estimation phase is passed, at the time of sending those sequences $\mathcal{A}$ cannot simply guess a key bit and measure the qubits. Even if she gets some of the key bits, she cannot guess the corresponding bases for the sequences of qubits $q_A,~q_B$. Alice and Bob randomly choose $\delta n$ rounds to estimate the error in the channel (Step~\ref{2party_error1} of the modified protocol), where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they tell the key bits to the UTP, who measures each single-qubit state in the proper basis and announces the results. Alice and Bob reveal their respective qubits for these rounds and compare them with the results announced by the UTP. Let $\mathcal{A}$ intercept the sequences $q_A,~q_B$, measure those qubits and resend the sequences $q_A',~q_B'$. Let the $i$-th qubit pair be $(q_{A_i},q_{B_i})$, which is prepared in the basis $(\mathcal{B}_{A_i},\mathcal{B}_{B_i})$, and suppose $\mathcal{A}$ independently chooses two bases $\mathcal{B}_{A_i}'$ and $\mathcal{B}_{B_i}'$ to measure $q_{A_i}$ and $q_{B_i}$, since they are not dependent on the $i$-th key bit. After the measurement, let the state of the qubit pair be $(q_{A_i}',q_{B_i}')$. At the time of security checking, the UTP measures $(q_{A_i}',q_{B_i}')$ in $(\mathcal{B}_{A_i},\mathcal{B}_{B_i})$ and gets the result $(q_{A_i}'',q_{B_i}'')$.
Thus the winning probability of $\mathcal{A}$ is \begin{equation*} \label{eq-pr1} \begin{split} &\Pr(q_{A_i}''=q_{A_i},~q_{B_i}''=q_{B_i}) \\ &=\Pr(q_{A_i}''=q_{A_i})\Pr(q_{B_i}''=q_{B_i}) \\ & = \{ \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}')\Pr(\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}') \Pr(\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\}\times \\ & ~~~~ \{ \Pr(q_{B_i}''=q_{B_i}|~\mathcal{B}_{B_i} = \mathcal{B}_{B_i}')\Pr(\mathcal{B}_{B_i} = \mathcal{B}_{B_i}') + \Pr(q_{B_i}''=q_{B_i}|~\mathcal{B}_{B_i} \neq \mathcal{B}_{B_i}') \Pr(\mathcal{B}_{B_i} \neq \mathcal{B}_{B_i}')\} \\ &= \left[ \frac{1}{2}\{\Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\}\right] \times \\ & ~~~~~~~~~~ \left[ \frac{1}{2}\{ \Pr(q_{B_i}''=q_{B_i}|~\mathcal{B}_{B_i} = \mathcal{B}_{B_i}') + \Pr(q_{B_i}''=q_{B_i}|~\mathcal{B}_{B_i} \neq \mathcal{B}_{B_i}')\}\right] \\ &= \frac{1}{4}\left(1+\frac{1}{2} \right)\left(1+\frac{1}{2} \right) =\frac{9}{16}. \end{split} \end{equation*} Since Alice and Bob apply random permutations to their sequences $Q_A$ and $Q_B$, $\mathcal{A}$ cannot get any information about the $i$-th bit pair of the secret messages from the measurement results. By randomly guessing the bits, $\mathcal{A}$ obtains the $i$-th bit pair with probability $1/4$. However, the probability that $\mathcal{A}$ is detected is $1-\left( \frac{9}{16}\right)^{\delta n}$, and in that case Alice and Bob abort the protocol. Table~\ref{table:comarison} compares the probabilities of relevant events between the MDI-QD~\cite{qip/Maitra17} and its modified version.
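As a quick sanity check, the per-round figure $9/16$ and the resulting detection probability can be reproduced with exact rational arithmetic. This is an illustrative sketch; the choice of $\delta n = 20$ check rounds below is arbitrary:

```python
from fractions import Fraction

half = Fraction(1, 2)
# Per qubit: a correctly guessed basis (prob 1/2) always reproduces the
# state at the UTP's check; a wrong basis reproduces it with prob 1/2.
p_qubit = half * 1 + half * half          # = 3/4
p_undetected = p_qubit ** 2               # both qubits must pass the check
assert p_undetected == Fraction(9, 16)

# Escape probability over delta*n check rounds, compared with the 3/4
# per-round figure of the original protocol.
rounds = 20
p_escape_modified = p_undetected ** rounds
p_escape_original = Fraction(3, 4) ** rounds
assert p_escape_modified < p_escape_original
print(float(1 - p_escape_modified))       # detection probability, near 1
```

Since $9/16 < 3/4$, the modified protocol drives the adversary's escape probability to zero strictly faster in the number of check rounds.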
\begin{table}[h] \caption{Comparison between the MDI-QD~\cite{qip/Maitra17} and its modified version.} \label{table:comarison} \renewcommand*{\arraystretch}{1.8} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Probability of the event} & \textbf{MDI-QD~\cite{qip/Maitra17}} & \textbf{Our modified MDI-QD} \\ \hline $\mathcal{A}$ gets the $i$-th bit pair & $5/8$ & $1/4$ \\ \hline Alice, Bob cannot detect $\mathcal{A}$ for the $i$-th measurement & {$3/4$} & $9/16$ \\ \hline Alice, Bob detect $\mathcal{A}$ & $1-(3/4)^{\delta n}$ & $1-(9/16)^{\delta n}$ \\ \hline \end{tabular}} \end{table} \section{Three Party Quantum Conference } \label{sec3} We extend the above QD protocol from two to three parties, thus obtaining a quantum conference protocol. Our proposed conference protocol is divided into two parts. Let Alice, Bob and Charlie be the three participants of the conference. Also let Alice's, Bob's and Charlie's $m$-bit messages be $a$, $b$ and $c$ respectively, where $a=a_1a_2\ldots a_m$, $b=b_1b_2\ldots b_m$ and $c=c_1c_2\ldots c_m$. In the first part, Alice, Bob, and Charlie perform a multi-party QKD protocol~\cite{matsumoto2007multiparty} to establish a secret key $k=k_1k_2\ldots k_m$ of $m$ bits between themselves. Then each of them uses the key to encode one's own message $M$ into the corresponding state $Q$, according to Subroutine~1. The details of the three-party quantum conference protocol are given in Protocol~1. \sbline \begin{subroutine}{Message Encoding Strategy for Three Party Quantum Conference } \textbf{Inputs:} Own message ${M}=M_1M_2\ldots M_m$; key $k=k_1k_2\ldots k_m$. \sbline \textbf{Output:} Sequence of qubits $Q=Q_1Q_2\ldots Q_m$. \sbline \textit{The subroutine:}\\ For $1 \leqslant i \leqslant m,$ \begin{enumerate} \item if $M_i= 0$ and $k_i = 0$, prepares $Q_i=\ket{0}$. \item if $M_i= 1$ and $k_i = 0$, prepares $Q_i=\ket{1}$. \item if $M_i= 0$ and $k_i = 1$, prepares $Q_i=\ket{+}$. 
\item if $M_i= 1$ and $k_i = 1$, prepares $Q_i=\ket{-}$. \end{enumerate} \label{algo:enc_conf} \end{subroutine} \subsection{Protocol 1: Three Party Quantum Conference} \label{conf} The steps of the protocol are as follows: \begin{enumerate} \item Alice, Bob and Charlie perform any multi-party QKD protocol (e.g.,~\cite{matsumoto2007multiparty}) to establish an $m$-bit secret key $k=k_1k_2\ldots k_m$ between themselves. \item Let the $m$-bit messages of Alice, Bob and Charlie be $a$, $b$ and $c$ respectively, where $a=a_1a_2\ldots a_m$, $b=b_1b_2\ldots b_m$ and $c=c_1c_2\ldots c_m$. \item For $1\leqslant i \leqslant m$, Alice, Bob and Charlie prepare the sequences of qubits $Q_A=\{Q_A[i]\}_{i=1}^{m}=({Q_A}_1,{Q_A}_2,\ldots,{Q_A}_m), Q_B=\{Q_B[i]\}_{i=1}^{m}=({Q_B}_1,{Q_B}_2,\ldots,{Q_B}_m)$ and $Q_C=\{Q_C[i]\}_{i=1}^{m}=({Q_C}_1,{Q_C}_2,\ldots,{Q_C}_m)$ respectively at their end by using Subroutine 1. \item Alice, Bob, and Charlie choose random permutations, apply them to their respective sequences of qubits $Q_A,Q_B$, and $Q_C$, and get new sequences of qubits $q_A,q_B$ and $q_C$. \item They send the prepared sequences of qubits $q_A, q_B$, and $q_C$ to an untrusted fourth party (UFP). \item Alice, Bob, and Charlie randomly choose $\delta m$ number of common positions on the sequences $Q_A, Q_B$ and $Q_C$ to estimate the error in the channel, where $\delta \ll 1$ is a small fraction. Corresponding to these $\delta m$ rounds, they do the following: \label{3party_error1} \begin{enumerate} \item Each participant tells the positions and preparation bases of those qubits for those rounds to the UFP. \item The UFP measures each single-qubit state in the proper basis and announces the results. \item They reveal their respective qubits for these rounds and compare them with the results announced by the UFP. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. 
\end{enumerate} \item The UFP asks Alice, Bob, and Charlie to tell the permutations that they have applied to their sequences. \item The UFP applies the inverse permutations, corresponding to the permutations chosen by Alice, Bob, and Charlie, on $q_A,q_B$, and $q_C$ to get $Q_A, Q_B$ and $Q_C$ respectively. \item They discard the qubits corresponding to the above $\delta m$ positions. Their remaining sequences of prepared qubits are relabeled as $Q_A=\{Q_A[i]\}_{i=1}^{m'}$, $Q_B=\{Q_B[i]\}_{i=1}^{m'}$ and $Q_C=\{Q_C[i]\}_{i=1}^{m'}$, where $m'=(1-\delta) m$. \item They update their $m$-bit key to an $m'$-bit key by discarding $\delta m$ number of key bits corresponding to the above $\delta m$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{m'}$. \item For $1\leqslant i \leqslant m'$, the UFP measures each three-qubit state $(Q_{A_i},Q_{B_i},Q_{C_i})$ in the basis $\mathcal{B}_3$ and announces the result. \item Alice, Bob and Charlie make a finite sequence $\{\mathcal{M}[i]\}_{i=1}^{m'}$ containing the measurement results, i.e., for $1\leqslant i \leqslant m'$, $\mathcal{M}[i]\in \{\ket{\Phi_{0}^{+}},\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}},\ket{\Phi_{2}^{+}}, \ket{\Phi_{2}^{-}}, \ket{\Phi_{3}^{+}}, \ket{\Phi_{3}^{-}}\}$ is the $i$-th measurement result announced by the UFP. \item They randomly choose $\gamma m'$ number of measurement results $\mathcal{M}[i]$ from the sequence $\{\mathcal{M}[i]\}_{i=1}^{m'}$ to estimate the error (possibly introduced by the UFP), where $\gamma \ll 1$ is a small fraction. \label{3party_error2} \begin{enumerate} \item They reveal their respective message bits for these rounds. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step.\label{3prty_error_end} \end{enumerate} \item Their remaining sequence of measurement results is relabeled as $\{\mathcal{M}[i]\}_{i=1}^{n}$, where $n=(1-\gamma) m'$. 
\item They update their $m'$-bit key to an $n$-bit key by discarding $\gamma m'$ number of key bits corresponding to the above $\gamma m'$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{n}$. \item Each of Alice, Bob, and Charlie applies Algorithm~\ref{3_Party_msg_recons} to get others' messages. \end{enumerate} Note that in this protocol, there are two error estimation phases. The first one checks if there is any adversary (other than the UFP) in the channel who tries to gain information about the messages or to change them. If the first error estimation phase does not pass, then Alice, Bob, and Charlie abort the protocol. Thus, in this step, the UFP is motivated to behave correctly, as he/she gains no information if the parties abort the protocol. The next error estimation phase is to check if there is any error introduced by the UFP. \begin{algorithm} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \KwIn{Own message, measurement results $\{\mathcal{M}[i]\}_{i=1}^{n}$, key $k$.} \KwOut{Others' messages.} \begin{enumerate} \item For $1\leqslant i \leqslant n$, if $k_i = 0$, then each participant can learn the $i$-th bit of others' messages from the measurement result $\mathcal{M}[i]$ and their own message (see Table~\ref{conf_table}). \item For $1\leqslant i \leqslant n$, if $k_i = 1$, then from the measurement result $\mathcal{M}[i]$ and their own message each participant can learn whether the $i$-th bits of the others' messages are the same or different (see Table~\ref{conf_table}). Let $c=wt(k)$. \label{msg_info} \begin{enumerate} \item Alice, Bob and Charlie prepare ordered sets of qubits $S_A$, $S_B$ and $S_C$ respectively, corresponding to their message bits at the positions where the key bit is $1$. They prepare the qubits at their end according to the following strategy. Each of $S_A$, $S_B$ and $S_C$ contains $c$ qubits. 
For $1\leqslant j \leqslant c$, if $k_i=1$ is the $j$-th $1$ in $k$, then \begin{itemize} \label{2nd encode} \item if $a_i$ ($b_i,c_i$)$ = 0$ and $i$ is even, prepares $S_A[j] ~(S_B[j],~S_C[j])~=\ket{0}$. \item if $a_i$ ($b_i,c_i$)$ = 1$ and $i$ is even, prepares $S_A[j] ~(S_B[j],~S_C[j])~=\ket{1}$. \item if $a_i$ ($b_i,c_i$)$ = 0$ and $i$ is odd, prepares $S_A[j] ~(S_B[j],~S_C[j])~=\ket{+}$. \item if $a_i$ ($b_i,c_i$)$ = 1$ and $i$ is odd, prepares $S_A[j] ~(S_B[j],~S_C[j])~=\ket{-}$. \end{itemize} \item Alice, Bob and Charlie prepare sets of $d$ decoy photons $D_A$, $D_B$ and $D_C$ respectively, where the decoy photons are randomly chosen from $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$. They randomly insert their decoy photons into their prepared qubit sets and make new ordered sets $S_A'$, $S_B'$ and $S_C'$. They also choose random permutations $R_A$, $R_B$, $R_C$ and apply them to their respective sets $S_A'$, $S_B'$, $S_C'$ to get the sets $S_A''$, $S_B''$, $S_C''$ respectively. \item Each of them sends its set to the next participant in a circular way. That is, Alice sends $S_A''$ to Bob, who sends $S_B''$ to Charlie, who in turn sends $S_C''$ to Alice. \item After receiving the qubits from the previous participant, each of them announces their random permutation and the positions and states of their decoy photons. \item They apply the inverse permutations and verify the decoy photons to check eavesdropping. If there exists any eavesdropper in the quantum channel, they abort the protocol, else they go to the next step.\label{3party_error3} \item Now everyone knows the basis of the qubits of $S_A$, $S_B$ and $S_C$. So they can measure those qubits to get the exact message bits of the previous participant from whom they got those qubits. 
\label{rcv_qubit} \end{enumerate} \end{enumerate} \caption{Three Party Message Reconstruction Algorithm.} \label{3_Party_msg_recons} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \end{algorithm} \begin{sidewaystable}[] \centering \renewcommand*{\arraystretch}{1.8} \caption{Different cases in the three party quantum conference.} \setlength{\tabcolsep}{8pt} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Bits to Communicate} & \multicolumn{3}{|c|}{ Qubits prepared by } & \multicolumn{8}{|c|}{Probabilities of measurement results $\mathcal{M}[i]$ at UFP's end}\\ \hline {} Alice & Bob & Charlie & Alice (${Q_A}_i$) & Bob (${Q_B}_i$) & Charlie (${Q_C}_i$) & $\ket{\Phi_0^+}$ & $\ket{\Phi_0^-}$ & $\ket{\Phi_1^+}$ & $\ket{\Phi_1^-}$ & $\ket{\Phi_2^+}$ & $\ket{\Phi_2^-}$ & $\ket{\Phi_3^+}$& $\ket{\Phi_3^-}$ \\ \hline $0$ & $0$ & $0$ & $\ket{0}$ & $\ket{0}$ & $\ket{0}$ & $1/2$ & $1/2$ & $0$ & $0$ &$0$ & $0$ & $0$ & $0$ \\ $0$ & $0$ & $1$ & $\ket{0}$ & $\ket{0}$ & $\ket{1}$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ \\ $0$ & $1$ & $0$ & $\ket{0}$ & $\ket{1}$ & $\ket{0}$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ \\ $0$ & $1$ & $1$ & $\ket{0}$ & $\ket{1}$ & $\ket{1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ \\ $1$ & $0$ & $0$ & $\ket{1}$ & $\ket{0}$ & $\ket{0}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ \\ $1$ & $0$ & $1$ & $\ket{1}$ & $\ket{0}$ & $\ket{1}$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ \\ $1$ & $1$ & $0$ & $\ket{1}$ & $\ket{1}$ & $\ket{0}$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ \\ $1$ & $1$ & $1$ & $\ket{1}$ & $\ket{1}$ & $\ket{1}$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline $0$ & $0$ & $0$ & $\ket{+}$ & $\ket{+}$ & $\ket{+}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ $0$ & $0$ & $1$ & $\ket{+}$ & $\ket{+}$ & $\ket{-}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ $0$ & $1$ & $0$ & 
$\ket{+}$ & $\ket{-}$ & $\ket{+}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ $0$ & $1$ & $1$ & $\ket{+}$ & $\ket{-}$ & $\ket{-}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ $1$ & $0$ & $0$ & $\ket{-}$ & $\ket{+}$ & $\ket{+}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ $1$ & $0$ & $1$ & $\ket{-}$ & $\ket{+}$ & $\ket{-}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ $1$ & $1$ & $0$ & $\ket{-}$ & $\ket{-}$ & $\ket{+}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ $1$ & $1$ & $1$ & $\ket{-}$ & $\ket{-}$ & $\ket{-}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$\\ \hline \end{tabular}} \label{conf_table} \end{sidewaystable} \subsection{Correctness of Three Party Quantum Conference Protocol} In our proposed protocol, Alice, Bob and Charlie first prepare qubits corresponding to their messages and the shared key, and then send those qubits to the untrusted fourth party (UFP). After that, the UFP measures each three-qubit state (one qubit from Alice, one from Bob and one from Charlie) in the basis $\mathcal{B}_3=\{\ket{\Phi_{0}^{+}},\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}},\ket{\Phi_{2}^{+}}, \ket{\Phi_{2}^{-}}, \ket{\Phi_{3}^{+}}, \ket{\Phi_{3}^{-}}\}$ and announces the result. Now, we can say the following from Table~\ref{conf_table}: \begin{itemize} \item If the prepared qubit of Alice is $\ket{0}$($\ket{1})$, then Alice guesses the message bits of Bob and Charlie ($b_i$ and $c_i$) with probability $1$ as follows: \begin{equation*} \text{Measurement result} = \begin{cases} \ket{\Phi_{0}^{+}} \text{ or } \ket{\Phi_{0}^{-}} \Rightarrow & b_i=0(1) \text{ and }c_i=0(1);\\ \ket{\Phi_{1}^{+}} \text{ or } \ket{\Phi_{1}^{-}} \Rightarrow & b_i=0(1) \text{ and }c_i=1(0);\\ \ket{\Phi_{2}^{+}} \text{ or } \ket{\Phi_{2}^{-}} \Rightarrow & b_i=1(0) \text{ and }c_i=0(1);\\ \ket{\Phi_{3}^{+}} \text{ or } \ket{\Phi_{3}^{-}} \Rightarrow & b_i=1(0) \text{ and }c_i=1(0). 
\end{cases} \end{equation*} \item If the prepared qubit of Alice is $\ket{+}$($\ket{-})$, then Alice guesses the XOR of the message bits of Bob and Charlie with probability $1$ as follows: \begin{equation*} \text{Measurement result} = \begin{cases} \ket{\Phi_{0}^{+}} \text{ or } \ket{\Phi_{1}^{+}} \text{ or } \ket{\Phi_{2}^{+}} \text{ or } \ket{\Phi_{3}^{+}} \Rightarrow & b_i \oplus c_i =0(1);\\ \ket{\Phi_{0}^{-}} \text{ or } \ket{\Phi_{1}^{-}} \text{ or } \ket{\Phi_{2}^{-}} \text{ or } \ket{\Phi_{3}^{-}} \Rightarrow & b_i \oplus c_i =1(0). \end{cases} \end{equation*} In this case, Charlie sends her encoded qubit to Alice (the encoding process is given in Step~\ref{2nd encode} of Algorithm~\ref{3_Party_msg_recons}). Since Alice knows the basis of the qubit received from Charlie, by measuring it in the proper basis she can learn Charlie's message bit $c_i$. Then from $b_i \oplus c_i$, she can also get $b_i$. \end{itemize} The same holds for Bob and Charlie. From the above discussion, we see that in all cases Alice, Bob, and Charlie can recover the communicated bits of the other parties with probability $1$. Hence our protocol gives the correct result. \subsection{Security Analysis of the Three Party Quantum Conference Protocol} In this section, we discuss the security of our proposed three-party quantum conference protocol against the commonly known attacks that $\mathcal{A}$ can adopt. If, whenever there is an adversary in the channel, the legitimate parties can detect her with a non-negligible probability, then we call our protocol secure. We first show that if the UFP cheats, the players can detect it at the error estimation phase of the protocol (Step~\ref{3party_error2} of Protocol 1). 
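The decoding rules above can be checked numerically against Table~\ref{conf_table}. The following sketch is our own illustration; it takes the $\mathcal{B}_3$ states to be $\ket{\Phi_j^{\pm}}=(\ket{x_j}\pm\ket{\bar{x}_j})/\sqrt{2}$ with $x_0,\ldots,x_3=000,001,010,011$ and $\bar{x}_j$ the bitwise complement of $x_j$, a labelling inferred from the table, and computes the $\mathcal{B}_3$ outcome probabilities for the product states the parties send.

```python
from itertools import product
from math import isclose, sqrt

def basis_state(bits):
    """Computational-basis vector |b1 b2 b3> as a length-8 amplitude list."""
    index = bits[0] * 4 + bits[1] * 2 + bits[2]
    return [1.0 if i == index else 0.0 for i in range(8)]

def superpose(u, v, sign):
    """(u + sign*v)/sqrt(2) for real amplitude vectors."""
    return [(a + sign * b) / sqrt(2) for a, b in zip(u, v)]

# B3 basis: |Phi_j^+-> = (|x_j> +- |complement of x_j>)/sqrt(2),
# with x_0..x_3 = 000, 001, 010, 011 (labelling inferred from the table).
B3 = {}
for j, bits in enumerate([(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]):
    comp = tuple(1 - b for b in bits)
    B3[(j, '+')] = superpose(basis_state(bits), basis_state(comp), +1)
    B3[(j, '-')] = superpose(basis_state(bits), basis_state(comp), -1)

KET = {'0': [1.0, 0.0], '1': [0.0, 1.0],
       '+': [1 / sqrt(2), 1 / sqrt(2)], '-': [1 / sqrt(2), -1 / sqrt(2)]}

def product_state(a, b, c):
    """Tensor product of three single-qubit states, e.g. ('+', '+', '-')."""
    return [KET[a][i] * KET[b][j] * KET[c][k]
            for i, j, k in product(range(2), repeat=3)]

def outcome_probs(state):
    """Probability of each B3 outcome for a real-amplitude 3-qubit state."""
    return {label: sum(x * y for x, y in zip(vec, state)) ** 2
            for label, vec in B3.items()}
```

For instance, `outcome_probs(product_state('0', '0', '1'))` puts weight $1/2$ on each of $\ket{\Phi_1^{+}}$ and $\ket{\Phi_1^{-}}$, while `outcome_probs(product_state('+', '+', '-'))` spreads weight $1/4$ over the four minus-sign states, matching the corresponding rows of Table~\ref{conf_table}.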
\begin{sidewaystable}[] \centering \renewcommand*{\arraystretch}{1.8} \caption{Different cases when UFP is dishonest in the three-party quantum conference.} \setlength{\tabcolsep}{8pt} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline {UFP chooses} & \multicolumn{3}{|c|}{UFP's measurement results} & \multicolumn{8}{|c|}{Probability that UFP guesses $\mathcal{M}'[i]$}\\ \hline {} measurement basis & Alice (${Q_A}'_i$) & Bob (${Q_B}'_i$) & Charlie (${Q_C}'_i$) & $\ket{\Phi_0^+}$ & $\ket{\Phi_0^-}$ & $\ket{\Phi_1^+}$ & $\ket{\Phi_1^-}$ & $\ket{\Phi_2^+}$ & $\ket{\Phi_2^-}$ & $\ket{\Phi_3^+}$& $\ket{\Phi_3^-}$ \\ \hline \multirow{8}{*}{$Z$} & $\ket{0}$ & $\ket{0}$ & $\ket{0}$ & $1/2$ & $1/2$ & $0$ & $0$ &$0$ & $0$ & $0$ & $0$ \\ & $\ket{0}$ & $\ket{0}$ & $\ket{1}$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ \\ & $\ket{0}$ & $\ket{1}$ & $\ket{0}$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ \\ & $\ket{0}$ & $\ket{1}$ & $\ket{1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ \\ & $\ket{1}$ & $\ket{0}$ & $\ket{0}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ \\ & $\ket{1}$ & $\ket{0}$ & $\ket{1}$ & $0$ & $0$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ \\ & $\ket{1}$ & $\ket{1}$ & $\ket{0}$ & $0$ & $0$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ \\ & $\ket{1}$ & $\ket{1}$ & $\ket{1}$ & $1/2$ & $1/2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline \multirow{8}{*}{$X$} & $\ket{+}$ & $\ket{+}$ & $\ket{+}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ & $\ket{+}$ & $\ket{+}$ & $\ket{-}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ & $\ket{+}$ & $\ket{-}$ & $\ket{+}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ & $\ket{+}$ & $\ket{-}$ & $\ket{-}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ & $\ket{-}$ & $\ket{+}$ & $\ket{+}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ \\ & $\ket{-}$ & $\ket{+}$ & $\ket{-}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ 
& $\ket{-}$ & $\ket{-}$ & $\ket{+}$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ \\ & $\ket{-}$ & $\ket{-}$ & $\ket{-}$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$ & $0$ & $1/4$\\ \hline \end{tabular}} \label{ufp_conf_table} \end{sidewaystable} Let the UFP measure each of the three qubits $Q_{A_i},Q_{B_i},Q_{C_i}$ in a randomly chosen basis ($Z$ or $X$) instead of measuring $(Q_{A_i},Q_{B_i},Q_{C_i})$ in the $\mathcal{B}_3$ basis. The UFP then checks the individual measurement results and decides to announce an $\mathcal{M}'[i]\in \{\ket{\Phi_{0}^{+}},\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}},\ket{\Phi_{2}^{+}}, \ket{\Phi_{2}^{-}}, \ket{\Phi_{3}^{+}}, \ket{\Phi_{3}^{-}}\}$ corresponding to the states which could occur had he measured in the correct basis (see Table~\ref{ufp_conf_table}). For example, if the UFP measures in the $Z$-basis and gets the result $\ket{0}\ket{0}\ket{1}$, then he announces $\mathcal{M}'[i]$ from the set $\{\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}}\}$. Again, if he measures in the $X$-basis and gets the result $\ket{-}\ket{+}\ket{+}$, then he announces $\mathcal{M}'[i]$ from the set $\{\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{-}},\ket{\Phi_{2}^{-}},\ket{\Phi_{3}^{-}}\}$. \\We now calculate the winning probability $p$ of the UFP for correctly guessing the $i$-th measurement result $\mathcal{M}[i]$. Let the preparation basis for the initial qubits $Q_{A_i},Q_{B_i},Q_{C_i}$ be $\mathcal{B}$, and let the UFP choose the basis $\mathcal{B}'$. 
Then we have, \begin{equation*} \label{eq-UFP-3party} \begin{split} p &=\Pr(\mathcal{M}'[i]=\mathcal{M}[i]) \\ & = \Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B} = \mathcal{B}')\Pr(\mathcal{B} = \mathcal{B}') + \Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B} \neq \mathcal{B}')\Pr(\mathcal{B} \neq \mathcal{B}')\\ &= \frac{1}{2}\{\Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B} = \mathcal{B}') + \Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B} \neq \mathcal{B}')\}\\ &= \frac{1}{2}\{\Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B} = \mathcal{B}') + \Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B}=X, \mathcal{B}'=Z)+ \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~\Pr(\mathcal{M}'[i]=\mathcal{M}[i] |~\mathcal{B}=Z, \mathcal{B}'=X)\}\\ &= \frac{1}{2}\left(1+\frac{1}{2}+\frac{1}{4} \right) =\frac{7}{8}. \end{split} \end{equation*} Therefore the legitimate parties can detect this eavesdropping with probability $1-p^{\gamma m'}$, which is a non-negligible probability for large $\gamma m'$. Next, we consider four types of attacks (intercept-and-resend attack, entangle-and-measure attack, Denial-of-Service (DoS) attack, man-in-the-middle attack) and show that our protocol is secure against these attacks. \begin{enumerate} \item \textbf{Intercept-and-resend attack}\\ Here we consider the intercept-and-resend attack by an adversary $\mathcal{A}$ (other than the UFP). In this attack model, $\mathcal{A}$ intercepts the qubits from the quantum channel, measures those qubits and resends them to the receiver. First, let us assume that $\mathcal{A}$ intercepts $q_A$, measures the qubits in randomly chosen bases ($Z$ or $X$) and notes down the measurement results. Due to the measurements by $\mathcal{A}$, let the sequence $q_A$ change to $q_A'$, which she resends to the UFP. After the UFP receives the sequence $q_A'$, Alice tells him some random positions of the sent qubits and their preparation bases; then the UFP measures those qubits and announces the results. 
Let the $i$-th qubit $q_{A_i}$ be prepared in basis $\mathcal{B}_{A_i}$, and let $\mathcal{A}$ choose basis $\mathcal{B}_{A_i}'$ to measure $q_{A_i}$. At the time of security checking, the UFP measures $q_{A_i}'$ in $\mathcal{B}_{A_i}$ and gets the result $q_{A_i}''$. Thus the winning probability of $\mathcal{A}$ is \begin{equation*} \label{eq-intercept-3party} \begin{split} p_1 &=\Pr(q_{A_i}''=q_{A_i}) \\ & = \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}')\Pr(\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}') \Pr(\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\\ &= \frac{1}{2}\{\Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\}\\ &= \frac{1}{2}\left(1+\frac{1}{2} \right) =\frac{3}{4}. \end{split} \end{equation*} Similarly, when $\mathcal{A}$ intercepts $q_B$ and $q_C$, her winning probabilities are $p_2=\frac{3}{4}$ and $p_3=\frac{3}{4}$ respectively. Note that Alice, Bob, and Charlie apply random permutations on their respective sequences of qubits, and those permutations are announced only if the error estimation phase is passed after the qubits arrive at their destinations. So at the time of sending those sequences, $\mathcal{A}$ cannot simply guess a key bit and measure the qubits in the corresponding bases. Even if she gets some of the key bits, she cannot guess the corresponding bases for the sequences of qubits $q_A, q_B$, $q_C$. Therefore the measurements on the qubits of $q_A, q_B$, $q_C$ are independent events for $\mathcal{A}$, and thus the winning probability of $\mathcal{A}$ for this attack is $p_1p_2p_3=(\frac{3}{4})^3$. Alice, Bob, and Charlie randomly choose $\delta m$ number of rounds to estimate the error in the channel (Step~\ref{3party_error1} of Protocol 1), where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they tell the positions and preparation bases of the qubits to the UFP. 
Next, the UFP measures each single-qubit state in the proper basis and announces the result. Alice, Bob, and Charlie reveal their respective qubits for these rounds, compare them with the results announced by the UFP, and calculate the error rate in the quantum channel. Thus the probability that they can detect the existence of $\mathcal{A}$ is $1-\left( \frac{3}{4}\right) ^{3\delta m}$, and in this case the legitimate parties terminate the protocol.\\ Next, we consider the case where $\mathcal{A}$ tries to eavesdrop in the second phase of transmission of qubits (Step~\ref{msg_info} of Algorithm~\ref{3_Party_msg_recons}). Suppose $\mathcal{A}$ intercepts the sequences $S_A'',S_B'',S_C''$ from the quantum channel, measures them in the $Z$ or $X$ basis and then resends those sequences to the receivers. Since each of $S_A'',S_B'',S_C''$ contains $d$ decoy photons, these intermediate measurements change the states of those decoy photons. Let the $i$-th decoy photon of Alice be $D_{A_i}$, prepared in basis $\mathcal{B}$, where $\mathcal{B}=Z$ or $X$, and let the state become $D_{A_i}'$ after $\mathcal{A}$ measures it in the basis $\mathcal{B}'$. When Alice announces the preparation basis of $D_{A_i}$, Bob measures $D_{A_i}'$ in basis $\mathcal{B}$ and gets $D_{A_i}''$. We now calculate the probability that $D_{A_i}=D_{A_i}''$ as follows, \begin{equation*} \label{eq-pr-intercept-2nd} \begin{split} &\Pr(D_{A_i}''=D_{A_i}) \\ & = \Pr(D_{A_i}''=D_{A_i}|~\mathcal{B} = \mathcal{B}')\Pr(\mathcal{B} = \mathcal{B}') + \Pr(D_{A_i}''=D_{A_i}|~\mathcal{B} \neq \mathcal{B}')\Pr(\mathcal{B} \neq \mathcal{B}') \\ &= \frac{1}{2}[\Pr(D_{A_i}''=D_{A_i}|~\mathcal{B} = \mathcal{B}') + \Pr(D_{A_i}''=D_{A_i}|~\mathcal{B} \neq \mathcal{B}')] \\ &= \frac{1}{2}\left[1 + \frac{1}{2}\right]=\frac{3}{4}. \end{split} \end{equation*} Thus the probability that Alice and Bob can detect the existence of $\mathcal{A}$ is $1-\left( \frac{3}{4}\right) ^d$, where $d$ is the number of decoy photons. 
A similar argument applies to the other sequences of qubits. \item \textbf{Entangle-and-measure attack}\\ Let us discuss another attack, called the entangle-and-measure attack, by an adversary $\mathcal{A}$. For this attack, $\mathcal{A}$ does the following: when Alice sends her sequence of qubits $q_A$ to the UFP, $\mathcal{A}$ takes each qubit $q_{A_i}$, $1 \leqslant i \leqslant m$, from the channel together with an ancillary qubit $\ket{b}$ of her own, prepared in the state $\ket{0}$. $\mathcal{A}$ applies a CNOT gate with control $q_{A_i}$ and target $\ket{b}$, and then she sends $q_{A_i}$ to the UFP. The joint state becomes $\ket{00}$, $\ket{11}$, $\ket{\Phi^+}$ and $\ket{\Phi^-}$, corresponding to the state of $q_{A_i}$, which is $\ket{0}$, $\ket{1}$, $\ket{+}$ and $\ket{-}$ respectively. $\mathcal{A}$ does the same with the qubits of Bob and Charlie. After the UFP receives all the qubits, Alice, Bob and Charlie randomly choose $\delta m$ number of rounds to estimate the error in the channel (Step~\ref{3party_error1} of Protocol 1), where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they tell the positions and preparation bases of the qubits to the UFP, who then measures each single-qubit state in the proper basis and announces the result. Alice, Bob and Charlie reveal their respective qubits for these rounds and compare them with the results announced by the UFP. Let the UFP get the measurement result $q_{A_i}'$ by measuring the state $q_{A_i}$ prepared in basis $\mathcal{B}$. Now if the original state of $q_{A_i}$ is $\ket{0}$ or $\ket{1}$, then no error occurs. But if the original state of $q_{A_i}$ is $\ket{+}$ or $\ket{-}$, then an error will occur with probability $1/2$, as $\ket{\Phi^+}= \frac{1}{\sqrt{2}}(\ket{00}+ \ket{11})=\frac{1}{\sqrt{2}}(\ket{++}+ \ket{--})$ and $\ket{\Phi^{-}}=\frac{1}{\sqrt{2}}(\ket{00}- \ket{11})=\frac{1}{\sqrt{2}}(\ket{++}- \ket{--})$. In that case, Alice, Bob and Charlie detect the attack and abort the protocol. 
Let us calculate the probability of the event $q_{A_i}'=q_{A_i}$. \begin{equation*} \label{eq-pr-entangled} \begin{split} p'_1 &= \Pr(q_{A_i}'=q_{A_i}) \\ & = \Pr(q_{A_i}'=q_{A_i}|~\mathcal{B} = Z)\Pr(\mathcal{B} = Z) + \Pr(q_{A_i}'=q_{A_i}|~\mathcal{B} =X)\Pr(\mathcal{B} =X) \\ &= \frac{1}{2}[\Pr(q_{A_i}'=q_{A_i}|~\mathcal{B} = Z) + \Pr(q_{A_i}'=q_{A_i}|~\mathcal{B} =X)] \\ &= \frac{1}{2}\left[1 + \frac{1}{2}\right]=\frac{3}{4}. \end{split} \end{equation*} Similarly we can calculate $p'_2=\Pr(q_{B_i}'=q_{B_i})=\frac{3}{4}$, $p'_3=\Pr(q_{C_i}'=q_{C_i})=\frac{3}{4}$. Thus for $1 \leqslant i \leqslant m$, the winning probability of $\mathcal{A}$ is $p'_1p'_2p'_3=\left( \frac{3}{4}\right) ^3$ and the legitimate parties can detect her at the time of security checking with probability $1-\left( \frac{3}{4}\right) ^{3\delta m}$. A similar argument holds for the second round of communication. \item \textbf{Denial-of-service (DoS) attack}\\ In this attack model, $\mathcal{A}$ applies a random unitary operator $\mathcal{U} \neq I$ on the qubits to tamper with the original messages and introduce noise in the channel. This attack can also be detected in the same way as discussed above. Let $\mathcal{U}=\sum_{j=1}^4 w_jP_j$, where $P_j$s are the Pauli matrices $I$, $\sigma_x$, $i\sigma_y$ and $\sigma_{z}$ for $1 \leq j \leq 4$ respectively~\cite{nielsen2002quantum}, and they form a basis for the space of all $2 \times 2$ Hermitian matrices. Since $\mathcal{U}$ is unitary, $\sum_{j=1}^4 w^2_j=1$. Now the winning probability of $\mathcal{A}$ is $p_4=\sum_{j=1}^4 h_jw^2_j$, where $h_j$s are the winning probabilities of $\mathcal{A}$ when she applies $P_j$s respectively. Thus $h_1=1$, $h_2=1/2$, $h_3=0$ and $h_4=1/2$, as $I$ does not change any state, $\sigma_x$ changes the states in the $Z$-basis, $i\sigma_y$ changes the states in both the $Z$-basis and the $X$-basis, and $\sigma_z$ changes the states in the $X$-basis. 
Hence in the security check process Alice, Bob and Charlie detect this eavesdropping with probability $1-{p_4}^{3\delta m}>0$. Similarly, for the second phase of communication, the legitimate parties can detect $\mathcal{A}$ with probability $1-{p_4}^{3d}>0$, where $d$ is the number of decoy photons. \item \textbf{Man-in-the-middle attack}\\ For this attack, $\mathcal{A}$ prepares three sequences $q_A',q_B'$ and $q_C'$ of single-qubit states, each of length $m$, whose elements are randomly selected from $\ket{0}, \ket{1}, \ket{+}$ and $\ket{-}$. When Alice, Bob, and Charlie send their prepared sequences of qubits $q_A, q_B$ and $q_C$ to the UFP, $\mathcal{A}$ intercepts $q_A, q_B$, $q_C$ and keeps them with her. Instead of $q_A, q_B$ and $q_C$, she sends $q_A',q_B'$ and $q_C'$ to the UFP. Note that Alice, Bob, and Charlie apply random permutations on their respective sequences of qubits, and those permutations are announced only if the error estimation phase is passed after the qubits arrive at their destinations. So at the time of sending those sequences, $\mathcal{A}$ cannot simply guess a key bit and prepare her qubits. Even if she gets some of the key bits, she cannot guess the corresponding bases for the sequences of qubits $q_A, q_B$, $q_C$. Alice, Bob, and Charlie randomly choose $\delta m$ number of rounds to estimate the error in the channel (Step~\ref{3party_error1} of Protocol 1), where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they tell the positions and preparation bases of the qubits to the UFP. Next, the UFP measures each single-qubit state in the proper basis and announces the result. Alice, Bob, and Charlie reveal their respective qubits for these rounds and compare them with the results announced by the UFP. Since the elements of $q_A',q_B'$, and $q_C'$ are randomly chosen by $\mathcal{A}$, they introduce errors in the channel. 
Let us calculate the probability that Alice, Bob and Charlie can detect this eavesdropping and hence abort the protocol.\\ For each $i$, let the $i$-th qubit of Alice be $q_{A_i}$, prepared in basis $\mathcal{B}_{A_i}$, and let $\mathcal{A}$ prepare $q_{A_i}'$ in basis $\mathcal{B}_{A_i}'$. At the time of security checking, the UFP measures $q_{A_i}'$ in $\mathcal{B}_{A_i}$ and gets the result $q_{A_i}''$. Now three cases may arise: \begin{itemize} \item If $\mathcal{B}_{A_i} = \mathcal{B}_{A_i}'$ and $q_{A_i}=q_{A_i}'$, then $q_{A_i}''=q_{A_i}$ with probability $1$. \item If $\mathcal{B}_{A_i} = \mathcal{B}_{A_i}'$ and $q_{A_i} \neq q_{A_i}'$, then $q_{A_i}''=q_{A_i}$ with probability $0$. \item If $\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}'$, then $q_{A_i}''=q_{A_i}$ with probability $1/2$. \end{itemize} Thus the winning probability of $\mathcal{A}$ is \begin{equation*} \label{eq-mitm-3party} \begin{split} &\Pr(q_{A_i}''=q_{A_i}) \\ & = \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}')\Pr(\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}') \Pr(\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\\ &= \frac{1}{2}\{\Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}') + \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} \neq \mathcal{B}_{A_i}')\}\\ &= \frac{1}{2}[\Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}',~q_{A_i}=q_{A_i}') \Pr(q_{A_i}=q_{A_i}') + \\ &~~~~~~~~~~~~~~ \Pr(q_{A_i}''=q_{A_i}|~\mathcal{B}_{A_i} = \mathcal{B}_{A_i}',~q_{A_i} \neq q_{A_i}') \Pr(q_{A_i} \neq q_{A_i}') +1/2]\\ & = \frac{1}{2}\left[1 \times \frac{1}{2} + 0 \times \frac{1}{2} + \frac{1}{2}\right]=\frac{1}{2}. \end{split} \end{equation*} We can calculate the winning probabilities for $q_{B_i}$ and $q_{C_i}$ in a similar way. Hence Alice, Bob and Charlie can detect this eavesdropping with probability $1-\left(\frac{1}{2} \right)^{3\delta m}>0$. 
Again, if $\mathcal{A}$ tries to eavesdrop in the second phase of transmission of qubits (Step~\ref{msg_info} of Algorithm~\ref{3_Party_msg_recons}), Alice, Bob and Charlie can detect it in the error estimation phase (Step~\ref{3party_error3} of Algorithm~\ref{3_Party_msg_recons}) and abort the protocol. \end{enumerate} Hence our protocol is secure against a dishonest UFP, the intercept-and-resend attack, the entangle-and-measure attack, the DoS attack and the man-in-the-middle attack. \section{Multi-Party Quantum Conference} \label{sec4} In this section, we generalize our three-party quantum conference protocol to a multi-party quantum conference protocol. Suppose there are $N$ ($\geqslant 3$) parties $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$, each of whom wants to send their message to the other $N-1$ parties with the help of an untrusted $(N+1)$-th party $\mathcal{P}_{(N+1)}$, who may be an eavesdropper. Let the $m$-bit messages of $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ be ${M_1}=M_{1,1} M_{1,2}\ldots M_{1,m};\:~ {M_2}=M_{2,1} M_{2,2}\ldots M_{2,m} ;\: \ldots; \: M_N=M_{N,1} M_{N,2}\ldots M_{N,m}$ respectively, where $M_{i,j}$ is the $j$-th message bit of the $i$-th party $\mathcal{P}_i$. To accomplish this task, they first share an $m$-bit key $k=k_1k_2\ldots k_m$, and according to the key they prepare their sequences of qubits to encode their message bits. The encoding algorithm is the same as in the three-party case, i.e., Subroutine 1. Then they send their qubit sequences to $\mathcal{P}_{(N+1)}$, who measures each $N$-qubit state in the $\mathcal{B}_N$ basis and announces the result publicly. Depending on the measurement results, their own message bits and key bits, each of them prepares another sequence of qubits, which contains some encoded message bits and some decoy photons, and sends it to the next party circularly.
By measuring these qubits in the appropriate bases, each of them gets the message bits of the previous party, while the states of the qubits corresponding to the message bits remain the same. Each party adds some decoy photons to the message-qubit sequence of the previous party and sends it to the next party circularly, and this process is repeated $N-2$ times. From the previous measurement results announced by $\mathcal{P}_{(N+1)}$, each party can then recover the $N-1$ messages of the other $N-1$ parties. Details are given in Section~\ref{N-conf}. Note that for $N=3$, the protocol given in Section~\ref{N-conf} reduces to the three-party protocol of Section~\ref{conf}. \subsection{Protocol 2: $N$-Party Quantum Conference} \label{N-conf} The steps of the protocol are as follows: \begin{enumerate} \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ perform a multi-party QKD protocol (e.g.,~\cite{liu2013multiparty}) to establish an $m$-bit secret key $k=k_1k_2\ldots k_m$ between themselves. \item Let the $m$-bit message of $\mathcal{P}_i$ be ${M_i}=M_{i,1} M_{i,2}\ldots M_{i,m}$ for $i=1,2,\ldots,N$. \item For $i=1,2,\ldots,N$, the $i$-th party $\mathcal{P}_i$ prepares the sequence of qubits ${Q_i}=\{Q_i[j]\}_{j=1}^m=(Q_{i,1},Q_{i,2},\ldots ,$ $Q_{i,m})$ at its end by using Subroutine 1. \item $\mathcal{P}_i$ chooses a random permutation, applies it to its sequence of qubits $Q_i$, and gets a new sequence of qubits $q_i$, for $i=1,2,\ldots,N$. \item They send the prepared qubits $q_1,q_2,\ldots, q_N$ to $\mathcal{P}_{(N+1)}$. \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ randomly choose $\delta m$ common positions on the sequences $Q_1 ,Q_2,$ $ \ldots, Q_N$ to estimate the error in the channel, where $\delta \ll 1$ is a small fraction.
Corresponding to these rounds, they do the following: \label{N-party_error1} \begin{enumerate} \item Each participant tells the positions and the preparation bases of the qubits for those rounds to $\mathcal{P}_{(N+1)}$. \item $\mathcal{P}_{(N+1)}$ measures each single-qubit state in the proper basis and announces the results. \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ reveal their respective qubits for these rounds and compare them with the results announced by $\mathcal{P}_{(N+1)}$. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \end{enumerate} \item $\mathcal{P}_{(N+1)}$ asks $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ to reveal the permutations which they have applied to their sequences. \item $\mathcal{P}_{(N+1)}$ applies the inverse permutations, corresponding to the permutations chosen by $\mathcal{P}_1, \mathcal{P}_2,$ $ \ldots, \mathcal{P}_N$, on $q_1,q_2,\ldots, q_N$ to get $Q_1 ,Q_2,\ldots, Q_N$ respectively. \item They discard the qubits corresponding to the above $\delta m$ positions. Their remaining sequences of prepared qubits are relabeled as ${Q_1}=\{Q_1[i]\}_{i=1}^{m'}$, ${Q_2}=\{Q_2[i]\}_{i=1}^{m'}$, $\ldots $, ${Q_N}=\{Q_N[i]\}_{i=1}^{m'}$, where $m'=(1-\delta) m$. \item They update their $m$-bit key to an $m'$-bit key by discarding the $\delta m$ key bits corresponding to the above $\delta m$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{m'}$. \item For $1\leqslant i \leqslant m'$, $\mathcal{P}_{(N+1)}$ measures each $N$-qubit state $Q_{1,i},Q_{2,i},\ldots ,Q_{N,i}$ in the basis $\mathcal{B}_N$ and announces the result.
\item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ make a finite sequence $\{\mathcal{M}[i]\}_{i=1}^{m'}$ containing the measurement results, i.e., for $1\leqslant i \leqslant m'$, $\mathcal{M}[i]\in \{\ket{\Phi_{0}^{+}},\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}},\ldots , \ket{\Phi_{2^{(N-1)}-1}^{+}}, \ket{\Phi_{2^{(N-1)}-1}^{-}}\}$ is the $i$-th measurement result announced by $\mathcal{P}_{(N+1)}$. \item They randomly choose $\gamma m'$ measurement results $\mathcal{M}[i]$ from the sequence $\{\mathcal{M}[i]\}_{i=1}^{m'}$ to estimate the error, where $\gamma \ll 1$ is a small fraction. \begin{enumerate} \item They reveal their respective message bits for these rounds. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \end{enumerate} \item Their remaining sequence of measurement results is relabeled as $\{\mathcal{M}[i]\}_{i=1}^{n}$, where $n=(1-\gamma) m'$. \item They update their $m'$-bit key to an $n$-bit key by discarding the $\gamma m'$ key bits corresponding to the above $\gamma m'$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{n}$. \item For $1\leqslant \alpha \leqslant N$, $\mathcal{P}_\alpha$ uses Algorithm~\ref{N_Message Reconstruction} to recover the others' messages. \end{enumerate} Note that in this protocol, there are two error estimation phases. The first one checks whether there is any adversary (other than $\mathcal{P}_{(N+1)}$) in the channel who tries to get some information about the messages or to change them. If this first error estimation phase does not pass, the participants abort the protocol. Thus in this step, the motivation for $\mathcal{P}_{(N+1)}$ to behave correctly is that there is no information gain if the parties abort the protocol. The second error estimation phase checks whether any error is introduced by $\mathcal{P}_{(N+1)}$.
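The bookkeeping in the two estimation phases amounts to discarding the sampled positions from both the sequences and the key, shrinking $m \to m'=(1-\delta)m \to n=(1-\gamma)m'$. A toy sketch (our own helper, not part of the protocol):

```python
def discard_positions(seq, key, sampled):
    """Drop the sampled rounds from a sequence and relabel the key to match."""
    sampled = set(sampled)
    kept = [i for i in range(len(seq)) if i not in sampled]
    return [seq[i] for i in kept], [key[i] for i in kept]
```

Applying it once after each estimation phase reproduces the two relabeling steps in Protocol 2.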
\subsection{Correctness and Security Analysis of $N$-Party Quantum Conference Protocol} In our proposed protocol, for $1\leqslant \alpha \leqslant N$, each $\mathcal{P}_\alpha$ first prepares qubits corresponding to his (her) message and the shared key and then sends those qubits to $\mathcal{P}_{(N+1)}$. After that, $\mathcal{P}_{(N+1)}$ measures each $N$-qubit state (one from each $\mathcal{P}_\alpha$) in basis $\mathcal{B}_N=\{\ket{\Phi_{0}^{+}}, \ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}}, \ldots,$ $ \ket{\Phi_{2^{(N-1)}-1}^{+}},\ket{\Phi_{2^{(N-1)}-1}^{-}}\}$ and announces the result. Now for $1 \leqslant i \leqslant m$, if $k_i=0$ (i.e., the preparation basis of each ${Q^\alpha}_i$ is $\{\ket{0},\ket{1}\}$) and the $N$-qubit state is $\ket{j}=\ket{j_1}\ket{j_2}\ldots \ket{j_N}$ or $\ket{2^N-1-j}=\ket{j'}=\ket{{j'}_1}\ket{{j'}_2}\ldots $ $\ket{{j'}_N}$, then after measurement, $\mathcal{P}_{(N+1)}$ will get $\ket{\Phi_{j}^{+}}$ or $\ket{\Phi_{j}^{-}}$, each with probability $1/2$. Again, if $k_i=1$ (i.e., the preparation basis of each ${Q^\alpha}_i$ is $\{\ket{+},\ket{-}\}$) and there is an even number of indices $\alpha$ such that $Q_{\alpha,i}=\ket{-}$, then $\mathcal{P}_{(N+1)}$ will get $\ket{\Phi_{j}^{+}}$ ($j \in \{0,1,\ldots ,2^{(N-1)}-1\}$) with probability $1/{2^{(N-1)}}$. Else if $k_i=1$ and there is an odd number of indices $\alpha$ such that $Q_{\alpha,i}=\ket{-}$, then $\mathcal{P}_{(N+1)}$ will get $\ket{\Phi_{j}^{-}}$ ($j \in \{0,1,\ldots ,2^{(N-1)}-1\}$) with probability $1/{2^{(N-1)}}$. For better understanding, we give the table for $N=4$ (Table~\ref{4_Party_table} in Appendix A).
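The parity rule in the $k_i=1$ case is purely combinatorial and can be stated in one line of code (an illustrative helper of our own, with qubit states represented as the strings '+' and '-'):

```python
def measurement_sign(states):
    """k_i = 1 case: the sign of the announced state |Phi_j^+/-> is '+'
    exactly when an even number of the N prepared qubits is |->."""
    return '+' if states.count('-') % 2 == 0 else '-'
```

This mirrors the $N=4$ table: the index $j$ is uniformly random over $\{0,\ldots,2^{N-1}-1\}$, but the sign deterministically carries the joint parity.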
Now for $1 \leqslant i \leqslant m$ and $1 \leqslant \alpha \leqslant N$, if $k_i=0$, we can say the following: if the prepared qubit of $\mathcal{P}_\alpha$ is $\ket{0}$ or $\ket{1}$, then $\mathcal{P}_\alpha$ determines the message bits of the other parties with probability $1$ as follows: $\mathcal{M}[i]=\ket{\Phi_{j}^{+}} \text{ or } \ket{\Phi_{j}^{-}} \Rightarrow $ the $N$-qubit state was $\ket{j}$ or $\ket{2^N-1-j}$. Since $\ket{2^N-1-j}=\ket{\bar{j_1}}\ket{\bar{j_2}}\ldots \ket{\bar{j_N}}$, from his/her own message bit, $\mathcal{P}_\alpha$ can get the others' message bits. If the prepared qubit of $\mathcal{P}_\alpha$ is $\ket{+}$ or $\ket{-}$, then $\mathcal{P}_\alpha$ determines the XOR of the message bits of all parties with probability $1$ as follows: \begin{equation*} \text{Measurement result} = \begin{cases} \ket{\Phi_{j}^{+}} \Rightarrow & M_{1,i}\oplus M_{2,i} \oplus \ldots \oplus M_{N,i}=0;\\ \ket{\Phi_{j}^{-}} \Rightarrow & M_{1,i}\oplus M_{2,i} \oplus \ldots \oplus M_{N,i}=1, \end{cases} \end{equation*} for some $j \in \{0,1,\ldots ,2^{(N-1)}-1\}$. In this case, $\mathcal{P}_1,\mathcal{P}_2,\ldots,\mathcal{P}_{(\alpha-1)},\mathcal{P}_{(\alpha+2)},\ldots, \mathcal{P}_{(N-1)},\mathcal{P}_{N}$ send their encoded qubits to $\mathcal{P}_\alpha$ (the encoding algorithm is given in Step~\ref{N-2nd_encoding} of Algorithm~\ref{N_Message Reconstruction}). Since $\mathcal{P}_\alpha$ knows the bases of the received qubits, by measuring the qubits in the proper bases, $\mathcal{P}_\alpha$ can learn the message bits $M_{1,i}, M_{2,i},\ldots , M_{{(\alpha-1)},i}, M_{{(\alpha+2)},i},$ $\ldots, M_{N,i}$. Then from the XOR value, $\mathcal{P}_\alpha$ can get $M_{{(\alpha+1)},i}$ as well. From the above discussion, we see that in all cases, all parties can conclude the communicated bits of the other parties with probability $1$. Hence our protocol gives the correct result.
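The decoding logic just described can be sketched classically (hypothetical helper names; party positions $1,\ldots,N$ are mapped to indices $0,\ldots,N-1$): when $k_i=0$, the announced index $j<2^{N-1}$ fixes the joint state up to bitwise complementation, which a party's own bit resolves; when $k_i=1$, only the sign, i.e., the XOR of all message bits, is learned.

```python
def decode_z(j, own_bit, own_pos, n):
    """k_i = 0: result |Phi_j^+/-> means the joint state was |j> or |2^N-1-j>."""
    bits = [(j >> (n - 1 - p)) & 1 for p in range(n)]
    if bits[own_pos] != own_bit:       # then it must have been the complement
        bits = [b ^ 1 for b in bits]
    return bits

def decode_x(sign):
    """k_i = 1: the sign alone encodes M_1 xor ... xor M_N."""
    return 0 if sign == '+' else 1
```

For $N=3$ and announced index $j=2$ (binary $010$), a party at position $0$ holding bit $1$ infers that the joint state was the complement $\ket{101}$.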
The security analysis is the same as the three-party quantum conference protocol and so we will not repeat it here. \begin{algorithm} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \KwIn{Own message ${M_\alpha}$, key $k$, joint measurement results $\{\mathcal{M}[i]\}_{i=1}^{n}$ announced by $\mathcal{P}_{(N+1)}$.} \KwOut{Others' messages $M_1, M_2,\ldots, M_{(\alpha-1)},M_{(\alpha+1)}, \ldots,M_N$.} \begin{small} \begin{enumerate} \item For $1\leqslant i \leqslant n$, if $k_i = 0$,\\ $\mathcal{P}_\alpha$ can learn the $i$-th bit of others' messages from the measurement result $\mathcal{M}[i]$ and his(her) own message (same as three party quantum conference, e.g., see Table~\ref{4_Party_table} for $N=4$). \item For $1\leqslant i \leqslant n$, if $k_i = 1$,\\ from the measurement result $\mathcal{M}[i]$ and his (her) own message, $\mathcal{P}_\alpha$ can learn the XOR value of the $i$-th bit of all $N$ messages. If $\mathcal{M}[i]=\ket{\Phi_{l}^{+}}$ for some $l \in \{0,1,\ldots,2^{(N-1)}-1\}$, then the value of $\chi_i=M_{1,i} \oplus M_{2,i} \oplus \ldots \oplus M_{N,i}$ becomes $0$, else $\chi_i=1$. Let $c=wt(k)$. \begin{enumerate} \item $\mathcal{P}_\alpha$ prepares an ordered set of $c$ qubits $S_{\alpha}$, corresponding to his (her) message bit where the key bit is $1$. He (she) prepares the qubits at his (her) end according to the following strategy. For $1\leqslant j \leqslant c$ and if $k_i=1$ is the $j$-th $1$ in $k$, then \begin{itemize} \label{N-2nd_encoding} \item if $M_{\alpha,i} = 0$ and $i$ is even, prepares $S_{\alpha}[j]=\ket{0}$. \item if $M_{\alpha,i} = 1$ and $i$ is even, prepares $S_{\alpha}[j]=\ket{1}$. \item if $M_{\alpha,i} = 0$ and $i$ is odd, prepares $S_{\alpha}[j]=\ket{+}$. \item if $M_{\alpha,i} = 1$ and $i$ is odd, prepares $S_{\alpha}[j]=\ket{-}$. \end{itemize} \item There are $N-2$ rounds. 
\begin{itemize} \item \textbf{$1$st round:} \begin{enumerate}[label={1-\arabic*.}] \item $\mathcal{P}_\alpha$ prepares a set of decoy photons $D_{\alpha,1}$, where the decoy photons are randomly chosen from $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$. He (she) randomly inserts his (her) decoy photons into $S_{\alpha}$ and makes new ordered sets ${S_{\alpha}}^1$. $\mathcal{P}_\alpha$ sends ${S_{\alpha}}^1$ to $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ and receives ${S^1_{(\alpha-1)(Mod ~N)}}$ from $\mathcal{P}_{(\alpha-1)(Mod ~N)}$. \item After $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ receives ${S_{\alpha}}^1$, $\mathcal{P}_\alpha$ sends the positions and states of $D_{\alpha,1}$ to $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ through a public channel. Also $\mathcal{P}_\alpha$ receives the positions and states of $D_{(\alpha-1) (Mod ~N),1}$. \item Then $\mathcal{P}_\alpha$ verifies the decoy photons to check eavesdropping. If there exists any eavesdropper in the quantum channel it aborts the protocol, else it goes to the next step. \item $\mathcal{P}_\alpha$ measures the qubits of $S_{(\alpha-1) (Mod ~N)}$ in proper bases and knows the corresponding message bits of $\mathcal{P}_{(\alpha-1) (Mod ~N)}$. Also after measurements in the proper bases, the states of the qubits of $S_{(\alpha-1) (Mod ~N)}$ remain unchanged. \end{enumerate} \item \textbf{$l$-th round ($2\leqslant l \leqslant N-2$):} \begin{enumerate}[label={l-\arabic*.}] \item $\mathcal{P}_\alpha$ prepares a set of decoy photons $D_{\alpha,l}$, where the decoy photons are randomly chosen from $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$. He (she) randomly inserts his (her) decoy photons into $S_{(\alpha-l+1) (Mod ~N)}$ and makes new ordered sets ${S_{\alpha}}^l$. $\mathcal{P}_\alpha$ sends ${S_{\alpha}}^l$ to $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ and receives ${S_{(\alpha-1) (Mod ~N)}}^l$ from $\mathcal{P}_{(\alpha-1) (Mod ~N)}$. 
\item After $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ receives ${S_{\alpha}}^l$, $\mathcal{P}_\alpha$ sends the positions and states of $D_{\alpha,l}$ to $\mathcal{P}_{(\alpha+1) (Mod ~N)}$ through a public channel. Also $\mathcal{P}_\alpha$ receives the positions and states of $D_{(\alpha-1) (Mod ~N),l}$. \item Then $\mathcal{P}_\alpha$ verifies the decoy photons to check for eavesdropping. If there exists any eavesdropper in the quantum channel, it aborts the protocol. Else it goes to the next step. \item $\mathcal{P}_\alpha$ measures the qubits of $S_{(\alpha-l) (Mod ~N)}$ in the proper bases and learns the corresponding message bits of $\mathcal{P}_{(\alpha-l) (Mod ~N)}$. Also, after measurements in the proper bases, the states of the qubits of $S_{(\alpha-l) (Mod ~N)}$ remain unchanged. \end{enumerate} \end{itemize} \item $\mathcal{P}_\alpha$ gets all the message bits of the previous $N-2$ participants. As $\mathcal{P}_\alpha$ knows $\chi_i$ and its own message bit, it gets all the other $N-1$ message bits. \end{enumerate} \end{enumerate} \end{small} \caption{$N$-Party Message Reconstruction Algorithm for $\mathcal{P}_\alpha$. } \label{N_Message Reconstruction} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \end{algorithm} \section{Multi-party XOR Computation} \label{sec5} In this section, we present a protocol for multi-party XOR computation. Suppose there are $N$ parties $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$, each of whom has an $m$-bit number. Let the $m$-bit numbers of $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ be ${M_1}=M_{1,1} M_{1,2}\ldots M_{1,m};\:~ {M_2}=M_{2,1} M_{2,2}\ldots M_{2,m} ;\: \ldots; \: M_N=M_{N,1} M_{N,2}\ldots $ $ M_{N,m}$ respectively, where $M_{i,j}$ is the $j$-th bit of the $i$-th party $\mathcal{P}_i$'s number. They want to compute $M_1\oplus M_2\oplus \ldots \oplus M_N$ securely, such that their numbers remain private.
To execute this protocol, they take help from an untrusted $(N+1)$-th party $\mathcal{P}_{(N+1)}$. Also, one participant among $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ must be semi-honest (i.e., one who follows the protocol properly), who has to play a vital role in this computation. Let $\mathcal{P}_1$ be the semi-honest participant. The other participants are only allowed to prepare and send the states corresponding to their numbers. If the other participants do not follow the protocol properly (i.e., they prepare states corresponding to numbers other than their own), then the computed value will be incorrect, which they certainly do not want. To compute $M_1\oplus M_2\oplus \ldots \oplus M_N$, first $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ have to share a $2m$-bit key $k=k_1k_2\ldots k_{2m}$, and according to the key they prepare their sequences of qubits to encode their numbers. The encoding algorithm is almost the same as in the conference protocols. Then they send their qubit sequences to $\mathcal{P}_{(N+1)}$, who measures each $N$-qubit state in the $\mathcal{B}_N$ basis and announces the result publicly. Then from this announcement and the key, they get the XOR value of their numbers. Details of this protocol are given in Section~\ref{algo:XOR}. \subsection{Protocol 3: Multi-party XOR Computation}\label{algo:XOR} \textbf{Input:} The $m$-bit numbers ${M_1}=M_{1,1} M_{1,2}\ldots M_{1,m};\: {M_2}=M_{2,1} M_{2,2}\ldots M_{2,m} ;\: \ldots; \: M_N=M_{N,1}$ $ M_{N,2}\ldots M_{N,m}$ of the $N$ parties $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ respectively. \\ \textbf{Output:} $M_1\oplus M_2\oplus \ldots \oplus M_N$. The steps of the protocol are as follows: \begin{enumerate} \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ perform a multi-party QKD protocol~\cite{matsumoto2007multiparty} to establish a $2m$-bit secret key $k=k_1k_2\ldots k_{2m}$ between themselves.
\item \begin{enumerate} \item If $wt(k) = m$, then calculate $c=\oplus k_i$, $1\leq i \leq 2m$. \item Else if $wt(k)>m$, then $c=1$. \item Else $c=0$. \end{enumerate} \item $\mathcal{P}_1$ prepares an $m$-bit random number $k'=k'_1k'_2 \ldots k'_m$ and sends it to $\mathcal{P}_2, \ldots, \mathcal{P}_N$ by using Algorithm~\ref{send_number} with the inputs $k'$ and $k$. \item $\mathcal{P}_1$ calculates $M_{1_\Delta}=M_1 \oplus k'$ and uses $M_{1_\Delta}$ as his/her number. \item $\mathcal{P}_1$ generates a $2m$-bit string ${M'}_1$ from his/her number and the key in such a way that, for $1\leq i \leq 2m$ and $1\leq j \leq m$: \begin{enumerate} \item if $k_i=c$ and $j \leq m$, then ${M'}_{1,i}=M_{1_\Delta,j}$, $i=i+1$, $j=j+1$; \item else, ${M'}_{1,i}=x$, where $x \in \{0,1\}$ is random, and $i=i+1$. \end{enumerate} \item For $2 \leqslant \alpha \leqslant N$: $\mathcal{P}_\alpha$ generates a $2m$-bit string ${M'}_\alpha$ from his/her own number as follows. For $1\leq i \leq 2m$ and $1\leq j \leq m$: \begin{enumerate} \item if $k_i=c$ and $j \leq m$, then ${M'}_{\alpha,i}=M_{\alpha,j}$, $i=i+1$, $j=j+1$; \item else, ${M'}_{\alpha,i}=x$, where $x \in \{0,1\}$ is random, and $i=i+1$. \end{enumerate} \item Each of $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ prepares the sequence of qubits ${Q_1}=\{Q_1[i]\}_{i=1}^{2m}=(Q_{1,1},Q_{1,2},\ldots , $ $Q_{1,{2m}});$ ${Q_2}=\{{Q_2}[i]\}_{i=1}^{2m}=(Q_{2,1},Q_{2,2},\ldots, Q_{2,{2m}});\: \ldots ;\: {Q_N}=\{{Q_N}[i]\}_{i=1}^{2m}=(Q_{N,1},Q_{N,2},$ $ \ldots, Q_{N,{2m}})$ at their end by using Algorithm~\ref{xor_encode}. \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ choose some random permutations and apply them to their respective sequences of qubits $Q_1 ,Q_2,\ldots, Q_N$ to get new sequences of qubits $q_1,q_2,\ldots, q_N$. They send their prepared sequences of qubits $q_1,q_2,\ldots, q_N$ to $\mathcal{P}_{(N+1)}$.
\item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ randomly choose $2\delta m$ common positions on the sequences $Q_1 ,Q_2, \ldots,$ $ Q_N$ to estimate the error in the channel, where $\delta \ll 1$ is a small fraction. Corresponding to these rounds, they do the following: \label{XOR_error1} \begin{enumerate} \item Each participant tells the positions and preparation bases of the qubits for those rounds to $\mathcal{P}_{(N+1)}$. \item $\mathcal{P}_{(N+1)}$ measures each single-qubit state in the proper basis and announces the results. \item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ reveal their respective qubits for these rounds and compare them with the results announced by $\mathcal{P}_{(N+1)}$. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \end{enumerate} \item $\mathcal{P}_{(N+1)}$ asks $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ to reveal the permutations which they have applied to their sequences. \item $\mathcal{P}_{(N+1)}$ applies the inverse permutations, corresponding to the permutations chosen by $\mathcal{P}_1, \mathcal{P}_2,$ $ \ldots, \mathcal{P}_N$, on $q_1,q_2,\ldots, q_N$ to get $Q_1 ,Q_2,\ldots, Q_N$ respectively. \item They discard the qubits corresponding to the above $2\delta m$ positions. Their remaining sequences of prepared qubits are relabeled as ${Q_1}=\{Q_1[i]\}_{i=1}^{2m'}$, ${Q_2}=\{Q_2[i]\}_{i=1}^{2m'}$, $\ldots $, ${Q_N}=\{Q_N[i]\}_{i=1}^{2 m'}$, where $m'=(1-\delta) m$. \item They update their $2m$-bit key to a $2m'$-bit key by discarding the $2\delta m$ key bits corresponding to the above $2\delta m$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{2m'}$. \item For $1\leqslant i \leqslant 2m'$, $\mathcal{P}_{(N+1)}$ measures each $N$-qubit state $Q_{1,i},Q_{2,i},\ldots ,Q_{N,i}$ in the basis $\mathcal{B}_N$ and announces the result.
\item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ make a finite sequence $\{\mathcal{M}[i]\}_{i=1}^{2m'}$ containing the measurement results, i.e., for $1\leqslant i \leqslant 2m'$, $\mathcal{M}[i]\in \{\ket{\Phi_{0}^{+}},\ket{\Phi_{0}^{-}},\ket{\Phi_{1}^{+}},\ket{\Phi_{1}^{-}},\ldots , \ket{\Phi_{2^{(N-1)}-1}^{+}}, \ket{\Phi_{2^{(N-1)}-1}^{-}}\}$ is the $i$-th measurement result announced by $\mathcal{P}_{(N+1)}$. \item They randomly choose $2\gamma m'$ measurement results $\mathcal{M}[i]$ from the sequence $\{\mathcal{M}[i]\}_{i=1}^{2m'}$ to estimate the error, where $\gamma \ll 1$ is a small fraction. \begin{enumerate} \item For these rounds, they reveal the respective bits of their numbers. \item If the estimated error is greater than some predefined threshold value, then they abort. Else they continue and go to the next step. \end{enumerate} \item Their remaining sequence of measurement results is relabeled as $\{\mathcal{M}[i]\}_{i=1}^{2n}$, where $n=(1-\gamma) m'$. \item They update their $2m'$-bit key to a $2n$-bit key by discarding the $2\gamma m'$ key bits corresponding to the above $2\gamma m'$ rounds. The updated key is relabeled as $k=k_1k_2\ldots k_{2n}$. \item For $1\leqslant i \leqslant 2n$, \begin{enumerate} \item if $k_i = \bar{c}$, then each participant can learn the $i$-th bit of the others' numbers from the measurement result $\mathcal{M}[i]$ and their own number (see Section~\ref{N-conf}). \item Else, from the measurement result $\mathcal{M}[i]$, each participant can learn the XOR value of the $i$-th bit of all $N$ numbers. If $\mathcal{M}[i]=\ket{\Phi_{l}^{+}}$ for some $l \in \{0,1,\ldots,2^{(N-1)}-1\}$, then the value of $\chi_i=M_{1_\Delta,i} \oplus M_{2,i} \oplus \ldots \oplus M_{N,i}$ becomes $0$, else $\chi_i=1$.\label{xor__msg_info} \end{enumerate} \item Combining the knowledge from Step~\ref{xor__msg_info} and the key, they can get $M_{1_\Delta} \oplus M_2 \oplus \ldots \oplus M_N$.
\item $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_N$ calculate $M_1 \oplus M_2 \oplus \ldots \oplus M_N= k' \oplus M_{1_\Delta} \oplus M_2 \oplus \ldots \oplus M_N$. \end{enumerate} \begin{algorithm} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \KwIn{Random number $k'=k'_1k'_2\ldots k'_m$ chosen by $\mathcal{P}_1$, key $k=k_1k_2\ldots k_{2m}$. } \KwOut{For $2 \leqslant \alpha \leqslant N$, $\mathcal{P}_\alpha$ has $k'$.} \begin{enumerate} \item To encode the random number $k'$, $\mathcal{P}_1$ prepares $N-1$ sets of qubits $Q_\alpha=Q_{\alpha,1}Q_{\alpha,2}\ldots Q_{\alpha,m}$ for $\mathcal{P}_\alpha$ ($2 \leqslant \alpha \leqslant N$), by using the following strategy: for $1 \leqslant i \leqslant m$ and $2 \leqslant \alpha \leqslant N$, \begin{enumerate} \item if $k'_i=0$ and $k_i=0 \Rightarrow Q_{\alpha,i}=\ket{0}$ \item if $k'_i=1$ and $k_i=0 \Rightarrow Q_{\alpha,i}=\ket{1}$ \item if $k'_i=0$ and $k_i=1 \Rightarrow Q_{\alpha,i}=\ket{+}$ \item if $k'_i=1$ and $k_i=1 \Rightarrow Q_{\alpha,i}=\ket{-}$ \end{enumerate} \item For $2 \leqslant \alpha \leqslant N$, $\mathcal{P}_1$ chooses a set of decoy photons $D_\alpha$, randomly inserts those decoy photons into $Q_\alpha$, and gets a new set of qubits $q_\alpha$. \item $\mathcal{P}_1$ sends $q_\alpha$ to $\mathcal{P}_\alpha$. \item All $\mathcal{P}_\alpha$ inform $\mathcal{P}_1$ that they have received $q_\alpha$. \item $\mathcal{P}_1$ announces the positions and states of the decoy photons. \item Each $\mathcal{P}_\alpha$ measures the decoy photons in the appropriate bases and calculates the error in the channel (i.e., checks whether there is any eavesdropper). \item If the error rate is in a tolerable range, then $\mathcal{P}_\alpha$ measures the qubits of $Q_\alpha$ in the appropriate bases (determined by the key) and gets $k'$.
\end{enumerate} \caption{Algorithm for Sending a Number to the Other $N-1$ Participants.} \label{send_number} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \end{algorithm} \begin{algorithm} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \KwIn{$M'_\alpha$ = $2m$-bit message of $\mathcal{P}_\alpha$, key $k=k_1k_2\ldots k_{2m}$. } \KwOut{Sequence of qubits ${Q_\alpha}=\{Q_\alpha[i]\}_{i=1}^{2m}=(Q_{\alpha,1},Q_{\alpha,2},\ldots ,Q_{\alpha,{2m}})$.} \begin{enumerate} \item \begin{enumerate} \item If $wt(k) = m$, then calculate $c=\oplus k_i$, $1\leq i \leq 2m$. \item Else if $wt(k)>m$, then $c=1$. \item Else $c=0$. \end{enumerate} \item For $1 \leqslant i \leqslant 2m$, \begin{enumerate} \item if ${M'}_{\alpha,i} = 0$ and $k_i = \bar{c}$, set $Q_{\alpha,i}=\ket{0}$; \item if ${M'}_{\alpha,i}= 1$ and $k_i = \bar{c}$, set $Q_{\alpha,i}=\ket{1}$; \item if ${M'}_{\alpha,i}= 0$ and $k_i = {c}$, set $Q_{\alpha,i}=\ket{+}$; \item if ${M'}_{\alpha,i}= 1$ and $k_i = c$, set $Q_{\alpha,i}=\ket{-}$. \end{enumerate} \end{enumerate} \caption{Message Encoding Algorithm for Multi-party XOR Computation.} \label{xor_encode} \setlength{\textfloatsep}{0.05cm} \setlength{\floatsep}{0.05cm} \end{algorithm} \subsection{Correctness and Security Analysis of the Quantum Protocol for Multi-party XOR Computation} The correctness of this protocol directly follows from the previous one (i.e., the multi-party quantum conference protocol). Also, this protocol is secure against the intercept-and-resend attack, the disturbance attack, the entangle-and-measure attack, and a dishonest $\mathcal{P}_{(N+1)}$, as it is a part of the previous protocol discussed in the last section. Now, we only have to prove that no one other than the legitimate parties can get the computed XOR value.
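The qubit-preparation rule used in the number-sending algorithm above is the standard BB84-style mapping: the key bit selects the basis and the data bit selects the state (in the encoding algorithm, $\bar{c}$ plays the role of key bit $0$). A minimal sketch, with our own names and with states represented as string labels:

```python
# key bit -> basis, data bit -> state; mirrors the four cases above
ENCODE = {
    (0, 0): '|0>',   # data 0, key 0: computational basis
    (1, 0): '|1>',
    (0, 1): '|+>',   # key 1: Hadamard basis
    (1, 1): '|->',
}

def encode_number(bits, key):
    """Prepare one qubit label per (data bit, key bit) pair."""
    return [ENCODE[(b, k)] for b, k in zip(bits, key)]
```

A receiver who knows the key measures each position in the basis the key bit selects and recovers the data bits deterministically.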
Let an adversary $\mathcal{A}$ construct a $2m$-bit string $\tau=\tau_1\tau_2\ldots \tau_{2m}$ from the measurement results in such a way that, if $\mathcal{M}[i]=\ket{\Phi_{l}^{+}}$ for some $l \in \{0,1,\ldots,2^{(N-1)}-1\}$, then $\tau_i=0$, else if $\mathcal{M}[i]=\ket{\Phi_{l}^{-}}$ for some $l \in \{0,1,\ldots,2^{(N-1)}-1\}$, then $\tau_i=1$. Now the $m$-bit string $\eta=M_{1_\Delta}\oplus M_2\oplus \ldots \oplus M_N$ is a subsequence of $\tau$. Even if $\mathcal{A}$ can guess $\eta$ from $\tau$ with some small probability, it still cannot get any information about $\mu= M_1\oplus M_2\oplus \ldots \oplus M_N$, as $\mu=\eta \oplus k'$, where $k'$ is unknown to him/her. Then from the notion of security of the well-known ``one-time pad'' protocol~\cite{Shannon1949}, we can say that our proposed protocol is secure. It is to be noted that if $\mathcal{P}_1$ is dishonest, then he/she can cheat and get the exact XOR value, whereas the other participants get some random value instead of the exact XOR value. This happens in the following way: $\mathcal{P}_1$ calculates $M_{1_\Delta}=M_1 \oplus R$, where $R\neq k'$ is a random $m$-bit number used instead of $k'$. Then $\mathcal{P}_1$ follows all the next steps of the protocol. At the end of the protocol, everyone gets $M_{1_\Delta}\oplus M_2 \oplus \ldots \oplus M_N$. Then $\mathcal{P}_2, \ldots, \mathcal{P}_N$ calculate $M_1 \oplus M_2 \oplus \ldots \oplus M_N= k' \oplus M_{1_\Delta} \oplus M_2 \oplus \ldots \oplus M_N$, which is incorrect as $R\neq k'$. But $\mathcal{P}_1$ calculates $M_1 \oplus M_2 \oplus \ldots \oplus M_N = R \oplus M_{1_\Delta} \oplus M_2 \oplus \ldots \oplus M_N$, which is correct. That is, after executing the protocol, $\mathcal{P}_1$ has the exact value of $M_1 \oplus M_2 \oplus \ldots \oplus M_N$ and the other participants have the value of $k' \oplus R \oplus M_1 \oplus M_2 \oplus \ldots \oplus M_N$, which is nothing but a random number.
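The masking argument can be illustrated with a toy bitwise computation (hypothetical helpers; the quantum steps are abstracted away as a plain XOR): because $\mathcal{P}_1$ contributes $M_{1_\Delta}=M_1\oplus k'$, the publicly computable value is $\eta = M_{1_\Delta}\oplus M_2 \oplus \ldots \oplus M_N$, and only holders of $k'$ can unmask $\mu=\eta\oplus k'$.

```python
from functools import reduce

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

def public_value(numbers, k_prime):
    # P_1 masks its number with k' before the joint computation
    masked_first = xor_bits(numbers[0], k_prime)
    return reduce(xor_bits, numbers[1:], masked_first)   # eta

def unmask(eta, k_prime):
    return xor_bits(eta, k_prime)                        # mu, the true XOR
```

Replacing `k_prime` by an arbitrary $R$ in the masking step reproduces the dishonest-$\mathcal{P}_1$ scenario above: the unmasking with $k'$ then yields a random string.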
Thus here we assume that $\mathcal{P}_1$ is semi-honest, that is, he/she follows the protocol properly. Hence each participant gets the computed XOR value exactly, but no other party can get any information about the value. \section{Conclusion} \label{sec6} In this paper, we first identify that the MDI-QD protocol presented in~\cite{qip/Maitra17} is not secure against the intercept-and-resend attack, and we modify the protocol to make it secure against this attack. Then we present three more protocols, two of them for quantum conference, i.e., securely and simultaneously exchanging secret messages among the participants. The first protocol is for three parties, and then we generalize it to a multi-party scenario, i.e., to $N$ parties (where $N \geqslant 3$). The other protocol presented in this paper is for multi-party XOR computation, where $N$ parties can compute the XOR of their own numbers while their numbers remain private. All the protocols discussed above are proven to be correct and secure.
from datetime import timedelta
import copy
import re

from bs4 import BeautifulSoup

from pyiso.base import BaseClient


class ERCOTClient(BaseClient):
    NAME = 'ERCOT'
    base_report_url = 'http://mis.ercot.com'
    report_type_ids = {
        'wind_5min': '13071',
        'wind_hrly': '13028',
        'gen_hrly': '12358',
    }
    TZ_NAME = 'US/Central'

    def utcify(self, local_ts, **kwargs):
        # ERCOT is hour ending, want hour beginning
        utc_ts = super(ERCOTClient, self).utcify(local_ts, **kwargs)
        return utc_ts - timedelta(hours=1)

    def _request_report(self, report_type):
        # request reports list
        params = {'reportTypeId': self.report_type_ids[report_type]}
        report_list_contents = self.request(self.base_report_url + '/misapp/GetReports.do',
                                            params=params).content
        # pass an explicit parser to avoid bs4's "no parser specified" warning
        report_list_soup = BeautifulSoup(report_list_contents, 'html.parser')

        # find the endpoint to download
        report_endpoint = None
        for elt in report_list_soup.find_all('tr'):
            label = elt.find(class_='labelOptional_ind')
            if label:
                if 'csv' in label.string:
                    report_endpoint = self.base_report_url + elt.a.attrs['href']
                    break

        # test endpoint found
        if not report_endpoint:
            raise ValueError('ERCOT: No report available for %s, soup:\n%s' % (report_type, report_list_soup))

        # read report from zip
        r = self.request(report_endpoint)
        if r:
            content = self.unzip(r.content)
        else:
            return []

        # parse csv
        rows = content[0].decode('unicode_escape').split('\n')
        header = rows[0].split(',')
        raw_data = [dict(zip(header, self.parse_row(row))) for row in rows[1:-1]]

        # return
        return raw_data

    def is_dst(self, val, standard):
        return val != standard

    def get_generation(self, latest=False, **kwargs):
        # get nonwind gen data
        raw_gen_data = self._request_report('gen_hrly')
        assert len(raw_gen_data) == 1
        total_dp = raw_gen_data[0]
        total_gen = float(total_dp['SE_MW'])

        # get timestamp on hour
        # TODO is this what this timestamp means??
        raw_ts = self.utcify(total_dp['SE_EXE_TIME'],
                             is_dst=self.is_dst(total_dp['SE_EXE_TIME_DST'], 's'))
        ts_hour_rounded_down = raw_ts.replace(minute=0, second=0, microsecond=0)
        # if raw_ts.minute > 30:
        #     ts_hour_rounded = ts_hour_rounded_down + timedelta(hours=1)
        # else:
        #     ts_hour_rounded = ts_hour_rounded_down

        # process wind data
        wind_gen = None
        for wind_dp in self._request_report('wind_hrly'):
            wind_ts = self.utcify(wind_dp['HOUR_BEGINNING'],
                                  is_dst=self.is_dst(wind_dp['DSTFlag'], 'N'))
            if wind_ts == ts_hour_rounded_down:
                try:
                    wind_gen = float(wind_dp['ACTUAL_SYSTEM_WIDE'])
                except ValueError:  # empty string
                    wind_gen = None
                    self.logger.error('No wind data available at %s in ERCOT' % raw_ts)
                break

        # set up storage
        parsed_data = []
        base_dp = {'timestamp': ts_hour_rounded_down,
                   'freq': self.FREQUENCY_CHOICES.hourly,
                   'market': self.MARKET_CHOICES.hourly,
                   'gen_MW': 0,
                   'ba_name': self.NAME}

        # collect parsed data
        if wind_gen is not None:
            nonwind_gen = total_gen - wind_gen
            for gen_MW, fuel_name in [(wind_gen, 'wind'), (nonwind_gen, 'nonwind')]:
                parsed_dp = copy.deepcopy(base_dp)
                parsed_dp['fuel_name'] = fuel_name
                parsed_dp['gen_MW'] = gen_MW
                parsed_data.append(parsed_dp)

        # return
        return parsed_data

    def get_load(self, latest=False, **kwargs):
        # set args
        self.handle_options(data='load', latest=latest, **kwargs)

        # only can get latest load
        if not self.options['latest']:
            raise ValueError('Load only available for latest in ERCOT')

        # get load site
        response = self.request('http://www.ercot.com/content/cdr/html/real_time_system_conditions.html')

        # parse load from response
        data = self.parse_load(response.text)

        # return
        return data

    def parse_load(self, content):
        # make soup
        soup = BeautifulSoup(content, 'html.parser')

        # load is after 'Actual System Demand' text
        load_label_elt = soup.find(text='Actual System Demand')
        load_parent_elt = load_label_elt.parent.parent.parent
        load_elt = load_parent_elt.find(class_='labelValueClassBold')
        load_val = float(load_elt.text)

        # timestamp text starts with 'Last Updated'
        # note: str.strip removes a *set* of characters, not the literal
        # prefix; this happens to work for this page's timestamp format
        timestamp_elt = soup.find(text=re.compile('Last Updated'))
        timestamp_str = timestamp_elt.strip('Last Updated ')
        timestamp = self.utcify(timestamp_str)

        # assemble dp
        dp = {
            'timestamp': timestamp,
            'ba_name': self.NAME,
            'market': self.options.get('market', self.MARKET_CHOICES.fivemin),
            'freq': self.options.get('freq', self.FREQUENCY_CHOICES.fivemin),
            'load_MW': load_val,
        }

        # return
        return [dp]
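One subtlety in the client above is the hour-ending convention handled by `utcify`. A minimal standalone sketch of that convention (the helper name here is illustrative, not part of pyiso):

```python
from datetime import datetime, timedelta

def hour_ending_to_beginning(ts):
    # ERCOT labels an interval by the hour in which it *ends*;
    # subtracting one hour converts to the hour-beginning label,
    # mirroring what ERCOTClient.utcify does after timezone handling.
    return ts - timedelta(hours=1)

# The interval labelled 14:00 hour-ending covers 13:00-14:00,
# so its hour-beginning label is 13:00.
print(hour_ending_to_beginning(datetime(2019, 1, 1, 14, 0)))  # prints 2019-01-01 13:00:00
```

This is why `get_generation` compares the wind report's `HOUR_BEGINNING` stamps against an hour-rounded timestamp only after both have gone through the same shift.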
biomass pellet steam boiler for sale

Introduction. Biomass boilers are substitutes for natural gas, oil and coal boilers, offering the operating convenience of oil-fired boilers at the operating costs of coal-fired boilers. Steam is traditionally created by heating a boiler via burning coal and other fuels, but it is also possible to create steam from biomass.

Water tube boilers for sale: Indeck offers the largest inventory of water tube boilers for sale in the U.S.

Steamact is a mini steam boiler and highly reliable, packaged instant steam generator suitable for steam or hot water requirements in laundries or factory canteens. Steamact is a compact, hassle-free alternative for cutting down the operating costs of these small heating loads, replacing expensive electrical systems.

The Slant/Fin Intrepid hot water oil-fired steam tankless boiler offers energy-efficient heat and consistent warm comfort for your home; priced at $3,180.56.

Replacement gas boilers: unlike furnaces that heat air, boilers heat water, providing either hot water or steam for heating. Steam is distributed via pipes to steam radiators, and hot water can be distributed via baseboard radiators or radiant floor systems.

A steam boiler is an industrial boiler that heats water to certain parameters and produces high-temperature steam. Water is heated in the drum and turns into steam, while the fire releases its heat in the hearth; this is the principle of the steam boiler.

Rebates & special offers: rebates are available in Canada for water heaters and natural gas boiler replacements. Arctic Energy Alliance Energy Efficiency Incentive Program (Canada only): for natural gas and propane boilers of 95% AFUE and beyond, as well as water heaters with Energy Star efficiency above 0.92.
Q: Drag & Resize image underneath with a DIV above

Here is the scenario. I have a logo which can be made draggable & resizable via jQuery UI (version 1.9.2, but it doesn't really matter), bounded by a parent DIV. Dragging and resizing work well. However, when I try to overlay a DIV with a background image exactly above it, the mouse clicks are blocked by the DIV above.

HTML

    <div id="appleLogo"></div>
    <div id="frameBorder">
        <div id="draggableHelper" style="display:inline-block">
            <img id="image" src="http://www.google.com.br/images/srpr/logo3w.png" />
        </div>
    </div>

JS

    $('#draggableHelper').draggable({ containment: "#frameBorder", scroll: false });
    $('#image').resizable();

CSS

    #appleLogo {
        position: absolute;
        width: 400px;
        height: 400px;
        background-image: url(http://wbridgewaterschools.org/school/images/Apple%20logo.png);
        background-size: cover;
        opacity: 0.7;
        filter: alpha(opacity=70);
        z-index: 10;
    }
    #frameBorder {
        position: absolute;
        width: 400px;
        height: 400px;
        border: 1px solid #F00;
        overflow: hidden;
        z-index: 1;
    }

For better demonstration, here is the jsFiddle. How can I bypass the DIV above? Here are some references I've read, but none applies to this case:

* How to prevent Resizable and Draggable elements from collapsing on each other?
* Drag & Resize div overlapped other div

A: A little CSS/HTML/JS shuffle is all you need; the code is posted below and here is the fiddle: http://jsfiddle.net/EVSZQ/10/

HTML

    <div id="frameBorder">
        <div id="draggableHelper" style="display:inline-block">
            <div id="image">
                <img id="logo" src="http://wbridgewaterschools.org/school/images/Apple%20logo.png" />
            </div>
        </div>
    </div>

CSS

    #frameBorder {
        width: 400px;
        height: 400px;
        border: 1px solid #F00;
    }
    #image {
        width: 50px;
        height: 50px;
        border: 1px solid black;
        background-size: 100% 100%;
        background-image: url(http://www.google.com.br/images/srpr/logo3w.png);
    }
    #logo {
        position: fixed;
        width: 400px;
        height: 400px;
        opacity: .55;
    }

JS

    $('#draggableHelper').draggable({ containment: "#frameBorder", scroll: false });
    $('#image').resizable();

Hope that helps! Best, Josh

A: Edit: new fiddle: http://jsfiddle.net/EVSZQ/5/

Here is the JS code; I didn't optimize it, thinking it would be easier to understand...

    $('#image').resizable();
    $('#draggableHelper').draggable({ containment: "#frameBorder", scroll: false });
    $('#appleLogo').on('mousedown', function(event){
        var gxstart = $('#image').offset().left;
        var gxend = $('#image').offset().left + $('#image').width();
        var gystart = $('#image').offset().top;
        var gyend = $('#image').offset().top + $('#image').height();
        var mouseX = event.clientX;
        var mouseY = event.clientY;
        // forward the mousedown only if it falls inside the image's box
        if (gxstart < mouseX && mouseX < gxend && gystart < mouseY && mouseY < gyend) {
            $('#draggableHelper').trigger(event);
        }
    });
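A pure-CSS alternative, not mentioned in either answer above (a sketch, assuming the #appleLogo overlay is purely decorative and never needs to receive clicks itself): modern browsers can let mouse events fall through an element via the pointer-events property.

```css
/* Clicks and drags pass through the overlay to the
   draggable/resizable image underneath. */
#appleLogo {
    pointer-events: none;
}
```

With this rule in place the jQuery event-forwarding workaround becomes unnecessary; the trade-off is that the overlay can no longer react to hover or click events of its own.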
\section{Introduction} \label{sec:intro} \subsection{Motivation} Given graphs $G$ and $H$, a simple and natural question to ask is whether it is possible to perfectly tile $G$ with copies of~$H$, that is, to find vertex-disjoint copies of~$H$ in~$G$ which together cover every vertex of $G$. An obvious necessary condition for this is that $|V(H)|$ divides $|V(G)|$, which we assume implicitly throughout this discussion. In the case where $H$ consists of a single edge, a perfect $H$-tiling is simply a perfect matching; a classical theorem of Tutte~\cite{Tu47} gives a characterisation of all graphs for which this is possible, and Edmonds' algorithm~\cite{E65} returns a perfect matching (or reports that no such matching exists) in polynomial time. However, if the graph $H$ has a~connected component with at least three vertices, then we see sharply different behaviour. In particular, Hell and Kirkpatrick~\cite{KH} showed that, for any fixed graph $H$ of this form, the problem of determining whether a graph $G$ admits a perfect $H$-tiling is NP-hard, so it is unlikely that there exists a `nice' characterisation of such graphs analogous to Tutte's theorem. Due to this, there has been much study of sufficient conditions which, for a fixed graph $H$, ensure the existence of a perfect $H$-tiling in a~graph~$G$ on~$n$ vertices (we refer the reader to the survey of K\"uhn and Osthus~\cite{KO} for a more detailed overview). The most natural of these are minimum degree conditions; to discuss these we define $\delta(H, n)$ to be the smallest integer $m$ such that any graph $G$ on $n$ vertices with $\delta(G) \geq m$ admits a perfect $H$-tiling. One early sufficient condition was given by the celebrated Hajnal-Szemer\'edi theorem~\cite{HSz}, which states that for any integer $r$ we have $\delta(K_r, n) = \tfrac{r-1}{r}n$ (the case $r=3$ was previously given by Corr\'adi and Hajnal~\cite{CH}).
Turning to general graphs~$H$, Alon and Yuster~\cite{AY} later showed that $\delta(H, n) \leq \tfrac{\chi(H)-1}{\chi(H)}n + o(n)$; using the Blow-up Lemma, Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSz} then improved this result by replacing the $o(n)$ error term with an additive constant (which cannot be removed in general). In the other direction, Koml\'os~\cite{K} introduced the \emph{critical chromatic number} $\chi_{\rm cr}(H)$ of~$H$, and observed that for any~$H$ we have $\delta(H, n) \geq \tfrac{\chi_{\rm cr}(H)-1}{\chi_{\rm cr}(H)}n$. Finally K\"uhn and Osthus~\cite{KO2} completed our understanding by classifying graphs $H$ according to their \emph{greatest common divisor}, and showing that for any~$H$ we have either $\delta(H, n) = \tfrac{\chi_{\rm cr}(H)-1}{\chi_{\rm cr}(H)}n + O(1)$ or $\delta(H, n) = \tfrac{\chi(H)-1}{\chi(H)}n + O(1)$, according to the value of this parameter. In parallel with the results described above, much attention has been devoted to the problem of perfectly tiling multipartite graphs; these have presented significant additional challenges. There is a natural multipartite notion of minimum degree: for an $r$-partite graph $G$ with vertex classes $V_1, \dots, V_r$ we define $\delta^*(G)$ to be the largest integer $s$ such that, for any $i \neq j$, any vertex of $V_i$ has at least $s$ neighbours in $V_j$. As before, let $\delta^*(H, n)$ denote the smallest integer $m$ such that any $\chi(H)$-partite graph $G$ whose $\chi(H)$ vertex classes each have size $n$ and which satisfies $\delta^*(G) \geq m$ admits a perfect $H$-tiling. Fischer~\cite{F} conjectured the natural multipartite analogue of the Hajnal-Szemer\'edi theorem, namely that $\delta^*(K_r, n) = \tfrac{r-1}{r}n$. Perhaps surprisingly, Catlin gave counterexamples demonstrating this natural conjecture to be false for each odd $r \geq 3$.
However, for large $n$, Fischer's conjecture is `almost-true', in that Catlin's counterexamples are the only counterexamples to the conjecture, as shown by Keevash and Mycroft~\cite{KM} (this was previously demonstrated for $r=3$ and $r=4$ by Magyar and Martin~\cite{MM} and Martin and Szemer\'edi~\cite{MSz} respectively, whilst an asymptotic form for all $r$ was independently given by Lo and Markstr\"om~\cite{LM}). Subsequently Martin and Skokan~\cite{MS} continued this direction of research by establishing a multipartite analogue of the Alon-Yuster theorem, namely that for any $H$ we have $\delta^*(H, n) \leq \tfrac{\chi(H)-1}{\chi(H)}n + o(n)$. \begin{theo}[\cite{MS}]\label{theo:alonyuster} Let $H$ be a graph on $h$ vertices with $\chi(H)=r\geq 3$. For any $\alpha > 0$ there exists $n_0 = n_0(\alpha,H)$ such that if $G$ is a balanced $r$-partite graph on $rn$ vertices with $\delta^*(G) \geq \left(\frac{r - 1}{r} + \alpha \right) n$, where $n \geq n_0$ is divisible by $h$, then $G$ admits a perfect $H$-tiling. \end{theo} In this paper we prove an asymptotic multipartite analogue of the K\"uhn-Osthus theorem (Theorem~\ref{theo:degalpha}), which establishes the asymptotic value of $\delta^*(H, n)$ for any graph $H$ with $\chi(H) \geq 3$. Together with a~theorem of Bush and Zhao~\cite{BZ}, who previously gave the corresponding result for bipartite graphs $H$ up to an additive constant, this determines the asymptotic value of $\delta^*(H, n)$ for every graph $H$. It is also natural to ask for the minimum degree condition needed to ensure that we can find an $H$-tiling in $G$ covering almost all the vertices of $G$. In the non-partite setting Koml\'os~\cite{K} showed that $\delta(G) \geq \tfrac{\chi_{\rm cr}(H)-1}{\chi_{\rm cr}(H)}n$ is sufficient to ensure an $H$-tiling covering all but $o(n)$ vertices. 
He conjectured that in fact this condition guarantees an $H$-tiling covering all but a constant number of vertices, and this was subsequently confirmed by Shokoufandeh and Zhao~\cite{SZ}. Our second main result (Theorem~\ref{theo:almosttiling}) gives a multipartite analogue of Koml\'os's result, namely that any $\chi(H)$-partite graph $G$ whose vertex classes each have size $n$ and which satisfies $\delta^*(G) \geq \tfrac{\chi_{\rm cr}(H)-1}{\chi_{\rm cr}(H)}n$ admits an $H$-tiling covering all but $o(n)$ vertices of $G$. Again, an analogous result for bipartite graphs $H$ was previously given by Bush and Zhao~\cite{BZ}. \subsection{Main results} Let $G$ and $H$ be graphs. An \emph{$H$-tiling} in $G$ is a collection of vertex-disjoint copies of $H$ in $G$; it is \emph{perfect} if every vertex of $G$ is covered by some member of the tiling. Let $H$ be a graph with chromatic number $\chi(H) = r$, and let ${\mathcal C}$ denote the set of proper $r$-colourings of $H$. Then for any proper $r$-colouring $\phi \in {\mathcal C}$ with colour classes $X_1^\phi, \dots, X_r^\phi$, we define \begin{equation*} {\mathcal D}(\phi) := \big\{|X_i^\phi| - |X_j^\phi| : i, j \in [r]\big\} \mbox{ and } {\mathcal D}(H) := \bigcup_{\phi \in {\mathcal C}} {\mathcal D}(\phi) . \end{equation*} (Throughout this paper we write $[r]$ to denote the set $\{1, \dots, r\}$.) The \emph{greatest common divisor} of $H$, denoted $\gcd(H)$, is then defined to be the highest common factor of the set ${\mathcal D}(H)$ if ${\mathcal D}(H) \neq \{0\}$. If ${\mathcal D}(H) = \{0\}$ (that is, if every $r$-colouring of $H$ is \emph{equitable}, meaning that all colour classes have the same size) then we write $\gcd(H) = \infty$. We also define \begin{equation} \label{eq:defsigma} \sigma(H) := \min_{\phi \in {\mathcal C}, i \in [r]} \frac{|X_i^\phi|}{|V(H)|}. \end{equation} So $0 < \sigma(H) \leq 1/r$, with equality if and only if every $r$-colouring of $H$ is equitable.
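For example (a small illustration of these definitions, not taken from the works cited above), let $H$ be the `paw' graph obtained from a triangle by attaching a pendant vertex, so that $h = 4$ and $\chi(H) = 3$. In any proper $3$-colouring the triangle receives all three colours, and the pendant vertex receives one of the two colours not used by its neighbour, so the colour class sizes are always $2, 1, 1$. Hence
\begin{equation*}
{\mathcal D}(H) = \{-1, 0, 1\}, \qquad \gcd(H) = 1, \qquad \sigma(H) = \tfrac{1}{4} < \tfrac{1}{3}.
\end{equation*}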
The \emph{critical chromatic number} of $H$, introduced by Koml\'os~\cite{K}, is denoted $\chi_{\rm cr}(H)$ and is defined by \begin{equation} \label{eq:defchicr} \chi_{\rm cr}(H) := \frac{\chi(H)-1}{1-\sigma(H)}. \end{equation} So for any graph $H$ we have $\chi(H)-1 < \chi_{\rm cr}(H) \leq \chi(H)$, again with equality if and only if every $\chi(H)$-colouring of $H$ is equitable. Note that the definition of $\sigma(H)$ that we use differs by a factor of $|V(H)|$ from that used by K\"uhn and Osthus~\cite{KO2}, but our definition of $\chi_{\rm cr}(H)$ is the same. Finally, following K\"uhn and Osthus~\cite{KO2}, we define \begin{align*} \chi^*(H) := \begin{cases} \chi_{\rm cr}(H) & \mbox{if $\gcd(H) = 1$,} \\ \chi(H) & \mbox{otherwise.} \end{cases} \end{align*} Recall that if $G$ is an $r$-partite graph with vertex classes $V_1, \dots, V_r$, then the multipartite minimum degree of $G$, denoted $\delta^*(G)$, is defined to be the largest integer $m$ such that for any $i \neq j$ every vertex of $V_i$ has at least $m$ neighbours in $V_j$. Also, we say that $G$ is \emph{balanced} if every vertex class has the same size. Our first main result is Theorem~\ref{theo:degalpha}, in which the optimal degree condition is relaxed by an additive term which is linear in the number of vertices. This is an asymptotic multipartite version of a theorem of K\"uhn and Osthus~\cite{KO2}. \begin{theo}\label{theo:degalpha} Let $H$ be a graph on $h$ vertices with $\chi(H)=r\geq 3$. For any $\alpha > 0$ there exists $n_0=n_0(\alpha,H)$ such that if $G$ is a balanced $r$-partite graph on $rn$ vertices with $\delta^*(G) \geq \left(1-\frac{1}{\chi^*(H)} + \alpha \right) n$, where $n \geq n_0$ is divisible by $h$, then $G$ contains a perfect $H$-tiling. \end{theo} \begin{rem} In the case where $\gcd(H) = 1$, the proof of Theorem~\ref{theo:degalpha} only uses the weaker assumption that $h$ divides $rn$ (rather than $h$ divides $n$).
However, as observed in~\cite{MS}, in the case $\gcd(H) > 1$ we do indeed require that $h$ divides $n$. \end{rem} Constructions given in Section~\ref{sec:construct} demonstrate that, for any graph~$H$, Theorem~\ref{theo:degalpha} is best-possible up to the $\alpha n$ error term in the degree condition. A similar but slightly different result holds in the case $r=2$; this case was fully settled by Bush and Zhao~\cite{BZ} up to an additive constant. \begin{theo}[Bush-Zhao~\cite{BZ}] \label{BZ:degalpha} For any bipartite graph $H$ there exist constants $n_0=n_0(H)$ and $c=c(H)$ such that if $G$ is a balanced bipartite graph with $n\geq n_0$ vertices in each part, and $|H|$ divides $2n$, then the following statements hold (where $\gcd(H)$ is defined as above and ${\rm gcd}_{\mathrm{cc}}(H)$ is the greatest common divisor of the sizes of the connected components of $H$). \begin{enumerate}[(a)] \item If $\gcd(H)>1$ and ${\rm gcd}_{\mathrm{cc}}(H)=1$, then $\delta(G)\geq (1-1/\chi(H))n+c$ suffices to ensure a perfect $H$-tiling in $G$. \item If $\gcd(H)=1$ or ${\rm gcd}_{\mathrm{cc}}(H)>1$, then $\delta(G)\geq (1-1/\chi_{\rm cr}(H))n+c$ suffices to ensure a perfect $H$-tiling in $G$. \end{enumerate} \end{theo} The necessity of considering ${\rm gcd}_{\mathrm{cc}}(H)$ is unique to the case of $r=2$, which we discuss in Section~\ref{sec:EditingClusterSizes}, after the proof of Proposition~\ref{prop:meetingKrs}. We can also consider almost-perfect $H$-tilings, that is, $H$-tilings covering almost all of the vertices of $G$. In the non-partite case the minimum degree condition needed to ensure a tiling covering all but a~linear number of vertices was established by Koml\'os~\cite{K}; this result was later strengthened by Shokoufandeh and Zhao~\cite{SZ} to tilings covering all but a constant number of vertices. Our next theorem provides a~multipartite analogue of the result of Koml\'os. \begin{theo} \label{theo:almosttiling} Let $H$ be a graph with $\chi(H) = r \geq 3$.
For any $\psi > 0$ there exists $n_0=n_0(\psi,H)$ such that for any $n \geq n_0$, if $G$ is a~balanced $r$-partite graph on $rn$ vertices with $\delta^*(G) \geq \left(1-\frac{1}{\chi_{\rm cr}(H)} \right) n$, then $G$ contains an $H$-tiling covering all but at most $\psi n$ vertices of~$G$. \end{theo} Bush and Zhao also addressed this problem for the case of $r=2$ and obtained a similar result to Theorem~\ref{BZ:degalpha} -- one without considering the sizes of the connected components -- but their result gives an $H$-tiling covering all but a constant number of vertices. \begin{theo}[Bush-Zhao~\cite{BZ}] \label{BZ:almosttiling} For any bipartite graph $H$, there exist constants $n_0=n_0(H)$ and $c=c(H)$ such that whenever $G$ is a~balanced bipartite graph with $n\geq n_0$ vertices in each part, $\delta(G)\geq (1-1/\chi_{\rm cr}(H))n$ suffices to ensure an $H$-tiling of $G$ that covers all but at most $c$ vertices. \end{theo} To avoid repetition in proving Theorems~\ref{theo:degalpha} and~\ref{theo:almosttiling}, we deduce each from the following combined statement. \begin{theo}\label{theo:combined} Let $H$ be a graph on $h$ vertices with $\chi(H)=r\geq 3$. For any $\alpha > 0$ there exist $n_0 = n_0(\alpha,H)$ and $C = C(\alpha, H)$ such that the following statements hold for any balanced $r$-partite graph $G$ on $rn$ vertices with $\delta^*(G) \geq \left(1 - \frac{1}{\chi_{\rm cr}(H)} + \alpha \right) n$ and $n \geq n_0$. \begin{enumerate}[(i)] \item $G$ admits an $H$-tiling covering all but at most $C$ vertices of $G$. \item If $\gcd(H) = 1$ and $rn$ is divisible by $h$ then $G$ admits a perfect $H$-tiling. \end{enumerate} \end{theo} We prove Theorem~\ref{theo:combined} in Section~\ref{sec:proof} after establishing the necessary preliminaries in Section~\ref{sec:preliminaries}.
Theorem~\ref{theo:almosttiling} then follows from Theorem~\ref{theo:combined} by a short deduction, which is given in Section~\ref{sec:deduce}, whilst Theorem~\ref{theo:degalpha} is immediate from combining Theorems~\ref{theo:alonyuster} and~\ref{theo:combined} as follows. \begin{proof}[Proof of Theorem~\ref{theo:degalpha}] If $\gcd(H) > 1$ then $\chi^*(H) = \chi(H) = r$, so the existence of such an $n_0$ is given by Theorem~\ref{theo:alonyuster}. On the other hand, if $\gcd(H) = 1$ then $\chi^*(H) = \chi_{\rm cr}(H)$, so the existence of such an $n_0$ is given by Theorem~\ref{theo:combined}(ii). \end{proof} \subsection{Notation} We write $x \ll y$ to mean that for any $y > 0$ there exists $x_0 > 0$ such that for any $x$ with $0 < x \leq x_0$ the subsequent statements hold. Statements with more variables are defined similarly. We also write $x = y \pm z$ to mean that $y-z \leq x \leq y+z$. We omit floor and ceiling symbols when these do not affect the argument. \section{Preliminaries} \label{sec:preliminaries} \subsection{Fractional tilings via linear programming} We use the well-known Farkas' Lemma (see \cite[Corollary 7.1d]{Sch}). For this, recall that for a set $Y \subseteq \mathbb{R}^d$ the positive cone $\mathrm{PosCone}(Y)$ of $Y$ is the set of all linear combinations of members of $Y$ with non-negative coefficients. \begin{theo}[Farkas' Lemma] \label{farkas} Suppose that ${\bf v} \in \mathbb{R}^d \setminus \mathrm{PosCone}(Y)$ for some finite set $Y \subseteq \mathbb{R}^d$. Then there is some ${\bf x} \in \mathbb{R}^d$ such that ${\bf x} \cdot {\bf y} \leq 0$ for every ${\bf y} \in Y$ and ${\bf x} \cdot {\bf v} > 0$. \end{theo} Let $G$ be a graph on $n$ vertices and let $v_1, \ldots, v_{n}$ be any fixed ordering of its vertices. For a subset of vertices $S$, we denote by ${\bf 1}_G(S)$ the characteristic vector of $S$, that is, the vector $(x_1,\ldots, x_{n}) \in \mathbb{R}^n$ such that $x_i=1$ for $v_i\in S$ and $x_i=0$ for $v_i\not\in S$.
If $H$ is a subgraph of $G$, we write ${\bf 1}_G(H)$ instead of~${\bf 1}_G(V(H))$. We also write ${\bf 1}$ for the all-ones vector (the dimension will always be clear from the context). For a graph $H$, denote by ${\mathcal K}_H(G)$ the set of subgraphs in $G$ isomorphic to $H$. A \emph{fractional $H$-tiling} in $G$ assigns a weight $w(H') \geq 0$ to each $H'\in{\mathcal K}_H(G)$ such that for any vertex $x \in V(G)$ we have \begin{align} \sum \{w(H') \ |\ H'\in {\mathcal K}_H(G), x\in V(H')\} \leq 1 . \label{eq:tiling} \end{align} The fractional $H$-tiling is \emph{perfect} if we have equality in (\ref{eq:tiling}) for every $x \in V(G)$. Equivalently, weights form a fractional $H$-tiling in $G$ if $\sum_{H'\in {\mathcal K}_H(G)} w(H')\,{\bf 1}_G(H') \leq {\bf 1}$, where the vector inequality is pointwise, and we have equality if and only if the fractional $H$-tiling is perfect. Given an integer $r \geq 2$, a \emph{rooted copy of $K_r$} in $G$ is a copy of $K_r$ in which one vertex is designated to be the root. Similarly, given rational numbers $a, b > 0$, an \emph{$(a, b)$-weighted rooted copy of $K_r$} in $G$ is a rooted copy of $K_r$ with the root labelled by $a$ and the remaining vertices by $b$. With every $(a, b)$-weighted rooted copy of $K_r$, $K$, we associate the \emph{weighted characteristic vector} ${\bf 1}_{a, b, G}(K)$ with $a$ at the coordinate corresponding to the root, $b$ at the other vertices of $K$, and $0$ otherwise. We denote by ${\mathcal K}_{a, b, r}(G)$ the set of all $(a, b)$-weighted rooted copies of $K_r$ in $G$. This notion extends to the definition of a weighted fractional tiling of $G$: an \emph{$(a, b)$-weighted fractional $K_r$-tiling} in $G$ consists of a weight $w(K)$ for every $K \in {\mathcal K}_{a, b, r}(G)$ such that $\sum_{K\in{\mathcal K}_{a, b, r}(G)} w(K){\bf 1}_{a, b, G}(K) \leq {\bf 1}$ (where the inequality should be again interpreted pointwise). The tiling is \emph{perfect} if we have equality. 
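As a toy illustration of these definitions, take $G = K_4$ and $H = K_3$. Every vertex of $K_4$ lies in exactly three of the four triangles, so assigning weight $w(H') = \tfrac{1}{3}$ to each triangle $H'$ gives
\begin{equation*}
\sum \{w(H') \ |\ H'\in {\mathcal K}_{K_3}(K_4),\ x\in V(H')\} = 3 \cdot \tfrac{1}{3} = 1
\end{equation*}
for every vertex $x$; this is therefore a perfect fractional $K_3$-tiling, even though $K_4$ admits no perfect (integral) $K_3$-tiling since $3 \nmid 4$.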
\begin{lem} \label{fractiling} Let $r\geq 3$ be an integer, let $a$ and $b$ be rational numbers such that $0<a\leq b$ and define $h:= a+(r-1)b$. If $G$ is a balanced $r$-partite graph on $rn$ vertices with $\delta^*(G) \geq (1-b/h)n$ then $G$ admits a perfect $(a, b)$-weighted fractional $K_r$-tiling. \end{lem} \proof We will first prove the lemma using the assumption that $bn/h$ is an integer and justify that assumption at the end of the proof. Suppose for a contradiction that some graph $G$ as in the statement of the lemma does not admit a~perfect $(a, b)$-weighted fractional $K_r$-tiling, and that $bn/h$ is an integer. This is equivalent to saying that ${\bf 1} \notin \mathrm{PosCone}(Y)$ for the set $Y = \{{\bf 1}_{a, b, G}(K): K \in {\mathcal K}_{a, b, r}(G)\}$. So by Farkas' Lemma, there exists ${\bf x} \in \mathbb{R}^{rn}$ such that \begin{align} {\bf x} \cdot {\bf 1} &> 0 \label{eq:farkasone} \intertext{and} {\bf x} \cdot {\bf 1}_{a, b, G}(K) &\leq 0, \mbox{ for every $K \in {\mathcal K}_{a, b, r}(G)$.} \label{eq:farkasK} \end{align} Fix such an ${\bf x}$, and let $v_j^1, \dots, v_j^n$ be the vertices of the $j$th vertex class of $G$, ordered by decreasing ${\bf x}$-coordinate, that is, so that ${\bf x} \cdot {\bf 1}_G(\{v_j^s\}) \geq {\bf x} \cdot {\bf 1}_G(\{v_j^t\})$ for any $s \leq t$. Because $bn/h$ is an integer, each vertex class $V_i$ can be partitioned as follows: \begin{align} V_i^j &:= \{v_i^\ell : (j-1)\, bn/h+1\leq\ell\leq j\, bn/h\}, & \forall j\in [r-1] \label{eq:toprows} \\ V_i^r &:= \{v_i^\ell : (r-1)\, bn/h+1\leq\ell\leq n\} . & \label{eq:bottomrow} \end{align} For any permutation $\pi$ of $[r]$, we can greedily form an $(a, b)$-weighted rooted copy $K_\pi$ of $K_r$ as follows: First, let $u_1$ be the vertex in $V_{\pi(1)}$ for which the ${\bf x}$-coordinate is largest. In our notation, $u_1=v_{\pi(1)}^{1}$. 
Next, for each $j \in \{2,\ldots,r\}$, let $u_j=v_{\pi(j)}^{t_j}$ be the vertex in $V_{\pi(j)}$ in the common neighborhood of $u_1, \ldots, u_{j-1}$ for which the ${\bf x}$-coordinate is largest. It follows from the minimum degree condition that $t_j \leq (j-1)\, bn/h+1$, so for every $v_{\pi(j)}^\ell \in V_{\pi(j)}^j$ we have $\ell \geq t_j$; in other words, every vertex in $V_{\pi(j)}^j$ has ${\bf x}$-coordinate at most that of $u_j$. We assign weight $b$ to each of $u_1,\ldots,u_{r-1}$ and weight $a$ to $u_r$ (so $u_r$ is the root of $K_\pi$). Since every vertex in $V_{\pi(j)}^j$ has ${\bf x}$-coordinate at most that of $u_j$, we have \begin{align} {\bf x}\cdot{\bf 1}_G\left(\bigcup_{j=1}^rV_{\pi(j)}^j\right) &\leq \sum_{j=1}^{r-1}{\bf x}\cdot{\bf 1}_G(\{u_j\}) \frac{bn}{h} + {\bf x}\cdot{\bf 1}_G(\{u_r\})\left(n-(r-1) \frac{bn}{h}\right) \nonumber \\ &= \sum_{j=1}^{r-1}{\bf x}\cdot{\bf 1}_G(\{u_j\}) \frac{bn}{h} + {\bf x}\cdot{\bf 1}_G(\{u_r\}) \frac{an}{h} \nonumber \\ & = {\bf x}\cdot\left(\frac{n}{h} {\bf 1}_{a,b,G}(K_\pi)\right) . \label{eq:xdomination} \end{align} This gives the following contradiction: \begin{align*} 0 < (r-1)!\,{\bf x} \cdot {\bf 1} = \sum_\pi {\bf x}\cdot{\bf 1}_G\left(\bigcup_{j=1}^r V_{\pi(j)}^j\right)\leq \frac{n}{h}\sum_\pi {\bf x}\cdot{\bf 1}_{a,b,G}(K_\pi)\leq 0, \end{align*} where each sum is taken over all permutations $\pi$ of $[r]$. The equality in this calculation is due to the fact that the sets $V_i^j$ partition $V(G)$, and for any $i, j \in [r]$ there are precisely $(r-1)!$ permutations of $[r]$ with $\pi(j) = i$. The first inequality follows from (\ref{eq:farkasone}), the second inequality follows from (\ref{eq:xdomination}), and the final inequality follows from (\ref{eq:farkasK}). In order to complete the proof, we justify the assumption that $bn/h$ is an integer.
To see this, fix an integer $m$ such that $bnm/h$ is an integer, and let $G'$ be the \emph{$m$-fold blow-up of $G$}, in which each vertex $v \in V(G)$ is replaced by $m$ copies of $v$ in $G'$, and each edge $uv \in E(G)$ is replaced by $m^2$ edges between the copies of $u$ and $v$ in $G'$. Also set $n' := nm$. Then $G'$ is a balanced $r$-partite graph on $rn'$ vertices with $\delta^*(G') = m\delta^*(G) \geq (1-b/h)n'$, and $bn'/h = bnm/h$ is an integer. Given that the lemma holds in this case, $G'$ admits a perfect $(a, b)$-weighted fractional $K_r$-tiling. This naturally yields a perfect $(a, b)$-weighted fractional $K_r$-tiling in $G$ by taking the weight of each rooted copy of $K_r$ in $G$ to be the average of the weights of the $m^r$ corresponding rooted copies of $K_r$ in $G'$. \endproof \begin{rem} \label{rem:rationalweights} A perfect $(a, b)$-weighted fractional $K_r$-tiling as guaranteed by \linebreak Lemma~\ref{fractiling} is the solution to a linear programming instance in which all coefficients are rational. Such an instance must have a rational solution, so we may assume that all weights in a perfect $(a, b)$-weighted fractional $K_r$-tiling given by Lemma~\ref{fractiling} are rational. \end{rem} \subsection{Editing cluster sizes} \label{sec:EditingClusterSizes} Proposition~\ref{defU} below shows that we may `combine' copies of $H$ to form a complete $r$-partite graph ${\mathcal U}(H)$ whose vertex classes are all equal except for one class which has one extra vertex and one class which has one fewer vertex. This will allow us, in the proof of Theorem~\ref{theo:combined}, to delete copies of ${\mathcal U}(H)$ and thus modify the sizes of clusters of $H$ modulo $rh$. \begin{prop} \label{defU} Let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$ and $\gcd(H) = 1$. 
Then there exists an integer $s = s(H)$ for which the complete $r$-partite graph ${\mathcal U}(H)$ with one vertex class of size $srh+1$, one vertex class of size $srh-1$ and $r-2$ vertex classes of size $srh$ admits a perfect $H$-tiling. \end{prop} The proof of Proposition~\ref{defU} is straightforward and essentially identical to that of Proposition~3.6 from~\cite{M} (which gave an analogous statement for $r$-partite $r$-uniform hypergraphs $H$), so we omit it. To apply Proposition~\ref{defU} we make use of the following elementary proposition: we will use it in the `reduced graph', and then apply Proposition~\ref{defU} within the graph induced by the clusters corresponding to $K$ and $K'$. \begin{prop} \label{prop:meetingKrs} Fix $r \geq 3$, and let $G$ be a balanced $r$-partite graph on $rn$ vertices with $\delta^*(G) > (1-\frac{1}{r-1})n$. Then for any vertices $u, v \in V(G)$ there are copies $K$ and $K'$ of $K_r$ in $G$ such that $u \in K$, $v \in K'$, and such that $K$ and $K'$ have at least one vertex in common. \end{prop} \begin{proof} Let $V_1, \dots, V_r$ be the vertex classes of $G$, and assume without loss of generality that $u, v \notin V_r$. Since $r \geq 3$ we have $\delta^*(G) > n/2$, so we may fix a common neighbour $w$ of $u$ and $v$ in $V_r$. It then suffices to extend $\{u, w\}$ and $\{v, w\}$ to copies of $K_r$ in $G$, and we may do this greedily. Indeed, any set $S$ of $j$ vertices of $G$ has at least $n-j(n-\delta^*(G)) > n - \frac{jn}{r-1}$ common neighbours in each vertex class not intersected by $S$, and in forming a copy of $K_r$ we choose each vertex to be a common neighbour of at most $r-1$ previously-chosen vertices, so there is always a common neighbour available. \end{proof} Observe that the statement of Proposition~\ref{prop:meetingKrs} does not hold for $r=2$, as then $G$ need not be connected.
This is the fundamental reason for the different behaviour of Theorem~\ref{theo:degalpha} compared to Theorem~\ref{BZ:degalpha} (in which the greatest common divisor of the sizes of connected components plays a role). \subsection{Completing the tiling} At the end of the proof of Theorem~\ref{theo:combined}, all the remaining vertices of our graph $G$ lie in vertex-disjoint $r$-partite subgraphs $G''$ of $G$ whose vertex classes are pairwise super-regular with positive density. We then complete the $H$-tiling of $G$ by finding a~perfect $H$-tiling of each $G''$. For this it would be natural to arrange that the sizes of the $r$ vertex classes of $G''$ are in the ratio $b:b:\dots:b:a$, so that the proportion of vertices of $G''$ in the smallest vertex class is the same as the proportion of vertices of $H$ in the smallest vertex class. But it is to our advantage to ensure that the smallest class of $G''$ actually has a slightly larger proportion of the vertices. Indeed, Proposition~\ref{completetiling} below guarantees that such a distribution of sizes (together with certain divisibility assumptions) ensures a perfect $H$-tiling in the complete $r$-partite graph $G'$ with the same vertex class sizes. This is enough for the Blow-up Lemma to ensure that $G''$ also admits a perfect $H$-tiling. \begin{prop}[\cite{M}, Corollary~6.13] \label{completetiling} Let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$ and $\sigma(H) < \frac{1}{r}$. Then for any $\alpha > 0$ there exist $\beta = \beta(\alpha,H) > 0$ and $n_0 = n_0(\alpha,H)$ such that the following statement holds. Let $G'$ be a complete $r$-partite graph on $n \geq n_0$ vertices with vertex classes $V_1, \dots, V_r$, where $|V_1| \leq |V_2|, \cdots, |V_r|$. Suppose also that \begin{enumerate}[(1)] \item $\sigma(G') \geq \sigma(H) + \alpha$, \item $\big||V_i| - |V_j|\big| \leq \beta n$ for any $2 \leq i, j \leq r$, and \item $rh\cdot\gcd(H)$ divides $|V_j|$ for each $j \in [r]$. 
\end{enumerate} Then $G'$ admits a perfect $H$-tiling. \end{prop} Proposition~\ref{completetiling} is also taken from~\cite{M}, where it was stated for $r$-partite $r$-uniform hypergraphs $H$; here it is easy to see that the $r$-partite graph form is identical, since the problem of tiling a complete $r$-partite $r$-uniform hypergraph $G'$ with copies of a smaller $r$-partite $r$-uniform hypergraph $H$ is identical to the problem of tiling a complete $r$-partite graph $G'$ with copies of a smaller $r$-partite graph $H$. \subsection{Tidying up atypical vertices} In the proof of Theorem~\ref{theo:combined} we will encounter `bad' vertices in $G$ which have atypical neighbourhoods. At an early stage in the proof we will greedily remove each such vertex $v$ from $G$ by deleting a copy of $H$ in $G$ which contains $v$. The following proposition shows that the degree condition of Theorem~\ref{theo:combined} is (more than) strong enough to ensure that this is possible. \begin{prop} \label{deletevertex} Let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$. For any $\alpha > 0$, there exists $n_0=n_0(\alpha,H)$ such that the following statement holds. Let $G$ be a balanced $r$-partite graph on $rn$ vertices such that $n \geq n_0$ and $\delta^*(G) \geq \frac{r-2}{r-1}n + \alpha n$. Then, for any vertex $v \in V(G)$, there is a copy of $H$ in $G$ which contains $v$. \end{prop} We omit the proof of Proposition~\ref{deletevertex}, as it is a straightforward application of Szemer\'edi's Regularity Lemma and is implicit in many papers, including~\cite{BZ,KO2,MS}. \subsection{The regularity method} We use a variant of Szemer\'edi's Regularity Lemma. Before we can state it, we need a few basic definitions. For disjoint vertex sets $A$ and $B$ in some graph, let $e(A,B)$ denote the number of edges with one endpoint in $A$ and the other in $B$. Further, let the \textit{density} of the pair $(A,B)$ be $d(A,B)=e(A,B)/|A||B|$.
We say that the pair $(A,B)$ is \textit{$\varepsilon$-regular} if $X\subseteq A$, $Y\subseteq B$, $|X|\geq\varepsilon |A|$, and $|Y|\geq\varepsilon|B|$ imply $|d(X,Y)-d(A,B)|\leq\varepsilon$, and likewise that a pair $(A,B)$ is \textit{$(\varepsilon, \delta)$-super-regular} if $(A, B)$ is $\varepsilon$-regular and also $\deg_B(a)\geq\delta |B|$ for all $a\in A$ and $\deg_A(b)\geq\delta |A|$ for all $b\in B$. The degree form of Szemer\'edi's Regularity Lemma (see, for instance, \cite[Theorem 1.10]{KS}) is sufficient here, modified for the multipartite setting. \begin{theo}\label{thm:SzemRegLem} For every integer $r \geq 2$ and every $\varepsilon>0$, there is an $M=M(r,\varepsilon)$ such that if $G=(V_1,\ldots,V_r;E)$ is a balanced $r$-partite graph on $rn$ vertices and $d\in[0,1]$ is any real number, then there exist integers $\ell$ and $L$, a spanning subgraph $G'=(V_1,\ldots,V_r;E')$ and, for each $i=1,\ldots,r$, a partition of $V_i$ into clusters $V_i^{0},V_i^{1},\ldots,V_i^{\ell}$ with the following properties. \begin{enumerate}[(P1)] \item $\ell\leq M$, \item $|V_i^{0}|\leq \varepsilon n$ for $i\in [r]$, \item $|V_i^{j}|=L\leq\varepsilon n$ for $i\in [r]$ and $j\in [\ell]$, \item $\deg_{G'}(v,V_{i'})>\deg_G(v,V_{i'})-(d+\varepsilon)n$ for all $v\in V_i$, $i\neq i'$, and \item all pairs $(V_i^{j},V_{i'}^{j'})$, $i,i'\in [r]$, $i\neq i'$, $j,j'\in [\ell]$, are $\varepsilon$-regular in $G'$, each with density either $0$ or exceeding $d$. \label{it:P5} \end{enumerate} \end{theo} The final step in the proof of Theorem~\ref{theo:combined} is to apply the Blow-up Lemma of Koml\'os, S\'ark\"ozy, and Szemer\'edi~\cite{KSSz97} in the following form. \begin{theo}[Blow-up Lemma]\label{theo:blow-up} For any integers $r$ and $\Delta$ and any $\delta > 0$ there exist $\varepsilon = \varepsilon(r, \Delta, \delta) > 0$ and $N_0 = N_0(r, \Delta, \delta)$ such that the following holds for any integer $N \geq N_0$ and any graph $R$ on vertex set~$[r]$. 
Let $V_1, \dots, V_r$ be pairwise-disjoint sets each of size $N$, and set $V = \bigcup_{i \in [r]} V_i$. Let $K$ be the graph on vertex set $V$ in which $(V_i, V_j)$ is a~complete bipartite graph for $ij \in E(R)$ (and which has no other edges than these). Also let $G$ be any graph on $V$ in which $(V_i, V_j)$ is $(\varepsilon, \delta)$-super-regular for any $ij \in E(R)$. Then for any graph $H$ with maximum degree $\Delta(H) \leq \Delta$, if $H$ can be embedded in $K$, then $H$ can also be embedded in $G$. \end{theo} This essentially states that we may treat super-regular pairs as being complete for the sake of embedding bounded degree spanning subgraphs (such as a perfect $H$-tiling). \section{Proof of Theorem~\ref{theo:combined}} \label{sec:proof} We now give the full proof of Theorem~\ref{theo:combined}. Recall that $r=\chi(H) \geq 3$ and $h = |V(H)|$, and set $\sigma := \sigma(H)$, $a := \sigma h$ and $b := (1-\sigma)h/(r-1)$. Then $a, b$ and $\sigma$ are positive rational numbers with $h = a + (r-1)b$ and $\sigma \leq \tfrac{1}{r}$ (see~\eqref{eq:defsigma}). If $\sigma = \frac{1}{r}$ then $\chi_{\rm cr}(H) = r$ by definition of $\chi_{\rm cr}$ (see~\eqref{eq:defchicr}), so Theorem~\ref{theo:alonyuster} gives the theorem in this case. We may therefore assume that $\sigma < \tfrac{1}{r}$. Since both $\sigma$ and $\tfrac{1}{r}$ can be written as rationals with denominator $rh$ it follows that $\sigma \leq \tfrac{1}{r}-\tfrac{1}{rh}$, so $b-a \geq \tfrac{1}{r-1}$. Without loss of generality we assume that $\alpha$ is rational and that $\alpha \leq \tfrac{1}{rh}$. 
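The inequality $b - a \geq \tfrac{1}{r-1}$ used above follows by a one-line computation from the definitions $a = \sigma h$ and $b = (1-\sigma)h/(r-1)$ together with the bound $\sigma \leq \tfrac{1}{r} - \tfrac{1}{rh}$:

```latex
\[
b - a \;=\; \frac{(1-\sigma)h}{r-1} - \sigma h \;=\; \frac{h - r\sigma h}{r-1}
\;\geq\; \frac{h - r\left(\frac{1}{r}-\frac{1}{rh}\right)h}{r-1}
\;=\; \frac{h-(h-1)}{r-1} \;=\; \frac{1}{r-1}.
\]
```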
Introduce new constants $n_0, C, D, M, \varepsilon, \varepsilon', \beta, d$ with $$ \tfrac{1}{n_0} \ll \tfrac{1}{C} \ll \tfrac{1}{D} \ll \tfrac{1}{M} \ll \varepsilon \ll \varepsilon' \ll \beta \ll d \ll \alpha, \tfrac{1}{r}, \tfrac{1}{h}.$$ Let $G$ be an $r$-partite graph whose vertex classes $V_1, \dots, V_r$ each have size $n \geq n_0$ and which satisfies $$\delta^*(G) \geq \left(1-\frac{1}{\chi_{\rm cr}(H)} + \alpha \right) n \stackrel{\eqref{eq:defchicr}}{=} \left(1 - \frac{1-\sigma}{r-1} + \alpha \right) n= \left(1-\frac{b}{h}\right) n + \alpha n.$$ We shall construct an $H$-tiling in $G$ covering all but at most $C$ vertices of $G$, or, if $\gcd(H) = 1$ and $h$ divides $rn$, a perfect $H$-tiling in $G$. Define $a' := a + \frac{\alpha h}{2}$ and $b' := b - \frac{\alpha h}{2(r-1)}$, so $a'$ and $b'$ are rational numbers with $0 < a' \leq b'$ (the latter inequality follows from our assumption that $\alpha \leq \frac{1}{rh}$) and $ a' + (r-1)b' = a + (r-1)b = h$. Note also that $1 -\frac{b'}{h} \leq 1 - \frac{b}{h} + \frac{\alpha}{2(r-1)} \leq 1 - \frac{b}{h} + \frac{\alpha}{2}$, so \begin{equation} \label{eq:mindegG} \delta^*(G) \geq \left(1 - \frac{b'}{h}\right) n + \frac{\alpha n}{2}. \end{equation} \medskip \noindent \emph{Step 1: Apply the Regularity Lemma and define the reduced graph $R$.} We apply the Regularity Lemma (Theorem~\ref{thm:SzemRegLem}) to $G$, with $r$, $\varepsilon$, $d$ and $M$ playing the same role there as here, to obtain integers $\ell$ and $L$, a spanning subgraph $G'$ of $G$ and a partition of each $V_i$ into clusters $V_i^0, V_i^1, \dots, V_i^\ell$ which satisfy properties (P1)-(P5). In particular, (P3) tells us that for any $i \in [r]$ and $j \in [\ell]$ the cluster $V_i^j$ has size $L$, so $(1-\varepsilon)n/\ell \leq L \leq n/\ell$. 
We define the \emph{reduced graph} $R$ of $G'$ in a standard way: the vertices of $R$ are the clusters $V_i^j$ for $i \in [r]$ and $j \in [\ell]$, and the edges of $R$ are those $V_{i}^{j}V_{i'}^{j'}$ for which there is at least one edge of $G'$ between $V_i^j$ and $V_{i'}^{j'}$ (note that (P5) then implies that the pair $(V_i^j, V_{i'}^{j'})$ is $\varepsilon$-regular with density at least $d$). So $R$ is $r$-partite with vertex classes of size $\ell$. Moreover, for any $i \neq j$, any vertex $v \in V_i$ has at least $\delta^*(G) - (d+\varepsilon)n$ neighbours in $V_j$ by (P4). By (P2) at most $\varepsilon n$ of these neighbours are in $V_j^0$, so $v$ has neighbours in at least $\tfrac{1}{L} \cdot (\delta^*(G) - (d+2\varepsilon)n)$ of the clusters $V_j^1, \dots, V_j^\ell$. Since $L \leq n/\ell$, it follows from (\ref{eq:mindegG}) that \begin{align} \label{eq:mindegR} \delta^*(R) & \geq \frac{\ell}{n}\left(\left(1 - \frac{b'}{h} \right)n + \frac{\alpha n}{2} - (d+2\varepsilon)n \right) \geq \left(1 - \frac{b'}{h}\right)\ell. \end{align} \medskip \noindent\emph{Step 2: Obtain a perfect fractional $(a', b')$-weighted $K_r$-tiling ${\mathcal T}$ in~$R$.} This can be done immediately by applying Lemma~\ref{fractiling} to~$R$ (inequality (\ref{eq:mindegR}) tells us that the minimum degree condition is satisfied). Let ${\mathcal K}^+$ be the set of $(a', b')$-weighted rooted copies of $K_r$ of non-zero weight in ${\mathcal T}$, that is, ${\mathcal K}^+ = \{K \in {\mathcal K}_{a', b', r}(R) : w(K) > 0\}$. Also observe that $R$ has $r\ell \leq rM$ vertices, so the number of possibilities for the reduced graph $R$ is bounded by a function of $M$. For each possible $R$, Lemma~\ref{fractiling} would give us a perfect fractional $(a', b')$-weighted $K_r$-tiling of $R$ in which all weights are rational (see Remark~\ref{rem:rationalweights}). 
So, as observed in Section 3.2 from~\cite{MS}, there is a common denominator, bounded by a function of $M$, of all weights used in our perfect fractional $(a', b')$-weighted $K_r$-tilings for each possible reduced graph $R$. Since $1/D \ll 1/M$, we may assume that $D!$ is a multiple of this common denominator, and therefore that $w(K)D!$ is an integer for any $K \in {\mathcal K}^+$. In particular, $w(K) \geq 1/D!$ for every $K \in {\mathcal K}^+$. \medskip \noindent\emph{Step 3: Partition the clusters $V_i^j$ into subclusters according to the fractional tiling ${\mathcal T}$.} For each $i \in [r]$ and $j \in [\ell]$ let ${\mathcal K}^+_{i, j}$ consist of all members of ${\mathcal K}^+$ which contain $V^j_i$. So each member of ${\mathcal K}^+$ appears in precisely $r$ of the sets ${\mathcal K}^+_{i, j}$. Also, since ${\mathcal T}$ is perfect, for any cluster $V^j_i$ we have $$\sum_{K \in {\mathcal K}^+} w(K) {\bf 1}_R(\{V^j_i\}) \cdot {\bf 1}_{a', b', R}(K) = \sum_{K \in {\mathcal K}^+_{i,j}} w(K) {\bf 1}_R(\{V^j_i\}) \cdot {\bf 1}_{a', b', R}(K) = 1 . $$ Recall that ${\bf 1}_{a', b', R}(K)$ is the vector whose entries are $a'$ at the coordinate corresponding to the root of $K$, $b'$ at the other vertices of $K$, and $0$ otherwise. So we may partition the cluster $V^j_i$ into parts ${V^j_i}(K)$ for $K \in {\mathcal K}_{i,j}^+$ such that $|{V^j_i}(K)| = w(K)L {\bf 1}_R(\{V^j_i\}) \cdot {\bf 1}_{a', b', R}(K)$; we refer to these parts as \emph{subclusters}. Having partitioned each cluster in this manner, for each $K \in {\mathcal K}^+$ we collect together the corresponding $r$ parts $V^{j}_{i}(K)$. One of these parts (taken from the root of $K$) has size $a'w(K) L$, and we relabel this subcluster as $U_1^K$; the remaining $r-1$ parts have size $b' w(K) L$, and we relabel these subclusters as $U_2^K, \dots, U_r^K$. For each $K \in {\mathcal K}^+$ define $m_1^K := a'w(K)L$ and $m_i^K := b'w(K)L$ for $2 \leq i \leq r$, so that each subcluster $U_i^K$ has size $m_i^K$.
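As a consistency check on this bookkeeping: summing the subcluster sizes over the $r$ parts associated to a fixed $K \in {\mathcal K}^+$, and over all $K \in {\mathcal K}^+_{i,j}$ within a fixed cluster, recovers exactly the quantities one expects, namely

```latex
\[
\sum_{i \in [r]} m_i^K \;=\; \bigl(a' + (r-1)b'\bigr)\,w(K)\,L \;=\; h\,w(K)\,L,
\qquad
\sum_{K \in {\mathcal K}^+_{i,j}} \bigl|V_i^j(K)\bigr| \;=\; L,
\]
```

where the second identity is precisely the displayed equation above multiplied through by $L$, confirming that the parts $V_i^j(K)$ do partition the cluster $V_i^j$.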
We refer to the cluster from which a subcluster is taken as the \emph{parent cluster} of that subcluster. Moreover, we choose the partition into subclusters in such a way that whenever $U_i^K$ and $U_{j}^{K'}$ are subclusters whose parent clusters form an edge of $R$, the pair $(U_i^K, U_{j}^{K'})$ is $\varepsilon'$-regular in $G'$ with density $d(U_i^K, U_{j}^{K'}) \geq d/2$. This is possible since each subcluster has size at least $a' w(K) L \geq a'L/D!$. Indeed, the Random Slicing Lemma (see e.g.~\cite[Lemma 10]{MS}) states that the described event holds with high probability if we choose the partition of each cluster uniformly at random. For each $K \in {\mathcal K}^+$ let $G^K$ denote the subgraph of $G'$ induced by $U^K := \bigcup_{i \in [r]} U_i^K$. So $G^K$ is naturally $r$-partite with vertex classes $U_i^K$ for $i \in [r]$. Furthermore, the graphs $G^K$ for $K \in {\mathcal K}^+$ are vertex-disjoint and collectively cover all vertices of $G$ other than those in the sets $V_i^0$ for $i \in [r]$. Over the next three steps of the proof we will remove or delete some vertices from each subcluster $U_i^K$; whenever we do so we continue to write $U_i^K$, $U^K$ and $G^K$ for the restriction of these sets/graphs to the vertices which were not removed or deleted. Note, however, that we do not edit the quantities $m_i^K$, $L$ and $n$ as vertices are removed or deleted. \medskip \noindent\emph{Step 4: Remove some vertices to make each $G^K$ super-regular.} For each $K \in {\mathcal K}^+$ and $i \in [r]$ we say that a vertex $v \in U_i^K$ is \emph{bad} if $|N_{G'}(v) \cap U_j^K| < (d/2-\varepsilon')m_j^K$ for some $j \neq i$. By our choice of partition of clusters into subclusters, $(U_i^K,U_j^K)$ is an $\varepsilon'$-regular pair in $G'$ with $d(U_i^K,U_j^K) \geq d/2$ for each $j \neq i$, so there are at most $(r-1)\varepsilon'm_i^K$ bad vertices in $U_i^K$. We now remove all bad vertices from $U_i^K$ for each $K \in {\mathcal K}^+$ and $i \in [r]$. 
Let the set $X$ consist of all removed vertices and also the vertices of $V_i^0$ for each $i \in [r]$, so $|X| \leq r(r-1) \varepsilon' n + r \varepsilon n \leq r^2\varepsilon' n$, and the set $X$ and subclusters $U_i^K$ partition $V(G)$. Moreover, since all bad vertices were removed, for each $K \in {\mathcal K}^+$ and each $i \neq j$ the pair $(U_i^K, U_j^K)$ is now $(2\varepsilon', d/3)$-super-regular. At this point we note that over the next two steps of the proof at most $2\beta m_i^K + C$ vertices will be deleted from each subcluster $U_i^K$, in addition to the at most $(r-1)\varepsilon' m_i^K$ vertices removed during the current step. Since $C \leq \frac{\varepsilon}{h} \cdot \frac{(1-\varepsilon)n}{\ell} \cdot \frac{1}{D!} \leq \varepsilon a'Lw(K) \leq \varepsilon m_i^K$, this means that in total at most $3\beta m_i^K \leq \frac{d}{12} m_i^K$ vertices are removed or deleted from $U_i^K$, and so even after some or all of these deletions it will remain the case that \begin{enumerate} \item[(S1)] If $W_1$ and $W_2$ are subclusters whose parent clusters form an edge of $R$, then $(W_1, W_2)$ is a $2\varepsilon'$-regular pair in $G'$ with density at least $d/3$. \item[(S2)] For any $K \in {\mathcal K}^+$ and $i \neq j$ the pair $(U_i^K,U_j^K)$ is $(3\varepsilon', d/4)$-super-regular in $G'$. \end{enumerate} \medskip \noindent\emph{Step 5: Delete copies of $H$ which cover all vertices of $X$.} We now delete at most $|X|+r$ vertex-disjoint copies of $H$ from $G$ so that every vertex of $X$ is deleted, at most $2 \beta m_i^K$ vertices are deleted from any subcluster $U_i^K$, and also, if $h$ divides $rn$, so that the total number of undeleted vertices is divisible by $rh$. This can be done greedily. Indeed, since in total we choose at most $|X| + r \leq 2r^2 \varepsilon' n$ copies of $H$, at most $2r^2 \varepsilon' n h$ vertices are deleted in total. Prior to any deletion, we `mask' any vertices in any subcluster $U_i^K$ from which at least $\beta m_i^K$ vertices (i.e. at least a $\beta$-proportion of the vertices) have previously been deleted; there are then at most $2r^2 \varepsilon' nh/\beta \leq \beta n$ vertices which lie in masked subclusters. Together with the at most $2r^2 \varepsilon' n h \leq \beta n$ vertices in copies of $H$ already deleted in this step, this means we must choose the next copy of $H$ so as to avoid at most $2\beta n$ vertices of $G$. So the restriction of $G$ to the as-yet-undeleted vertices of $G$ has minimum multipartite degree at least $\frac{r-2}{r-1}n + \frac{\alpha n}{2}$ (recall from \eqref{eq:defchicr} that $\chi_{\rm cr}(H) > \chi(H)-1 = r-1$). We may therefore select any as-yet-undeleted vertex $v$ and apply Proposition~\ref{deletevertex} (with $\alpha/2$ in place of $\alpha$) to obtain a copy of $H$ within this restriction which contains $v$, which we then delete. Whilst $X$ remains non-empty we always choose $v \in X$, which ensures that after at most $|X|$ deletions every vertex of $X$ will have been deleted. If $h$ does not divide $rn$ we are then done, so suppose now that $h$ divides $rn$. We continue as before, now choosing $v$ at each step to be an arbitrary unmasked vertex. Since each time we delete a copy of $H$ we delete $h$ vertices from $G$, the number of undeleted vertices of $G$ is always divisible by $h$, and so we can ensure that the number of undeleted vertices of $G$ is divisible by $rh$ by deleting at most a further $r-1$ copies of $H$, as claimed. Finally, the fact that masked vertices cannot be deleted ensures that at most $\beta m_i^K + h \leq 2 \beta m_i^K$ vertices are deleted from any subcluster $U_i^K$, as required.
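The final inequality $\beta m_i^K + h \leq 2\beta m_i^K$ implicitly uses $h \leq \beta m_i^K$, which holds because every subcluster has size linear in $n$; explicitly, using $w(K) \geq 1/D!$, $L \geq (1-\varepsilon)n/\ell$, $\ell \leq M$ and $a' \geq a \geq 1$,

```latex
\[
m_i^K \;\geq\; a'\,w(K)\,L \;\geq\; \frac{L}{D!}
\;\geq\; \frac{(1-\varepsilon)n}{M \cdot D!} \;\geq\; \frac{h}{\beta},
\]
```

where the last inequality holds since $n \geq n_0$ and $\tfrac{1}{n_0} \ll \tfrac{1}{D}, \tfrac{1}{M}, \beta$.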
\medskip \noindent \emph{Step 6: Delete vertices or copies of $H$ from $G$ to ensure divisibility of subcluster sizes.} For (i) of Theorem~\ref{theo:combined}, in which we only wish to find an $H$-tiling covering all but at most $C$ vertices of $G$, we now simply delete vertices of $G$ individually so that, following these deletions, the size of each subcluster is divisible by $rh\cdot\gcd(H)$ (the deleted vertices will not be covered by the $H$-tiling we construct). Since we have $r\ell \leq rM$ clusters, each of which was partitioned into at most $D!$ subclusters, we can achieve this by deleting at most $rMD! \cdot rh\cdot\gcd(H) \leq C$ vertices. These are the only vertices of $G$ which will not be covered by the $H$-tiling we are constructing. Now consider (ii), in which we assume that $\gcd(H) = 1$ and that $h$ divides $rn$. By Proposition~\ref{defU}, we may choose an integer $s$ for which the complete $r$-partite graph ${\mathcal U}(H)$ with vertex classes $Y_1, Y_2, \dots, Y_r$ of sizes $|Y_1| = srh+1$, $|Y_2| = \dots = |Y_{r-1}| = srh$ and $|Y_r| = srh-1$ admits a perfect $H$-tiling. Moreover, since $s$ depends only on $H$, and $1/M \ll 1/h$, we may assume that $s \leq M$. We now delete vertex-disjoint copies of ${\mathcal U}(H)$ from $G$ so that, following these deletions, the size of each subcluster is divisible by $rh$ (since ${\mathcal U}(H)$ admits a perfect $H$-tiling, deleting a copy of ${\mathcal U}(H)$ from $G$ is equivalent to deleting $sr^2$ vertex-disjoint copies of $H$ from $G$). We do this by iterating the following steps. If every subcluster has size divisible by $rh$, then we are done. Otherwise, since the total number of undeleted vertices is divisible by $rh$, there must be two subclusters $W_1$ and $W_1'$ whose size is not divisible by $rh$. Let $X_1$ and $X_1'$ be the parent clusters of $W_1$ and $W_1'$ respectively. 
Then by (\ref{eq:mindegR}) and Proposition~\ref{prop:meetingKrs} we may choose clusters $X_2, \dots, X_r$ and $X'_2, \dots, X'_{r-1}$ such that $\{X_1, X_2, \dots, X_r\}$ and $\{X'_1, X'_2, \dots, X'_{r-1}, X_r\}$ each induce copies of $K_r$ in $R$. Arbitrarily choose subclusters $W_2, \dots, W_r$ and $W'_2, \dots, W'_{r-1}$ such that $X_i$ and $X_i'$ are the parent clusters of $W_i$ and $W_i'$ respectively. Now let $z \in [rh-1]$ be such that $|W_1| \equiv z$ modulo $rh$. Greedily choose and delete $z$ vertex-disjoint copies of ${\mathcal U}(H)$ in $G$ in which $Y_i$ is embedded to $W_i$ for each $i \in [r]$. Having done so, greedily choose and delete a further $z$ vertex-disjoint copies of ${\mathcal U}(H)$ in $G$ in which $Y_1$ is embedded to $W_r$, $Y_r$ is embedded to $W'_1$, and $Y_i$ is embedded to $W'_i$ for each $2 \leq i \leq r-1$ (we shall explain shortly why it is possible to choose copies of ${\mathcal U}(H)$ in this way). Then, modulo $rh$, the effect of these deletions is to reduce $|W_1|$ by $z$, to increase $|W'_1|$ by $z$, and to leave the size of all other subclusters unchanged. So $W_1$ now has size divisible by $rh$, and so the number of subclusters whose size is not divisible by $rh$ has been reduced by at least $1$. At this point we proceed to the next round of the iteration. Since there are at most $r M D!$ subclusters, this process terminates after at most $r M D!$ iterations, at which point each subcluster has size divisible by $rh$. In each iteration we delete fewer than $2rh$ copies of ${\mathcal U}(H)$, each of which has $sr^2h \leq Mr^2h$ vertices, so in total at most $r M D! \cdot 2rh \cdot Mr^2h \leq C$ vertices are deleted in this step. It remains only to explain why it is always possible to choose copies of ${\mathcal U}(H)$ as desired.
To see this, suppose that we have already deleted copies of ${\mathcal U}(H)$ covering up to $C$ vertices of $G$, and that we next wish to choose and delete a copy of ${\mathcal U}(H)$ within subclusters $W_1, \dots, W_r$ whose parent clusters $X_1, \dots, X_r$ form a copy of $K_r$ in $R$. It follows from (S1) that at this point $(W_i, W_j)$ is a $2\varepsilon'$-regular pair in $G'$ of density at least $d/3$ for each $i \neq j$. The fact that $n \geq n_0$ is sufficiently large implies that each subcluster $W_i$ is large enough to apply the Counting Lemma (see, e.g.,~\cite{RS}), which guarantees that a copy of ${\mathcal U}(H)$ can be found in $G'[\bigcup_{i \in [r]} W_i]$, with vertex classes embedded in the desired manner. Observe that since at most $2 \beta m_i^K$ vertices were deleted from any subcluster $U_i^K$ in Step 5, and at most $C$ vertices were deleted in total in this step, the total number of vertices deleted from any subcluster is at most $2\beta m_i^K + C \leq 3\beta m_i^K$, justifying our assertion at the end of Step 4. \medskip \noindent \emph{Step 7: Blow-up a perfect $H$-tiling in each $G^K$.} Consider any $K \in {\mathcal K}^+$. Recall that prior to any removals or deletions each subcluster $U_i^K$ had size $m_i^K$, where $m_1^K = a'w(K)L$ and $m_i^K = b'w(K)L$ for $2 \leq i \leq r$. Since then we have removed or deleted at most $3\beta m_i^K$ vertices (i.e. at most a $3\beta$-proportion) from each $U_i^K$, so in particular (since $b' \geq a'$) each subcluster $U_i^K$ now has size at least $m_1^K - 3 \beta m_1^K$. So if we let $\hat{G}^K$ denote the complete $r$-partite graph whose vertex classes are the subclusters $U_1^K, \dots, U_r^K$, then we now have \begin{align*} \sigma(\hat{G}^K) & = \frac{\min_{i \in [r]} |U_i^K|}{|U^K|} \geq \frac{m_1^K - 3\beta m_1^K}{\sum_{i=1}^{r} m_i^K} \geq \frac{(1-3\beta) a' w(K)L}{(a' + (r-1)b') w(K)L} \\ &= (1-3\beta) \frac{a'}{h} \geq \frac{a}{h} + \frac{\alpha}{2} - 3\beta \geq \sigma + \frac{\alpha}{3}. 
\end{align*} Also, we now have $|U^K| \geq (1-3\beta) \sum_{i=1}^r m_i^K \geq m_2^K$, so for any $2 \leq i, j \leq r$ we have $||U_i^K| - |U_j^K|| \leq 3 \beta m_2^K \leq 3 \beta |U^K|.$ Since our deletions in Step~6 ensured that $rh\cdot\gcd(H)$ now divides $|U_i^K|$ for each $i \in [r]$, the graph $\hat{G}^K$ satisfies the conditions of Proposition~\ref{completetiling} (with $\alpha/3$, $3 \beta$ and $|U^K|$ in place of $\alpha$, $\beta$ and $n$ respectively, with the smallest subcluster $U_i^K$ in place of $V_1$, and the remaining subclusters in place of $V_2, \dots, V_r$). By this proposition $\hat{G}^K$ contains a perfect $H$-tiling. Since by (S2) each pair $(U_i^K, U_j^K)$ is $(3\varepsilon', d/4)$-super-regular in $G'$, the Blow-up Lemma (Theorem~\ref{theo:blow-up}) implies that there is also a perfect $H$-tiling $M^K$ in $G^K$. Let $M^*$ be the $H$-tiling in $G$ consisting of all the copies of $H$ which were deleted in Steps~5 and~6. Then $M := M^* \cup \bigcup_{K \in {\mathcal K}^+} M^K$ is an $H$-tiling in $G$ which covers all vertices of~$G$ except the at most $C$ vertices deleted individually in Step~6, proving (i). For (ii) recall that in this case no vertices were deleted individually in Step~6, so $M$ is a perfect $H$-tiling in~$G$. \qed \section{Proof of Theorem~\ref{theo:almosttiling}} \label{sec:deduce} Theorem~\ref{theo:almosttiling} is an immediate corollary of Theorem~\ref{theo:combined}. Indeed, fix $0 < \psi \leq 1$, and let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$. Set $k := \chi_{\rm cr}(H)$ and $\alpha := \tfrac{\psi}{2rkh}$, and take $C$ and $n_0$ large enough to apply Theorem~\ref{theo:combined} and such that $C \leq \alpha n_0$. Consider a balanced $r$-partite graph $G$ on $rn$ vertices with $\delta^*(G)\geq\frac{k-1}{k}\, n$ and $n \geq n_0$. We construct an auxiliary graph $G'$ from $G$ by adding the same number $m$ of dummy vertices to each vertex class, where $m := 2k\alpha n \leq n$.
We make these dummy vertices adjacent to every other vertex, except vertices in their own vertex class. As a result, $G'$ is a balanced $r$-partite graph on $rn'$ vertices with $n'=n+m$ and \begin{align*} \delta^*(G')=\delta^*(G)+m & \ge \frac{k-1}{k}n+ m = \frac{k-1}{k}(n+m) + \frac{m}{k} \\ &= \frac{k-1}{k}n' + 2 \alpha n \ge \left(\frac{k-1}{k}+\alpha\right) n'. \end{align*} So we may apply Theorem~\ref{theo:combined}(i) to $G'$ to obtain an $H$-tiling of $G'$ which covers all but at most $C$ vertices of $G'$. There are at most $rm$ copies of $H$ in this tiling that contain a dummy vertex. We remove these copies of $H$ to obtain an $H$-tiling of $G$ that covers all but at most $rm(h-1) + C \leq 2rk\alpha (h-1) n + \alpha n \leq \psi n$ vertices of~$G$. \qed \section{Lower bound constructions} \label{sec:construct} In this section we present simple constructions which show that the minimum degree condition of Theorem~\ref{theo:degalpha} is best-possible up to the error term. These are all variations of the following construction. \begin{construct}\label{construct:general} Let $r$, $n$ and $n_{ij}$ for $i, j \in [r]$ be positive integers with \linebreak $\sum_{j \in [r]} n_{ij} = n$ for each $i \in [r]$. Choose pairwise-disjoint sets $V_i^j$ with $|V_i^j| = n_{ij}$ for each $i,j \in [r]$. Let $G = G((n_{ij}), r)$ be the graph with vertex set $\bigcup_{i, j \in [r]} V_i^j$ and in which the pairs $(V_i^j,V_{i'}^{j'})$ induce complete bipartite graphs whenever both $i\neq i'$ and $j\neq j'$ (and no other edges exist). We refer to the sets $V_i^j$ as \emph{blocks}, to the sets $V_i := \bigcup_{j \in [r]} V_i^j$ as \emph{columns} and to the sets $V^j := \bigcup_{i \in [r]} V_i^j$ as \emph{rows}. So each vertex is adjacent to every other vertex which is not in the same row or column. Moreover we view $G$ as a balanced $r$-partite graph whose vertex classes are the columns $V_i$ for $i \in [r]$, so each vertex class $V_i$ has size $|V_i| = \sum_{j \in [r]} n_{ij} = n$. 
Observe that we then have $\delta^*(G) = n - \max_{i, j \in [r]} n_{ij}$. \end{construct} Consider any graph $H$ on $h$ vertices with $\chi(H) = r \geq 3$. Since each row of $G= G((n_{ij}), r)$ induces an independent set in $G$, each copy $H'$ of $H$ in $G$ inherits an $r$-colouring from $G$ with colour classes $V(H') \cap V^j$ for $j \in [r]$. It follows from this that $H'$ has at least $\sigma(H)h$ vertices in each row $V^j$ of $G$, and that $|V(H') \cap V^j| - |V(H') \cap V^{j'}|$ is divisible by $\gcd(H)$ for any $j, j' \in [r]$. Suppose first that $\gcd(H) > 1$, and fix any integer $n$. If $r$ divides $n$ then set $n_{11} = n/r + 1$, $n_{13} = n/r-1$ and $n_{ij} = n/r$ for each other pair $i, j \in [r]$, and note that we then have $|V^1| - |V^2| = 1$. Otherwise, set each $n_{ij}$ to be equal to either $\lfloor n/r \rfloor$ or $\lceil n/r \rceil$ in such a way that $\sum_{j \in [r]} n_{ij} = n$ for each $i \in [r]$ but $\sum_{i \in [r]} n_{i1} - \sum_{i \in [r]} n_{i2} = 1$; the latter implies that $|V^1| - |V^2| = 1$. In either case we have $\delta^*(G) \geq n - \frac{n}{r} - 1 = (1 - \tfrac{1}{\chi^*(H)})n-1$ but $G$ has no perfect $H$-tiling. To see this, let ${\mathcal T}$ be an $H$-tiling in $G$. We observed above that $\gcd(H)$ divides $|V(H') \cap V^1| - |V(H') \cap V^2|$ for any $H' \in {\mathcal T}$. It follows that $\gcd(H)$ also divides $|V({\mathcal T}) \cap V^1| - |V({\mathcal T}) \cap V^2|$; since $|V^1| - |V^2| = 1$ and $\gcd(H) > 1$ this implies that ${\mathcal T}$ is not perfect. This shows that Theorem~\ref{theo:degalpha} is best-possible up to the $\alpha n$ error term for any $H$ with $\gcd(H) > 1$ and any $n$. Now suppose instead that $\gcd(H) = 1$, and fix any integer $n$. For each $i \in [r]$ set $n_{i1} := \lceil \sigma(H) n \rceil-1$ and take $n_{i2}, \dots, n_{ir}$ to be as equal as possible with $\sum_{j = 1}^r n_{ij} = n$.
Then we have \begin{align*} \delta^*(G) &=n-\left\lceil \frac{n-\lceil \sigma(H) n\rceil+1}{r-1}\right\rceil \geq n - \frac{n-\sigma(H) n}{r-1} - 1 \\ & = \left(1 - \frac{1-\sigma(H)}{r-1} \right) n - 1 = \left(1-\frac{1}{\chi^*(H)}\right)n - 1. \end{align*} However, we observed above that any copy of $H$ in $G$ has at least $\sigma h$ vertices in the row $V^1$, so any $H$-tiling in $G$ has size at most $$ \frac{|V^1|}{\sigma(H) h} = \frac{r (\lceil \sigma(H) n \rceil-1)}{\sigma(H) h} < \frac{rn}{h},$$ so it is not perfect. This shows that Theorem~\ref{theo:degalpha} is best-possible up to the $\alpha n$ error term for any $H$ with $\gcd(H) = 1$ and any $n$. \section{Concluding remarks}~ \medskip \noindent {\bf Comparison to non-partite results:} We note that Theorem~\ref{theo:degalpha} is strictly stronger than the analogous result in the non-partite setting. Indeed, let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$, and let $G$ be a graph on $rn$ vertices with $\delta(G) \geq (1-1/\chi^*(H) + \alpha) rn$, where $n$ is large and $h$ divides $rn$. We may arbitrarily delete at most $r$ copies of $H$ from $G$ so that the number of remaining vertices of $G$ is divisible by $r$, following which we partition the remaining vertices of $G$ into $r$ vertex classes of equal size uniformly at random. A standard probabilistic argument shows that with high probability we then have $\delta^*(G) \geq (1-1/\chi^*(H))n + \alpha n/2$, whereupon we may apply Theorem~\ref{theo:degalpha} to obtain a perfect $H$-tiling in $G$. On the other hand, the (non-partite) minimum degree of $G$ as in Theorem~\ref{theo:degalpha} may be as low as $(r-1)(1-1/\chi^*(H)) n < (1-1/\chi^*(H)) rn$, which is too small for us to apply the analogous non-partite result. Similar comments apply to Theorem~\ref{theo:almosttiling}.
\medskip \noindent {\bf The case where $\chi(H) \neq r$:} In a similar manner, one can extend Theorem~\ref{theo:degalpha} to the case where $G$ has more vertex classes than $H$. Indeed, let $H$ be a graph on $h$ vertices with $\chi(H) = r \geq 3$, and let $G$ be a balanced $k$-partite graph on $kn$ vertices with $\delta^*(G) \geq (1-1/\chi^*(H) + \alpha) n$, where $n$ is large and divisible by $k$. If $k < r$ then $G$ does not contain even a single copy of $H$, whilst the case $k=r$ is dealt with by Theorem~\ref{theo:degalpha}. If instead $k > r$, then we first delete a small number of copies of $H$ in $G$ similarly as above, which allows us to assume that $n$ is divisible by $r$. We then partition each vertex class $V_i$ of $G$ uniformly at random into $r$ parts $V_i^1, \dots, V_i^r$ each of size $n/r$. We then arrange these parts into $k$ vertex-disjoint balanced $r$-partite graphs $G_1, \dots, G_k$, where $V(G_\ell) = \bigcup_{j \in [r]} V_{\ell+j}^{j}$ (with addition taken modulo $k$). So each $G_\ell$ has $n$ vertices in total. Again a standard probabilistic argument shows that with high probability each $G_\ell$ has $\delta^*(G_\ell) \geq (1-1/\chi^*(H) + \alpha/2) \tfrac{n}{r}$. Theorem~\ref{theo:degalpha} then yields a perfect $H$-tiling in each $G_\ell$, and together these tilings form a perfect $H$-tiling in $G$.
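The cyclic assignment $V(G_\ell) = \bigcup_{j \in [r]} V_{\ell+j}^{j}$ can be sanity-checked in a few lines. The sketch below (our own helper name, with 0-based indices) confirms that the $k$ groups are vertex-disjoint, cover every part, and each draw their $r$ parts from $r$ distinct vertex classes:

```python
def cyclic_groups(k, r):
    # part (class (ell + j) mod k, row j) is assigned to group G_ell
    return [[((ell + j) % k, j) for j in range(r)] for ell in range(k)]

k, r = 5, 3
gs = cyclic_groups(k, r)
used = sorted(p for g in gs for p in g)
# every part V_i^j with i in [k], j in [r] is used exactly once,
# so the groups are vertex-disjoint and cover all of G
assert used == sorted((i, j) for i in range(k) for j in range(r))
# each G_ell uses r parts from r distinct classes, so it is r-partite
assert all(len({i for i, _ in g}) == r for g in gs)
```

For fixed $j$, the class index $\ell + j \pmod k$ runs over all $k$ classes as $\ell$ varies, which is why the parts partition exactly.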
For cats weighing 3-10 lbs (1.4-4.5 kg), give 1/4 teaspoon (1.1 g) daily. For cats weighing over 10 lbs (4.5 kg), give 1/2 teaspoon (2.2 g) daily. For use in cats only. Recommended to promote optimal urinary tract, bladder and kidney health. For animal use only. Keep out of reach of children and animals. In case of accidental overdose, contact a health care professional immediately.
Beer is Back at Busch Gardens Tampa Bay! #BEERISBACK AT BUSCH GARDENS TAMPA BAY TAMPA, Fla. (April 30, 2018) – Busch Gardens Tampa Bay is going back to its roots with the return of free beer all summer long, and the introduction of a new Busch Gardens Brew Club program. From complimentary seasonal offerings to brand new featured programs, guests age 21 and older can toast to new adventures each time they visit the park. NEW! Beer is Back! Starting May 1 through Aug. 5, guests can get a taste of the park's beer garden history when they visit the former Hospitality House, now called the Garden Gate Café, to receive two complimentary beers during every visit to Busch Gardens Tampa Bay. Every two weeks, new featured beer brands will be highlighted, with Corona kicking off the promotion just in time to celebrate Cinco de Mayo. Guest favorites will be on rotation throughout the summer, including Bud Light, Founders All Day IPA, M.I.A 305, Miller Lite, Shock Top and Yuengling. Guests can return to the park each day through August 5 to enjoy two complimentary 7-ounce beers, included in their park admission. NEW! Tapping into the growing Tampa beer culture, guests can sign up to join the Busch Gardens Brew Club, an all-new beer stein program featuring memberships that include a reserved stein on display at the park and $5 refills from more than 20 on-tap brews all year long! NEW! Beer appreciation reaches new heights in August with the Busch Gardens all-new Bier Fest event, bringing more than 200 years of Oktoberfest tradition and Tampa Bay's hottest beer culture to life each weekend from Aug. 25 to Sept. 16. The festival selection features 100 brews from local and global breweries along with traditional German cuisine and festive music. Guests can join in the celebration of Bier Fest with their regular theme park admission. Florida residents can experience the best of Busch Gardens all year with an annual pass starting as low as $14 per month. 
To learn more and purchase tickets, guests can visit BuschGardensTampaBay.com. Be the first to know about new events, special deals and future announcements by following the park's blog at BuschGardensTampaBlog.com, or join the conversation using #BeerIsBack and #BuschGardens on Facebook, Twitter, Instagram and Snapchat. About SeaWorld Parks & Entertainment SeaWorld Parks & Entertainment™ is a leading theme park and entertainment company providing experiences that matter and inspiring guests to protect animals and the wild wonders of our world. The company is one of the world's foremost zoological organizations and a global leader in animal welfare, training, husbandry and veterinary care. The company also rescues and rehabilitates marine and terrestrial animals that are ill, injured, orphaned or abandoned, with the goal of returning them to the wild. The SeaWorld® rescue team has helped more than 31,000 animals in need over the last 50 years. The company owns or licenses a portfolio of recognized brands including SeaWorld, Busch Gardens® and Sea Rescue®. Over its more than 50-year history, the company has built a diversified portfolio of 12 destination and regional theme parks that are grouped in key markets across the United States. The company's theme parks feature a diverse array of rides, shows and other attractions with broad demographic appeal, which deliver memorable experiences and a strong value proposition for its guests. SeaWorld Parks & Entertainment is a wholly owned subsidiary of SeaWorld Entertainment, Inc., a publicly traded company. Visit www.seaworldentertainment.com for more. Trip Report: First Visit to Busch Gardens Tampa and SeaWorld Orlando Ocean Explorer Media Day at SeaWorld San Diego SeaWorld Orlando announces Infinity Falls for Summer 2018 SeaWorld San Diego announces Electric Eel for 2018 Coaster Crusade at Busch Gardens Tampa and SeaWorld Orlando
\section{INTRODUCTION} \label{sec:introduction} The characterisation of diffuse emission from the Galaxy is important for the detailed study of fluctuations in the cosmic microwave background (CMB). Knowledge of the spectral shape and spatial morphology is important for the accurate subtraction of the foreground emission. This in turn provides a more precise view of the CMB anisotropies thus providing the most reliable cosmological information. The frequency range of greatest interest for CMB observations is $\sim 30-200$~GHz, which is close to the minimum of foreground emission at $\sim 70$~GHz (Bennett et al.~2003b). The diffuse Galactic foregrounds include synchrotron, free-free and vibrational (thermal) dust emissions. However there is considerable evidence, from deep CMB data at high Galactic latitudes, for an additional component that emits in the frequency range $\sim 10-60$~GHz (Kogut et al. 1996; Leitch et al.~1997; de Oliveira-Costa et al.~2002,2004; Banday et al.~2003; Finkbeiner, Langston \& Minter, 2004; Lagache 2003; Davies et al.~2006; Fern{\'a}ndez-Cerezo et al.~2006). The data show a strong correlation with FIR ($\sim 100~\mu$m) emission that suggests a connection to dust grains. The most popular candidate for this anomalous component is rapidly spinning small dust grains (Draine \& Lazarian 1998a,b), referred to as spinning dust. Models of spinning dust emission predict a strongly peaked spectrum at $\sim 20-40$~GHz, which appears to be broadly consistent with the data. Furthermore, targeted observations of Galactic sources have shown excess emission in this frequency range (Finkbeiner et al. 2002,2004; Casassus et al.~2004,2006; Watson et al. 2005; Scaife et al.~2007). At radio/microwave frequencies ($\ifmmode\stackrel{<}{_{\sim}}\else$\stackrel{<}{_{\sim}}$\fi 100$~GHz), {\rm H}{\sc ii} regions are dominated by free-free (thermal bremsstrahlung) emission from ionised gas with electron temperatures, $T_{e}\approx 8000$~K. 
The spectrum of free-free radiation is well-understood; in the optically thin regime ($\ifmmode\stackrel{>}{_{\sim}}\else$\stackrel{>}{_{\sim}}$\fi 1~$GHz), it has a well-defined flux density spectral index\footnote{Throughout the paper, the flux density spectral index, $\alpha$, is defined as $S \propto \nu^{\alpha}$.} $\alpha\approx-0.1$ that does not vary greatly with frequency or $T_e$ \cite{Dickinson03}. However, emission from spinning dust could be inherent since ion collisions with grains are expected to be the largest contributory factor in maintaining large rotational velocities required to produce significant rotational dust emission (Draine \& Lazarian 1998b). Indeed, several detections are associated with {\rm H}{\sc ii} regions (Watson et al.~2005) or PNe (Casassus et al. 2004,2007). One previous tentative detection of spinning dust from the {\rm H}{\sc ii} region LPH96[201.663+1.643], which appeared to show a rising spectrum from $5-10$~GHz, suggestive of spinning dust (Finkbeiner et al.~2002; Finkbeiner~2004), was shown to be a spurious result with only little room for spinning dust (Dickinson et al.~2006; Scaife et al.~2007). In this paper we have imaged several southern {\rm H}{\sc ii} regions with the Cosmic Background Imager (CBI) to look for excess emission at 31~GHz, which could be attributed to spinning dust. These are among the brightest {\rm H}{\sc ii} regions in the sky and are known to be dominated by free-free emission at radio frequencies up to $\sim 100$~GHz. Furthermore, they contain copious amounts of dust within the same volume, as traced by correlated FIR ($\sim 100~\mu$m) emission. Using data from the literature, we model the spectrum of free-free radiation and compare the CBI data to measure or place limits on possible excess emission at 31~GHz. We also use CBI polarisation data to measure and place upper limits on the polarisation of free-free emission. 
\begin{table*} \caption{Summary of CBI observations.$^{*}$Integration times take into account data lost due to bad weather and data editing.} \begin{tabular}{lcccl} \hline Object/ &Centre R.A. &Centre Dec. &Integration$^{*}$ (hr)/ &Notes \\ Region &(J2000) &(J2000) &noise level (Jy)& \\ \hline $G267.9-1.1$ &$08^{h}58^{m}55^{s}$&$-47\deg30^{m}58^{s}$ &1.25/ &Contains $G267.9-1.1$ (RCW38) and fainter extended components \\ (RCW38) & & &0.17 &to the north ($G267.8-0.9$) and to the east ($G268.1-1.0$). \\ $G284.3-0.3$ &$10^{h}24^{m}20^{s}$&$-57\deg44^{m}57^{s}$ &0.43/ &Low level extension to the east. \\ (RCW49) & & &0.14 & \\ $G287.4-0.6$ &$10^{h}43^{m}52^{s}$&$-59\deg34^{m}33^{s}$ &2.28/ &Carina nebula (RCW53,NGC3372) covers a region $\sim 2\times2$ degrees. \\ (Carina nebula) & & &0.15 &2 bright spots: Car-I \& Car-II. \\ $G291.6-0.5$ &$11^{h}15^{m}00^{s}$&$-61\deg16^{m}00^{s}$ &1.15/ &Two distinct {\rm H}{\sc ii} regions: $G291.6-0.5$ (NGC3603) \& \\ (RCW57) & & &0.18 &$G291.3-0.7$ (NGC3576,RCW57) \\ \hline \end{tabular} \label{tab:obs} \end{table*} \section{Data} \subsection{The CBI interferometer} The CBI is a 13-element interferometer, located on the high altitude site (5080-m elevation) of Chajnantor Observatory, Chile. The 0.9-m diameter Cassegrain antennas are co-mounted on a 6-m platform which tracks the sky on an Alt-Az mount but at a constant parallactic angle through rotation of the ``deck'' platform. This gives a static $u,v$-coverage for a given configuration and given ``deck'' angle. Using the best low-noise amplifiers provides a typical system temperature $T_{\rm sys} \sim 30$~K in a 10~GHz band centred at 31~GHz. The bandwidth is split into 10 separate 1-GHz-wide bands from 26 to 36 GHz which can be used to provide spectral information. Each antenna can measure a single left (L) or right (R) circular polarisation mode therefore allowing observations in total intensity (LL or RR) or polarisation (LR or RL). 
Rotation of the deck allows the ``filling'' of the $u,v$ plane to improve beam quality in Stokes $I$ (total intensity) and combinations of LR or RL thus allowing mapping of Stokes $Q$ and $U$. We use the CBI in a compact configuration that gives optimal sensitivity to extended objects and many redundant baselines. Baseline lengths range from 1-m to 4-m, which corresponds to angular scales $\sim 6$~arcmin to $\sim 24$ arcmin and a primary beam FWHM of $45.2$~arcmin at 31~GHz. \subsection{Observations} Observations of several of the brightest southern {\rm H}{\sc ii} regions were made with the CBI in a compact configuration during the period April - July 2005. Here we present data for four regions: $G267.9-1.1$ (RCW38), $G284.3-0.3$ (RCW49), $G287.4-0.6$ (Carina nebula) and $G291.6-0.5/G291.3-0.7$ (RCW57). These were chosen for their brightness, well-studied radio spectra and strong FIR emission that is aligned with the radio emission. Each region was observed for $\sim 2-3$ hours, at a range of deck angles to improve $u,v$ coverage. Only short observations were required since the images are limited by dynamic range due to beam deconvolution residuals and calibration errors, that will dominate over the thermal noise for the brightest sources. Longer integrations would not significantly improve the signal-to-noise ratio, except for allowing more deck rotations. A summary of the observations is given in Table~\ref{tab:obs}. The noise level was estimated from areas of the map well outside the primary beam. \subsection{Data reduction} The data were reduced and calibrated using in-house software, {\sc cbical}, originally written for CMB data analysis (see Readhead et al.~2004a,b and references therein for more details). The majority of data editing and flagging was done automatically, such as removing bad antennas, baselines or channels that were noisy or not working correctly. 
Flux calibration was achieved by observations of bright calibrator sources (primarily Jupiter) tied to the temperature of Jupiter of $T_{J}=147.3\pm 1.8~K$ at 32~GHz (Readhead et al. 2004a). This, in principle, gives an accuracy of 1.3 per cent. We note that short-term gain variations and elevation corrections have not been applied due to instabilities of the noise calibration diodes in the CBI system. Checks on the data, by comparing flux densities on different nights, showed these to be below the 1~per cent level. Ground spillover is a source of relatively strong contamination at the level of $\sim 0.5$~Jy on the shortest baselines of the CBI. Due to the co-mounted design, filtering based on varying fringe rates of the astronomical signal (e.g. Watson et al.~2003) cannot be used. For CMB measurements, lead/trail fields or other strategies must be employed for subtraction of ground spillover (e.g. Pearson et al.~2003). Fortunately, for very bright objects ($\ifmmode\stackrel{>}{_{\sim}}\else$\stackrel{>}{_{\sim}}$\fi 10$~Jy), the ground signal is essentially negligible. For the southern Galactic plane (longitudes $\sim250-300$), lead/trail fields are difficult to observe since the plane is approximately parallel to lines of constant declination, thus we have not performed the ground subtraction technique. We find that the majority of the data are unaffected by such contamination, which would be highly visible on the shortest (1-m) baselines. \subsection{Imaging and fitting} \label{sec:imaging} Imaging of the visibility data was carried out using the {\sc difmap} package employing uniform weights to give optimal resolution since we are mainly limited by dynamic range (typically 500:1 for the CBI) rather than thermal noise. The dirty images were deconvolved using the CLEAN algorithm \cite{Hogbom74}. 
Primary beam corrections were applied to the CLEAN components directly so that each of the frequency channels was corrected separately with a Gaussian function of FWHM $45.2 \times (31~{\rm GHz}/\nu)$ arcmin, which is a good approximation to the measured CBI primary beam (Pearson et al.~2003). The incomplete $u,v$ coverage of interferometric data can potentially lead to loss of flux for sources that are extended relative to the synthesised beam; $\sim 6$~arcmin for these data. The exact amount of flux loss depends on the structure of the source and the $u,v$ coverage. To estimate the flux loss for CBI maps presented in this paper, we simulated observations based on $100~\mu$m {\it IRIS}\footnote{Throughout the paper we use the recently re-processed {\it IRAS} 100~$\mu$m map of Miville-Desch{\^e}nes \& Lagache~(2005), ``{\it IRIS}'', which retains optimal resolution (4.3~arcmin) while removing the majority of artifacts such as striping.} maps using the same CBI real visibilities to define the $u,v$-coverage for each region. This is particularly important for complex extended structures. However, we found that the peak and integrated flux densities for the individual sources studied were within 5 per cent of the real values\footnote{All fits were made using the {\sc aips} task {\sc jmfit}, which provides integrated flux densities and errors for the fitted parameters.}. For example, $G267.9-1.1$, which is located in a region of extended emission, was reduced by just 3 per cent. On the other hand, the integrated flux density within a 30 arcmin radius was reduced by 58 per cent, indicating that the extended emission is more strongly affected. This shows that for sources with angular diameters comparable to the synthesised beam ($\sim 6$~arcmin or smaller), the fitted flux densities are not significantly affected by the missing spatial frequencies and are correct to better than a few per cent. 
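The primary-beam correction described above amounts to dividing an apparent flux density by a Gaussian response. A minimal sketch (our own function names; the 45.2 arcmin figure is the 31~GHz value quoted in the text) illustrates why an off-axis source some 30 arcmin from the pointing centre is strongly attenuated:

```python
import math

FWHM31 = 45.2  # CBI primary beam FWHM at 31 GHz, in arcmin

def beam_response(offset_arcmin, fwhm=FWHM31):
    # Gaussian primary beam: response = exp(-4 ln 2 (theta/FWHM)^2)
    return math.exp(-4.0 * math.log(2.0) * (offset_arcmin / fwhm) ** 2)

def beam_correct(flux_jy, offset_arcmin, fwhm=FWHM31):
    # primary-beam correction divides the apparent flux by the response
    return flux_jy / beam_response(offset_arcmin, fwhm)

# the response is exactly 0.5 at the half-power radius FWHM/2
assert abs(beam_response(FWHM31 / 2) - 0.5) < 1e-12
# a source ~30 arcmin off-axis is attenuated to under a third
assert beam_response(30.0) < 0.3
```

In practice each 1-GHz channel would use its own frequency-scaled FWHM before the channels are combined.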
We therefore make no flux loss corrections for the relatively compact sources considered here. Such corrections would only increase the CBI 31~GHz flux densities quoted in Table~\ref{tab:flux}. To estimate a possible 31~GHz excess, data at lower frequencies were used to make a power-law fit of the form $S=S_{31}(\nu_{\rm GHz}/31)^{\alpha}$. We used data that were believed to be reliable and did not include very low frequencies ($\ifmmode\stackrel{<}{_{\sim}}\else$\stackrel{<}{_{\sim}}$\fi 1$~GHz) where the free-free emission becomes optically thick and the spectrum no longer obeys a simple power-law. The best-fitting power-law was then used to estimate the flux density at 31~GHz, which was compared to the observed value from the CBI. Fitted flux densities include an error term due to the fitting procedure. However, we include an additional error of 2 per cent for instabilities in fitting Gaussians when choosing different box sizes. In general, we found the fits to be relatively stable to changes in box size. Experimentation with sensible choices of boxes showed that the integrated flux densities could vary by $\sim 1-2$ per cent. The errors quoted for the CBI fluxes therefore contain 3 components: an absolute calibration error (1.3 per cent), a variable fitting error, and an additional 2 per cent error. This results in a typical CBI flux density error of $\approx 5$ per cent. \section{Results} \subsection{$G267.9-1.1$ (RCW38)} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig1.ps} \caption{Map of the $G267.9-1.1$ (RCW38) region. CBI 31~GHz contours are overlaid on a greyscale image of the {\it IRAS} $100~\mu$m map, with a square-root stretch. Contours are at $-1~(dashed),1,2,4,8,16,32$ and 64 per cent of the peak intensity, $S_{p}=124.4$~Jy~beam$^{-1}$. The uniform-weighted beam is $6.0 \times 6.0$ arcmin. 
\label{fig:g267_cbi_map}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig2.ps} \caption{Spectrum of $G267.9-1.1$ (RCW38). Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $5-150$~GHz and extended to higher frequencies ({\it dashed line}).} \label{fig:g267_spec} \end{center} \end{figure} $G267.9-1.1$, (also known as RCW38, Kes5; RA(J2000)$=08^{h}58^m55^s$, Dec.(J2000)$=-47^d30^m58^s$) is one of the brightest and most dense H{\sc ii}~regions in the southern sky. The CBI 31~GHz primary-beam-corrected map is shown in Fig.~\ref{fig:g267_cbi_map} as contours overlaid on the {\it IRAS} 100~$\mu$m map. The peak 31~GHz flux density is $124.4$~Jy~beam$^{-1}$ with a uniform-weighted synthesised beam FWHM of $6.0$~arcmin. The dominant central component is $G267.9-1.1$. It has a fainter companion, $G267.8-0.9$ to the north and also $G268.1-1.0$ to the east that can be seen as extensions to the much brighter central object\footnote{The name $G268.0-1.0$ is sometimes used in the older literature and usually refers to the integrated emission from the entire region.}. The morphology is very similar to the low frequency (0.4 and 5~GHz) maps of \cite{Shaver70a,Goss70}. The {\it IRAS} 100~$\mu$m emission is similar to that seen in the radio (Fig.~\ref{fig:g267_cbi_map}). A compact source $\approx 13$~arcmin to the north-west of $G267.9-1.1$ is visible in the $100~\mu$m image but is not seen in the CBI or other radio maps. It is likely to be the IR source IRAS08563-4711 and appears to be associated with the reflection nebula BRAN213. 
A relatively strong 100~$\mu$m source is visible $\sim 30$~arcmin to the south-east of the RCW38 region and is also detected in the CBI map, but is significantly attenuated by the 45~arcmin FWHM primary beam. This is identified as the H{\sc ii}~region GAL268.4-00.9 (IRAS09002-4732). Three Gaussian components were found to be a very good fit to the central region of the primary-beam-corrected image. However, previous data in the literature have been simply fitted with a single Gaussian (plus a baseline offset) to each source so we have tried to replicate this Gaussian fitting procedure to make a fairer comparison. This makes a difference of $\sim 5-10$ per cent in the integrated flux densities due to the overlapping Gaussian contributions of several closely spaced sources. The bright compact component ($G267.9-1.1$) contains an integrated flux density, $S_{i}^{31}= 140.3 \pm 5.1$~Jy with a deconvolved size of FWHM $2.6 \times 2.2$~arcmin. The northern component ($G267.8-0.9$) has $S_i\approx 19$~Jy and is $\sim10 \times 7$~arcmin. A more extended Gaussian accounts for the eastern extension with $S_i \approx 39$~Jy and is $\sim 15 \times 6$~arcmin but this was not included when fitting for $G267.9-1.1$. The spectrum of $G267.9-1.1$ is plotted in Fig.~\ref{fig:g267_spec}. Data from the literature are plotted if they were believed to be reliable in relation to the fitting procedure used to determine the CBI flux densities. For example, the integrated flux densities from VLA data at 1.4~GHz could not be reliably summed due to significant flux losses which would not make a valid comparison. Similarly, the lower resolution {\it WMAP} data (Bennett et al.~2003a) is sensitive to local extended emissions. The data points are at 0.4 and 5~GHz \cite{Shaver70b}, 2.7~GHz \cite{Day72}, 8.9~GHz \cite{McGee75}, 14.7~GHz \cite{McGee81}, 90 and 150~GHz\footnote{These data were verified to be the most up-to-date measurements at this frequency (P. 
Mauskopf; private communication).} \cite{Coble03} and 300~GHz \cite{Cheung80}. The free-free emission is clearly optically thick at 408~MHz and turns over at $\sim 1$~GHz. Above 5~GHz, the emission appears to be optically thin and is best fitted by a power-law over the range $5-150$~GHz (omitting the CBI data point) with flux density spectral index, $\alpha=-0.115\pm0.023$. This agrees well with the theoretical value of $\alpha\approx-0.12$ \cite{Dickinson03} for $\nu\sim30$~GHz and $T_{e}\approx 7500$~K \cite{McGee75,Caswell87}. From visibility-visibility correlations with the Parkes 6~cm map \cite{Haynes78}, we found a consistent value of $\alpha=-0.06\pm0.10$. Within the CBI band ($26-36$~GHz), the best-fitting index was $\alpha=-0.15\pm0.09$, where the error was estimated assuming a 2 per cent error over the range $26-36$~GHz. All the data, including the CBI data point at 31~GHz, fit extremely well with this simple free-free model, with a predicted 31~GHz flux density $S_i^{31}=140.2\pm5.1$~Jy. Contributions from vibrational dust emission are only important above 200~GHz. The fitted values to $G267.9-1.1$ observations are summarised in Table~\ref{tab:flux}. The data give an upper limit to a possible excess component of 14.2~Jy at the 95 per cent confidence level (c.l.)\footnote{Throughout the paper, upper limits are quoted at the 95 per cent confidence level (c.l.), which is $\approx 2\sigma$.}. When the spectral index was fixed at $\alpha=-0.12$, the upper limit remained at $<14.2$~Jy. The FIR peak is well aligned in position with the peak in the radio. We used the {\it IRAS} $100~\mu$m map to place limits on the relative dust emission. Assuming a dust emissivity of $10~\mu$K~(MJy/sr)$^{-1}$ at 31~GHz, the CBI-simulated $100~\mu$m map results in a peak flux density of 13.8~Jy~beam$^{-1}$ and an integrated flux density of 15.5~Jy. This corresponds to an upper limit on the dust emissivity of $<9.2~\mu$K~(MJy/sr)$^{-1}$; see Table~\ref{tab:flux}. 
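The procedure above — fit $S=S_{31}(\nu_{\rm GHz}/31)^{\alpha}$ to the low-frequency data, extrapolate to 31~GHz, and quote any excess with a $\approx 2\sigma$ (95 per cent) limit — can be sketched as follows. The helper names are ours; the unweighted log-log least-squares fit and the quadrature combination of errors are assumptions (the paper does not state either), and the simple excess-plus-$2\sigma$ prescription only approximately reproduces the quoted 14.2~Jy limit for $G267.9-1.1$:

```python
import math

def fit_power_law(freqs_ghz, fluxes_jy):
    # unweighted least-squares fit of S = S31 * (nu/31)^alpha in
    # log-log space (an assumed weighting scheme)
    xs = [math.log(nu / 31.0) for nu in freqs_ghz]
    ys = [math.log(s) for s in fluxes_jy]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    s31 = math.exp(ybar - alpha * xbar)  # extrapolated 31 GHz flux
    return s31, alpha

def excess_and_limit(s_obs, err_obs, s_pred, err_pred):
    # 31 GHz excess and its ~2 sigma (95 per cent) upper limit, with
    # the two errors combined in quadrature (also an assumption)
    excess = s_obs - s_pred
    return excess, excess + 2.0 * math.hypot(err_obs, err_pred)

# a pure alpha = -0.115 spectrum is recovered exactly by the fit
freqs = [5.0, 8.9, 14.7, 90.0, 150.0]
fluxes = [140.2 * (nu / 31.0) ** -0.115 for nu in freqs]
s31, alpha = fit_power_law(freqs, fluxes)
assert abs(alpha + 0.115) < 1e-9 and abs(s31 - 140.2) < 1e-6

# G267.9-1.1: observed 140.3 +/- 5.1 Jy vs predicted 140.2 +/- 5.1 Jy
excess, limit = excess_and_limit(140.3, 5.1, 140.2, 5.1)
assert abs(excess - 0.1) < 1e-9 and 14.0 < limit < 15.0
```

A weighted fit using the individual flux uncertainties would be the natural refinement; the unweighted version suffices to illustrate the extrapolation.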
\begin{table*} \caption{31~GHz integrated flux densities and derived limits on 31~GHz excess emission. Errors are quoted at the $1\sigma$ level while upper limits are given at 95 per cent ($\approx 2\sigma$) confidence level. Fits were made for both a floating and fixed spectral index. $^{*}$For $G291.3-0.7$, the CBI 31~GHz data point was included when the spectral index was fitted for. FWHM is the deconvolved size.} \begin{tabular}{lcccccc} \hline Source &Fitted &FWHM &Spectral index $\alpha$&Predicted &31~GHz excess &Excess $100~\mu$m emissivity\\ &$S_{i}^{31}$ (Jy) &(arcmin) &$(S\propto \nu^{\alpha})$ &$S_{i}^{31}$ (Jy) &(Jy) &[$\mu$K~(MJy/sr)$^{-1}$] \\ \hline $G267.9-1.1$ &$140.3\pm5.1$&$2.6\times2.2$&$-0.115\pm0.023$&$140.2\pm5.1$&$<14.2$&$<9.2$ \\ & & &$-0.12$ &$140.3\pm5.1$&$<14.2$&$<9.2$ \\ $G284.3-0.3$ &$146.5\pm5.2$&$7.8\times5.6$&$-0.220\pm0.074$&$99.6\pm13.4$&$46.9\pm14.4$&$13.6\pm4.2$ \\ & & &$-0.12$ &$117.0\pm5.7$&$29.5\pm7.8$ &$8.6\pm2.3$ \\ Car-I &$83.9\pm7.6$&$8.8\times6.1$&$-0.145\pm0.038$&$79.8\pm8.3$ &$<24.8$&$<6.1$ \\ & & &$-0.12$ &$84.8\pm3.8$ &$<16.0$&$<5.7$ \\ Car-II &$92.1\pm8.1$&$9.4\times6.9$&$-0.101\pm0.048$&$77.5\pm11.1$&$<38.1$ &$<15.9$ \\ & & &$-0.12$ &$73.4\pm3.7$ &$18.7\pm8.9$&$7.8\pm3.7$ \\ $G291.6-0.5$ &$158.7\pm5.8$&$7.1\times7.1$&$-0.071\pm0.078$&$143.2\pm19.0$&$<50.3$ &$<15.7$ \\ & & &$-0.12$ &$132.5\pm6.6$ &$26.2\pm8.8$&$12.3\pm4.3$\\ $G291.3-0.7^{*}$ &$88.8\pm3.3$&$2.6\times2.6$&$-0.161\pm0.006$&$88.6\pm0.3$&$<6.6$ &$<6.7$ \\ & & &$-0.12$ &$85.5\pm6.1$&$<15.8$ &$<16.1$ \\ \hline \end{tabular} \label{tab:flux} \end{table*} \subsection{$G284.3-0.3$ (RCW49)} \label{sec:g284} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig3.ps} \caption{Map of the $G284.3-0.3$ (RCW49) region. CBI 31~GHz contours are overlaid on a greyscale image of the {\it IRAS} $100~\mu$m map, with a square-root stretch. 
Contours are at $-1~(dashed),1,2,4,8,16,32$ and 64 per cent of the peak intensity, $S_{p}=79.6$~Jy~beam$^{-1}$. The uniform-weighted beam is $6.78 \times 6.78$ arcmin. \label{fig:g284_cbi_map}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig4.ps} \caption{Spectrum of $G284.3-0.3$ (RCW49). Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $2.7-15$~GHz and extended to higher frequencies ({\it dashed line}).} \label{fig:g284_spec} \end{center} \end{figure} The bright HII region $G284.3-0.3$ (RCW49, NGC3247, MSH10-54, Gum29; RA(J2000)$=10^{h}24^{m}15^{s}$, Dec.(J2000)$=-57^{d}46^{m}58^{s}$ ) has a peak flux density of 79.6 Jy~beam$^{-1}$ in the 31~GHz CBI primary-beam-corrected map shown in Fig.~\ref{fig:g284_cbi_map}. The synthesised beam has a FWHM of $6.78$~arcmin. There are low level extensions to the north and east that include the diffuse source $G284.6-0.2$. Wilson et al.~(1970) note that the shoulder of emission to the east is probably not related to the brighter object. The CBI map agrees very well with the 5 GHz map \cite{Goss70} and the 100~$\mu$m map (Fig.~\ref{fig:g284_cbi_map}). A single Gaussian fit to the brightest object (not including the NE extension, but allowing for a curved baseline) gave an integrated flux density of $146.5 \pm 5.2$~Jy with a deconvolved size of $7.8 \times 5.6$~arcmin. The eastern and northern extensions contain integrated flux densities of $\approx 25$~Jy and $\approx 30$~Jy, respectively. However, they have negligible effect ($\approx 1$~per cent) on the fitting of the much brighter and compact component, $G284.3-0.3$. 
Stellar emission from massive O-type stars, such as those found in Westerlund 2 cluster, is negligible. The strongest emission is likely to come from colliding winds in Wolf-Rayet systems that is typically at the mJy level \cite{Benaglia05,Rauw07}. The spectrum of $G284.3-0.3$ is plotted in Fig.~\ref{fig:g284_spec}. Data points from the literature are as for $G267.9-1.1$ where data are available, with the addition of 5.0~GHz \cite{Caswell87}. The 5~GHz value appears to be above the line of the other data points with $S_i=178.8$~Jy. We note that Caswell \& Haynes (1987) find a flux density of 161~Jy at 5~GHz, but for a slightly smaller size of FWHM $7\times5$~arcmin, that is more consistent with the fitted spectrum. We also performed a re-analysis of the Parkes 6~cm data \cite{Haynes78} and find $S_i=175$~Jy. Still, it is possible that the 14.7~GHz is a little low due to a smaller beam and smaller fitted area of $5.0\times5.2$~arcmin. A power-law fit over the range $2.7-15$~GHz has a spectral index $\alpha=-0.220\pm0.074$, and the CBI point is well above this line; the predicted 31~GHz flux density is $99.6\pm 13.4$~Jy. As can be seen from the spectrum in Fig.~\ref{fig:g284_spec}, the CBI point appears to be significantly above the expected emission from optically thin free-free alone. For this model, the excess is $46.9\pm14.4$~Jy. This is a detection of excess emission at the $3.3\sigma$ level and could account for 32 per cent of the total emission at 31~GHz. The significance increases further when fixing the spectral index to the slightly flatter value of $-0.12$, which is more consistent with that expected from theory for $T_e\approx8500$K \cite{Azcarate92}. For this model, the excess is $29.5\pm7.8$~Jy ($3.8\sigma$). Only when omitting the 14.7~GHz data point did the CBI point come in line with the model with a spectral index $\alpha=+0.004\pm0.059$. 
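The significance levels and $100~\mu$m emissivities quoted for $G284.3-0.3$ follow from simple ratios. A sketch with our own helper names (the 34.5~Jy figure is the simulated $100~\mu$m flux density for the assumed $10~\mu$K~(MJy/sr)$^{-1}$ emissivity) reproduces the quoted values:

```python
def significance(excess_jy, sigma_jy):
    # detection significance of the 31 GHz excess, in units of sigma
    return excess_jy / sigma_jy

def emissivity_100um(excess_jy, simulated_jy, assumed_emissivity=10.0):
    # rescale the assumed 10 muK/(MJy/sr) emissivity of the simulated
    # 100 micron map by the ratio of observed excess to simulated flux
    return assumed_emissivity * excess_jy / simulated_jy

# floating spectral index: excess 46.9 +/- 14.4 Jy, simulated 34.5 Jy
assert round(significance(46.9, 14.4), 1) == 3.3
assert round(emissivity_100um(46.9, 34.5), 1) == 13.6
# fixed alpha = -0.12: excess 29.5 +/- 7.8 Jy
assert round(significance(29.5, 7.8), 1) == 3.8
assert round(emissivity_100um(29.5, 34.5), 1) == 8.6
```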
Using only the 2.7~GHz and 8.9~GHz data gave a spectral index of $-0.11 \pm 0.13$, consistent with the 408~MHz data point. In this case, there still remained a significant ($2.5\sigma$) excess at 31~GHz of $29.1 \pm 11.8$~Jy. The spectral index within the CBI band is $\alpha=-0.11\pm0.09$. Cross-correlation of the simulated 5~GHz visibilities and CBI visibilities show a very tight correlation of $P=0.94$ with a mean slope of $0.02387$K~K$^{-1}$, which corresponds to $\alpha=-0.05\pm0.10$. We therefore consider this a tentative detection of anomalous emission. Clearly, more precise data in the $5-15$~GHz range are required to determine the free-free spectrum more accurately and confirm this result. The 100~$\mu$m image is well-matched to the CBI image. A simulated observation, assuming a dust emissivity of $10~\mu$K~(MJy/sr)$^{-1}$ gave a flux density of $34.5 \pm 1.0$~Jy; the error was estimated from trying different fitting boxes. The 31~GHz excess seen in $G284.3-0.3$ therefore has a 100~$\mu$m emissivity of $13.6\pm 4.2~\mu$K~(MJy/sr)$^{-1}$. For a fixed spectral index, $\alpha=-0.12$, the emissivity becomes $8.6\pm 2.3~\mu$K~(MJy/sr)$^{-1}$. \subsection{$G287.4-0.6$ (Carina nebula) region} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig5.ps} \caption{Map of the $G287.9-1.1$ (Carina nebula) region. CBI 31~GHz contours are overlaid on a greyscale image of the {\it IRAS} $100~\mu$m map, with a square-root stretch. Contours are at $-1~(dashed),1,2,4,8,16,32$ and 64 per cent of the peak intensity, $S_{p}=45.8$~Jy~beam$^{-1}$. The uniform-weighted beam is $6.9 \times 6.7$ arcmin. \label{fig:g287_cbi_map}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig6.ps} \caption{Spectrum of Car-I. Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. 
The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $0.4-9$~GHz and extended to higher frequencies ({\it dashed line}).} \label{fig:car-I_spec} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig7.ps} \caption{Spectrum of Car-II. Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $0.4-5$~GHz and extended to higher frequencies ({\it dashed line}).} \label{fig:car-II_spec} \end{center} \end{figure} The Carina nebula (RCW53,NGC3372,MSH10-57,Gum 33, Keyhole nebula) consists of two main radio sources: Car-I (NW) ($G287.4-0.6$) and Car-II (SE) ($G287.6-0.6$). These are excited by the young open clusters Tr14 and Tr16 \cite{Tateyama91} and are at a common distance of $2.2 \pm 0.2$~kpc. A number of weaker sources have been identified within the Carina nebula complex, which covers an area of 4 sq. degrees. The CBI 31~GHz primary-beam-corrected map is shown in Fig.~\ref{fig:g287_cbi_map}, with a synthesised beam $6.9\times6.7$~arcmin and peak flux density, $S_p=45.8$~Jy~beam$^{-1}$. There is much extended emission in this region, particularly the several ``lobes'' that extend to the south, which are also seen at lower frequencies \cite{Whiteoak94,Duncan95} and are thought to be non-thermal \cite{Tateyama91}. The non-thermal lobes (LE, LC and LW) of Tateyama et al.~(1991) are clearly seen as extensions to the south of Car-I/Car-II in Fig.~\ref{fig:g287_cbi_map}, including Car-III (southern lobe of Car-II). The source $G287.69-0.33$ can be identified $\sim 20$~arcmin to the north of Car-II with a peak flux density of $\approx 2$~Jy. 
At this resolution, the brighter central region can just be resolved into the two known components, Car-I and Car-II, which are clearly seen in the 100~$\mu$m map (Fig.~\ref{fig:g287_cbi_map}). Two Gaussians were fitted simultaneously to the central part of the primary-beam-corrected image with a baseline slope to account for the surrounding extended emission. We found that two Gaussians could be well-fitted to the data with $S_{i}^{31}=83.9\pm7.6$~Jy and $S_{i}^{31}=92.1\pm8.1$~Jy, for Car-I and Car-II, respectively (Table~\ref{tab:flux}). The larger errors reflect the fact that the components are slightly confused at this resolution. Their deconvolved sizes were measured to be $8.8\times6.1$~arcmin and $9.4\times6.9$~arcmin, respectively. The spectrum of Car-I is shown in Fig.~\ref{fig:car-I_spec}. Data points from the literature are as for $G267.9-1.1$ where data are available, with the addition of 1.4~GHz \cite{Retallack83}, 8.9~GHz \cite{Huchtmeier75} and 22~GHz \cite{Tateyama91}. We re-analysed the Parkes 5~GHz map of Haynes et al.~(1978) and found it to be consistent with the Goss \& Shaver (1970) result. In Fig.~\ref{fig:car-I_spec} we include the 22~GHz flux density from Tateyama et al.~(1991) by scaling their peak flux density with their reported source size, but do not include it in the fit due to possible errors in this extrapolation. It is interesting to see the 22~GHz data point is well above the free-free model. This could be real and is consistent with spinning dust models that predict a peak at this frequency (Draine \& Lazarian 1998a,b). The spectrum lies close to the optically thin free-free value down to 408~MHz \cite{Gardner70}. The best-fitting power-law over the range $0.4-9$~GHz has a spectral index $\alpha=-0.145\pm0.038$, consistent with that predicted by theory for $T_e\approx 6600-7400$~K \cite{Gardner70,Caswell87}. The best-fitting spectral index within the CBI band is $\alpha=-0.13\pm0.09$. 
The model predicts a 31~GHz flux density $S_{i}^{31}=79.8\pm8.3$~Jy and the CBI 31~GHz data point fits well within this model with an upper limit for an excess of 24.8~Jy (95 per cent c.l.). For a fixed spectral index, $\alpha=-0.12$, the predicted 31~GHz flux was $84.8\pm3.8$~Jy corresponding to an upper limit of $<16.0$~Jy. The CBI-simulated 100~$\mu$m map, scaled with $10~\mu$K~(MJy/sr)$^{-1}$ gives a flux density of $28.0\pm 2.0$~Jy for a point-like source. This translates to an upper limit on the excess dust emissivity of $<6.1~\mu$K~(MJy/sr)$^{-1}$ at the 95 per cent c.l. For the fixed spectral index model, the emissivity is $<5.7~\mu$K~(MJy/sr)$^{-1}$. The spectrum of Car-II is shown in Fig.~\ref{fig:car-II_spec} with the same data plotted as for Car-I, except for omitting the 8.9~GHz value from Huchtmeier \& Day (1975), which appears to be anomalously low. This is probably due to a mismatch in fitted source size and proximity of Car-I. The free-free emission remains optically thin down to 408~MHz \cite{Gardner70} with a spectral index $\alpha=-0.101\pm0.048$ fitted over the range $0.4-5$~GHz, again close to the theoretical value. The predicted 31~GHz flux density is then $77.5\pm11.1$~Jy. The CBI data point lies slightly above the prediction with an upper limit of $<38.1$~Jy (95 per cent c.l.) for excess emission. Although not a statistically strong detection, it is interesting to see that the 22~GHz value is also above the free-free model alongside the 31~GHz data point. For a fixed spectral index, $\alpha=-0.12$, the predicted flux density is $73.4\pm3.7$~Jy, allowing an excess of $18.7 \pm 8.9$~Jy {\it i.e.} a $2.1\sigma$ detection. The CBI-simulated 100~$\mu$m map, scaled with $10~\mu$K~(MJy/sr)$^{-1}$ gives a flux density of $24.0\pm 1.6$~Jy for a point-like source. This translates to an upper limit on the excess dust emissivity of $<15.9~\mu$K~(MJy/sr)$^{-1}$. 
For the fixed spectral index model, the emissivity is at $7.8\pm3.7~\mu$K~(MJy/sr)$^{-1}$, as summarised in Table~\ref{tab:flux}. \subsection{$G291.6-0.5/G291.3-0.7$ (RCW57) region} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig8.ps} \caption{Map of the $G291.6-0.5$ (NGC3603) and $G291.7-0.3$ (NGC3576) region. CBI 31~GHz contours are overlaid on a greyscale image of the {\it IRAS} $100~\mu$m map, with a square-root stretch. Contours are at $-1~(dashed),1,2,4,8,16,32$ and 64 per cent of the peak intensity, $S_{p}=88.0$~Jy~beam$^{-1}$. The uniform-weighted beam is $6.7 \times 6.7$ arcmin. \label{fig:g291_cbi_map}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig9.ps} \caption{Spectrum of $G291.6-0.5$. Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $5-15$~GHz, fixing the spectral index $\alpha=-0.12$ (see text) and extended to higher frequencies ({\it dashed line}). \label{fig:g291.6_spec}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig10.ps} \caption{Spectrum of $G291.3-0.7$. Data points ({\it solid circles}) are integrated flux densities taken from the literature (see text). Uncertainties of 10 per cent were assumed when no error was given. The CBI 31~GHz value is plotted as a {\it square} symbol. The best fitting power-law ({\it solid line}) was fitted to the data over the range $15-150$~GHz including the CBI data point (see text) and extended to higher frequencies ({\it dashed line}). 
\label{fig:g291.3_spec}} \end{center} \end{figure} The RCW57 region is dominated by two bright H{\sc ii}~regions: $G291.6-0.5$ (NGC3603) and $G291.3-0.7$ (NGC3576), which are two of the highest-luminosity optically visible H{\sc ii}~regions in the Galaxy \cite{Goss69}. The 31~GHz CBI map, with a synthesised beam FWHM of $6.7$~arcmin, is shown in Fig.~\ref{fig:g291_cbi_map}. The two H{\sc ii}~regions dominate the map: $G291.6-0.5$ is the larger eastern component at the centre of the image (RA(J2000)$=11^h15^m00^s$, Dec(J2000)$=-61^{\circ}16'00''$); $G291.3-0.7$ is the more compact component located $\approx 25$~arcmin to the west. The CBI 31~GHz map shows NGC3603 as the brighter and slightly extended source with peak flux density $S_p= 88.0$~Jy~beam$^{-1}$, and NGC3576 has $S_p=79.8$~Jy~beam$^{-1}$ after correcting for the primary beam. The 31~GHz map agrees very well with the low-frequency maps \cite{Shaver70a,Goss70} and the 100~$\mu$m map (Fig.~\ref{fig:g291_cbi_map}). Some low-level extended emission is also detected in the vicinity of the dominant H{\sc ii}~regions. A compact source to the south-east of NGC3603 is detected and is identified as $G291.9-0.7$ with an integrated flux density of $\approx 2$~Jy. Single Gaussians were fitted to the two bright sources in the CBI primary-beam-corrected map. For $G291.6-0.5$, we find an integrated flux density $S_i=158.7 \pm 5.8$~Jy with a deconvolved size of $7.1 \times 7.1$ arcmin. The spectrum is plotted in Fig.~\ref{fig:g291.6_spec}. Data points from the literature are as for $G267.9-1.1$ where data are available, with the addition of 150 and 240~GHz data (Sabattini et al.~2005). The spectrum remains relatively flat up to a frequency of several GHz, possibly indicating optically thick components. Nevertheless, we fitted a power-law to data in the range $2.7-15$~GHz and obtained $\alpha=-0.071 \pm 0.078$. 
This model gave a predicted 31~GHz flux density of $143.2 \pm 19.0$~Jy, or an upper limit to an excess component of $<50.3$~Jy (95 per cent c.l.). For a fixed spectral index, $\alpha=-0.12$, the prediction becomes $132.5\pm6.6$~Jy, or an excess of $26.2\pm 8.8$~Jy {\it i.e.} a $3\sigma$ detection. The spectral index within the CBI band was found to be $\alpha=-0.12\pm0.09$, consistent with a typical electron temperature of $T_{e}\approx 7000-8000$~K \cite{Wilson70,McGee75,McGee81,dePree99}. The two data points at 150 and 240~GHz (Sabattini et al.~2005) provide a useful limit to the contribution of vibrational (thermal) dust emission, where they find a dust temperature $T_{d}=25.6$~K. Unless there exists a very cold dust component, the contribution from vibrational dust at 31~GHz is relatively small. Extrapolating from the Sabattini et al. values, assuming an emissivity index $\beta=+2.0$, gives a flux density of 8.9~Jy, or 6 per cent of the total; this may explain the small excess observed and the slightly flatter spectral index. Indeed, making a correction for the vibrational dust component reduces the significance of the detection (for a fixed spectral index) to $1.2\sigma$. The CBI-simulated $100~\mu$m map, scaled with $10~\mu$K~(MJy/sr)$^{-1}$, gives a peak brightness of 12.1~Jy~beam$^{-1}$ and an integrated flux density $S_{i}^{31}=32.1\pm3.0$~Jy. The 95 per cent c.l. upper limit on the excess dust emissivity is then $15.7~\mu$K~(MJy/sr)$^{-1}$. For the fixed spectral index model, with no correction for a vibrational dust contribution, the emissivity is $12.3\pm 4.3~\mu$K~(MJy/sr)$^{-1}$, as summarised in Table~\ref{tab:flux}. For $G291.3-0.7$, we find $S_i=88.8\pm3.3$~Jy with a deconvolved size of $2.6\times2.6$~arcmin. The spectrum is plotted in Fig.~\ref{fig:g291.3_spec} with data taken from the literature. The turn-over is much more gradual, indicating optically thick components, and the spectrum only becomes truly optically thin above $\sim 10$~GHz. 
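The vibrational-dust extrapolations above (e.g. the 8.9~Jy estimate at 31~GHz) assume a modified blackbody, $S_\nu\propto\nu^{\beta}B_\nu(T_d)$ with $\beta=+2.0$. A sketch of the scaling factor is below; the 150~GHz flux density used is a hypothetical placeholder, not the Sabattini et al. measurement.

```python
import math

H_OVER_K = 0.04799  # h/k in units of K per GHz

def grey_body_ratio(nu_to, nu_from, t_dust, beta=2.0):
    """Flux-density ratio S(nu_to)/S(nu_from) for a modified blackbody
    S ~ nu^beta * B_nu(T_d), with frequencies in GHz."""
    def planck_shape(nu):
        x = H_OVER_K * nu / t_dust
        return nu**3 / math.expm1(x)  # B_nu up to constants, which cancel
    return (nu_to / nu_from) ** beta * planck_shape(nu_to) / planck_shape(nu_from)

# Hypothetical 150 GHz thermal dust flux density (NOT the paper's value)
s150 = 5000.0  # Jy
t_d = 25.6     # K, the dust temperature from Sabattini et al. (2005)
s31 = s150 * grey_body_ratio(31.0, 150.0, t_d, beta=2.0)
print(f"Vibrational dust contribution extrapolated to 31 GHz: {s31:.1f} Jy")
```

In the Rayleigh-Jeans limit the ratio reduces to $(\nu_{31}/\nu_{150})^{\beta+2}$; at $T_d=25.6$~K the finite-temperature correction makes the extrapolation slightly shallower than that, which the Planck-function form above captures.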
With so few data points to fit, we included the 14.7~GHz and 150~GHz data, as well as the CBI 31~GHz data point itself, in the fit. The best-fitting spectral index in the range $15-150$~GHz is $\alpha=-0.161 \pm 0.006$ and is an excellent fit to the data. The upper limit for an additional component is 6.7~Jy (95 per cent c.l.). From the data at 240 and 300~GHz, there appears to be a small contribution from vibrational dust at 150~GHz, which was found to be typically warmer ($T_d=31.3$~K) than for $G291.6-0.5$ (Sabattini et al.~2005). This is somewhat discrepant with the values found by Kuiper et al.~(1987), who find warmer dust temperatures, $T_{d}\approx 50$~K, based on the $60/100~\mu$m ratios. Extrapolating from the Sabattini et al. values, for $\beta=+2.0$, gives 1.6~Jy at 31~GHz ($\approx 2$~per cent). This would steepen the spectral index further and therefore leave the possibility of a small excess at 31~GHz. However, we tried several different fits with varying assumptions (e.g. including lower frequency data), none of which allowed a significant additional component at 31~GHz. The CBI-simulated $100~\mu$m map, scaled with $10~\mu$K~(MJy/sr)$^{-1}$, has a peak brightness of 5.5~Jy~beam$^{-1}$ and an integrated flux density $S_{i}^{31}=9.8\pm0.9$~Jy. The upper limit on the excess dust emissivity is then $<6.7~\mu$K~(MJy/sr)$^{-1}$ when the fit was done including the CBI data point. This will under-estimate any possible excess emission, since including the CBI point reduces the allowable range. However, it is such a good fit to the model that the fit is unlikely to change by much. For a fixed spectral index, $\alpha=-0.12$, the upper limit becomes 15.8~Jy (95 per cent c.l.), or a dust emissivity of $<16.1~\mu$K~(MJy/sr)$^{-1}$. As with $G291.6-0.5$, the data points at 150 and 240~GHz (Sabattini et al.~2005) suggest there may be a contribution from vibrational dust emission at 31~GHz of a few Jy, or a few per cent of the total 31~GHz flux density, and a much larger fraction at 150~GHz. 
Without more data points, and detailed modelling of the dust spectrum, it is difficult to calculate a more precise limit for this source. However, our upper limits can be considered conservative, since these corrections are likely to steepen the free-free model, allowing more room for excess emission. Nevertheless, it is clear from Fig.~\ref{fig:g291.3_spec} that the CBI 31~GHz data point fits in well with the other data points, following a smooth curve and leaving little or no room for possible excess emission. \subsection{Polarisation limits} \label{sec:polarisation} Stokes $Q$ and $U$ maps were made for each region and imaged/cleaned using the same procedure as for the total-intensity maps. Polarised intensity maps, $P=(\sqrt{Q^2+U^2}-C)$, were made using the {\sc aips} task {\sc comb}, where $C$ is the correction for the Ricean noise bias, using estimates of the noise from areas of the map away from the primary beam. For all four regions, small polarised signals were detected. For both $G267.9-1.1$ and $G284.3-0.3$, a ring-like structure was observed with a peak centred close to the map centre. In the Carina nebula map, we observed two point-like peaks centred on Car-I and Car-II. For $G291.6-0.5$, a similar faint ring-like feature is seen. The largest polarised signal, at 480~mJy~beam$^{-1}$ or a 0.61 per cent polarisation fraction, was observed in $G291.3-0.7$; a highly significant ($>10\sigma$) detection is observed in both the $Q$ and $U$ maps. The peak polarised flux densities for all the {\rm H}{\sc ii} regions are given in Table~\ref{tab:polarisation}, along with the polarised fraction calculated from the ratio of peak flux densities. \begin{table} \caption{31~GHz polarised intensity measurements. 
Statistically significant polarisation fractions were detected in all the sources at a similar level, but are likely to be contaminated by leakage terms and hence are considered upper limits (see text).} \begin{tabular}{lccc} \hline Source &$S_{p}^{31}$ &r.m.s. noise &Polarisation \\ &(mJy~bm$^{-1}$) &(mJy~bm$^{-1}$) &fraction (per cent)\\ \hline $G267.9-1.1$ &348 &75 &$0.28\pm0.06$ \\ $G284.3-0.3$ &190 &25 &$0.24\pm0.03$ \\ Car-I &119 &23 &$0.32\pm0.06$ \\ Car-II &123 &23 &$0.33\pm0.06$ \\ $G291.6-0.5$ &204 &35 &$0.25\pm0.04$ \\ $G291.3-0.7$ &480 &35 &$0.61\pm0.04$ \\ \hline \end{tabular} \label{tab:polarisation} \end{table} Given the brightness in total intensity, the observed polarised signals are unlikely to be real, since no corrections were made for instrumental leakage terms, which are expected to be at the $\sim 1$ per cent level. The observed polarisation is therefore consistent with polarisation generated by the instrument itself, which is discussed further in section~\ref{sec:pol_limits}. \section{Discussion} \subsection{Free-free emission} \begin{table*} \caption{Physical and observed properties. Dust temperatures are taken from Kuiper et al.~(1987). The average dust temperature at high Galactic latitudes is 18.2~K (Schlegel et al.~1998). 
Electron temperatures were average values taken from the literature (see text for details).} \begin{tabular}{lccccc} \hline Source &Radio size &FIR size &$T_{d}$&$\alpha$ &$T_{e}$ \\ &$@31$~GHz (arcmin) &$@100~\mu$m (arcmin) &(K) &($S\propto \nu^{\alpha}$) &(K) \\ \hline $G267.9-1.1$ &$2.6\times2.2$&$4.9\times4.2$ &45 &$-0.115\pm0.023$ &7500 \\ $G284.3-0.3$ &$7.8\times5.6$&$7.9\times4.9$ &50 &$-0.220\pm0.074$ &8500 \\ Car-I &$8.8\times6.1$&$6.0\times5.2$ &48 &$-0.145\pm0.038$ &7000 \\ Car-II &$9.4\times6.9$&$9.7\times5.7$ &70 &$-0.101\pm0.048$ &6600 \\ $G291.6-0.5$ &$7.1\times7.1$&$11.6\times7.1$ &55 &$-0.071\pm0.078$ &7500 \\ $G291.3-0.7$ &$2.6\times2.6$&$4.9\times4.7$ &45 &$-0.161\pm0.006$ &7500 \\ \hline \end{tabular} \label{tab:properties} \end{table*} All the H{\sc ii}~regions discussed in this paper are dominated by thermal free-free emission, which becomes optically thin at frequencies $\ifmmode\stackrel{>}{_{\sim}}\else$\stackrel{>}{_{\sim}}$\fi 1$~GHz. These bright sources usually consist of many compact objects that are unresolved by the CBI beam, many of which contain substantial dust which emits primarily in the FIR band ($\sim 100~\mu$m) within a similar volume (Table~\ref{tab:properties}). At higher frequencies, the blackbody tail from the vibrating dust mechanism typically dominates and can extend down to frequencies $\sim 100$~GHz. These two emission mechanisms largely explain the general shape of the spectrum over a wide range of frequencies, from the radio to the mid-IR. Indeed, we have found that the 31~GHz flux observed with the CBI is broadly consistent with free-free emission when combined with multi-frequency data taken from the literature. The original purpose of this study was to search for evidence of spinning dust, which would show up as an additional excess component at 31~GHz, which is close to the peak of current models of spinning dust (Draine \& Lazarian 1998a,b). 
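The slow steepening of the optically thin free-free spectral index with frequency, discussed in this section, can be illustrated with a standard low-frequency Gaunt-factor approximation, $g_{\rm ff}\approx\ln(4.955\times10^{-2}\,\nu_{\rm GHz}^{-1})+1.5\ln T_e$, for which the local spectral index of the flux density is $\alpha=-1/g_{\rm ff}$. Treat the coefficients as approximate; this is an illustrative sketch, not the exact formulation used in the fits.

```python
import math

def gaunt_factor(nu_ghz, t_e):
    """Approximate radio free-free Gaunt factor (valid well below ~100 GHz)."""
    return math.log(4.955e-2 / nu_ghz) + 1.5 * math.log(t_e)

def freefree_alpha(nu_ghz, t_e):
    """Local spectral index alpha = dlnS/dlnnu of optically thin free-free
    flux density, for S proportional to g_ff(nu, Te): alpha = -1/g_ff."""
    return -1.0 / gaunt_factor(nu_ghz, t_e)

# The index steepens slowly from ~ -0.10 at 1 GHz to ~ -0.14 near 31 GHz
for nu in (1.0, 10.0, 31.0):
    print(f"alpha({nu:>4} GHz, Te=8000 K) = {freefree_alpha(nu, 8000.0):.3f}")
```

For $T_e=8000$~K this reproduces the behaviour quoted in the text: roughly $-0.1$ at GHz frequencies, steepening to $\approx-0.14$ at $\sim30$~GHz, with $-0.12$ a reasonable mean over $1-30$~GHz.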
There are inconsistencies with some of the data in the literature, where calibration errors are typically 10 per cent on quoted flux densities. In addition, there can be difficulties when comparing data taken at different resolutions and where different fitting techniques have been employed. Fortunately, the spectrum of free-free emission is well understood. When the radiation becomes optically thin, it has a well-defined spectral index that varies slowly with frequency and electron temperature. In fact, many authors simply fix the spectral index to the canonical radio spectral index, $\alpha=-0.1$. At higher frequencies ($\sim 30$~GHz), it steepens slightly to $\alpha=-0.14$ for $T_e=7000-8000$~K \cite{Dickinson03}. We have found that the best-fitting spectral index (Table \ref{tab:properties}), not including CBI data, for all the H{\sc ii}~regions was essentially consistent with this range of values. The electron temperatures for all the H{\sc ii}~regions are within the range $\sim 7000-8000$~K (Table~\ref{tab:properties}). Furthermore, the CBI 31~GHz data point was found to be close to the predicted flux density from a simple power-law fit to data from the literature. This confirms the dominance of free-free emission in bright H{\sc ii}~regions. Fits were also made with a fixed spectral index, $\alpha=-0.12$, which is the mean spectral index expected for free-free emission over the range $1-30$~GHz for $T_e=8000$~K \cite{Dickinson03}. Although this artificially reduces the error in the model, it can help limit the impact of low or high data points that can artificially bias the spectral index, particularly when only a few data points are being fitted. For most sources, we found that the results remained stable either way. \subsection{Anomalous dust emission} We have found that at least one of the sources, $G284.3-0.3$ (RCW49), shows evidence for a significant excess component, suggestive of spinning dust. 
Furthermore, it is compelling that all six sources are found to have a slightly higher flux density at 31~GHz than the predicted value given by a power-law model for the free-free emission. The average $100~\mu$m dust emissivity for all six sources is $3.3\pm1.7~\mu$K~(MJy/sr)$^{-1}$, which corresponds to a 95 per cent upper limit of $6.1~\mu$K~(MJy/sr)$^{-1}$. We have discussed some possible systematics that may lead to this apparent excess, but given the conservative error bars assigned to the data, and the relative accuracy of the CBI data, this result is unlikely to be due to a systematic error. Moreover, no flux-loss correction was made to the CBI data points, since it was shown to be a small correction for sources comparable to the beam size (see section~\ref{sec:imaging}). The most significant ($3.3\sigma$) result was for $G284.3-0.3$ (RCW49). We consider this to be a tentative detection. As remarked upon in section~\ref{sec:g284}, there is some level of inconsistency in the lower frequency data in the range $5-14$~GHz. In particular, the 5~GHz data point seems high relative to the other frequencies, yet we obtained a consistent value when we independently analysed the Parkes 6~cm map. Moreover, the 14~GHz point appears to be on the low side, while the 9~GHz value has a larger error than its neighbours. For example, taking just the 5~GHz and 31~GHz data alone, the spectral index is significantly flatter and is more consistent with free-free emission alone. Clearly, more precise data in the range $5-20$~GHz are required to clarify the situation. If spinning dust emission is found to be a significant fraction of the 31~GHz emission, it would be expected to originate from very small grains that can spin fast enough to produce observable emission. The smallest grains, polycyclic aromatic hydrocarbons (PAHs), are one possibility. PAHs are most readily identified as broad lines in the mid-IR spectrum and have been seen in many H{\sc ii}~regions and PNe. 
The observed survival of small dust grains in hostile environments is difficult to reconcile with models \cite{Spitzer78}, yet strong mid-IR PAH signals are observed in active star-forming regions, including RCW49 \cite{Churchwell04}. The spinning dust mechanism therefore appears to be still viable in such environments. It is rather surprising that the spectral index at $\sim 30$~GHz is so similar to the canonical free-free value ($\alpha \approx -0.1$). However, the spinning dust spectrum is expected to turn over in the range $\sim 20-40$~GHz and hence it may appear to be locally flat in this range. \subsection{Anomalous dust emissivity} The radio emission from dust in H{\sc ii}~regions is found to be a factor of $3-4$ lower than in the cooler diffuse dust at intermediate latitudes. The limits on excess emission at 31~GHz have been converted to a dust emissivity, relative to the IRIS re-processed version of the {\it IRAS} $100~\mu$m map \cite{Miville05}, which has units of MJy~sr$^{-1}$; thus our emissivities have units of $\mu$K~(MJy/sr)$^{-1}$. We did this for simplicity and because it is model-independent\footnote{Some authors have calculated dust emissivities relative to other standards, including the DIRBE $140~\mu$m map, the Finkbeiner, Davis \& Schlegel (1999) model 8 map normalised at 94~GHz, or the hydrogen column density, $n_{\rm H}$, estimated from the $100~\mu$m map of Schlegel, Finkbeiner \& Davis (1998); see Finkbeiner (2004) for a useful discussion of units.}. From CMB data at frequencies $\sim 31$~GHz, and at high Galactic latitudes, the dust emissivity has a typical value of $\sim 10~\mu$K~(MJy/sr)$^{-1}$, with variations of about a factor of $\sim 2$ \cite{Davies06}. We can immediately see that our tentative detection, in $G284.3-0.3$, is consistent with the dust emissivity found at high latitude. 
On the other hand, the upper limits listed in Table~\ref{tab:flux} indicate that the dust emissivity is considerably lower than that found at high latitudes; the average emissivity for all six H{\sc ii}~regions (when the spectral index was fitted for) is $3.3\pm1.7~\mu$K~(MJy/sr)$^{-1}$. In other words, if the spinning dust were to emit at the same levels as seen in more quiescent high-latitude regions of sky (at least relative to the $100~\mu$m intensity map), we would have detected a larger excess in most of the sources studied here. In Table~\ref{tab:emissivities}, we have listed the 31~GHz normalised dust emissivities for H{\sc ii}~regions and the cooler dust clouds from the literature. This emphasises that the dust emissivity appears to be lower in the H{\sc ii}~regions than in the diffuse interstellar medium, where the average dust temperature is $18.2$~K (Schlegel et al.~1998), by a factor of $\sim 3-4$. It is also clear that the $T^{1.6}$ scaling of emissivity at high latitudes, found by Davies et al.~(2006), does not hold in these regions; the warmer dust does not emit at higher levels relative to the $100~\mu$m data. This is presumably due to the different conditions in the interstellar medium: in H{\sc ii}~regions there is a considerably larger flux of UV photons from the O-B stars that formed the ionised regions, which in turn dissociates the smaller grains required for current models of spinning dust emission. H{\sc ii}~regions exhibit a wide range of environmental conditions (UV radiation field, X-rays, $\gamma$-rays, electron temperatures) which affect the distribution of grain sizes and properties. The emissivity of spinning dust could then vary considerably from cloud to cloud \cite{Davies06}. This could explain the apparent lack of anomalous emission from some regions but not others. 
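The emissivities compared here follow from a simple scaling: the simulated 100~$\mu$m observations assumed $10~\mu$K~(MJy/sr)$^{-1}$, so the inferred emissivity is that value scaled by the ratio of the 31~GHz excess to the simulated flux density. A minimal sketch, reproducing the $G284.3-0.3$ numbers quoted earlier:

```python
import math

def dust_emissivity(excess_jy, excess_err_jy, sim_jy, sim_err_jy,
                    assumed_emissivity=10.0):
    """Scale the assumed emissivity (uK per MJy/sr) by the ratio of the
    31 GHz excess to the simulated 100 um flux density, propagating the
    fractional errors in quadrature."""
    emissivity = assumed_emissivity * excess_jy / sim_jy
    rel_err = math.hypot(excess_err_jy / excess_jy, sim_err_jy / sim_jy)
    return emissivity, emissivity * rel_err

# Numbers from the text for G284.3-0.3: excess of 46.9 +/- 14.4 Jy and a
# simulated 100 um flux density of 34.5 +/- 1.0 Jy at 10 uK/(MJy/sr)
e, de = dust_emissivity(46.9, 14.4, 34.5, 1.0)
print(f"{e:.1f} +/- {de:.1f} uK/(MJy/sr)")  # -> 13.6 +/- 4.2, as quoted
```

The error budget is dominated by the uncertainty on the excess itself, since the simulated 100~$\mu$m flux density is known to a few per cent.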
It is also possible that some other mechanism is responsible for the bulk of the anomalous signal, including magneto-dipole emission \cite{Draine99}, which strongly depends on the abundance of ferromagnetic material. \begin{table} \caption{Comparison of $100~\mu$m dust emissivities for H{\sc ii}~regions and cooler dust clouds, from data at or near 30~GHz. Data are the mean of the six H{\sc ii}~regions studied in this paper, LPH96 (Dickinson et al.~2006), the average of 15 high-latitude regions from {\it WMAP} and the all-sky {\it WMAP} value outside the Kp2 mask (Davies et al.~2006), LDN1622 (Casassus et al.~2006) and $G159.6-18.5$ in the Perseus molecular cloud (Watson et al.~2005). Emissivities, in units of $\mu$K~(MJy/sr)$^{-1}$, have been normalised to 31~GHz.} \begin{tabular}{lcl} \hline Source &Dust emissivity &Reference \\ &$\mu$K~(MJy/sr)$^{-1}$ & \\ \hline {\bf H{\sc ii}~regions} & & \\ 6 H{\sc ii}~regions (mean) &$3.3\pm1.7$ &This paper \\ \vspace{1mm} LPH96 &$5.8 \pm 2.3$&Dickinson et al. (2006) \\ {\bf Cool dust clouds} & & \\ 15 regions {\it WMAP} &$11.2\pm1.5$ &Davies et al. (2006) \\ All-sky {\it WMAP} &$10.9\pm1.1$ &Davies et al. (2006) \\ LDN1622 &$24.1\pm0.7$ &Casassus et al. (2006) \\ $G159.6-18.5$ &$17.8\pm0.3$ &Watson et al. (2005) \\ \hline \end{tabular} \label{tab:emissivities} \end{table} \subsection{Polarisation limits} \label{sec:pol_limits} The CBI polarisation maps all showed some polarised emission for these sources, but at a very low level of 0.3 per cent, except for $G291.3-0.7$, which was at 0.6 per cent. This may be due to the fact that $G291.3-0.7$ is located away from the map centre by almost half a primary beam width. This could contribute extra leakage and/or errors due to the primary beam correction. We therefore take these values to be upper limits to the polarisation on these angular scales. This is consistent with little or no polarisation, as expected for pure free-free emission. 
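The Ricean debiasing applied to the polarised intensity maps, $P=\sqrt{Q^2+U^2}-C$, can be sketched with a common first-order estimator that subtracts the noise variance in quadrature and clips noise-dominated pixels to zero. The signal level and rms below are taken from the $G291.3-0.7$ table values, but the Monte Carlo itself is purely illustrative, not the {\sc aips} {\sc comb} implementation.

```python
import numpy as np

def debias_polarised_intensity(q_map, u_map, sigma):
    """First-order correction for the positive Ricean noise bias of
    sqrt(Q^2 + U^2): subtract the noise variance in quadrature and clip
    at zero where the noise dominates."""
    p2 = q_map**2 + u_map**2 - sigma**2
    return np.sqrt(np.clip(p2, 0.0, None))

rng = np.random.default_rng(0)
sigma = 35.0    # mJy/beam rms, as for G291.3-0.7 in the table
true_p = 480.0  # mJy/beam polarised peak, as for G291.3-0.7

# Simulate many noisy realisations of (Q, U) for a single pixel
q = true_p + rng.normal(0.0, sigma, 10000)
u = rng.normal(0.0, sigma, 10000)
p_naive = np.sqrt(q**2 + u**2)
p_debiased = debias_polarised_intensity(q, u, sigma)

# The naive estimator is biased high; the corrected one sits near truth
print(f"naive bias:    {p_naive.mean() - true_p:+.2f} mJy/beam")
print(f"debiased bias: {p_debiased.mean() - true_p:+.2f} mJy/beam")
```

At this signal-to-noise ratio ($\sim14$) the bias is small in absolute terms, which is why the quoted peak polarised flux densities are dominated by leakage rather than by the noise bias.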
Free-free emission is intrinsically unpolarised, but can become polarised at the edges of H{\sc ii}~regions by Thomson scattering. The radiation is then polarised tangentially to the edges of the cloud, at a level that depends on the viewing angle relative to the incident radiation \cite{Rybicki79}. At these angular resolutions, we did not expect to see this effect, since the sources are barely resolved in the CBI beam and thus any secondary scattering will be averaged out by the beam. Spinning dust emission is expected to be weakly polarised, at the few per cent level \cite{Lazarian00}. However, we did expect some level of instrumental leakage, which converts Stokes $I$ into Stokes $Q$ and $U$, at the level of $\sim 0.5$~per cent based on earlier observations of W44 \cite{Cartwright05}. We have not attempted to correct for leakage terms, and hence it is not surprising to see such levels of polarisation. This naturally explains the remarkably similar polarisation fractions observed in the different H{\sc ii}~regions. We therefore consider the quoted polarisation fractions (Table~\ref{tab:polarisation}) to be upper limits. The level of leakage shown here is encouraging, since the recent CBI polarisation results \cite{Readhead04b,Sievers07} had no corrections made for instrumental leakage. Given the signal-to-noise ratio of the CMB polarisation detections, this level of leakage can be safely ignored, and we can be confident that the contamination is below the 1 per cent level. For the most significant detection of excess emission, in $G284.3-0.3$, the polarisation limit translates to an upper limit on the spinning dust polarisation. If indeed 30 per cent of the 31~GHz emission in $G284.3-0.3$ is anomalous (e.g. from spinning dust), then the effective upper limit to the polarisation fraction of this component is $\sim 1$~per cent. 
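The $\sim1$~per cent figure follows from simple dilution arithmetic: the measured polarisation fraction limit applies to the total flux density, so the limit on the anomalous component alone is the total limit divided by the anomalous fraction of the flux. A minimal check, using the 0.24~per cent limit for $G284.3-0.3$ from the polarisation table and the 32~per cent excess fraction found for that source:

```python
total_pol_limit = 0.24     # per cent, G284.3-0.3 (table; treated as upper limit)
anomalous_fraction = 0.32  # excess fraction of the 31 GHz flux density

# Attributing all of the (limit on) polarised flux to the anomalous
# component alone undoes the dilution by the unpolarised free-free flux:
anomalous_pol_limit = total_pol_limit / anomalous_fraction
print(f"Anomalous-component polarisation limit: "
      f"~{anomalous_pol_limit:.2f} per cent")
```

This rounds to the $\sim1$~per cent limit quoted above; it is conservative in the sense that any polarised free-free or leakage contribution would lower the limit further.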
This is lower than that observed in the Perseus cloud, $G159.6-18.5$, which was observed to have a polarisation fraction of $3.4^{+1.5}_{-1.9}$~per cent \cite{Battistelli06}. Such low levels of polarisation are consistent with electric dipole emission from spinning grains. The slight discrepancy in polarisation level may be attributed to varying levels of ferromagnetic material, which through magneto-dipole emission can be much more highly polarised \cite{Draine99}. \section{Conclusions} Observations of six bright H{\sc ii}~regions suggest a small amount of excess emission at 31~GHz, based on fitting a free-free model to data from the literature. The dominant source of emission is optically thin free-free emission with a spectral index $\alpha \approx -0.12$, but we find that all the sources were slightly brighter at 31~GHz than the simple free-free model predicts. The average $100~\mu$m dust emissivity for the six H{\sc ii}~regions was found to be $3.3\pm1.7~\mu$K~(MJy/sr)$^{-1}$, or a 95 per cent confidence limit of $<6.1~\mu$K~(MJy/sr)$^{-1}$. This is lower by a factor of $\sim 3-4$ than for the cooler diffuse clouds at high Galactic latitudes (Table~\ref{tab:emissivities}). However, only one source, $G284.3-0.3$ (RCW49), was found to show a statistically significant excess, with a $100~\mu$m dust emissivity of $13.6\pm 4.2~\mu$K~(MJy/sr)$^{-1}$ ($3.3\sigma$). For this source, there are several caveats in interpreting and using data from the literature, which could reduce the significance of this result. New data in the range $5-30$~GHz, particularly from well-calibrated instruments, are required to clarify whether our tentative detection holds. The dust emissivity, relative to the $100~\mu$m map, for this object is consistent with that found in diffuse clouds at high Galactic latitudes (Table~\ref{tab:emissivities}). 
For the majority of the other sources, only upper limits could be obtained, which appear to show that the dust emissivity is in fact lower than that observed at high Galactic latitudes. We observed very low-level ($0.3-0.6$ per cent) polarisation at 31~GHz from all the H{\sc ii}~regions studied here. The level is consistent with that expected from instrumental leakage in the CBI instrument. This validates claims that the instrumental leakage is negligible ($<1$ per cent) for recent detections of CMB polarisation with the CBI. \section*{ACKNOWLEDGMENTS} CD thanks Barbara and Stanley Rawn Jr. for funding a fellowship at the California Institute of Technology for part of this work. We thank the staff and engineers at the Chajnantor observatory for their hard work and continuing support. In particular, we thank Cristobal Achermann, Ricardo Bustos, Rodrigo Reeves and Nolberto Oyarace. SC acknowledges support from FONDECYT grant 1060827. SC and LB acknowledge support from the Chilean Center for Astrophysics FONDAP 15010003. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Capafonts is a municipality in the comarca of Baix Camp. According to 2012 data, its population was 121 inhabitants. Also written Capafons in the traditional orthography, the place name derives from the Latin Capite fontium, "head of the springs". A total of 53 springs have been recorded, the most notable being the font de la Llódriga (or Llúdriga), which never runs dry.

Geography
List of place names of Capafonts (orography: mountains, ranges, passes, sites; hydrography: rivers, springs; buildings: houses, farmhouses, churches, etc.). The territory of Capafonts lies in the Muntanyes de Prades, surrounded by the municipalities of Vilaplana to the south-east, La Febró to the south-west, Prades to the north and north-west, and Mont-ral, already in the comarca of Alt Camp, to the east. It covers 13.27 km² and occupies the upper part of the valley of the river Brugent. The highest points of the territory are the Puig Pelat, at 1,071 m; the Pena Roja, at 1,025 m; and the distinctively shaped Picorandan, whose name, according to Coromines, means "swallow's beak" in Mozarabic. The south-eastern part of the territory corresponds to Els Motllats, a stony, dry area covered mainly by pine and holm-oak woods, from which charcoal used to be obtained. Situated at an altitude of 751 metres, the village lies in the middle of an amphitheatre of mountains and consists essentially of one main street, from which a few secondary streets branch off, and two squares. Since the 1970s, after considerable emigration of its population to towns such as Reus and Tarragona, a number of houses have been restored as summer residences, thanks to the cool, dry climate. The village is 36 kilometres from Reus and its airport, 140 from Barcelona, and just over 40 kilometres from Salou and the PortAventura theme park. The climate is typical of the Mediterranean mountains.

History
Capafonts appears to have been in the hands of the Frankish conquerors by 1151; it is mentioned in the 1159 charter of settlement of Prades, and it is believed to have been reconquered by Ponç II and Ramon de Cervera, sons of Ponç I de Cervera, after their father had occupied the whole plain below the Muntanyes de Prades. Integrated into the County of Prades, before 1392 the town was given as a dowry by Count Ramon Berenguer to his wife Blanca. Capafonts suffered various agricultural calamities, including drought and a locust plague in 1681 that recurred in 1685. According to Madoz, in 1846 it produced rye, legumes, hemp and potatoes, and very little wine; it had two flour mills driven by the river Brugent, and charcoal was made there. The village long had a communally owned and managed oven, where residents brought their flour and collected their bread; it remained in operation until 1985 and has been a museum since 2009.

Culture
The parish church is dedicated to Santa Maria. It is in a plain Baroque style, without filigree, and its construction was completed in 1763. It has three bells, the largest of which weighs 250 kilos. Above each of the sacristy doors are two ox heads which, according to tradition, were the payment demanded by the owner of the Mas del Fortet for having hauled the most stone for the construction of the church. On the outskirts stands the hermitage of the Mare de Déu de Barrulles, which remained abandoned for a long time until it was restored in 1956. The image of the Virgin, which was kept in the village church, is of polychrome alabaster. The festa major is celebrated on 15 August. The territory also contains the Mas del Fortet, a farmhouse that was rebuilt and then restored in 1976 to be converted into a group-accommodation house. Capafonts preserves a culinary tradition that has been lost in other mountain areas, notably thyme soup (sopes de farigola), mandongo and orelletes.

Economy
The main economic activity is agriculture. The abundance of water in the area makes it possible to alternate dry-land and irrigated crops; hazelnuts, vegetables and potatoes stand out.

Demographics

Notes

References

External links
Website of the Town Council
Information from the Generalitat de Catalunya
Information from the Institut d'Estadística de Catalunya
\section{Introduction}\label{sec:1} Recognizing human emotions from visual content has attracted significant attention in numerous computer vision applications such as health care and human-computer interaction systems~\cite{d2007toward,lisetti2003developing,yannakakis2011experience}. \begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure{\includegraphics[width=0.33\linewidth]{1_1a.jpg}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{1_1b.jpg}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{1_1e.jpg}}\hfill \\ \vspace{-10pt} \subfigure{\includegraphics[width=0.33\linewidth]{1_2a.jpg}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{1_2b.jpg}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{1_2e.jpg}}\hfill \\ \vspace{-10pt} \subfigure[(a)]{\includegraphics[width=0.33\linewidth]{1_3a.jpg}}\hfill \subfigure[(b)]{\includegraphics[width=0.33\linewidth]{1_3b.jpg}}\hfill \subfigure[(c)]{\includegraphics[width=0.33\linewidth]{1_3e.jpg}}\hfill \\ \vspace{-3pt} \caption{Intuition of CAER-Net: for untrimmed videos as in (a), conventional methods that leverage only the facial regions as in (b) often fail to recognize emotion. Unlike these methods, CAER-Net focuses on both the face and attentive context regions as in (c).} \label{fig:1}\vspace{-10pt} \end{figure} Previous research on emotion recognition based on handcrafted features~\cite{shan2009facial,zhong2012learning} or deep networks~\cite{fabian2016emotionet,li2018occlusion, li2017reliable} has mainly focused on the perception of facial expressions, based on the assumption that facial images are among the most discriminative cues of emotional responses. In this regard, the most widely used datasets, such as AFEW~\cite{dhall2011acted} and FER2013~\cite{goodfellow2013challenges}, provide only cropped and aligned facial images.
However, such conventional methods trained on facial image datasets frequently fail to provide satisfactory performance when the emotional signals in the faces are indistinguishable or ambiguous. Meanwhile, people recognize the emotions of others not only from their faces but also from surrounding contexts, such as actions, interactions with others, and place~\cite{barrett2011context,aminoff2013role}. Given untrimmed videos as in~\figref{fig:1}(a), could we tell how a woman feels solely from her facial expression as in~\figref{fig:1}(b)? It is ambiguous to estimate the emotion from cropped facial videos alone. However, we can easily guess the emotion as ``surprise" from her facial expression together with the context that another woman comes close to her, as shown in \figref{fig:1}(c). Nevertheless, such contexts have rarely been considered in most existing emotion recognition methods and benchmarks. Some methods have shown that emotion recognition performance can be significantly boosted by considering context information such as gesture and place~\cite{chen2016emotion, kosti2017emotion}. In addition, in visual sentiment analysis~\cite{li2012context,yang2018weakly}, which recognizes the sentiment of an image and is similar to emotion recognition but not tailored to humans, the holistic visual appearance has been used to encode such contexts. However, these approaches lack practical means of extracting the salient context information from visual content. Moreover, large-scale emotion recognition datasets that include the varied context information of real environments are absent.
To overcome these limitations, we present a novel framework, called Context-Aware Emotion Recognition Networks (CAER-Net), to recognize human emotion from images and videos by exploiting not only human facial expression but also scene contexts in a joint and boosting manner, instead of focusing only on the facial regions as in most existing methods~\cite{shan2009facial,zhong2012learning,fabian2016emotionet,li2018occlusion,li2017reliable}. The networks are designed in a two-stream architecture with two feature encoding streams: a face encoding stream and a context encoding stream. Our key ingredient is to seek other relevant contexts by hiding human faces based on an attention mechanism, which enables the networks to reduce ambiguity and improve accuracy in emotion recognition. The face and context features are then fused in an adaptive fusion network that predicts the emotion class by inferring an optimal fusion weight between the two-stream features. In addition, we build a novel database, called Context-Aware Emotion Recognition (CAER), by collecting a large number of video clips from TV shows and annotating the ground-truth emotion category. Experimental results show that CAER-Net outperforms baseline networks for context-aware emotion recognition on several benchmarks, including AFEW~\cite{dhall2011acted} and our CAER dataset. \section{Related Work}\label{sec:2} \paragraph{Emotion recognition approaches.}\label{sec:21} Most approaches to recognizing human emotion have focused on facial expression analysis~\cite{shan2009facial,zhong2012learning,fabian2016emotionet,li2018occlusion,li2017reliable}. Some methods are based on the facial action coding system~\cite{friesen1978facial,eleftheriadis2015discriminative}, where a set of localized movements of the face is used to encode facial expression.
Compared to conventional methods that relied on handcrafted features and shallow classifiers~\cite{shan2009facial,zhong2012learning}, recent approaches based on deep convolutional neural networks (CNNs) have made significant progress~\cite{fabian2016emotionet}. Various techniques to capture temporal dynamics in videos have also been proposed, making connections across time using recurrent neural networks (RNNs) or deep 3D-CNNs~\cite{fan2016video,lee2018spatiotemporal}. However, most works have relied on human face analysis, and thus have a limited ability to exploit context information for emotion recognition in the wild. To overcome these limitations, some approaches using other visual cues have been proposed~\cite{nicolaou2011continuous, schindler2008recognizing, chen2016emotion, kosti2017emotion}. Nicolaou~\etal~\cite{nicolaou2011continuous} used the location of the shoulders, and Schindler \etal~\cite{schindler2008recognizing} used the body pose to recognize six emotion categories under controlled conditions. Chen \etal~\cite{chen2016emotion} detected events, objects, and scenes using pre-learned CNNs and fused the resulting scores with context fusion. In~\cite{kosti2017emotion}, manually annotated body bounding boxes and holistic images were leveraged. However, \cite{kosti2017emotion} has a limited ability to encode dynamic signals (\ie, video) for emotion estimation. Moreover, the aforementioned methods lack practical solutions for extracting the salient context information and exploiting it for context-aware emotion recognition.
\begin{figure*} \centering \renewcommand{\thesubfigure}{} \subfigure[]{ \includegraphics[width=1\textwidth]{2.pdf}}\\ \vspace{-10pt} \caption{Network configuration of CAER-Net, consisting of two-stream encoding networks and adaptive fusion networks.} \label{fig:2}\vspace{-10pt} \end{figure*} \vspace{-10pt} \paragraph{Emotion recognition datasets.}\label{sec:22} Most of the datasets that focus on detecting the occurrence of expressions, such as CK+~\cite{lucey2010extended} and MMI~\cite{pantic2005web}, were captured in lab-controlled environments. Recently, datasets recorded in the wild to include naturalistic emotional states~\cite{dhall2011acted,dhall2011static,mollahosseini2016facial} have attracted much attention. The AFEW benchmark~\cite{dhall2011acted} of the EMOTIW challenge~\cite{dhall2016emotiw} provides video frames extracted from movies and TV shows, while the SFEW database~\cite{dhall2011static} was built as a static subset of AFEW. The FER-Wild database~\cite{mollahosseini2016facial} contains 24,000 images obtained by querying emotion-related terms from search engines. The MS-COCO database~\cite{patterson2016coco} has recently been annotated with object attributes, including some emotion categories for humans, but the attributes are not intended to be exhaustive for emotion recognition, and not all people are annotated with emotion attributes. Some studies~\cite{kleinsmith2007recognizing, kleinsmith2011automatic} built databases consisting of a spontaneous subset acquired under a restrictive setting to establish the relationship between emotion and body posture. The EMOTIC database~\cite{kosti2017emotion} provides manually annotated body regions associated with emotional states. Although these datasets investigate different aspects of emotion recognition with contexts, a large-scale dataset for context-aware emotion recognition that contains various context information is still absent.
\vspace{-10pt} \paragraph{Attention inference.}\label{sec:23} Since deep CNNs have achieved great success in many computer vision areas~\cite{krizhevsky2012imagenet,simonyan2014very,he2016deep}, numerous attention inference models~\cite{zhou2016learning,selvaraju2017grad} have been investigated to identify discriminative regions where the networks attend, by mining discriminative regions~\cite{kumar2016track}, implicitly analyzing higher-layer activation maps~\cite{zhou2016learning,selvaraju2017grad}, or designing different architectures of attention modules~\cite{woo2018cbam,hu2018squeeze}. Although the attention produced by these conventional methods can be used as a prior for various tasks, it covers only the most discriminative regions of the object, and thus frequently fails to capture other discriminative parts that could improve performance. The methods most related to our work discover attentive areas for visual sentiment recognition~\cite{yang2018weakly,you2017visual}. Although they produce emotion sentiment maps using deep CNNs, they focus only on image-level sentiment analysis, not human-centric emotion as in our work. \section{Proposed Method}\label{sec:3} \subsection{Motivation and Overview}\label{sec:31} In this section, we describe a simple yet effective framework for context-aware emotion recognition in images and videos that exploits facial expression and context information in a boosting and synergistic manner. A simple solution would be to use the holistic visual appearance, similar to~\cite{kosti2017emotion,chen2016emotion}, but such a model cannot encode salient contextual regions well. Based on the intuition that emotions can be recognized by understanding the context components of a scene as well as the facial expression, we present an attention inference module that estimates the context information in images and videos.
By hiding the facial regions in the inputs and seeking the attention regions, our networks localize more discriminative context regions, which are used to improve emotion recognition accuracy in a context-aware manner. Concretely, let us denote an image and a video consisting of a sequence of $T$ images as $I$ and $V=\{I_1, \dots, I_T\}$, respectively. Our objective is to infer the discrete emotion label $y$ among $K$ emotion labels $\{y_1,\dots, y_K\}$ of the image $I$ or video clip $V$ with deep CNNs. To solve this problem, we present a network architecture consisting of two sub-networks, a \textit{two-stream encoding network} and an \textit{adaptive fusion network}, as illustrated in \figref{fig:2}. The two-stream encoding networks consist of a \textit{face stream} and a \textit{context stream}, in which facial expression and context information are encoded in separate networks. By combining the two features in the adaptive fusion network, our method attains optimal performance for context-aware emotion recognition. \subsection{Network Architectures}\label{sec:32} \subsubsection{Two-stream Encoding Networks}\vspace{-5pt} In this section, we first present a dynamic model of our networks for analyzing videos, and then a static model for analyzing images.\vspace{-10pt} \paragraph{Face encoding stream.} As in existing facial expression analysis approaches~\cite{fabian2016emotionet,lee2018spatiotemporal, fan2018video}, our networks also have a facial expression encoding module. We first detect and crop the facial regions using an off-the-shelf face detector~\cite{king2009dlib} to build the input of the face stream, $V_F$. The facial expression encoding module extracts the facial expression features, denoted as $X_F$, from the temporally stacked face-cropped inputs $V_F$ by a feed-forward process such that \begin{equation} X_F = \mathcal{F}(V_F; W_F), \end{equation} with face stream parameters $W_F$.
The facial expression encoding module is built on the basic operations of 3D-CNNs, which are well suited for spatiotemporal feature representation. Compared to 2D-CNNs, 3D-CNNs are better able to model temporal information in videos using 3D convolution and 3D pooling operations. Specifically, the face encoding module consists of 5 convolutional layers with $3 \times 3 \times 3$ kernels, each followed by batch normalization (BN) and rectified linear unit (ReLU) layers, and 4 max-pooling layers with stride $2 \times 2 \times 2$ except for the first one. The first pooling layer has a kernel size of $1 \times 2 \times 2$ so as not to merge the temporal signal too early. The numbers of kernels for the five convolution layers are 32, 64, 128, 256 and 256, respectively. The final feature ${X}_F$ is spatially averaged in an average-pooling layer.\vspace{-10pt} \paragraph{Context encoding stream.} In comparison to the face encoding stream, the context encoding stream includes both a context encoding module and an attention inference module. To extract the context information excluding the facial expression, we present a novel strategy that hides the faces and seeks contexts based on attention mechanisms. Specifically, the context encoding module extracts the context features, denoted as $X_C$, from the temporally stacked face-hidden inputs $V_C$ by a feed-forward process: \begin{equation} X_C = \mathcal{F}(V_C; W_C), \end{equation} with context stream parameters $W_C$. In addition, an attention inference module is learned to extract attention regions of the input, enabling the context encoding stream to focus on the salient contexts. Concretely, the attention inference module takes an intermediate feature $X_C$ as input to infer the attention $A \in \mathbb{R}^{H \times W}$, where $H\times W$ is the spatial resolution of $X_C$.
To make the attention sum to 1 over all spatial positions, we normalize $A$ with the spatial softmax~\cite{sharma2015action}: \begin{equation} \hat{A}_{i} = \frac{\mathrm{exp}(A_i)}{\sum_{j} \mathrm{exp}(A_j)}, \end{equation} where $\hat{A}_i$ is the attention for context at each pixel $i$ and $j \in \{1, \cdots, H \times W\}$. Since we temporally aggregate the features using 3D-CNNs, we normalize the attention weights only across the spatial axes, not the temporal axis. Note that the attention is learned implicitly, in an unsupervised manner. The attention $\hat{A}$ is then applied to the feature $X_C$ to produce the attention-boosted feature $\bar{X}_C$: \begin{equation} \bar{X}_C = \hat{A} \odot X_C, \end{equation} where $\odot$ is an element-wise multiplication operator. Specifically, we use five convolution layers, each followed by BN and ReLU, to extract the intermediate feature volumes $X_C$, together with 4 max-pooling layers. All max-pooling layers except the first have $2 \times 2 \times 2$ kernels with stride $2$. The first pooling layer has a kernel size of $1 \times 2 \times 2$, similar to the facial expression encoding stream. The numbers of filters for the five convolution layers are 32, 64, 128, 256, and 256, respectively. In the attention inference module, we use two convolution layers with $3 \times 3 \times 3$ kernels producing 128 and 1 feature channels, each followed by BN and ReLU layers.
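As a minimal sketch of Eqs. (3) and (4) in plain Python (not the paper's PyTorch code; the function names and the flattened-list layout over the $H \times W$ positions are our own), the spatial softmax and the element-wise weighting can be written as:

```python
import math

def spatial_softmax(scores):
    """Normalize raw attention scores over all H*W positions (Eq. 3)."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_boost(attn, feature):
    """Element-wise product A-hat ⊙ X_C (Eq. 4); `feature` holds one list per channel."""
    return [[a * x for a, x in zip(attn, channel)] for channel in feature]

# A 2x2 spatial map flattened to 4 positions, with a single feature channel
attn = spatial_softmax([1.0, 2.0, 0.5, 0.0])
boosted = attention_boost(attn, [[1.0, 1.0, 1.0, 1.0]])
```

Because the weights sum to 1 over the spatial positions, the boosted feature is a convex spatial reweighting of the original; in the paper this is done per frame, with no normalization across the temporal axis.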
The final feature $\bar{X}_C$ is spatially averaged in an average-pooling layer.\vspace{-10pt} \begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure[(a) input]{\includegraphics[width=0.33\linewidth]{3_1.png}}\hfill \subfigure[(b) static model]{\includegraphics[width=0.33\linewidth]{3_2.png}}\hfill \subfigure[(c) dynamic model]{\includegraphics[width=0.33\linewidth]{3_3.png}}\hfill \caption{Visualization of the attention maps of (b) static and (c) dynamic context encoding models of CAER-Net.} \label{fig:3}\vspace{-10pt} \end{figure} \paragraph{Static model.} The dynamic model described above can be simplified for emotion recognition in images. The static model, called CAER-Net-S, takes both a single-frame face-cropped image $I_F$ and a face-hidden image $I_C$ as input. In the networks, all 3D convolution and 3D max-pooling layers are replaced with 2D convolution and 2D max-pooling layers, respectively. Thus, the two model variants can be applied in various environments regardless of the data type. \figref{fig:3} visualizes the attention maps of the static and dynamic models. As expected, both models localize the context information well, excluding the facial expression. By exploiting temporal connectivity, the dynamic model localizes more salient regions than the static model.
\begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure{\includegraphics[width=0.33\linewidth]{4_1.pdf}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{4_2.pdf}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{4_3.pdf}}\hfill \\ \vspace{-5pt} \subfigure{\includegraphics[width=0.33\linewidth]{4_4.pdf}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{4_5.pdf}}\hfill \subfigure{\includegraphics[width=0.33\linewidth]{4_6.pdf}}\hfill \\ \caption{Some examples of the attention weights, \ie, $\lambda_F$ and $\lambda_C$, in our networks.} \label{fig:4}\vspace{-10pt} \end{figure} \subsubsection{Adaptive Fusion Networks}\vspace{-5pt} To recognize emotion using the face and context information jointly, the features extracted from the two modules must be combined. However, a direct concatenation of different features~\cite{kosti2017emotion} often fails to provide optimal performance. To alleviate this limitation, we build adaptive fusion networks with an attention model that infers an optimal fusion weight for each feature, $X_F$ and $\bar{X}_C$. The attentions are learned such that $\lambda_F = \mathcal{F}(X_F; W_D)$ and $\lambda_C = \mathcal{F}(\bar{X}_C; W_E)$, with network parameters $W_D$ and $W_E$, respectively. A softmax function makes these attentions sum to $1$, \ie, $\lambda_F + \lambda_C = 1$. \figref{fig:4} shows some examples of the attention weights, \ie, $\lambda_F$ and $\lambda_C$, in CAER-Net. Depending on the content, the attention weights are adaptively determined to yield an optimal solution. Unlike methods using simple concatenation~\cite{kosti2017emotion}, the learned attentions are applied to the inputs as \begin{equation} X_A = \Pi(X_F \odot \lambda_F, \bar{X}_C \odot \lambda_C), \end{equation} where $\Pi$ is a concatenation operator.
We then estimate the final emotion category $y$ with a classifier: \begin{equation} y = \mathcal{F}(X_A; W_G), \end{equation} where $W_G$ represents the remaining parameters of the adaptive fusion networks. \begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure[]{\includegraphics[width=1.0\linewidth]{5.pdf}}\\ \vspace{-10pt} \caption{Procedure for building the CAER benchmark: we divide the video clips into shots with a shot boundary detection method, and remove shots without detected faces, group-level shots, and shots whose emotion is ambiguous. Finally, we annotate the emotion category.}\label{fig:5}\vspace{-10pt} \end{figure} \begin{figure*} \centering \renewcommand{\thesubfigure}{} \subfigure[(a) EMOTIC~\cite{kosti2017emotion}]{\includegraphics[height=0.192\textheight]{6_1.pdf}} \subfigure[(b) AffectNet~\cite{mollahosseiniaffectnet}]{\includegraphics[height=0.192\textheight]{6_2.pdf}} \subfigure[(c) CAER]{\includegraphics[height=0.192\textheight]{6_3.pdf}} \vspace{-5pt} \caption{Examples from EMOTIC~\cite{kosti2017emotion}, AffectNet~\cite{mollahosseiniaffectnet} and CAER. While EMOTIC includes images in which faces are not visible, yielding ambiguous emotion recognition, AffectNet includes face-cropped images, which limits the use of context.} \label{fig:6}\vspace{-5pt} \end{figure*} Specifically, the fusion networks consist of 6 convolution layers with $1 \times 1$ kernels. Four of these layers produce the fusion attentions $\lambda_{F}$ and $\lambda_{C}$: two intermediate layers, one receiving each stream feature as input, produce 128-channel features, and the remaining two produce the 1-channel attentions for the facial and contextual features. For the two layers that act as the final classifier, the first convolution layer produces a 128-channel feature, followed by ReLU and dropout layers to prevent network overfitting, and the second convolution layer produces a $K$-channel feature to estimate the emotion category.
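The fusion weighting can be sketched as follows; this is a simplified scalar version under our own naming (in the paper the attentions come from $1\times1$ convolutions), in which the two attention logits are passed through a softmax so that $\lambda_F + \lambda_C = 1$, each stream's feature is scaled by its weight, and the results are concatenated:

```python
import math

def adaptive_fusion(x_face, x_context, logit_face, logit_context):
    """Weight the two stream features by softmaxed attentions and concatenate."""
    m = max(logit_face, logit_context)  # stabilize the softmax
    e_f = math.exp(logit_face - m)
    e_c = math.exp(logit_context - m)
    lam_f = e_f / (e_f + e_c)  # lambda_F
    lam_c = e_c / (e_f + e_c)  # lambda_C, so lam_f + lam_c == 1
    fused = [v * lam_f for v in x_face] + [v * lam_c for v in x_context]
    return fused, lam_f, lam_c
```

The softmax constraint is what makes the fusion adaptive: pushing one logit up necessarily suppresses the other stream, so the network must trade facial evidence off against contextual evidence rather than accumulating both freely.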
\section{The CAER Benchmark}\label{sec:4} Most existing datasets~\cite{goodfellow2013challenges, mollahosseiniaffectnet} have focused on human facial analysis, and thus they are inappropriate for context-aware emotion recognition. In this section, we introduce a benchmark built by collecting large-scale video clips from TV shows and annotating them for context-aware emotion recognition. \subsection{Annotation}\label{sec:41} We first collected video clips from 79 TV shows and then refined them using a shot boundary detector, face detection/tracking and feature clustering~\footnote{https://github.com/pyannote/pyannote-video}. Each video clip was manually annotated with six emotion categories, ``anger", ``disgust", ``fear", ``happy", ``sad", and ``surprise", as well as ``neutral". Six annotators were recruited to assign the emotion category to the 20,484 clips of the initial collection. Since all the video clips have audio and visual tracks, the annotators labeled them while listening to the audio tracks for more accurate annotations. Each clip was evaluated by three different annotators. The annotation was performed blindly and independently, \ie, the annotators were not aware of the other annotators' responses. Importantly, in comparison with existing datasets~\cite{dhall2011acted,kosti2017emotion}, confidence scores were annotated in addition to the emotion category; these can be thought of as the probability that the annotation is reliable. If two or more annotators assigned the same emotion category, the clip remained in the database. We also removed clips whose average confidence was below $0.5$. Finally, 13,201 clips and about 1.1M frames were retained. The videos range from short (around 30 frames) to longer clips (more than 120 frames), with an average sequence length of 90 frames. In addition, we extracted about 70K static images from CAER to create a static image subset, called CAER-S.
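The clip-retention rule described above (agreement of at least two of the three annotators, and an average confidence of at least 0.5) can be sketched as follows; the function is our illustration of the stated criteria, not the authors' annotation tooling:

```python
from collections import Counter

def keep_clip(labels, confidences, min_agree=2, min_conf=0.5):
    """Return the majority emotion label if the clip passes both filters, else None."""
    label, votes = Counter(labels).most_common(1)[0]
    if votes < min_agree:
        return None  # no two annotators agreed on a category
    if sum(confidences) / len(confidences) < min_conf:
        return None  # average annotation confidence too low
    return label
```

Applying such a rule to the 20,484 initially collected clips would leave only those with both a majority label and sufficiently confident annotations, matching the 13,201 retained clips reported above.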
The dataset is randomly split into training (70\%), validation (10\%), and testing (20\%) sets. The overall data acquisition and annotation procedure is illustrated in \figref{fig:5}. \tabref{tab:1} summarizes the number of clips per category in the CAER benchmark. \begin{table} \begin{center} \begin{tabular*}{\linewidth}{l @{\extracolsep{\fill}} ccc} \hlinewd{0.8pt} Category & \# of clips & \# of frames & \% \tabularnewline \hline \hline Anger & 1,628 & 139,681 & 12.33 \tabularnewline Disgust & 719 & 59,630 & 5.44\tabularnewline Fear & 514 & 46,441 & 3.89\tabularnewline Happy & 2,726 & 219,377 & 20.64\tabularnewline Neutral & 4,579 & 377,276 & 34.69\tabularnewline Sad & 1,473 & 138,599 & 11.16\tabularnewline Surprise & 1,562 & 126,873 & 11.83\tabularnewline \hline Total & 13,201 & 1,107,877 & 100\tabularnewline \hlinewd{0.8pt} \end{tabular*} \end{center}\vspace{-5pt} \caption{Number of video clips in each category of the CAER dataset.} \label{tab:1}\vspace{-10pt} \end{table} \begin{table*} \begin{center} \begin{tabular}{ >{\raggedright}m{0.15\linewidth} >{\raggedright}m{0.13\linewidth} >{\centering}m{0.13\linewidth} >{\centering}m{0.13\linewidth} >{\centering}m{0.13\linewidth} >{\centering}m{0.13\linewidth}} \hlinewd{0.8pt} {Data type} & {Dataset} & {Amount of data} & {Setting} & {Annotation type} & {Context}\tabularnewline \hline \hline \multirow{3}{*}{{Static (Images)}} & EMOTIC~\cite{kosti2017emotion} & 18,316 images & Web & 26 Categories & \cmark \tabularnewline & AffectNet~\cite{mollahosseiniaffectnet} & 450,000 images & Web & 8 Categories & \xmark \tabularnewline & {CAER-S} & 70,000 images & TV show & 7 Categories & \cmark \tabularnewline \hline \multirow{2}{*}{{Dynamic (Videos)}} & AFEW~\cite{dhall2012collecting} & 1,809 clips & Movie & 7 Categories & \xmark \tabularnewline & {CAER} & 13,201 clips & TV show & 7 Categories & \cmark \tabularnewline \hlinewd{0.8pt} \end{tabular} \end{center} \vspace{-5pt} \caption{Comparison of CAER with existing emotion
recognition datasets such as EMOTIC~\cite{kosti2017emotion}, AffectNet~\cite{mollahosseiniaffectnet}, AFEW~\cite{dhall2012collecting}, and Video Emotion~\cite{jiang2014predicting}. Compared to existing datasets, CAER contains a large number of video clips for context-aware emotion recognition.} \label{tab:2}\vspace{-10pt} \end{table*} \subsection{Analysis}\label{sec:42} We compare the CAER and CAER-S datasets with other widely used datasets, namely EMOTIC~\cite{kosti2017emotion}, AffectNet~\cite{mollahosseiniaffectnet}, AFEW~\cite{dhall2012collecting}, and Video Emotion~\cite{jiang2014predicting}, as shown in \tabref{tab:2}. According to the data type, the datasets are grouped into static and dynamic. Even though static databases for facial expression analysis such as AffectNet~\cite{mollahosseiniaffectnet} and FER-Wild~\cite{mollahosseini2016facial} collect a large number of facial expression images from the web, they contain only face-cropped images without the surrounding context. In addition, EMOTIC~\cite{kosti2017emotion} contains images in which the human face is not visible, as exemplified in \figref{fig:6}, causing subjective and ambiguous labelling by observers. On the other hand, commonly used video emotion recognition datasets contain far less data than image-based datasets~\cite{jiang2014predicting,kossaifi2017afew}. Compared to these datasets, the CAER dataset provides large-scale video clips, sufficient in quantity to train machine learning algorithms for context-aware emotion recognition. \section{Experiments}\label{sec:5} \subsection{Implementation Details}\label{sec:51} CAER-Net was implemented with the PyTorch library~\cite{paszke2017automatic}. We trained CAER-Net from scratch with the learning rate initialized to $5 \times 10^{-3}$ and dropped by a factor of 10 every 4 epochs. CAER-Net was learned with the cross-entropy loss~\cite{kim2019unified} with ground-truth emotion labels and a batch size of 32.
Since the CAER dataset contains videos of various lengths, we randomly extracted a single clip of 16 consecutive, non-overlapping frames from every training video, sampled at 10 frames per second. While the face clips $V_F$ are resized to a frame size of $96 \times 96$, the context clips $V_C$ are resized to $128 \times 171$ and randomly cropped to $112 \times 112$ at the training stage. We also trained the static model, CAER-Net-S, on the CAER-S dataset with an input size of $224 \times 224$. To reduce the effects of overfitting, we employed dropout with a ratio of 0.5 between the $1 \times 1$ convolution layers, and data augmentation schemes such as flips and contrast and color changes. At the testing phase, we used a single center crop per context clip. For video predictions, we split a video into 16-frame clips with an 8-frame overlap between consecutive clips and then average the predictions over all clips. \subsection{Experimental Settings}\label{sec:52} We evaluated CAER-Net on the CAER and AFEW~\cite{dhall2011acted} datasets, respectively. To evaluate the proposed networks quantitatively, we measured emotion recognition performance by classification accuracy, as used in~\cite{dhall2016emotiw}. We reproduced four classical deep network architectures up to the fully-connected layers, namely AlexNet~\cite{krizhevsky2012imagenet}, VGGNet~\cite{simonyan2014very}, ResNet~\cite{he2016deep}, and C3D~\cite{tran2015learning}, as baseline methods, adopting two fully-connected layers as classifiers. We initialized the feature extraction modules of all the baselines using models pretrained on two large-scale classification datasets, ImageNet~\cite{deng2009imagenet} and Sports-1M~\cite{karpathy2014large}, and fine-tuned the whole networks on the CAER benchmark with a learning rate of $10^{-4}$.
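The test-time protocol (16-frame clips with an 8-frame overlap, predictions averaged over all clips of a video) can be sketched as follows, under our own function names:

```python
def clip_starts(num_frames, clip_len=16, stride=8):
    """Start indices of overlapping test clips; short videos yield one clip."""
    if num_frames < clip_len:
        return [0]
    return list(range(0, num_frames - clip_len + 1, stride))

def video_scores(per_clip_scores):
    """Average per-class scores over all clips of a video."""
    num_clips = len(per_clip_scores)
    num_classes = len(per_clip_scores[0])
    return [sum(clip[c] for clip in per_clip_scores) / num_clips
            for c in range(num_classes)]
```

For a 32-frame video this yields clips starting at frames 0, 8 and 16, whose per-class scores are then averaged to give the video-level prediction.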
\begin{table} \begin{center} \begin{tabular}{ >{\raggedright}m{0.22\linewidth} >{\centering}m{0.07\linewidth} >{\centering}m{0.07\linewidth} >{\centering}m{0.08\linewidth} >{\centering}m{0.08\linewidth} >{\centering}m{0.16\linewidth}} \hlinewd{0.8pt} Methods & w/F & w/C & w/cA & w/fA & Acc. (\%) \tabularnewline \hline \hline \multirow{3}{*}{CAER-Net-S} & \cmark & & & & 70.09\tabularnewline & & \cmark & \cmark& & 65.65 \tabularnewline & \cmark & \cmark & \cmark& \cmark& 73.51 \tabularnewline \hline \multirow{6}{*}{CAER-Net} & \cmark & &&& 74.13\tabularnewline & & \cmark & \cmark && 71.94 \tabularnewline & \cmark & \cmark & && 74.36 \tabularnewline & \cmark & \cmark & \cmark&& 74.94 \tabularnewline & \cmark & \cmark & & \cmark& 75.57 \tabularnewline & \cmark & \cmark & \cmark & \cmark& {77.04} \tabularnewline \hlinewd{0.8pt} \end{tabular} \end{center} \vspace{-5pt} \caption{Ablation study of CAER-Net-S and CAER-Net on the CAER-S and CAER datasets, respectively. `F', `C', `cA', and `fA' denote the face encoding stream, context encoding stream, context attention module, and fusion attention module, respectively.}\label{tab:3}\vspace{-10pt} \end{table} \subsection{Results on the CAER dataset}\label{sec:53} \paragraph{Ablation study.} We analyzed CAER-Net-S and CAER-Net with ablation studies, varying the combination of inputs (cropped face and context) and attention modules (context and fusion attention). For all these experiments, CAER-Net-S and CAER-Net were trained and tested on the CAER-S and CAER datasets, respectively. For a quantitative analysis of the ablation study, we examined the classification accuracy on the CAER benchmark, as shown in \tabref{tab:3}. The results show that the best performance is obtained when both the face and context are used as inputs. As our baseline, CAER-Net w/F, which considers facial expression only for emotion recognition, provides an accuracy of 74.13 $\%$.
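Schematically, the fusion attention module (`fA' in \tabref{tab:3}) weights the face and context streams with softmax-normalized scores before classification. The scalar-score version below is a deliberate simplification of the paper's convolutional module, and all names are our own:

```python
import math

def fusion_weights(face_score, ctx_score):
    # Softmax over the two stream scores -> weights summing to 1.
    m = max(face_score, ctx_score)  # subtract max for numerical stability
    e_f, e_c = math.exp(face_score - m), math.exp(ctx_score - m)
    return e_f / (e_f + e_c), e_c / (e_f + e_c)

def fuse(face_feat, ctx_feat, face_score, ctx_score):
    # Attention-weighted combination of the two feature vectors.
    w_f, w_c = fusion_weights(face_score, ctx_score)
    return [w_f * f + w_c * c for f, c in zip(face_feat, ctx_feat)]
```

In the network itself the weighted stream features feed further classifier layers rather than being summed elementwise; the sum here is only for illustration.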
Compared to this, our CAER-Net, which fully makes use of both face and context, shows the best performance. Comparing the static and dynamic models, CAER-Net shows a 3.53 $\%$ improvement over CAER-Net-S, which demonstrates the importance of considering temporally dynamic inputs for context-aware emotion recognition. \begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure[(a) CAER-Net w/F]{\includegraphics[width=0.5\linewidth]{7_1.pdf}}\hfill \subfigure[(b) CAER-Net]{\includegraphics[width=0.5\linewidth]{7_2.pdf}}\hfill\\ \caption{Confusion matrices of CAER-Net with the face stream only and with the face and context streams on the CAER benchmark.} \label{fig:7}\vspace{-8pt} \end{figure} \begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure{\includegraphics[width=0.248\linewidth]{8_1.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_2.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_3.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_4.png}}\hfill\\ \vspace{-10pt} \subfigure{\includegraphics[width=0.248\linewidth]{8_5.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_6.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_7.png}}\hfill \subfigure{\includegraphics[width=0.248\linewidth]{8_8.png}}\hfill\\ \vspace{-10pt} \subfigure[(a)]{\includegraphics[width=0.248\linewidth]{8_9.png}}\hfill \subfigure[(b)]{\includegraphics[width=0.248\linewidth]{8_10.png}}\hfill \subfigure[(c)]{\includegraphics[width=0.248\linewidth]{8_11.png}}\hfill \subfigure[(d)]{\includegraphics[width=0.248\linewidth]{8_12.png}}\hfill\\ \vspace{-3pt} \caption{Visualization of the attention: (from top to bottom) inputs, attention maps of CAER-Net-S and CAER-Net.
(a) and (b) are results of the ablation study without hiding the face during training; (c) and (d) with hiding the face.} \label{fig:10}\vspace{-10pt} \end{figure} \figref{fig:7} shows the confusion matrices of CAER-Net w/F and CAER-Net, which also verify that, compared to the model that focuses on the facial stream only, a joint model that considers the facial and context streams simultaneously can significantly boost the emotion recognition performance. The accuracies for happy and neutral increased by 7.48\% and 5.65\%, respectively, which clearly shows that context information helps distinguish these two categories better than using facial expression alone. Finally, we conducted an ablation study for the context attention module. First, when we trained CAER-Net-S and CAER-Net without hiding the face, they tended to focus on the most discriminative parts only (\ie, faces), as depicted in the first two columns of \figref{fig:10}. Second, we conducted another experiment on \emph{actionless} frames, as depicted in the last two columns. As shown in the last two columns of \figref{fig:10}, both CAER-Net-S and CAER-Net attend not only to ``things that move" but also to salient scenes that can be emotion signals. To summarize, our context encoding stream enables the networks to attend to salient context that boosts performance for both images and videos.
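The per-category gains above are read off row-normalized confusion matrices such as those in \figref{fig:7}; a minimal version of that bookkeeping (helper names are ours):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    # m[i][j] = number of samples of true class i predicted as class j.
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def per_class_accuracy(m):
    # Diagonal of the row-normalized matrix; rows with no samples give 0.
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(m)]
```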
\begin{figure} \centering \renewcommand{\thesubfigure}{} \subfigure[]{\includegraphics[width=1\linewidth]{9.pdf}}\\ \vspace{-13pt} \caption{Quantitative evaluation of CAER-Net-S in comparison to baseline methods on each category in the CAER-S benchmark.}\vspace{-5pt}\label{fig:8} \end{figure} \begin{figure*} \centering \renewcommand{\thesubfigure}{} \subfigure{\includegraphics[width=0.164\linewidth]{10_1.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_2.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_3.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_4.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_5.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_6.jpg}}\hfill\\ \vspace{-10pt} \subfigure{\includegraphics[width=0.164\linewidth]{10_1b.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_2b.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_3b.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_4b.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_5b.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_6b.jpg}}\hfill\\ \vspace{-10pt} \subfigure{\includegraphics[width=0.164\linewidth]{10_1c.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_2c.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_3c.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_4c.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_5c.jpg}}\hfill \subfigure{\includegraphics[width=0.164\linewidth]{10_6c.jpg}}\hfill \\ \vspace{-10pt} \subfigure[(a) ``Disgust"]{\includegraphics[width=0.164\linewidth]{10_1d.jpg}}\hfill \subfigure[(b) ``Fear"]{\includegraphics[width=0.164\linewidth]{10_2d.jpg}}\hfill \subfigure[(c) ``Surprise"]{\includegraphics[width=0.164\linewidth]{10_3d.jpg}}\hfill \subfigure[(d) ``Sad"]{\includegraphics[width=0.164\linewidth]{10_4d.jpg}}\hfill \subfigure[(e) 
``Happy"]{\includegraphics[width=0.164\linewidth]{10_5d.jpg}}\hfill \subfigure[(f) ``Fear"]{\includegraphics[width=0.164\linewidth]{10_6d.jpg}}\hfill\\ \caption{Visualization of learned attention maps in CAER-Net-S: (from top to bottom) inputs, attention maps of CAM~\cite{zhou2016learning}, inputs of the context encoding stream, and attention maps in the context encoding stream. Note that red indicates attentive regions and blue indicates suppressed regions. Best viewed in color.}\vspace{-10pt} \label{fig:9} \end{figure*} \vspace{-10pt} \paragraph{Comparison to baseline methods.} In \figref{fig:8} and \tabref{tab:4}, we evaluated CAER-Net-S against baseline 2D CNN-based approaches. The standard networks, including AlexNet~\cite{krizhevsky2012imagenet}, VGGNet~\cite{simonyan2014very}, and ResNet~\cite{he2016deep} pretrained on ImageNet, were reproduced for comparison with CAER-Net-S. In addition, we also fine-tuned these networks on the CAER-S dataset. Compared to these baseline methods, our CAER-Net-S improves the classification performance over the fine-tuned ResNet by 5.05$\%$. Moreover, CAER-Net-S consistently performs favorably against the baseline deep networks on each category in the CAER-S benchmark, which illustrates that CAER-Net-S learns a more discriminative representation for this task. In addition, we evaluated CAER-Net against a baseline 3D CNN-based approach in \tabref{tab:5}. Compared to C3D~\cite{tran2015learning}, our CAER-Net achieves state-of-the-art performance on the CAER benchmark. \begin{table}[!t] \begin{center} \begin{tabular}{ >{\raggedright}m{0.5\linewidth} >{\centering}m{0.16\linewidth}} \hlinewd{0.8pt} Methods & Acc.
(\%)\tabularnewline \hline \hline ImageNet-AlexNet~\cite{krizhevsky2012imagenet} & 47.36\tabularnewline ImageNet-VGGNet~\cite{simonyan2014very} & 49.89\tabularnewline ImageNet-ResNet~\cite{he2016deep} & 57.33\tabularnewline \hline Fine-tuned AlexNet~\cite{krizhevsky2012imagenet} & 61.73\tabularnewline Fine-tuned VGGNet~\cite{simonyan2014very} & 64.85\tabularnewline Fine-tuned ResNet~\cite{he2016deep} & 68.46\tabularnewline \hline CAER-Net-S & 73.51\tabularnewline \hlinewd{0.8pt} \end{tabular} \end{center} \vspace{-5pt} \caption{Quantitative evaluation of CAER-Net-S in comparison to baseline methods on the CAER-S benchmark.}\vspace{-10pt}\label{tab:4} \end{table} \begin{table}[!t] \begin{center} \begin{tabular}{ >{\raggedright}m{0.5\linewidth} >{\centering}m{0.16\linewidth}} \hlinewd{0.8pt} Methods & Acc. (\%)\tabularnewline \hline \hline Sports-1M-C3D~\cite{tran2015learning} & 66.38 \tabularnewline Fine-tuned C3D~\cite{tran2015learning} & 71.02 \tabularnewline \hline CAER-Net & 77.04 \tabularnewline \hlinewd{0.8pt} \end{tabular} \end{center} \vspace{-5pt} \caption{Quantitative evaluation of CAER-Net in comparison to C3D~\cite{tran2015learning} on the CAER benchmark.}\vspace{-5pt}\label{tab:5} \end{table} Finally, \figref{fig:9} shows qualitative results with learned attention maps obtained by CAM~\cite{zhou2016learning} with the fine-tuned VGGNet and by the context encoding stream of CAER-Net-S. Note that the images in \figref{fig:9} were correctly classified into their ground-truth emotion categories by both the fine-tuned VGGNet and CAER-Net-S. Unlike CAM~\cite{zhou2016learning}, which only considers facial expressions, the attention mechanism in CAER-Net-S localizes context information well, boosting the emotion recognition performance in a context-aware manner. \subsection{Results on the AFEW dataset}\label{sec:52}\vspace{-3pt} We conducted an additional experiment to verify the effectiveness of the CAER dataset compared to the AFEW dataset~\cite{dhall2011acted}.
When we trained CAER-Net on the combination of the CAER and AFEW datasets, a significant improvement was attained, which demonstrates that the CAER dataset can complement the data distribution of the AFEW dataset. It should be noted that although Fan \etal~\cite{fan2018video} achieved better performance, they formulated their networks as an ensemble of various networks to maximize the performance in the EmotiW challenge. Unlike this, we focused on investigating how context information helps to improve the emotion recognition performance, and for this purpose we chose a shallower architecture than that of Fan~\etal~\cite{fan2018video}. If the face encoding stream adopts more complicated networks such as those of Fan \etal~\cite{fan2018video}, the performance of CAER-Net will also be highly boosted. We leave this as future work.\vspace{-5pt} \begin{table}[!t] \begin{center} \begin{tabular}{ >{\raggedright}m{0.398\linewidth} >{\centering}m{0.29\linewidth} >{\centering}m{0.16\linewidth}} \hlinewd{0.8pt} Methods & Training data& Acc. (\%)\tabularnewline \hline \hline VielZeuf \etal~\cite{vielzeuf2017temporal} w/F & FER+AFEW & 48.60 \tabularnewline Fan \etal~\cite{fan2016video} w/F & FER+AFEW & 48.30 \tabularnewline Hu \etal~\cite{hu2017learning} w/F & AFEW & 42.55 \tabularnewline Fan \etal~\cite{fan2018video} w/F & FER+AFEW & 57.43 \tabularnewline \hline CAER-Net w/F & AFEW & 41.86 \tabularnewline CAER-Net & CAER & 38.65 \tabularnewline CAER-Net & AFEW & 43.12 \tabularnewline CAER-Net & CAER+AFEW & 51.68 \tabularnewline \hlinewd{0.8pt} \end{tabular} \end{center} \vspace{-5pt} \caption{Quantitative evaluation of CAER-Net on the AFEW~\cite{dhall2011acted} benchmark, varying the training datasets.}\vspace{-10pt}\label{tab:6} \end{table} \section{Conclusion}\label{sec:6} We presented CAER-Net, which jointly exploits human facial expressions and context for context-aware emotion recognition.
The key idea of this approach is to seek salient context information by hiding the facial regions with an attention mechanism, and to utilize it to estimate the emotion from the context together with the facial information. We also introduced the CAER benchmark, which is more appropriate for context-aware emotion recognition than existing benchmarks both qualitatively and quantitatively. We hope that the results of this study will facilitate further advances in context-aware emotion recognition and its related tasks. {\small \bibliographystyle{ieee_fullname}
\chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{Acknowledgments} \setcounter{page}{3} I want to thank my supervisor Prof. Hidetoshi Awata for his advice and support. I also want to thank Prof. Noriaki Ikeda for his comments and suggestions. Finally, I want to thank my family for their understanding and support. {% \hspace{1em} }% \tableofcontents \chapter{Introduction} \pagenumbering{arabic} \setcounter{page}{1} A Courant algebroid is a 4-tuple ($E,\rho,\langle,\rangle,[,]$), where $E$ is a vector bundle over a smooth manifold $M$, $\rho$ is an anchor map to the tangent bundle, $\langle,\rangle$ is a non-degenerate metric, and $[,]$ is a Courant bracket on the sections of the bundle, satisfying a set of compatibility conditions. It first appeared in $\cite{C90}$ as the generalized tangent bundle $TM\oplus T^{*}M$ with the natural projection $\rho:TM\oplus T^{*}M\rightarrow TM$, the natural pairing $\langle,\rangle$, and the Dorfman bracket $[,]$, and a general definition was given in $\cite{LWX95}$ to generalize the double of Lie bialgebroids (the Lie algebroid analogue of Lie bialgebras $\cite{D83}$). We get a map $d:C^{\infty}(M)\rightarrow\Gamma(E)$ by defining $\langle df,e\rangle=\rho(e)f$ for $f\in C^{\infty}(M),e\in\Gamma(E)$. Courant algebroids play an important role in several areas of mathematics and physics, for example generalized geometries $\cite{G04}$, T-dualities $\cite{CG11}$, topological sigma models $\cite{R06}$, supergravity $\cite{CSW11}$, and double field theories $\cite{V12}$. Moreover, there is a one-to-one correspondence between isomorphism classes of differential-graded (dg for short) symplectic manifolds of degree 2 and isomorphism classes of Courant algebroids $\cite{R02}$. 
A Courant-Dorfman algebra is a 5-tuple ($R,E,\partial,\langle,\rangle,[,]$), where $R$ is a commutative algebra, $E$ is an $R$-module, $\langle ,\rangle:E\otimes E\rightarrow R$ is a symmetric bilinear form, $\partial:R\rightarrow E$ is a derivation, and $[,]:E\otimes E\rightarrow E$ is a Dorfman bracket, satisfying a set of compatibility conditions. A Courant algebroid gives a Courant-Dorfman algebra via ($C^{\infty}(M),\Gamma(E),d,\langle,\rangle,[,]$). Courant-Dorfman algebras generalize Courant algebroids in two directions: first, by allowing more general commutative algebras $R$ and modules $E$ than algebras of smooth functions and modules of smooth sections, and second, by allowing degenerate $\langle,\rangle$. The relation between Courant-Dorfman algebras and Poisson vertex algebras was found in the context of current algebras. Current algebras are Poisson algebras consisting of functions on mapping spaces. In classical field theories, the Poisson algebraic structure of currents plays an important role when we consider symmetries of fields. The most basic example is the Kac-Moody algebra, which is the Lie algebraic structure on $\mathrm{Map}(S^{1},G)$, where $G$ is a Lie group. Let $\mathfrak{g}$ be the Lie algebra of $G$ and $e_{a}$ be generators of $\mathfrak{g}$ such that $[e_{a},e_{b}]=f^{c}_{ab}e_{c}$. The bracket is of the form \begin{equation} \label{KM} \{J_{a}(\sigma),J_{b}(\sigma')\}=f^{c}_{ab}J_{c}(\sigma)\delta(\sigma-\sigma')+k\delta_{ab}\delta'(\sigma-\sigma'), \end{equation} where $k$ is a constant. This algebra plays an important role as the symmetry of the Wess-Zumino-Witten model, a 2-dimensional conformally invariant sigma model whose target space is a Lie group $\cite{KZ84}$. Alekseev and Strobl observed that there is a more general current algebra whose source manifold is $S^{1}$ but whose target manifold is a general smooth manifold $\cite{AS04}$. 
Let $M$ be a smooth manifold and choose a vector field $v=v^{i}(x)\partial_{i}$ and a 1-form $\alpha=\alpha_{i}(x)dx^{i}$ on $M$. We associate to them a current, \begin{equation} J_{(v,\alpha)}(\sigma)=v^{i}(x(\sigma))p_{i}(\sigma)+\alpha_{i}(x(\sigma))\partial_{\sigma}x^{i}(\sigma). \end{equation} The Poisson bracket of these currents is of the form \begin{equation} \{J_{(v,\alpha)}(\sigma),J_{(u,\beta)}(\sigma')\}=J_{[(v,\alpha),(u,\beta)]}(\sigma)\delta(\sigma-\sigma')+\langle(v,\alpha),(u,\beta)\rangle(\sigma)\delta'(\sigma-\sigma'), \end{equation} where $u,v$ are vector fields on $M$, $\alpha,\beta$ are 1-forms on $M$, $[(v,\alpha),(u,\beta)]=([v,u],L_{v}\beta-\iota_{u}d\alpha)$ is the Dorfman bracket on the generalized tangent bundle $TM\oplus T^{*}M$, and $\langle(v,\alpha),(u,\beta)\rangle=\iota_{u}\alpha+\iota_{v}\beta$. Let $M=G$ be a Lie group, and consider an Alekseev-Strobl current of the form \begin{equation} J=p(\sigma)-\frac{k}{4\pi}g^{-1}(\sigma)\partial_{\sigma}g(\sigma), \end{equation} where $p(\sigma)$ is a left-invariant momentum. When we decompose $J$ on a basis $e_{a}$ of $\mathfrak{g}$, the Poisson bracket of the $J_{a}$'s is a Kac-Moody algebra $(\ref{KM})$. Alekseev-Strobl currents appear in the description of symmetries of 2-dimensional $\sigma$-models. Inspired by $\cite{AS04}$, Ekstrand and Zabzine studied the algebraic structure underlying more general current algebras on loop spaces $\cite{EZ09}$. They found that a weak notion of Courant-Dorfman algebras (weak Courant-Dorfman algebras) appears when we consider the Poisson bracket of currents. In $\cite{E11}$, (weak) Courant-Dorfman algebras were derived using the language of Lie conformal algebras (LCA for short) and Poisson vertex algebras (PVA for short). 
A Lie conformal algebra is a module with a $\lambda$-bracket satisfying conditions analogous to those of a Lie algebra, and a Poisson vertex algebra is defined as an algebra which has the structure of a Lie conformal algebra and satisfies the Leibniz rule. They first appeared in the context of vertex algebras, and the relation with the Poisson bracket of currents was investigated in $\cite{BSK09}$. We can get a Lie conformal algebra from the Poisson bracket of currents, and we can get a Poisson vertex algebra by taking into account the multiplication of currents. A Poisson vertex algebra can be seen as an algebraic generalization of a Poisson algebraic structure on loop spaces, while a Lie conformal algebra can be seen as an algebraic generalization of a Lie bracket on loop spaces. In $\cite{E11}$, Ekstrand derived weak Courant-Dorfman algebras from Lie conformal algebras and showed that the graded Poisson vertex algebras generated by elements of degree 0 and 1 are in one-to-one correspondence with the Courant-Dorfman algebras. The above discussions are summarized as follows. \begin{equation} \xymatrix{ \fbox{degree 2 dg symplectic manifolds} \ar@{<->}[r]^-{1-to-1} & \fbox{Courant algebroids} } \end{equation} \begin{equation} \label{2} \xymatrix{ \fbox{Kac-Moody algebras}\ar@{^{(}->}[d] & \fbox{generalized tangent bundles of Lie groups} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{Alekseev-Strobl current algebras} \ar@{^{(}->}[d] & \fbox{Courant algebroids} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{Poisson vertex algebras} \ar@{<->}[r]^-{1-to-1} \ar@{^{(}->}[d] & \fbox{Courant-Dorfman algebras} \ar@{^{(}->}[d]\\ \fbox{Lie conformal algebras} \ar[r]^-{derive}& \fbox{weak Courant-Dorfman algebras} } \end{equation} Courant algebroids are in one-to-one correspondence with degree 2 dg symplectic manifolds, and Alekseev-Strobl current algebras can be described in the language of dg symplectic geometry $\cite{IK11}$. 
Moreover, Poisson algebras on mapping spaces whose source manifold is higher-dimensional have been constructed (for example, $\cite{BZ05},\cite{HK12}$), and a general framework explaining these current algebras was given using dg symplectic geometry $\cite{IX13},\cite{BHIW15},\cite{A21}$. These currents are called BFV (Batalin-Fradkin-Vilkovisky) current algebras. There, Courant algebroids (degree 2 dg symplectic manifolds) are generalized to degree $n$ dg symplectic manifolds. BFV current algebras and degree $n$ dg symplectic manifolds can be seen as a higher analog of the second line of ($\ref{2}$). The aim of this paper is to give a higher analog of the third and fourth lines of ($\ref{2}$). In other words, we consider how to construct higher Poisson vertex algebras, higher Courant-Dorfman algebras, higher Lie conformal algebras and higher weak Courant-Dorfman algebras, which are generalizations of BFV current algebras and of the algebras of functions of degree $n$ dg symplectic manifolds. In particular, with higher Courant-Dorfman algebras and higher Poisson vertex algebras, we may be able to find and unify more general current algebras, including the BFV current algebras, and use the techniques of Poisson vertex algebras in the higher setting. In this paper, we give a higher analog of the relation between Poisson vertex algebras and Courant-Dorfman algebras. First, we define higher Courant-Dorfman algebras by abstracting the algebraic structure of functions of degree $n$ dg symplectic manifolds. We give some examples, including ordinary Courant-Dorfman algebras and the higher Dorfman bracket on $TM\oplus\wedge^{n-1}T^{*}M$. We also give an extended version of higher Courant-Dorfman algebras, whose definition is more natural when we consider the relation with higher PVAs. Second, we check that non-degenerate higher Courant-Dorfman algebras have properties similar to those of non-degenerate Courant-Dorfman algebras. 
In particular, we construct a graded Poisson algebra of degree $-n$ from a non-degenerate higher Courant-Dorfman algebra. This graded Poisson algebra is a generalization of the graded Poisson algebra of degree $-2$ introduced in $\cite{KW08},\cite{R09}$. For a non-degenerate higher Courant-Dorfman algebra arising from a finite-dimensional graded vector bundle, this graded Poisson algebra is isomorphic to the algebra of functions of the associated degree $n$ dg symplectic manifold. Third, we define a higher analog of Lie conformal algebras and Poisson vertex algebras, which are to higher Courant-Dorfman algebras what Poisson vertex algebras are to Courant-Dorfman algebras. We derive a weak notion of higher Courant-Dorfman algebras from higher Lie conformal algebras, and give the correspondence between higher Poisson vertex algebras and higher Courant-Dorfman algebras. This correspondence is a higher generalization of the correspondence between Courant-Dorfman algebras and Poisson vertex algebras, and is the main result of this thesis. \begin{th.*} There is a bijection between higher Poisson vertex algebras generated by elements of degree $0\leq i\leq n-1$ and extended higher Courant-Dorfman algebras. \end{th.*} Moreover, we check that higher Lie conformal algebras and higher Poisson vertex algebras have LCA-like and PVA-like properties. In particular, we show that we can construct a graded Lie algebra out of the tensor product of a higher LCA and an arbitrary differential graded-commutative algebra (dgca for short), and a graded Poisson algebra out of the tensor product of a higher PVA and an arbitrary dgca. Taking the tensor product of the higher Courant-Dorfman algebra arising from a dg symplectic manifold of degree $n$ and the de Rham complex of an $(n-1)$-dimensional manifold, we see that the associated Poisson algebras can be seen as an algebraic description of BFV current algebras. This is the higher generalization of the Alekseev-Strobl Poisson vertex algebras. 
The higher generalization of ($\ref{2}$) is summarized as follows. \begin{equation} \xymatrix{ \fbox{BFV current algebras} \ar@{^{(}->}[d] & \fbox{functions of degree $n$ dg symplectic manifolds} \ar[l]^-{target} \ar@{^{(}->}[d] \\ \fbox{\textbf{higher Poisson vertex algebras}} \ar@{<->}[r]^-{\textbf{1-to-1}} \ar@{^{(}->}[d] & \fbox{\textbf{(extended) higher Courant-Dorfman algebras}} \ar@{^{(}->}[d]\\ \fbox{\textbf{higher Lie conformal algebras}} \ar[r]^-{\textbf{derive}}& \fbox{\textbf{higher weak Courant-Dorfman algebras}} } \end{equation} In the case of $n=2$, this coincides with ($\ref{2}$). The bold parts (second and third lines) are defined and studied in this paper. The organization of this thesis is as follows. In chapter 2, we recall some basics of dg symplectic geometry. In chapter 3, we review the relation between Poisson vertex algebras and Courant-Dorfman algebras, focusing on the Poisson structure of loop spaces. In chapter 4, we define higher Courant-Dorfman algebras and give some examples. In chapter 5, we construct graded Poisson algebras of degree $-n$, generalizing the Keller-Waldmann Poisson algebras. In chapter 6, we define higher Lie conformal algebras and higher Poisson vertex algebras and study their relation with higher Courant-Dorfman algebras. Moreover, we show how these algebras can be seen as higher generalizations of ordinary LCAs and PVAs. \chapter{dg symplectic manifolds} In this chapter, we review some basics of dg symplectic manifolds. We refer to $\cite{CS10},\cite{QZ11}$. A graded vector space is a collection of vector spaces $V=\oplus_{i\in\mathbb{Z}} V_{i}$, where $V_{i}$ is the vector space of degree $i$. Denote the dual of $V$ by $V^{*}=\oplus_{i\in\mathbb{Z}}(V_{i})^{*}$, where $(V_{i})^{*}$ is placed in degree $-i$. 
Define the tensor algebra of $V^{*}$ by \begin{equation} \mathrm{Tens}(V^{*})=\oplus_{i\geq0}(V^{*})^{\otimes i}, \end{equation} and the symmetric algebra of $V^{*}$ by \begin{equation} \mathrm{Sym}(V^{*})=\mathrm{Tens}(V^{*})/(v\otimes w-(-1)^{|v||w|}w\otimes v), \end{equation} where $|v|,|w|$ are the degrees of the homogeneous elements $v,w\in V^{*}$. The algebra of functions on $V$ is identified with $\mathrm{Sym}(V^{*})$. A graded manifold $\mathcal{M}$ is a locally ringed space $(M,C^{\infty}(\mathcal{M}))$ which is locally isomorphic to $(U,C^{\infty}(U)\otimes \mathrm{Sym}(V^{*}))$, where $U\subset\mathbb{R}^{n}$ is open and $V$ is a finite-dimensional graded vector space. A morphism of graded manifolds is a morphism of the graded-commutative algebras of functions. Let $V$ be a graded vector space with homogeneous coordinates $(z^{i})^{n}_{i=1}$ corresponding to a basis of $V^{*}$. A vector field $X$ on $V$ is an $\mathbb{R}$-linear derivation of $\mathrm{Sym}(V^{*})$, i.e. it satisfies the Leibniz rule \begin{equation} X(fg)=X(f)g+(-1)^{k|f|}fX(g) \end{equation} for $f,g\in \mathrm{Sym}(V^{*})$, where $k$ is the degree of $X$ defined below. It is of the form \begin{equation} X=\sum^{n}_{i=1}X^{i}\frac{\partial}{\partial z^{i}}, \end{equation} where $X^{i}\in \mathrm{Sym}(V^{*})$ and $\frac{\partial}{\partial z^{i}}$ is the dual basis of $V$. The vector fields $\frac{\partial}{\partial z^{i}}$ act on $\mathrm{Sym}(V^{*})$ according to the following rules: \begin{align} \frac{\partial}{\partial z^{i}}(z^{j})&=\delta^{j}_{i},\\ \frac{\partial}{\partial z^{i}}(fg)&=\left(\frac{\partial}{\partial z^{i}}(f)\right)g+(-1)^{|z^{i}||f|}f\frac{\partial}{\partial z^{i}}(g). \end{align} A vector field $X$ is graded if $|Xf|=|f|+k$ for homogeneous $f$ and fixed $k\in\mathbb{Z}$; $k$ is called the degree of $X$. A graded vector field on a graded manifold $\mathcal{M}$ of degree $k$ is a graded linear map \begin{equation} X:C^{\infty}(\mathcal{M})\rightarrow C^{\infty}(\mathcal{M})[k], \end{equation} where $W[k]^{i}=W^{k+i}$, which satisfies the graded Leibniz rule, i.e. 
\begin{equation} X(fg)=X(f)g+(-1)^{k|f|}fX(g) \end{equation} holds for all homogeneous smooth functions $f,g\in C^{\infty}(\mathcal{M})$. \begin{ex.} The Euler vector field $E$ on $\mathcal{M}$ is a vector field of degree 0 which satisfies \begin{equation} Ef=|f|f \end{equation} for a homogeneous element $f\in C^{\infty}(\mathcal{M})$. Locally, it is of the form \begin{equation} E=\sum_{i}|z^{i}|z^{i}\frac{\partial}{\partial z^{i}}. \end{equation} \end{ex.} \begin{de.}[{$\cite[Definition 3.3.]{CS10}$}] A cohomological vector field $Q$ is a graded vector field of degree 1 which satisfies $Q^{2}=0$. \end{de.} Every cohomological vector field on $\mathcal{M}$ corresponds to a differential on $C^{\infty}(\mathcal{M})$. A morphism of dg manifolds is a morphism of the dg algebras of functions. The space of graded differential 1-forms consists of homomorphisms from the graded vector fields on $\mathcal{M}$ to the functions on $\mathcal{M}$, \begin{equation} \Omega^{1}(\mathcal{M}):=\mathrm{Hom}_{C^{\infty}(\mathcal{M})}(\mathfrak{X}(\mathcal{M}),C^{\infty}(\mathcal{M})). \end{equation} Locally, the algebra of differential forms on a graded manifold $\mathcal{M}$ is constructed by adding new coordinates $dz^{i}$ to the $z^{i}$ ($|dz^{i}|=|z^{i}|+1$). We denote the space of differential $k$-forms by $\Omega^{k}(\mathcal{M})$. The de-Rham differential $d$ is the degree 1 derivation of this algebra determined by $d(z^{i})=dz^{i}$ and $d(dz^{i})=0$, and the Lie derivative $L_{V}$ along a vector field $V$ is defined by \begin{equation} L_{V}=\iota_{V}d+(-1)^{|V|}d\iota_{V}, \end{equation} where $\iota_{V}$ is the contraction. 
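\begin{ex.} A standard example, included here for illustration: take the shifted tangent bundle $\mathcal{M}=T[1]M$ of an ordinary manifold $M$, with coordinates $x^{i}$ of degree 0 and $\theta^{i}$ of degree 1, so that $C^{\infty}(T[1]M)\simeq\Omega^{\bullet}(M)$. The vector field \begin{equation} Q=\sum_{i}\theta^{i}\frac{\partial}{\partial x^{i}} \end{equation} has degree 1 and satisfies $Q^{2}=0$, since $\theta^{i}\theta^{j}$ is antisymmetric while $\partial_{i}\partial_{j}$ is symmetric; under the identification above, $Q$ is the de-Rham differential on $\Omega^{\bullet}(M)$. \end{ex.} 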
\begin{de.}[{$\cite[Definition 4.3.]{CS10}$}] A graded symplectic form of degree $k$ on a graded manifold $\mathcal{M}$ is a two-form $\omega$ with the following properties: \begin{itemize} \item$\omega$ is homogeneous of degree $k$, \item$\omega$ is closed with respect to the de-Rham differential, \item$\omega$ is non-degenerate, i.e. the induced morphism \begin{equation} \omega:T\mathcal{M}\rightarrow T^{*}[k]\mathcal{M} \end{equation} is an isomorphism. Here $[k]$ denotes a degree shift of the fibres of the vector bundle. \end{itemize} \end{de.} A graded symplectic manifold of degree $k$ is a pair $(\mathcal{M},\omega)$ of a graded manifold $\mathcal{M}$ and a graded symplectic form $\omega$ of degree $k$ on $\mathcal{M}$. \begin{le.}[{$\cite[Lemma 4.5.]{CS10}$}] Let $\omega$ be a graded symplectic form of degree $k\neq0$. Then $\omega$ is exact. \end{le.} \begin{proof} Let $E$ be the Euler vector field. Then \begin{equation} k\omega=L_{E}\omega=(d\iota_{E}+\iota_{E}d)\omega=d(\iota_{E}\omega), \end{equation} which implies $\omega=\frac{1}{k}d(\iota_{E}\omega)$. \end{proof} \begin{de.}[{$\cite[Definition 4.6.]{CS10}$}] Let $\omega$ be a graded symplectic form on a graded manifold $\mathcal{M}$. A vector field $X$ is called symplectic if $L_{X}\omega=0$, and Hamiltonian if there is a smooth function $H$ such that $\iota_{X}\omega=dH$. \end{de.} \begin{le.}[{$\cite[Lemma 4.7.]{CS10}$}] Let $\omega$ be a graded symplectic form of degree $k\neq0$ and $X$ be a symplectic vector field of degree $l$. If $k+l\neq0$, then $X$ is Hamiltonian. \end{le.} \begin{proof} Since $X$ is symplectic and $\omega$ is closed, $d(\iota_{X}\omega)=0$. For the Euler vector field $E$, \begin{equation} (k+l)\iota_{X}\omega=L_{E}(\iota_{X}\omega)=d(\iota_{E}\iota_{X}\omega)+\iota_{E}d(\iota_{X}\omega)=d(\iota_{E}\iota_{X}\omega). \end{equation} Let $H:=\iota_{E}\iota_{X}\omega$. Then \begin{equation} dH=(k+l)\iota_{X}\omega. \end{equation} Hence $\iota_{X}\omega=\frac{dH}{k+l}$. 
\end{proof} For a degree $k$ graded symplectic manifold $(\mathcal{M},\omega)$, we can define a Poisson bracket $\{-,-\}$ on $C^{\infty}(\mathcal{M})$ via \begin{equation} \{f,g\}:=(-1)^{|f|+1}X_{f}(g), \end{equation} where $X_{f}$ is the unique graded vector field that satisfies $\iota_{X_{f}}\omega=df$; $X_{f}$ is called the Hamiltonian vector field of $f$. If the vector field $Q$ is Hamiltonian, one can find a Hamiltonian function $S$ such that \begin{equation} Q=\{S,-\}. \end{equation} Since \begin{equation} Q^{2}(f)=\{\{S,S\},f\}, \end{equation} $Q^{2}=0$ (i.e. $Q$ is cohomological) is equivalent to $\{S,S\}$ being a constant. Assume that $Q$ is a cohomological vector field. Then $|S|=k+1$, while $|\{-,-\}|=-k$. Consequently, $|\{S,S\}|=k+2$. If $k\neq-2$, then \begin{equation} \{S,S\}=0. \end{equation} This equation is known as the classical master equation. A cohomological vector field with a Hamiltonian function $S$ such that $Q=\{S,-\}$ is called a symplectic cohomological vector field. \begin{de.}[{$\cite[Definition 4.10.]{CS10}$}] A graded manifold endowed with a graded symplectic form and a symplectic cohomological vector field is called a differential graded symplectic manifold, or dg symplectic manifold for short. \end{de.} A morphism between two dg symplectic manifolds is a morphism of the Poisson algebras of functions respecting the differential induced by the symplectic cohomological vector field. We consider some special cases of dg symplectic manifolds $(\mathcal{M},\omega,S)$, where $S$ is the Hamiltonian function associated to the cohomological vector field. When $k=-1$, these manifolds correspond to BV theories. A BV theory is a formulation of the Lagrangian formalism of a gauge theory based on dg manifolds ($\cite{BV81}$). In this case, the Poisson bracket induced by $\omega$ corresponds to the BV antibracket and the Hamiltonian function corresponds to the BV action. When $k=0$, they emerge in BFV theories. 
A BFV theory is a formulation of a constrained Hamiltonian system based on a dg manifold, which is the Hamiltonian counterpart of the BV theory ($\cite{BV77},\cite{BF83}$). In this case, the Poisson bracket induced by $\omega$ corresponds to the BFV Poisson bracket and the Hamiltonian function corresponds to the BRST charge. Note that the physical Hamiltonian cannot be determined from the dg symplectic manifold. Suppose $k>0$ and that all the coordinates are of non-negative degree. Then $\mathcal{M}$ is called an N-manifold. N-manifolds of degree 1 and 2 are analyzed in $\cite{R02}$. \paragraph{$k=1.$} Every graded symplectic manifold of degree 1 is canonically isomorphic to the graded cotangent bundle $T^{*}[1]M$ of the base manifold $M$. We denote the coordinates of degree 0 by $x^{i}$ and the coordinates of degree 1 by $p_{i}$. The Hamiltonian $S$ has degree 2, so locally it must be of the form \begin{equation} S=\frac{1}{2}\sum^{n}_{i,j=1}\pi^{ij}(x)p_{i}p_{j}. \end{equation} Hence, locally $S$ corresponds to a bivector field $\Pi=\frac{1}{2}\pi^{ij}(x)\partial_{i}\wedge\partial_{j}$, and $\{S,S\}=0$ implies that $\Pi$ is a Poisson bivector field. Let $C^{i}(C^{\infty}(\mathcal{M}))$ be the subspace of $C^{\infty}(\mathcal{M})$ generated by the coordinates of degree at most $i$. For $f,g\in C^{0}(C^{\infty}(\mathcal{M}))$, \begin{align} \{\{f,S\},g\}&=\left\{\sum^{n}_{i,j=1}\frac{\partial f}{\partial{x^{i}}}\pi^{ij}p_{j},g\right\} \notag \\ &=\sum^{n}_{i,j=1}\frac{\partial f}{\partial{x^{i}}}\frac{\partial g}{\partial{x^{j}}}\pi^{ij}, \end{align} which is exactly a Poisson bracket on $M$. Hence, there is a one-to-one correspondence between isomorphism classes of dg symplectic manifolds of degree 1 and isomorphism classes of Poisson manifolds. \paragraph{$k=2.$} The graded symplectic structure pairs the coordinates of degree 0, which we denote by $x^{i}$, with the coordinates of degree 2, which we denote by $p_{i}$. We denote the coordinates of degree 1 by $\eta^{\alpha}$. 
The graded symplectic form can be written as \begin{equation} \omega=\sum^{n}_{i=1}dp_{i}dx^{i}+\frac{1}{2}\sum^{n}_{\alpha,\beta=1}d(g_{\alpha\beta}(x)\eta^{\alpha})d\eta^{\beta}, \end{equation} where $g_{\alpha\beta}$ is a symmetric non-degenerate form. Globally, the dg symplectic manifold corresponds to the symplectic realization of $E[1]$ for a vector bundle $E$ over $M$, equipped with a non-degenerate fibre pairing $g$. The Hamiltonian $S$ has degree 3, so locally it must be of the form \begin{equation} S=\sum_{i,\alpha}\rho^{i}_{\alpha}(x)p_{i}\eta^{\alpha}+\frac{1}{6}\sum_{\alpha,\beta,\gamma}f_{\alpha\beta\gamma}(x)\eta^{\alpha}\eta^{\beta}\eta^{\gamma}. \end{equation} For a local frame $e_{\alpha}$ of $\Gamma(E)$, the first term corresponds to a bundle map $\rho:E\rightarrow TM$ defined by $\rho(e_{\alpha})=\rho^{i}_{\alpha}(x)\partial_{i}$, while the second one gives a bracket $[,]$ on $\Gamma(E)$ determined by $\langle[e_{\alpha},e_{\beta}],e_{\gamma}\rangle=f_{\alpha\beta\gamma}$. $\{S,S\}=0$ implies that $(E,g)$ is a Courant algebroid $\cite{R02}$. \begin{de.}[{$\cite[Definition 4.2.]{R02}$}] A Courant algebroid is a vector bundle $E$ over a smooth manifold $M$, with a non-degenerate symmetric bilinear form $\langle ,\rangle$ and a bilinear bracket $*$ on $\Gamma(E)$. The form and the bracket must be compatible, in the sense defined below, with the vector fields on $M$: we must have a smooth bundle map, the anchor \begin{equation} \pi:E\rightarrow TM. \end{equation} These structures satisfy the following five axioms, for all $A,B,C\in\Gamma(E)$ and $f\in C^{\infty}(M)$. \begin{description} \item[Axiom.1]: $\pi(A*B)=[\pi(A),\pi(B)]$ (the bracket on the right-hand side is the Lie bracket of vector fields). \item[Axiom.2]: $A*(B*C)=(A*B)*C+B*(A*C)$. \item[Axiom.3]: $A*(fB)=(\pi(A)f)B+f(A*B)$. \item[Axiom.4]: $\langle A,B*C+C*B\rangle =\pi(A)\langle B,C\rangle $. \item[Axiom.5]: $\pi(A)\langle B,C\rangle =\langle A*B,C\rangle +\langle B,A*C\rangle $. 
\end{description} \end{de.} From the above data, we can define a map $\partial:C^{\infty}(M)\rightarrow\Gamma(E)$ by \begin{equation} \langle \partial f,A\rangle =\pi(A)f \end{equation} for all $A\in\Gamma(E)$. A morphism of Courant algebroids is a bundle map respecting all the operations. We now give the correspondence between Courant algebroids and dg symplectic manifolds of degree 2. Denote $C^{i}(C^{\infty}(\mathcal{M}))=\{f\in C^{\infty}(\mathcal{M}):|f|\leq i\}$. Then \begin{equation} C^{0}(C^{\infty}(\mathcal{M}))\simeq C^{\infty}(M),\quad C^{1}(C^{\infty}(\mathcal{M}))\simeq \Gamma(E). \end{equation} For $f\in C^{0}(C^{\infty}(\mathcal{M}))$ and $A,B\in C^{1}(C^{\infty}(\mathcal{M}))$, we define the anchor $\pi$ and the bilinear bracket $*$ as the derived brackets, \begin{align} \{\{A,S\},B\}&=A*B, \\ \{\{A,S\},f\}&=\pi(A)f=\langle\partial f,A\rangle=\{\{S,f\},A\}. \end{align} We can check that this definition satisfies the conditions of a Courant algebroid. Conversely, given a Courant algebroid $(E,M,\langle,\rangle,*,\pi)$, we can associate to it a degree 2 dg symplectic manifold $(\mathcal{M},\omega,S)$. Locally, \begin{equation} S=\sum_{i,\alpha}\pi(e_{\alpha})x^{i}p_{i}\eta^{\alpha}+\frac{1}{6}\sum_{\alpha,\beta,\gamma}\langle [e_{\alpha},e_{\beta}],e_{\gamma}\rangle(x)\eta^{\alpha}\eta^{\beta}\eta^{\gamma}, \end{equation} where $e_{\alpha},e_{\beta},e_{\gamma}\in\Gamma(E)$. Hence, there is a one-to-one correspondence between isomorphism classes of dg symplectic manifolds of degree 2 and isomorphism classes of Courant algebroids. \begin{th.}[{$\cite[Theorem 4.5.]{R02}$}] Dg symplectic manifolds of degree 2 are in 1-1 correspondence with Courant algebroids. \end{th.} \chapter{Courant-Dorfman algebras and Poisson vertex algebras} In this chapter we review the definitions of Courant-Dorfman algebras and Poisson vertex algebras and the relation between these algebras. 
Courant-Dorfman algebras were defined by Roytenberg in $\cite{R09}$ as an algebraic generalization of Courant algebroids $\cite{LWX95}$. They are to Courant algebroids what Lie-Rinehart algebras are to Lie algebroids. \begin{de.}[{$\cite[Definition 2.1.]{R09}$}] A Courant-Dorfman algebra consists of the following data: \begin{itemize} \item a commutative algebra $R$, \item an $R$-module $E$, \item a symmetric bilinear form $\langle ,\rangle:E\otimes E\rightarrow R$, \item a derivation $\partial:R\rightarrow E$, \item a Dorfman bracket $[,]:E\otimes E\rightarrow E$, \end{itemize} which satisfy the following conditions: \begin{equation} [e_{1},fe_{2}]=f[e_{1},e_{2}]+\langle e_{1},\partial f\rangle e_{2}, \end{equation} \begin{equation} \langle e_{1},\partial\langle e_{2},e_{3}\rangle\rangle=\langle [e_{1},e_{2}],e_{3}\rangle+\langle e_{2},[e_{1},e_{3}]\rangle, \end{equation} \begin{equation} [e_{1},e_{2}]+[e_{2},e_{1}]=\partial\langle e_{1},e_{2}\rangle, \end{equation} \begin{equation} [e_{1},[e_{2},e_{3}]]=[[e_{1},e_{2}],e_{3}]+[e_{2},[e_{1},e_{3}]], \end{equation} \begin{equation} [\partial f,e]=0, \end{equation} \begin{equation} \langle \partial f,\partial g\rangle=0, \end{equation} where $f,g\in R$ and $e_{1},e_{2},e_{3}\in E$. \end{de.} For a Courant-Dorfman algebra, when $\langle ,\rangle$ is non-degenerate, we can construct a graded Poisson algebra of degree $-2$, and when $R=C^{\infty}(M)$ and $E=\Gamma(F)$ for a vector bundle $F\rightarrow M$ (i.e. the data come from a Courant algebroid), this graded Poisson algebra is isomorphic to the Poisson algebra of functions of the associated degree 2 dg symplectic manifold ($\cite{KW08},\cite{R09}$). An important property of Courant-Dorfman algebras is their relation with Poisson vertex algebras. 
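\begin{ex.} The motivating example, spelled out here for illustration: let $R=C^{\infty}(M)$ and $E=\Gamma(TM\oplus T^{*}M)$, with \begin{equation} \langle X+\alpha,Y+\beta\rangle=\iota_{X}\beta+\iota_{Y}\alpha,\quad \partial f=(0,df),\quad [X+\alpha,Y+\beta]=([X,Y],L_{X}\beta-\iota_{Y}d\alpha), \end{equation} for vector fields $X,Y$ and 1-forms $\alpha,\beta$ on $M$. These are the pairing and Dorfman bracket of the generalized tangent bundle recalled in the introduction, and one can check that they satisfy the conditions above. \end{ex.} 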
\begin{de.}[{$\cite[Definition 2.7]{K}$}] A Lie conformal algebra is a $\mathbb{C}[\partial]$-module $W$ (i.e. $\partial$ acts on elements of $W$) with a $\lambda$-bracket $\{_{\lambda}\}:W\otimes W\rightarrow W[\lambda]$, $\{a_{\lambda}b\}=\sum_{j\in\mathbb{Z}_{+}}\lambda^{j}a_{(j)}b$ (the product $a_{(j)}b\in W$ is called the $j$-th bracket), which satisfies the following conditions. (Here $\lambda$ is an indeterminate.) \begin{description} \item[Sesquilinearity]: \begin{equation} \{\partial a_{\lambda}b\}=\lambda\{a_{\lambda}b\},\ \{a_{\lambda}\partial b\}=(\partial+\lambda)\{a_{\lambda}b\} \end{equation} ($\partial$ is a derivation of the $\lambda$-bracket.) \item[Skew-symmetry]: \begin{equation} \{a_{\lambda}b\}=-\{b_{-\lambda-\partial}a\} \end{equation} \item[Jacobi-identity]: \begin{equation} \{a_{\lambda}\{b_{\mu}c\}\}=\{\{a_{\lambda}b\}_{\mu+\lambda}c\}+\{b_{\mu}\{a_{\lambda}c\}\} \end{equation} \end{description} \end{de.} \begin{de.}[{$\cite[Definition 1.14.]{BSK09}$}] A Poisson vertex algebra is a commutative algebra $W$ with a derivation $\partial$ (i.e. $\partial(ab)=(\partial a)b+a(\partial b)$) and a $\lambda$-bracket $\{_{\lambda}\}:W\otimes W\rightarrow W[\lambda]$ such that $W$ is a Lie conformal algebra and satisfies the Leibniz rule. \begin{description} \item[Leibniz rule]: \begin{equation} \{a_{\lambda}b\cdot c\}=\{a_{\lambda}b\}\cdot c+b\cdot\{a_{\lambda}c\} \end{equation} \end{description} \end{de.} Poisson vertex algebras appear when we consider functions on the phase space $T^{*}LM$ of the loop space $LM=\mathrm{Map}(S^{1},M)$. We denote local coordinates on $T^{*}LM$ by $X^{i}(\sigma),P_{i}(\sigma)$, with $\sigma$ a coordinate on $S^{1}$, and define a Poisson bracket by \begin{equation} \{X^{i}(\sigma),P_{j}(\sigma')\}=\delta^{i}_{j}\delta(\sigma-\sigma'). \end{equation} We can construct local functions on $T^{*}LM$ out of the coordinates $X$, $P$ and $\partial=\partial_{\sigma}$. 
We consider local functions of the form \begin{equation} A(X,\partial X,...,\partial^{k}X,P,...,\partial^{l}P), \end{equation} where $k,l$ are finite. We can create a functional out of $A$ by \begin{equation} \epsilon(\sigma)\in C^{\infty}(S^{1})\mapsto J_{\epsilon}(A)=\int_{S^{1}}\epsilon(\sigma)A(X,\partial X,...,\partial^{k}X,P,...,\partial^{l}P)d\sigma. \end{equation} Considering the Poisson brackets between such functionals, we can find geometric and algebraic structures on $M$. In $\cite{AS04}$, the Poisson brackets between currents parametrised by sections of the generalized tangent bundle $TM\oplus T^{*}M$ are written in terms of the Dorfman bracket. In $\cite{EZ09}$, considering more general currents, weak Courant-Dorfman algebras are derived. \begin{de.}[{$\cite[Definition 4.1]{E11}$}] A weak Courant-Dorfman algebra $(E,R,\partial,\langle,\rangle,[,])$ is defined by the following data: \begin{itemize} \item a vector space $R$, \item a vector space $E$, \item a symmetric bilinear form $\langle,\rangle:E\otimes E\rightarrow R$, \item a map $\partial:R\rightarrow E$, \item a Dorfman bracket $[,]:E\otimes E\rightarrow E$, \end{itemize} which satisfy the following conditions: \begin{equation} [A,[B,C]]=[[A,B],C]+[B,[A,C]], \end{equation} \begin{equation} [A,B]+[B,A]=\partial \langle A,B\rangle, \end{equation} \begin{equation} [\partial f,A]=0. \end{equation} \end{de.} The differences from the definition of a Courant-Dorfman algebra are the properties related to the algebraic structures of $R$ and $E$. The relation between Poisson brackets on the local functionals and Lie conformal and Poisson vertex algebras is discussed in $\cite{BSK09}$. Denote the coordinates on $T^{*}LM$ by $u^{\alpha}(\sigma)$, $\alpha=1,...,2d$, where $u^{\alpha}=X^{\alpha}$ for $\alpha\leq d$ and $u^{\alpha}=P_{\alpha-d}$ for $\alpha>d$, and let $u^{\alpha(n)}=\partial^{n}u^{\alpha}$. The local functions can be written as polynomials \begin{equation} a(u^{\alpha},...,u^{\alpha(N)}). 
\end{equation} The total derivative operator is \begin{equation} \partial=u^{\alpha(1)}\frac{\partial}{\partial u^{\alpha}}+\cdots+u^{\alpha(N+1)}\frac{\partial}{\partial u^{\alpha(N)}}. \end{equation} The algebra of these polynomials with the total derivative is called the algebra of differential functions $\mathcal{V}$. When we integrate functions over $S^{1}$, functions of the form $\partial_{\sigma}(\cdots)$ do not contribute, so we can take the quotient $\mathcal{V}/\partial\mathcal{V}$. We denote the image of $a\in\mathcal{V}$ by $\int a\in\mathcal{V}/\partial\mathcal{V}$. A local Poisson bracket on the phase space can be described by \begin{equation} \{u^{\alpha}(\sigma),u^{\beta}(\sigma')\}=H^{\alpha\beta}_{0}(\sigma')\delta(\sigma-\sigma')+H^{\alpha\beta}_{1}(\sigma')\partial_{\sigma'}\delta(\sigma-\sigma')+\cdots+H^{\alpha\beta}_{N}(\sigma')\partial^{N}_{\sigma'}\delta(\sigma-\sigma'). \end{equation} For $a,b\in\mathcal{V}$, we have \begin{equation} \{a(\sigma),b(\sigma')\}=\sum_{m,n}\frac{\partial a(\sigma)}{\partial u^{\alpha(m)}}\frac{\partial b(\sigma')}{\partial u^{\beta(n)}}\partial^{m}_{\sigma}\partial^{n}_{\sigma'}\{u^{\alpha}(\sigma),u^{\beta}(\sigma')\}. \end{equation} Using the Fourier transform of this Poisson bracket, we get a Poisson vertex algebra. Define the Fourier transformed bracket by \begin{equation} \{a_{\lambda}b\}=\int_{S^{1}}e^{\lambda(\sigma-\sigma')}\{a(\sigma),b(\sigma')\}d\sigma. \end{equation} This bracket (called a $\lambda$-bracket), together with $\mathcal{V}$ and $\partial$, satisfies the axioms of a Lie conformal algebra $\cite{BSK09}$. The algebra of differential functions $\mathcal{V}$ with $\partial$, the $\lambda$-bracket and the multiplication of polynomials on $\mathcal{V}$ is a Poisson vertex algebra. So we can translate the relation between (weak) Courant-Dorfman algebras and currents on the phase space into that between (weak) Courant-Dorfman algebras and Poisson vertex algebras (Lie conformal algebras).
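As a simple illustration of this construction (a direct check using the canonical bracket above), the only non-trivial $\lambda$-bracket between the coordinates is \begin{equation} \{X^{i}_{\lambda}P_{j}\}=\int_{S^{1}}e^{\lambda(\sigma-\sigma')}\delta^{i}_{j}\delta(\sigma-\sigma')d\sigma=\delta^{i}_{j}, \end{equation} while $\{X^{i}_{\lambda}X^{j}\}=\{P_{i\lambda}P_{j}\}=0$; integrating by parts gives $\{\partial X^{i}_{\lambda}P_{j}\}=-\lambda\delta^{i}_{j}$.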
From Lie conformal algebras and Poisson vertex algebras, we can make Lie algebras and Poisson algebras using Laurent polynomials. For a Lie conformal algebra $W$, $W\otimes\mathbb{C}[t,t^{-1}]/\mathrm{Im}(\partial+\partial_{t})$ is a Lie algebra with the Lie bracket \begin{equation} [a\otimes t^{m},b\otimes t^{n}]=\sum_{j\in\mathbb{Z}_{+}}\binom{m}{j}(a_{(j)}b)t^{m+n-j}. \end{equation} Moreover, for a Poisson vertex algebra $W$, the quotient of $W\otimes\mathbb{C}[t,t^{-1}]$ by the ideal generated by $\mathrm{Im}(\partial+\partial_{t})$ is a Poisson algebra with the same Lie bracket. If we define a formal distribution $a(z)$ ($a\in W$) by \begin{equation} a(z):=\sum_{m\in\mathbb{Z}}z^{-1-m}a\otimes t^{m} \end{equation} and the formal $\delta$-function \begin{equation} \delta(z-w):=\sum_{m\in\mathbb{Z}}z^{-m-1}w^{m}, \end{equation} then we get \begin{equation} [a(z),b(w)]=\sum_{j\geq0}(a_{(j)}b)(w)\partial^{j}_{w}\delta(z-w). \end{equation} This Lie bracket has a similar form to the bracket of local functions. We can derive the properties of a weak Courant-Dorfman algebra from a Lie conformal algebra by comparing the $\lambda$-independent terms on both sides of the axioms. Let \begin{equation} \{a_{\lambda}b\}=\sum_{j\geq0}a_{(j)}b\lambda^{j},\ a_{(0)}b=[a,b],\ \{a_{\lambda}b\}-[a,b]=\langle a_{\lambda}b\rangle, \end{equation} \begin{equation} \langle a,b\rangle =\frac{1}{2}(\langle a_{-\partial}b\rangle +\langle b_{-\partial}a\rangle ). \end{equation} Then the sesquilinearity says that \begin{equation} [\partial a,b]+o(\lambda)=\{\partial a_{\lambda}b\}=-\lambda\{a_{\lambda}b\}\Rightarrow[\partial a,b]=0, \end{equation} the skew-symmetry says that \begin{equation} [a,b]+o(\lambda)=\{a_{\lambda}b\}=-\{b_{-\lambda-\partial}a\}=-[b,a]+\partial\langle b_{-\partial}a\rangle +o(\lambda)\Rightarrow[a,b]+[b,a]=\partial\langle a,b\rangle, \end{equation} and the Jacobi-identity says that \begin{equation} [a,[b,c]]+o(\lambda)=[[a,b],c]+[b,[a,c]]+o(\lambda)\Rightarrow[a,[b,c]]=[[a,b],c]+[b,[a,c]].
\end{equation} The right-hand formulas are the conditions of a weak Courant-Dorfman algebra. Moreover, in $\cite{E11}$, a one-to-one correspondence between graded Poisson vertex algebras generated by elements of degree 0 and 1 and Courant-Dorfman algebras is established. In this case, the $\lambda$-bracket is of the form \begin{equation} \{a_{\lambda}b\}=[a,b]+\lambda \langle a,b\rangle. \end{equation} Substituting this into the axioms of Poisson vertex algebras, we recover the axioms of a Courant-Dorfman algebra. \begin{th.}[{$\cite[Theorem 4.1]{E11}$}] The Poisson vertex algebras that are graded and generated by elements of degree 0 and 1 are in a one-to-one correspondence with the Courant-Dorfman algebras via \begin{equation} W^{0}=R,\ W^{1}=E,\ \partial=\partial \end{equation} \begin{equation} [e_{\lambda}e']=[e,e']+\lambda\langle e,e'\rangle,\ [e_{\lambda}f]=\langle e,\partial f\rangle \end{equation} \end{th.} In the case of $E=TM\oplus T^{*}M$, the associated Poisson vertex algebra can be seen as the algebraic description of the Alekseev-Strobl currents $\cite{AS04}$. This correspondence is used to study the duality of currents $\cite{HM12}$, and a non-commutative analogue is considered in $\cite{AFH21}$. \chapter{Definitions and examples of higher Courant-Dorfman algebras} In this chapter, we define higher Courant-Dorfman algebras of degree $n$ and give examples. The definition of these algebras of degree $2$ coincides with that of Courant-Dorfman algebras. Let $R=E^{0}$ be a commutative algebra over a ring $K\supset\mathbb{Q}$, and $E=\oplus_{1\leq i\leq n-1}E^{i}$ be a graded $R$-module, where $E^{i}$ has degree $i$. Define a pairing $\langle ,\rangle:E\otimes E\rightarrow R$ such that $\langle a,b\rangle=0$ unless $|a|+|b|=n$. Consider the graded-commutative algebra freely generated by $E$ and denote it by $\tilde{\mathcal{E}}=(\tilde{\mathcal{E}}^{k})_{k\in\mathbb{Z}}$.
We restrict this graded-commutative algebra to the elements of degree $n-1\geq k\geq0$ and denote it by $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$. The pairing $\langle ,\rangle$ can be extended to $\mathcal{E}$ by the Leibniz rule \begin{equation} \langle a,b\cdot c\rangle=\langle a,b\rangle\cdot c+(-1)^{(|a|-n)|b|}b\cdot\langle a,c\rangle. \end{equation} \begin{de.} $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$ is \textit{a higher Courant-Dorfman algebra} of degree $n$ if $\mathcal{E}$ has a differential $d:\mathcal{E}^{k}\rightarrow\mathcal{E}^{k+1}$ which satisfies $d^{2}=0$ and $d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)$, and a bracket $[,]:\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $1-n$ which satisfies the following conditions: \begin{description} \item[sesquilinearity]: \begin{equation} \langle da,b\rangle=-(-1)^{|a|-n}[a,b],\ [da,b]=0. \end{equation} \item[skew-symmetry]: \begin{equation} [a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=-(-1)^{|a|}d\langle a,b\rangle, \end{equation} \begin{equation} \langle a,b\rangle=-(-1)^{(|a|-n)(|b|-n)}\langle b,a\rangle. \end{equation} \item[Jacobi identity]: \begin{equation} [a,[b,c]]=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]], \end{equation} \begin{equation} [a,\langle b,c\rangle]=\langle [a,b],c\rangle+(-1)^{(|a|+1-n)(|b|+1-n)}\langle b,[a,c]\rangle, \end{equation} \begin{equation} \langle a,\langle b,c\rangle\rangle=\langle \langle a,b\rangle,c\rangle+(-1)^{(|a|-n)(|b|-n)}\langle b,\langle a,c\rangle\rangle. \end{equation} \item[Leibniz rule]: \begin{equation} [a\cdot b,c]=a\cdot[b,c]+(-1)^{(|a|+1-n)|b|}b\cdot[a,c]. \end{equation} \end{description} \end{de.} Restricting the bracket to $\mathcal{E}^{n-1}\otimes\mathcal{E}^{n-1}\rightarrow\mathcal{E}^{n-1}$, we see from the Jacobi identity that $\mathcal{E}^{n-1}$ is a Leibniz algebra.
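To spell out this last statement: for $a,b,c\in\mathcal{E}^{n-1}$ we have $|a|+1-n=|b|+1-n=0$, so all the signs in the axioms above equal $1$, and the Jacobi identity becomes the Leibniz identity \begin{equation} [a,[b,c]]=[[a,b],c]+[b,[a,c]]. \end{equation}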
\begin{de.} A Leibniz algebra is an $R$-module $E$ with a bracket $[,]:E\otimes E\rightarrow E$ satisfying the Leibniz identity \begin{equation} [a,[b,c]]=[[a,b],c]+[b,[a,c]]. \end{equation} \end{de.} Next, we define the non-degeneracy and fullness conditions, as for Courant-Dorfman algebras. \begin{de.} The bilinear form $\langle ,\rangle$ gives rise to a map \begin{equation} (-)^{\flat}:E^{i}\rightarrow (E^{n-i})^{\vee}=\mathrm{Hom}_{R}(E^{n-i},R) \end{equation} defined by \begin{equation} e^{\flat}(e')=\langle e,e'\rangle. \end{equation} $\langle ,\rangle$ is strongly non-degenerate if $(-)^{\flat}$ is an isomorphism, and a higher Courant-Dorfman algebra is non-degenerate if $\langle ,\rangle$ is strongly non-degenerate. \end{de.} When a higher Courant-Dorfman algebra is non-degenerate, the inverse map is denoted by \begin{equation} (-)^{\sharp}:(E^{i})^{\vee}\rightarrow E^{n-i} \end{equation} and there is a graded-symmetric bilinear form \begin{equation} \{-,-\}:E^{\vee}\otimes_{R}E^{\vee}\rightarrow R \end{equation} defined by \begin{equation} \{\lambda,\mu\}=\langle \lambda^{\sharp},\mu^{\sharp}\rangle. \end{equation} \begin{de.} $\langle ,\rangle$ is full if, for every $1\leq i\leq n-1$, every $a\in R$ can be written as a finite sum $a=\sum_{j}\langle x_{j},y_{j}\rangle$ with $x_{j}\in E^{i},y_{j}\in E^{n-i}$. \end{de.} Define the anchor map \begin{equation} \rho:E^{n-1}\rightarrow\mathfrak{X}=Der(R,R) \end{equation} by setting \begin{equation} \rho(e)\cdot f=\langle e,df\rangle. \end{equation} We can define a Dirac submodule, as for an ordinary Courant-Dorfman algebra. \begin{de.} Suppose $\mathcal{E}$ is a higher Courant-Dorfman algebra. An $R$-submodule $\mathcal{D}\subset \mathcal{E}$ is said to be a Dirac submodule if $\mathcal{D}$ is isotropic with respect to $\langle ,\rangle$ and closed under $[-,-]$. \end{de.} We give some examples. \begin{ex.} Consider the case $n=2$.
In this case, there is an $R$-module $E^{1}$, a pairing $\langle ,\rangle:E^{1}\otimes E^{1}\rightarrow R$, a derivation $d:R\rightarrow E^{1}$, and three brackets $[,]:R\otimes E^{1}\rightarrow R$, $[,]:E^{1}\otimes R\rightarrow R$, and $[,]:E^{1}\otimes E^{1}\rightarrow E^{1}$. From the sesquilinearity, we get $[e,f]=\langle e,df\rangle,\ [f,e]=-\langle df,e\rangle$. For the other operations, one can see that the above definition reduces to the definition of a Courant-Dorfman algebra. \end{ex.} \begin{ex.} Given a commutative algebra $R$, let $E^{n-1}=\mathfrak{X}=Der(R,R)$ and $E^{1}=\Omega^{1}$ (the module of K\"{a}hler differentials). In this case, $\mathcal{E}^{n-1}=\mathfrak{X}\oplus\Omega^{n-1}$. It becomes a higher Courant-Dorfman algebra with respect to \begin{equation} \langle v,\alpha\rangle=\iota_{v}\alpha, \end{equation} \begin{equation} [v,\alpha]=L_{v}\alpha,\ [\alpha,v]=d(\iota_{v}\alpha)-L_{v}\alpha, \end{equation} \begin{equation} [v_{1},v_{2}]=[v_{1},v_{2}]_{\mathrm{Lie}}+\iota_{v_{1}}\iota_{v_{2}}\omega\ (\omega\in\Omega^{n+1,cl}), \end{equation} where $[,]_{\mathrm{Lie}}$ is the Lie bracket of derivations and $d$ is the de Rham differential on $\Omega^{i}$. In the case of $R=C^{\infty}(M)$, $\mathcal{E}^{n-1}=TM\oplus\wedge^{n-1}T^{*}M$, and the bracket $[,]$ is called a higher Dorfman bracket. \end{ex.} \begin{ex.} Let $(\mathcal{M},\omega,\Theta)$ be a degree $n$ dg symplectic manifold and $C=C^{n-1}(C^{\infty}(\mathcal{M}))=\{f\in C^{\infty}(\mathcal{M}):|f|\leq n-1\}$. This is a higher Courant-Dorfman algebra with \begin{equation} [a,b]=\{\{a,\Theta\},b\},\ \langle a,b\rangle=\{a,b\},\ da=\{\Theta,a\}. \end{equation} In the previous example, the higher Courant-Dorfman algebra on $\mathcal{E}^{n-1}=TM\oplus\wedge^{n-1}T^{*}M$ coincides with the algebra on $C=C^{n-1}(C^{\infty}(T^{*}[n]T[1]M))$. \end{ex.} \begin{ex.} As a variant of Example 2, we can replace $\mathfrak{X}$ by a Lie-Rinehart algebra $(R,L)$ and let $E^{n-1}=L,E^{1}=\Omega^{1}$. In this case, $\mathcal{E}^{n-1}=L\oplus\Omega^{n-1}$.
It becomes a higher Courant-Dorfman algebra with respect to \begin{equation} \langle a,\alpha\rangle=\iota_{\rho(a)}\alpha, \end{equation} \begin{equation} [a,\alpha]=L_{\rho(a)}\alpha,\ [\alpha,a]=d(\iota_{\rho(a)}\alpha)-L_{\rho(a)}\alpha, \end{equation} \begin{equation} [a_{1},a_{2}]=[a_{1},a_{2}]_{L}+\iota_{\rho(a_{1})}\iota_{\rho(a_{2})}\omega\ (\omega\in\Omega^{n+1,cl}), \end{equation} where $a,a_{1},a_{2}\in L$, $[,]_{L}$ is the bracket of the Lie-Rinehart algebra, $\rho$ is its anchor, and $d$ is the de Rham differential on $\Omega^{i}$. \end{ex.} In order to focus on the relation with higher Poisson vertex algebras, we define extended higher Courant-Dorfman algebras, relaxing the condition on $\langle ,\rangle$. \begin{de.} Let $R=E^{0}$ be a commutative algebra, and $E=\oplus_{1\leq i\leq n-1}E^{i}$ be a graded $R$-module. Consider the graded-commutative algebra freely generated by $E$ and denote it by $\tilde{\mathcal{E}}=(\tilde{\mathcal{E}}^{k})_{k\in\mathbb{Z}}$. We restrict this graded-commutative algebra to the elements of degree $n-1\geq k\geq0$ and denote it by $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$. $\mathcal{E}=(\mathcal{E}^{k})_{n-1\geq k\geq0}$ is \textit{an extended higher Courant-Dorfman algebra} of degree $n$ if $\mathcal{E}$ has a differential $d:\mathcal{E}^{k}\rightarrow\mathcal{E}^{k+1}$ which satisfies $d^{2}=0$ and $d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)$, a pairing $\langle ,\rangle :\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $-n$ and a bracket $[,]:\mathcal{E}\otimes\mathcal{E}\rightarrow\mathcal{E}$ of degree $1-n$ which satisfy the sesquilinearity, skew-symmetry, Jacobi identity, and Leibniz rule. \end{de.} The difference with a higher Courant-Dorfman algebra is that an extended higher Courant-Dorfman algebra allows pairings $\langle,\rangle:E^{i}\otimes E^{j}\rightarrow \mathcal{E}^{i+j-n}$ with $i+j\geq n+1$. From the viewpoint of graded geometry, these algebras include the case in which the base manifold is a graded manifold.
\chapter{Non-degenerate higher Courant-Dorfman algebras and degree $n$ dg symplectic manifolds} In this chapter, we consider the case that $\langle ,\rangle$ is non-degenerate, and study the relationship between these algebras and functions on degree $n$ dg symplectic manifolds. We construct a graded Poisson algebra of degree $-n$, generalizing the Keller-Waldmann Poisson algebras $\cite{KW08}$. We assume that each $E^{i}$ is a projective, finitely generated module over $R$, and that $\langle ,\rangle $ is non-degenerate and full. \begin{de.} We assume $r\geq n$. $\mathcal{C}^{r}(\mathcal{E})\subset\oplus_{1\leq j\leq n-1}\oplus_{1\leq k\leq r-j}\oplus_{\sum^{k}_{t=1}i_{t}=r-j}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},E^{j})$ consists of elements $C$ for which there exists a $K$-multilinear map \begin{equation} \sigma_{C}\in \oplus_{1\leq l\leq r-n}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}), \end{equation} satisfying the following conditions: (1) For all $x_{1},...,x_{l},u,w\in E$, we have \begin{equation} \sigma_{C}(x_{1},...,x_{l})\langle u,w\rangle=\langle C(x_{1},...,x_{l},u),w\rangle+\langle u,C(x_{1},...,x_{l},w)\rangle. \end{equation} (2) For all $x_{1},...,x_{k},u\in E$, we have \begin{align} &\langle C(x_{1},...,x_{i},x_{i+1},...,x_{k})-(-1)^{(|x_{i}|-n)(|x_{i+1}|-n)}C(x_{1},...,x_{i+1},x_{i},...,x_{k}),u\rangle \notag \\ &=\sigma_{C}(x_{1},...,x_{i-1},x_{i+2},...,x_{k})\langle x_{i},x_{i+1}\rangle. \end{align} Furthermore, $\mathcal{C}^{0}(\mathcal{E})=R,\ \mathcal{C}^{i}(\mathcal{E})=\mathcal{E}^{i}$ for $1\leq i\leq n-1$, and we define \begin{equation} \mathcal{C}^{\bullet}(\mathcal{E})=\oplus_{r\geq0}\mathcal{C}^{r}(\mathcal{E}). \end{equation} We call $\sigma_{C}$ the symbol of $C$.
\end{de.} Define $d_{C}\in\oplus_{1\leq l\leq r-n-k}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n-k}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}\otimes E^{k})$ by \begin{equation} \langle d_{C}(x_{1},...,x_{l})a,y\rangle:=\sigma_{C}(x_{1},...,x_{l},y)a. \end{equation} Instead of elements $C\in\mathcal{C}^{r}(\mathcal{E})$, we can use $K$-multilinear forms $\omega\in \oplus_{1\leq k\leq r}\oplus_{\sum^{k}_{t=1}i_{t}=r}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},R)$ defined by $\omega(x_{1},...,x_{t})=\langle C(x_{1},...,x_{t-1}),x_{t}\rangle $. \begin{de.} For $r\geq1$ the subspace $\Omega^{r}_{\mathcal{C}}(\mathcal{E})\subset\oplus_{1\leq k\leq r}\oplus_{\sum^{k}_{t=1}i_{t}=r}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{k}},R)$ consists of elements $\omega$ satisfying the following conditions: (1) \begin{equation} \omega(x_{1},...,ax_{k})=a\omega(x_{1},...,x_{k}), \end{equation} for all $a\in R$. (2) For $r\geq2$, there exists a multilinear map \begin{equation} \sigma_{\omega}\in \oplus_{1\leq l\leq r-n}\oplus_{\sum^{l}_{t'=1} i_{t'}=r-n}\mathrm{Hom}_{K}(E^{n-i_{1}}\otimes\cdots\otimes E^{n-i_{l}},\mathfrak{X}), \end{equation} such that \begin{align} &\omega(x_{1},...,x_{i},x_{i+1},...,x_{k})-(-1)^{(|x_{i}|-n)(|x_{i+1}|-n)}\omega(x_{1},...,x_{i+1},x_{i},...,x_{k}) \notag \\ &=\sigma_{\omega}(x_{1},...,\hat{x}_{i},\hat{x}_{i+1},...,x_{k})\langle x_{i},x_{i+1}\rangle, \end{align} where the hats denote omitted arguments. \end{de.} By the non-degeneracy of $\langle,\rangle$, we get the following lemma: \begin{le.} There is an isomorphism of graded $R$-modules \begin{equation} \mathcal{C}^{\bullet}(\mathcal{E})\rightarrow \Omega^{\bullet}_{\mathcal{C}}(\mathcal{E}), \end{equation} given by \begin{equation} \omega(x_{1},...,x_{t})=\langle C(x_{1},...,x_{t-1}),x_{t}\rangle.
\end{equation} \end{le.} \begin{pr.} The map \begin{equation} [,]:\mathcal{C}^{r}(\mathcal{E})\otimes\mathcal{C}^{s}(\mathcal{E})\rightarrow\mathcal{C}^{r+s-n}(\mathcal{E}), \end{equation} defined by \begin{equation} \label{l1} [a,b]=0,\ [a,x]=0=[x,a],\ [x,y]=\langle x,y\rangle,\ [D,a]=\sigma_{D}a=-[a,D], \end{equation} \begin{equation} \label{l2} [C,x]=\iota_{x}C=-(-1)^{(r+n)(|x|+n)}[x,C], \end{equation} for elements $a,b\in R$, $x,y\in\mathcal{C}^{s}(\mathcal{E})$ for $s\leq n-1$, $D\in\mathcal{C}^{n}(\mathcal{E})$, $C\in\mathcal{C}^{r}(\mathcal{E})$ for $r\geq n$, and by the recursion \begin{equation} \label{r} \iota_{x}[C_{1},C_{2}]=[[C_{1},C_{2}],x]=[C_{1},[C_{2},x]]-(-1)^{(|C_{1}|+n)(|C_{2}|+n)}[C_{2},[C_{1},x]], \end{equation} is well-defined and makes $\mathcal{C}^{\bullet}(\mathcal{E})$ a graded Lie algebra. \end{pr.} \begin{proof} It suffices to show that the recursion ($\ref{r}$) is consistent with ($\ref{l1}$) and ($\ref{l2}$), that $[C_{1},C_{2}]\in\mathcal{C}^{r+s-n}(\mathcal{E})$, and that the bracket satisfies the conditions for a graded Lie algebra. The consistency can be checked as follows: \begin{align} [[D,x],y]&=\langle D(x),y\rangle=(-1)^{(|x|-n)(|y|-n)}\langle D(y),x\rangle+\sigma_{D}\langle x,y\rangle \notag \\ &=(-1)^{(|x|-n)(|y|-n)}[[D,y],x]+[D,[x,y]]. \end{align} \begin{align} [[C,x],y]&=\iota_{y}\iota_{x}C=(-1)^{(|x|-n)(|y|-n)}\iota_{x}\iota_{y}C+d_{C}\langle x,y\rangle \notag \\ &=(-1)^{(|x|-n)(|y|-n)}[[C,y],x]+[C,[x,y]]. \end{align} Next, we check that $[C_{1},C_{2}]$ is an element in $\mathcal{C}^{r+s-n}(\mathcal{E})$. For $N=r+s\leq 2n-1$, the claim is clear. For $N=2n$, we consider three cases. If $a\in R$ and $C\in\mathcal{C}^{2n}(\mathcal{E})$, then $[C,a]=d_{C}a\in\mathcal{C}^{n}(\mathcal{E})$ and \begin{equation} [[C,a],b]=[C,[a,b]]-(-1)^{n}[a,[C,b]].
\end{equation} If $x\in E^{i}$ and $C\in\mathcal{C}^{2n-i}(\mathcal{E})$, then $[C,x]=\iota_{x}C\in\mathcal{C}^{n}(\mathcal{E})$ and \begin{equation} [[C,x],a]=[C,[x,a]]-(-1)^{(n-i)(i+n)}[x,[C,a]]. \end{equation} If $D_{1},D_{2}\in\mathcal{C}^{n}(\mathcal{E})$, then $[D_{1},D_{2}]\in\mathcal{C}^{n}(\mathcal{E})$ with \begin{equation} \sigma_{[D_{1},D_{2}]}a=\sigma_{D_{1}}\sigma_{D_{2}}a-\sigma_{D_{2}}\sigma_{D_{1}}a, \end{equation} \begin{equation} [[D_{1},D_{2}],a]=[D_{1},[D_{2},a]]-[D_{2},[D_{1},a]]. \end{equation} Let $C_{1}\in\mathcal{C}^{r}(\mathcal{E}),C_{2}\in\mathcal{C}^{s}(\mathcal{E})$ with $r+s\geq2n+1$. Consider a map $h:R\rightarrow\mathcal{C}^{r+s-2n}(\mathcal{E})$ defined by \begin{equation} h(a)=[C_{1},[C_{2},a]]-(-1)^{(r+n)(s+n)}[C_{2},[C_{1},a]]. \end{equation} Then $[C_{1},C_{2}]\in\mathcal{C}^{r+s-n}(\mathcal{E})$ and the symbol is \begin{equation} \sigma_{[C_{1},C_{2}]}(x_{1},...,x_{t})a=\langle h(a)(x_{1},...,x_{t-1}),x_{t}\rangle. \end{equation} The skew-symmetry is clear by construction. We check the Jacobi identity. It suffices to show \begin{equation} J(C_{1},C_{2},C_{3}):=[[C_{1},C_{2}],C_{3}]-[C_{1},[C_{2},C_{3}]]-(-1)^{(|C_{1}|+n)(|C_{2}|+n)}[C_{2},[C_{1},C_{3}]]=0. \end{equation} We prove the claim by induction on $N=\sum_{i}|C_{i}|$. For $1\leq N\leq 2n$, it is clear. By the recursion, \begin{align} &[J(C_{1},C_{2},C_{3}),x] \notag \\ =&(-1)^{(|C_{2}|-n)(|x|-n)+(|C_{3}|-n)(|x|-n)}J([C_{1},x],C_{2},C_{3}) \notag \\ +&(-1)^{(|C_{3}|-n)(|x|-n)}J(C_{1},[C_{2},x],C_{3})+J(C_{1},C_{2},[C_{3},x]) \end{align} and by induction, we obtain $[J(C_{1},C_{2},C_{3}),x]=0$. For $N\geq2n+1$, $|J(C_{1},C_{2},C_{3})|\geq1$, and therefore we conclude $J(C_{1},C_{2},C_{3})=0$.
\end{proof} \begin{pr.} There exists an associative, graded-commutative $K$-bilinear product $\wedge$ of degree 0 on $\mathcal{C}^{\bullet}(\mathcal{E})$ uniquely defined by \begin{equation} a\wedge b=ab=b\wedge a,\ a\wedge x=ax=x\wedge a, \end{equation} for $a,b\in R$ and $x\in E$ and by the recursion rule \begin{equation} [C_{1}\wedge C_{2},x]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},x]+C_{1}\wedge[C_{2},x]. \end{equation} \end{pr.} \begin{proof} We prove that \begin{equation} (x_{1},...,x_{t})\mapsto[C_{1}\wedge C_{2},x_{1}](x_{2},...,x_{t}), \end{equation} is an element in $\mathcal{C}^{r+s}(\mathcal{E})$, and that \begin{equation} [C_{1}\wedge C_{2},a]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},a]+C_{1}\wedge[C_{2},a]. \end{equation} If $N=r+s\leq n$, the claim is clear. If $N=r+s\geq n+1$, the map \begin{equation} h(a)=(-1)^{(r-n)s}C_{2}\wedge[C_{1},a]+C_{1}\wedge[C_{2},a], \end{equation} is $d_{C_{1}\wedge C_{2}}$. \end{proof} \begin{th.} $(\mathcal{C}^{\bullet}(\mathcal{E}),[,],\wedge)$ is a graded Poisson algebra of degree $-n$. \end{th.} \begin{proof} It suffices to show the Leibniz rule \begin{equation} [C_{1}\wedge C_{2},C_{3}]=(-1)^{(r-n)s}C_{2}\wedge[C_{1},C_{3}]+C_{1}\wedge[C_{2},C_{3}]. \end{equation} This can be checked by direct calculation using the recursions. \end{proof} Since $\mathcal{C}^{\bullet}(\mathcal{E})\simeq \Omega^{\bullet}_{\mathcal{C}}(\mathcal{E})$, we can define a graded Poisson algebraic structure on $\Omega^{\bullet}_{\mathcal{C}}(\mathcal{E})$. This bracket is an extension of $\{-,-\}:E^{\vee}\otimes_{R}E^{\vee}\rightarrow R$. We can construct $\omega_{\phi}\in\Omega^{\bullet}_{\mathcal{C}}(\mathcal{E})\simeq\mathcal{C}^{\bullet}(\mathcal{E})$ from a map $\phi:\mathcal{E}^{i_{1}}\otimes\mathcal{E}^{i_{2}}\otimes\cdots\otimes\mathcal{E}^{i_{m}}\rightarrow\mathcal{E}^{i_{1}+\cdots+i_{m}-mn+r}$ by \begin{equation} \omega_{\phi}(e_{1},e_{2},...,e_{k})=\langle \cdots\langle \phi(e_{1},...,e_{m}),e_{m+1}\rangle,\cdots,e_{k}\rangle.
\end{equation} Let $\phi$ be the bracket of the higher Courant-Dorfman algebra. Then $\omega_{\phi}$ satisfies $|\omega_{\phi}|=n+1$ and $[\omega_{\phi},\omega_{\phi}]=0$, and the map $[\omega_{\phi},-]$ is of degree 1 and squares to 0, so it defines a differential on $\mathcal{C}^{\bullet}(\mathcal{E})$. This is a higher derived bracket of this algebra $\cite{V03}$. Next, we define another Poisson algebra $\mathcal{R}^{\bullet}(\mathcal{E})$, generalizing the Rothstein algebra. \begin{de.} A connection $\nabla$ for the graded module $E=(E^{i})$ is a map $\nabla:\mathfrak{X}\times E\rightarrow E$ of degree 0 such that \begin{equation} \nabla_{aD}x=a\nabla_{D}x, \end{equation} \begin{equation} \nabla_{D}(ax)=a\nabla_{D}x+D(a)x, \end{equation} for all $a\in R$, $x\in E$ and $D\in\mathfrak{X}$. If $\langle ,\rangle :E\otimes E\rightarrow R$ is a $K$-bilinear form, then $\nabla$ is called metric if in addition \begin{equation} D\langle x,y\rangle =\langle \nabla_{D}x,y\rangle+\langle x,\nabla_{D}y\rangle, \end{equation} for all $x,y\in E$ and $D\in\mathfrak{X}$. \end{de.} If each $E^{i}$ is finitely generated and projective, then it admits a connection $\nabla$. If $\langle ,\rangle :E\otimes E \rightarrow R$ is non-degenerate, then $\nabla$ can be chosen to be a metric connection. Indeed, if $\tilde{\nabla}$ is any connection and $\langle ,\rangle$ is strongly non-degenerate, then $\nabla$ defined by \begin{equation} \langle \nabla_{D}x,y\rangle=\frac{1}{2}(\langle \tilde{\nabla}_{D}x,y\rangle-\langle x,\tilde{\nabla}_{D}y\rangle+D\langle x,y\rangle) \end{equation} is a metric connection. \begin{de.} The higher Rothstein algebra is defined as the graded symmetric algebra \begin{equation} \mathcal{R}^{\bullet}(\mathcal{E})=\mathrm{Sym}(\oplus_{1\leq i\leq n-1}E^{i}[-i]\oplus\mathfrak{X}[-n]). \end{equation} \end{de.} Next we introduce the curvature of $\nabla$. A given connection for $E$ extends to $\mathrm{Sym}(E)$ by imposing the Leibniz rule.
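Explicitly, imposing the Leibniz rule means that the extended connection satisfies \begin{equation} \nabla_{D}(\xi\cdot\eta)=(\nabla_{D}\xi)\cdot\eta+\xi\cdot(\nabla_{D}\eta) \end{equation} for $\xi,\eta\in\mathrm{Sym}(E)$ and $D\in\mathfrak{X}$.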
Thus we can consider \begin{equation} R(D_{1},D_{2})\xi:=\nabla_{D_{1}}\nabla_{D_{2}}\xi-\nabla_{D_{2}}\nabla_{D_{1}}\xi-\nabla_{[D_{1},D_{2}]}\xi, \end{equation} for $D_{i}\in\mathfrak{X}$ and $\xi\in \mathrm{Sym}(E)$. It defines an element \begin{equation} R(D_{1},D_{2})\in \mathrm{End}(\mathrm{Sym}(E)). \end{equation} Restricting $R(D_{1},D_{2})$ to $E$ gives a map $R(D_{1},D_{2}):E\rightarrow E$. For $x\in E^{i}$ and $y\in E^{n-i}$, \begin{equation} \langle R(D_{1},D_{2})x,y\rangle=(-1)^{i(n-i)}\langle R(D_{1},D_{2})y,x\rangle. \end{equation} Each $E^{i}$ is projective and finitely generated, so using the strongly non-degenerate inner product $\langle ,\rangle$ on $E$ we can define $r(D_{1},D_{2})\in \mathrm{Sym}^{2}E|_{deg=n}$ by \begin{equation} R(D_{1},D_{2})x=\langle r(D_{1},D_{2}),x\rangle. \end{equation} With this preparation, a Poisson structure can now be defined. \begin{th.} Let $\nabla$ be a metric connection on $E$. Then there exists a unique graded Poisson structure $\{-,-\}_{R}$ on $\mathcal{R}^{\bullet}(\mathcal{E})$ of degree $-n$ such that \begin{align} \{a,b\}_{R}&=0=\{a,x\}_{R},\\ \{x,y\}_{R}&=\langle x,y\rangle=-(-1)^{(|x|-n)(|y|-n)}\{y,x\}_{R}, \\ \{D,a\}_{R}&=-D(a)=-\{a,D\}_{R},\\ \{D,x\}_{R}&=-\nabla_{D}x=-\{x,D\}_{R},\\ \{D_{1},D_{2}\}_{R}&=-[D_{1},D_{2}]-r(D_{1},D_{2})=-\{D_{2},D_{1}\}_{R}, \end{align} for $a,b\in R$, $x,y\in E$ and $D_{1},D_{2}\in\mathfrak{X}$. \end{th.} \begin{proof} We can extend the bracket $\{-,-\}_{R}$ to $\mathcal{R}^{\bullet}(\mathcal{E})$ by the Leibniz rule from the above definition. The skew-symmetry is clear by construction. The Jacobi identity follows from the Bianchi identity \begin{align} &\nabla_{D_{1}}r(D_{2},D_{3})+\nabla_{D_{2}}r(D_{3},D_{1})+\nabla_{D_{3}}r(D_{1},D_{2}) \notag \\ &+r(D_{1},[D_{2},D_{3}])+r(D_{2},[D_{3},D_{1}])+r(D_{3},[D_{1},D_{2}])=0. \end{align} \end{proof} Next, we find the relation between $\mathcal{R}^{\bullet}(\mathcal{E})$ and $\mathcal{C}^{\bullet}(\mathcal{E})$.
\begin{de.} Let the $R$-linear map $\mathcal{J}:\mathcal{R}^{\bullet}(\mathcal{E})\rightarrow\mathcal{C}^{\bullet}(\mathcal{E})$ be defined by \begin{equation} \mathcal{J}(a)=a,\ \mathcal{J}(x)=x,\ \mathcal{J}(D)=-\nabla_{D} \end{equation} for $a\in R,x\in E$ and $D\in\mathfrak{X}$, and extend by the Leibniz rule. \end{de.} \begin{pr.} (1) The map $\mathcal{J}$ is a homomorphism of Poisson algebras. (2) Let $\phi\in\mathcal{R}^{r}(\mathcal{E})$ with $r\geq n$, then \begin{equation} \mathcal{J}(\phi)(x_{1},...,x_{k})=\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k}\}_{R}, \end{equation} and \begin{equation} \sigma_{\mathcal{J}(\phi)}(x_{1},...,x_{k-1})a=\{\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k-1}\}_{R},a\}_{R}, \end{equation} for all $x_{i}\in E$ and $a\in R$. \end{pr.} \begin{proof} (1) From the definition this is obvious for generators, and it is true for all of $\mathcal{R}^{\bullet}(\mathcal{E})$ by the Leibniz rule. (2) $[\mathcal{J}(\phi),x]=[\mathcal{J}(\phi),\mathcal{J}(x)]=\mathcal{J}(\{\phi,x\}_{R})$, and induction on $k$. \end{proof} \begin{le.} Let $\phi\in\mathcal{R}^{r}(\mathcal{E})$ with $r\geq 1$, then \begin{equation} \mathcal{J}(\phi)(x_{1},...,x_{k})=\{\{...\{\phi,x_{1}\}_{R},...\}_{R},x_{k}\}_{R}=0, \end{equation} if and only if $\phi=0$. \end{le.} \begin{proof} It is true for $r=1,...,n-1$ due to the non-degeneracy of $\langle ,\rangle$. Suppose it is true for $1,2,...,r-1$. For $\phi\in\mathcal{R}^{r}(\mathcal{E})$, we have $|\{\phi,x\}_{R}|<r$, so $\phi$ satisfies the condition if and only if $\{\phi,x\}_{R}=0$. Then \begin{equation} \{\phi,\langle x,y\rangle\}=\{\phi,\{x,y\}\}=\{\{\phi,x\},y\}+(-1)^{(|\phi|+n)(|x|+n)}\{x,\{\phi,y\}\}=0, \end{equation} and due to fullness $\{\phi,a\}_{R}=0$. Then $\phi=0$. \end{proof} \begin{co.} Let $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$ be the subalgebra of $\mathcal{C}^{\bullet}(\mathcal{E})$ generated by $R,E$ and $\mathcal{C}^{n}(\mathcal{E})$.
Then $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$ is closed under the bracket $[,]$ and $\mathcal{J}$ is an isomorphism of Poisson algebras \begin{equation} \mathcal{J}:\mathcal{R}^{\bullet}(\mathcal{E})\rightarrow\hat{\mathcal{C}}^{\bullet}(\mathcal{E}). \end{equation} \end{co.} \begin{proof} $\mathcal{J}$ is injective due to the above lemma. If $D\in\mathcal{C}^{n}(\mathcal{E})$, we can define an element $\xi\in \mathrm{Sym}(E)|_{deg=n}$ by $\langle \xi,x\rangle=D(x)-\nabla_{\sigma_{D}}x$, hence $D\in\mathcal{J}(\mathcal{R}^{n}(\mathcal{E}))$, and therefore $\mathcal{C}^{n}(\mathcal{E})\simeq\mathcal{R}^{n}(\mathcal{E})$. \end{proof} \begin{le.} We have $\hat{\mathcal{C}}^{n+1}(\mathcal{E})=\mathcal{C}^{n+1}(\mathcal{E})$. \end{le.} \begin{proof} Let $C\in\mathcal{C}^{n+1}(\mathcal{E})$ and let $d_{C}\in Der(R,E^{1})$ be given by $\langle d_{C}r,x\rangle =\sigma_{C}(x)r$. We can find $D^{1},...,D^{k}\in\mathfrak{X}$ and $e_{1},...,e_{k}\in E^{1}$ such that $d_{C}(r)=D^{i}(r)e_{i}$. Let $T=C-\nabla_{D^{i}}\wedge e_{i}$. Then $T\in\mathcal{E}^{n+1}$, hence $C\in\hat{\mathcal{C}}^{n+1}(\mathcal{E})$. \end{proof} Let $m\in\mathcal{C}^{n+1}(\mathcal{E})$ with $[m,m]=0$. Then $\delta_{m}=[m,-]$ squares to 0, and we get a subcomplex $\hat{\mathcal{C}}^{\bullet}(\mathcal{E})$. This complex is isomorphic to $\mathcal{R}^{\bullet}(\mathcal{E})$ with the differential $\delta_{\mathcal{J}^{-1}(m)}=[\mathcal{J}^{-1}(m),-]$. When $R=C^{\infty}(M)$ and $E^{i}=\Gamma(M,F^{i})$ for a graded vector bundle $F^{i}\rightarrow M$, this Poisson algebra is isomorphic to the algebra of functions on the associated dg symplectic manifold $(\mathcal{M},\omega,\Theta)$. \begin{le.} Let $(F^{i})_{1\leq i\leq n-1}$ be a graded vector bundle over a smooth manifold $M$, and $\langle ,\rangle:F^{i}\otimes F^{n-i}\rightarrow C^{\infty}(M)$ a fiberwise non-degenerate graded-symmetric bilinear form. Degree $n$ graded symplectic manifolds (with a choice of splitting in the sense of Remark $\ref{split}$) are in one-to-one correspondence with such graded vector bundles with $\langle ,\rangle$.
\end{le.} \begin{proof} Any graded manifold is noncanonically diffeomorphic to a graded manifold associated to a graded vector bundle ($\cite{GN12}$, Theorem 1). Let $(\mathcal{M},\omega)$ be a degree $n$ symplectic manifold and let $F^{i}$ be the associated graded vector bundle. Then $E^{n}=\Gamma(TM)=\mathfrak{X}$, and the Poisson bracket of degree $-n$ induced by $\omega$ is an extension of $\langle ,\rangle$ as a derivation. (In this case $C^{\infty}(\mathcal{M})\simeq\mathcal{R}^{\bullet}(\mathcal{E})$.) \end{proof} \begin{re.} \label{split} The diffeomorphism between a graded manifold and a graded manifold associated to a graded vector bundle is noncanonical. Denote the algebra of degree $i$ functions of a graded manifold $\mathcal{M}$ by $\mathcal{A}^{i}$. There exists a short exact sequence \begin{equation} \xymatrix{ 0 \ar[r] & (\mathcal{A}^{1})^{2} \ar[r] & \mathcal{A}^{2} \ar[r] & \Gamma(F^{2}) \ar[r] & 0, } \end{equation} where $F^{2}$ is a vector bundle over the base manifold $M$ of $\mathcal{M}$. Fixing a splitting, we can identify $\mathcal{A}^{2}$ with $(\mathcal{A}^{1})^{2}\oplus\Gamma(F^{2})$. For each $\mathcal{A}^{i}$ ($i\geq 2$), we can choose such a splitting. Thus graded manifolds with a choice of splitting are in one-to-one correspondence with graded vector bundles. \end{re.} \begin{th.} Let $(R,E^{i}(1\leq i\leq n-1),\langle ,\rangle,d,[-,-])$ be a higher Courant-Dorfman algebra. Suppose $R=C^{\infty}(M)$ for a smooth manifold $M$, and each $E^{i}=\Gamma(F^{i})$ for a graded vector bundle $F^{i}$ over $M$. Degree $n$ dg symplectic manifolds are in one-to-one correspondence with higher Courant-Dorfman algebras of this type. \end{th.} \begin{proof} Let $(\mathcal{M},\omega)$ be the degree $n$ symplectic manifold corresponding to $(E^{i},\langle ,\rangle)$, with $\mathcal{A}$ its graded Poisson algebra of polynomial functions.
Then $\mathcal{A}^{0}=C^{\infty}(M)$ and $\mathcal{A}^{i}=\mathcal{E}^{i}$ for $1\leq i\leq n-1$, and $\{-,-\}$ restricted to $\mathcal{A}^{i}$ is an extension of $\langle ,\rangle$. Let $\Theta\in\mathcal{A}^{n+1}$ satisfy $\{\Theta,\Theta\}=0$. Given arbitrary $e,e_{1},e_{2}\in\mathcal{A}^{i}$, define a differential $d$ and a bracket $[,]$ by \begin{equation} d(e)=\{\Theta,e\},\quad [e_{1},e_{2}]=\{\{e_{1},\Theta\},e_{2}\}. \end{equation} This construction gives a higher Courant-Dorfman algebra. Conversely, given a higher Courant-Dorfman algebraic structure on $(E^{i},\{,\})$, we can define $\Theta=\mathcal{J}(\omega_{\phi})$. Locally, $\Theta$ can be written as follows. In a Darboux chart $(\xi^{a(k)})=(q^{a(l)},p^{a(n-l)})\ (1\leq k\leq n,\ 1\leq l\leq\lfloor\frac{n}{2}\rfloor)$, corresponding to a chart $(x_{i})$ on $M$ and a local basis $e^{a(k)}$ of sections of $E^{k}$ such that $\langle e^{a(k)},e^{b(n-k)}\rangle=\delta^{ab}$, we have \begin{equation} \Theta=\sum_{\sum i_{t}=n+1}\phi(q)\xi^{a_{1}(i_{1})}\cdots\xi^{a_{m}(i_{m})} \end{equation} \begin{equation} \phi(q)=\langle \cdots\langle [e^{a_{1}(n-i_{1})}, e^{a_{2}(n-i_{2})}],e^{a_{3}(n-i_{3})}\rangle ,\cdots,e^{a_{m}(n-i_{m})}\rangle . \end{equation} This satisfies $\{\Theta,\Theta\}=0$ due to the properties of a higher Courant-Dorfman algebra. \end{proof} \chapter{Higher PVAs from higher Courant-Dorfman algebras} In this chapter, we define higher PVAs corresponding to higher Courant-Dorfman algebras and check that these algebras have a PVA-like property. In particular, the tensor product of a higher PVA and an arbitrary dgca has the structure of a degree 0 graded Poisson algebra. First, we define higher Lie conformal algebras and derive the properties of higher weak Courant-Dorfman algebras, in a similar way to the derivation of the properties of Courant-Dorfman algebras from Lie conformal algebras.
\begin{de.} \textit{A higher Lie conformal algebra} of degree $n$ is a graded $\mathbb{C}[d]$-module $W=\oplus_{m\in\mathbb{Z}_{\geq0}} W^{m}$ (i.e. $d$ acts on elements of $W$) with $|d|=1$, equipped with a degree $1-n$ map, called the $\Lambda$-bracket, $[_{\Lambda}]:W\otimes W\rightarrow W[\Lambda]$ with $|\Lambda|=1$, which satisfies the following conditions. (Here, $\Lambda$ is an indeterminate.) \begin{description} \item[Sesquilinearity] \begin{equation} [da_{\Lambda}b]=-(-1)^{-n}\Lambda[a_{\Lambda}b],\quad [a_{\Lambda}db]=-(-1)^{|a|-n}(d+\Lambda)[a_{\Lambda}b] \end{equation} \item[Skewsymmetry] \begin{equation} [a_{\Lambda}b]=-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{-\Lambda-d}a] \end{equation} \item[Jacobi identity] \begin{equation} [a_{\Lambda}[b_{\Gamma}c]]=[[a_{\Lambda}b]_{\Lambda+\Gamma}c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b_{\Gamma}[a_{\Lambda}c]]. \end{equation} \end{description} \end{de.} We derive the properties of a higher weak Courant-Dorfman algebra from a higher Lie conformal algebra. The $\Lambda$-bracket is of the form \begin{equation} [a_{\Lambda}b]=\sum_{j\geq0}\Lambda^{j}a_{(j)}b\ (a_{(j)}b\in W^{|a|+|b|+1-n-j}). \end{equation} Let \begin{equation} [a,b]=a_{(0)}b,\ \langle a_{\Lambda}b\rangle=\sum_{j\geq1}\Lambda^{j}a_{(j)}b, \end{equation} \begin{equation} \langle a,b\rangle=\langle a_{-d}b\rangle. \end{equation} Then we derive the properties of a higher weak Courant-Dorfman algebra by comparing the terms of each order in $\Lambda$ on both sides of the axioms.
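Before carrying out this comparison, it may help to record the simplest case as a consistency check (this remark is ours and is not part of the derivation): for $n=2$ and elements concentrated in degree $1$, every exponent of the form $(|a|+1-n)(|b|+1-n)$ vanishes, and the identities derived below reduce to \begin{equation} [a,[b,c]]=[[a,b],c]+[b,[a,c]],\qquad [a,b]+[b,a]=d\langle b,a\rangle,\qquad [da,b]=0, \end{equation} which are precisely the Dorfman-bracket identities of an ordinary (weak) Courant-Dorfman algebra; this is consistent with the degree $2$ case discussed in the last chapter.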
From the sesquilinearity, we can get \begin{equation} [da,b]+o(\Lambda)=\{da_{\Lambda}b\}=(-1)^{-n}\Lambda\{a_{\Lambda}b\}\Rightarrow[da,b]=0, \end{equation} from the skew-symmetry, we can get \begin{align} [a,b]+o(\Lambda)&=\{a_{\Lambda}b\}=-(-1)^{(|a|+1-n)(|b|+1-n)}\{b_{-\Lambda-d}a\} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}([b,a]+d\langle b_{-d}a\rangle)+o(\Lambda) \notag \\ \Rightarrow&[a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=(-1)^{(|a|+1-n)(|b|+1-n)}d\langle b,a\rangle, \end{align} and from the Jacobi-identity, we can get \begin{align} [a,[b,c]]+o(\Lambda)&=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]]+o(\Lambda) \notag \\ \Rightarrow[a,[b,c]]&=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]]. \end{align} These are the properties of a higher weak Courant-Dorfman algebra. \begin{de.} \textit{A higher weak Courant-Dorfman algebra} of degree $n$ consists of the following data: \begin{itemize} \item a graded vector space $E=(E^{i})$, \item a graded symmetric bilinear form $\langle ,\rangle:E\otimes E\rightarrow E$ of degree $-n$, \item a map $d:E\rightarrow E$ of degree 1, \item a Dorfman bracket $[,]:E\otimes E\rightarrow E$ of degree $1-n$, \end{itemize} which satisfy the following conditions. \begin{equation} [e_{1},[e_{2},e_{3}]]=[[e_{1},e_{2}],e_{3}]+(-1)^{(|e_{1}|+1-n)(|e_{2}|+1-n)}[e_{2},[e_{1},e_{3}]], \end{equation} \begin{equation} [e_{1},e_{2}]+(-1)^{(|e_{1}|+1-n)(|e_{2}|+1-n)}[e_{2},e_{1}]=(-1)^{(|e_{1}|+1-n)(|e_{2}|+1-n)}d\langle e_{2},e_{1}\rangle, \end{equation} \begin{equation} [de_{1},e_{2}]=0. \end{equation} \end{de.} Next, we define higher Poisson vertex algebras. So far we have not assumed that $d$ is a differential. From now on, we assume $d^{2}=0$. Then, $C=(C^{k},d)$ is a cochain complex. \begin{de.} Let $C=(C^{k},d)$ be a cochain complex.
$C$ is a higher Lie conformal algebra of degree $n$ if it is endowed with a $\Lambda$-bracket $[_{\Lambda}]:C\otimes C\rightarrow C[\Lambda]$ defined by \begin{equation} a\otimes b\mapsto [a_{\Lambda}b]=a_{(0)}b+\Lambda a_{(1)}b \end{equation} satisfying the axioms of higher Lie conformal algebras. $C$ is a higher Poisson vertex algebra of degree $n$ if it is a higher LCA and a differential graded-commutative algebra which satisfies \begin{description} \item[the Leibniz rule] \begin{equation} [a_{\Lambda}bc]=[a_{\Lambda}b]c+(-1)^{(|a|+1-n)|b|}b[a_{\Lambda}c]. \end{equation} \end{description} \end{de.} From extended higher Courant-Dorfman algebras, we get the following theorem. \begin{th.} The above higher Poisson vertex algebras generated by elements of degree $0\leq i\leq n-1$ are in one-to-one correspondence with the extended higher Courant-Dorfman algebras. \end{th.} \begin{proof} Assume we have a higher PVA $(C=(C^{k},d),\{_{\Lambda}\})$. We denote $R=C^{0},\mathcal{E}^{i}=C^{i}\ (1\leq i\leq n-1)$. $C=(C^{k},d)$ is a dgca, so $R$ is a commutative algebra and each $\mathcal{E}^{i}$ is an $R$-module. We denote the $\Lambda$-bracket by \begin{equation} a_{(0)}b=[a,b],\ a_{(1)}b=(-1)^{|a|}\langle a,b\rangle. \end{equation} Sesquilinearity says that \begin{equation} (da)_{(0)}b+\Lambda(da)_{(1)}b=-(-1)^{-n}\Lambda(a_{(0)}b+\Lambda a_{(1)}b). \end{equation} Comparing the zeroth-order and first-order terms in $\Lambda$, we have \begin{equation} [da,b]=0,\ \langle a,b\rangle=-(-1)^{|a|-n}[a,b]. \end{equation} In a similar way, from the skewsymmetry, \begin{equation} a_{(0)}b+\Lambda a_{(1)}b=-(-1)^{(|a|+1-n)(|b|+1-n)}(b_{(0)}a-(\Lambda+d)b_{(1)}a), \end{equation} we can get \begin{equation} [a,b]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]=-(-1)^{|a|}d\langle b,a\rangle, \end{equation} \begin{equation} \langle a,b\rangle=-(-1)^{(|a|-n)(|b|-n)}\langle b,a\rangle.
\end{equation} From the Jacobi-identity \begin{align} \ &a_{(0)}(b_{(0)}c)+a_{(0)}(\Gamma b_{(1)}c)+\Lambda a_{(1)}(b_{(0)}c)+\Lambda a_{(1)}(\Gamma b_{(1)}c) \notag \\ &=(a_{(0)}b)_{(0)}c+(\Lambda+\Gamma)(a_{(0)}b)_{(1)}c+(\Lambda a_{(1)}b)_{(0)}c+(\Lambda+\Gamma) (\Lambda a_{(1)} b)_{(1)}c \notag \\ &+(-1)^{(|a|+1-n)(|b|+1-n)}\{b_{(0)}(a_{(0)}c)+b_{(0)}(\Lambda a_{(1)}c)+\Gamma b_{(1)}(a_{(0)}c)+\Gamma b_{(1)}(\Lambda a_{(1)}c)\}, \end{align} we can get \begin{equation} [a,[b,c]]=[[a,b],c]+(-1)^{(|a|+1-n)(|b|+1-n)}[b,[a,c]], \end{equation} \begin{equation} [a,\langle b,c\rangle]=\langle [a,b],c\rangle+(-1)^{(|a|+1-n)(|b|+1-n)}\langle b,[a,c]\rangle, \end{equation} \begin{equation} \langle a,\langle b,c\rangle\rangle=\langle \langle a,b\rangle,c\rangle+(-1)^{(|a|-n)(|b|-n)}\langle b,\langle a,c\rangle\rangle. \end{equation} From the Leibniz rule \begin{equation} a_{(0)}(bc)+\Lambda a_{(1)}(bc)=(a_{(0)}b)c+\Lambda (a_{(1)}b)c+(-1)^{(|a|+1-n)|b|}b(a_{(0)}c+\Lambda a_{(1)}c), \end{equation} we can get \begin{equation} [a\cdot b,c]=[a,b]\cdot c+(-1)^{(|a|+1-n)|b|}b\cdot[a,c], \end{equation} \begin{equation} \langle a\cdot b,c\rangle=\langle a,b\rangle\cdot c+(-1)^{(|a|-n)|b|}b\cdot\langle a,c\rangle. \end{equation} The conditions coincide with the definition of extended higher Courant-Dorfman algebras. Conversely, assuming that we have an extended higher Courant-Dorfman algebra $(\mathcal{E}=(\mathcal{E}^{k},d),\langle ,\rangle,[,])$, define a $\Lambda$-bracket $\{a_{\Lambda}b\}=[a,b]+(-1)^{|a|}\Lambda\langle a,b\rangle$. Then this bracket satisfies the conditions of a $\Lambda$-bracket. \end{proof} Next, we check that this algebra has a PVA-like property. In particular, we show that one can construct a graded Lie algebra from the tensor product of a higher LCA and an arbitrary differential graded-commutative algebra (dgca for short), and a graded Poisson algebra from that of a higher PVA and an arbitrary dgca. \begin{le.} Let $C=(C^{k},d_{1})$ be a higher LCA and $(E,d_{2})$ be a dgca.
Then, the tensor product $C\otimes E$ of cochain complexes is also a higher LCA by defining a bracket as $[a\otimes f_{\Lambda}b\otimes g]=(-1)^{(|b|+1-n)|f|}[a_{\Lambda+d_{2}}b]\otimes fg,\ d(a\otimes f)=d_{1}a\otimes f+(-1)^{|a|}a\otimes d_{2}f$. \end{le.} \begin{proof} Sesquilinearity: \begin{align} [d(a\otimes f)_{\Lambda}b\otimes g]&=[d_{1}a\otimes f_{\Lambda}b\otimes g]+(-1)^{|a|}[a\otimes d_{2}f_{\Lambda}b\otimes g] \notag \\ &=(-1)^{(|b|+1-n)|f|}\{[d_{1}a_{\Lambda+d_{2}}b]\otimes fg+(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=(-1)^{(|b|+1-n)|f|}\{-(-1)^{-n}(\Lambda+d_{2})[a_{\Lambda+d_{2}}b]\otimes fg+[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=(-1)^{(|b|+1-n)|f|}\{-(-1)^{-n}\Lambda[a_{\Lambda+d_{2}}b]\otimes fg-(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g \notag \\ &+(-1)^{|a|+|b|+1-n}[a_{\Lambda+d_{2}}b]\otimes (d_{2}f)g\} \notag \\ &=-(-1)^{-n}\Lambda[a\otimes f_{\Lambda}b\otimes g]. \end{align} Skew-symmetry: \begin{align} [a\otimes f_{\Lambda}b\otimes g]&=(-1)^{(|b|+1-n)|f|}[a_{\Lambda+d_{2}}b]\otimes fg \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)+(|b|+1-n)|f|}[b_{-\Lambda-d_{2}-d_{1}}a]\otimes fg \notag \\ &=-(-1)^{(|a|+|f|+1-n)(|b|+|g|+1-n)-(|a|+1-n)|g|}[b_{-\Lambda-d}a]\otimes gf \notag \\ &=-(-1)^{(|a|+|f|+1-n)(|b|+|g|+1-n)}[b\otimes g_{-\Lambda-d}a\otimes f]. \end{align} Using the Jacobi identity of the original higher LCAs, we can check the Jacobi identity in a similar way. \end{proof} \begin{de.} A graded Lie algebra $\mathcal{C}$ of degree $N\in\mathbb{Z}$ is a cochain complex of vector spaces with a bilinear operation $[,]:\mathcal{C}\otimes\mathcal{C}\rightarrow\mathcal{C}$ of degree $N$ satisfying: (1) skew-symmetry: $[a,b]=-(-1)^{(|a|+N)(|b|+N)}[b,a]$, (2) Jacobi identity: $[a,[b,c]]=[[a,b],c]+(-1)^{(|a|+N)(|b|+N)}[b,[a,c]]$. \end{de.} \begin{le.} Let $C=(C^{k},d)$ be a higher Lie conformal algebra of degree $n$.
Then $C/\mathrm{Im}\,d$ is naturally a graded Lie algebra of degree $(1-n)$ with bracket \begin{equation} [a+dC,b+dC]=[a_{\Lambda}b]_{\Lambda=0}+dC. \end{equation} \end{le.} \begin{proof} The well-definedness follows from the sesquilinearity. \begin{equation} [d\alpha,b]=-(-1)^{-n}(\Lambda[\alpha_{\Lambda}b])_{\Lambda=0}=0, \end{equation} \begin{equation} [a,d\beta]=-(-1)^{|a|-n}((\Lambda+d)[a_{\Lambda}\beta])_{\Lambda=0}=d[a_{\Lambda}\beta]\simeq0. \end{equation} The skew-symmetry follows from the skew-symmetry of the $\Lambda$-bracket. \begin{align} [a,b]&=[a_{\Lambda}b]_{\Lambda=0} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{-\Lambda-d}a]_{\Lambda=0} \notag \\ &\simeq-(-1)^{(|a|+1-n)(|b|+1-n)}[b_{\Lambda}a]_{\Lambda=0} \notag \\ &=-(-1)^{(|a|+1-n)(|b|+1-n)}[b,a]. \end{align} In a similar way, we can check that the Jacobi identity follows from the Jacobi identity of the $\Lambda$-bracket. \end{proof} \begin{le.} Let $L$ be a graded Lie algebra of degree $N$. Then, $L[-N]$ is a graded Lie algebra with the same bracket. \end{le.} \begin{proof} It satisfies the skewsymmetry and the Jacobi identity after the degree shift. \end{proof} For any higher LCA $C$ of degree $n$ and any dgca $E$, we put $L(C,E)=C\otimes E/\mathrm{Im}\,d$ and $\mathrm{Lie}(C,E)=L(C,E)[n-1]$. By the above lemmas, $\mathrm{Lie}(C,E)$ is a graded Lie algebra via \begin{equation} \{a\otimes f,b\otimes g\}=(-1)^{(|b|+1-n)|f|}(a_{(0)}b\otimes fg+(-1)^{|a|}a_{(1)}b\otimes (df)g). \end{equation} Next, we discuss the Poisson algebraic structure. Let $C=(C^{n},d)$ be a higher PVA of degree $n$. Then, $C\otimes E[n-1]$ is a dgca with product $(a\otimes f)\cdot(b\otimes g)=(-1)^{|b||f|}(a\cdot b)\otimes (f\cdot g)$, and $\mathrm{Lie}(C,E)$ is a graded Lie algebra. We put $P(C,E)=C\otimes E[n-1]/(\mathrm{Im}\,d)\cdot (C\otimes E[n-1])$.
\begin{th.} $P(C,E)$ is a graded Poisson algebra with \begin{equation} [a\otimes f]\cdot[b\otimes g]=(-1)^{|b||f|}[a\cdot b\otimes fg], \end{equation} \begin{equation} \{[a\otimes f],[b\otimes g]\}=(-1)^{(|b|+1-n)|f|}(a_{(0)}b\otimes fg+(-1)^{|a|}a_{(1)}b\otimes (df)g). \end{equation} \end{th.} \begin{proof} Let $I_{d}=(\mathrm{Im}\,d)\cdot (C\otimes E[n-1])$. If $a,b\in I_{d}$, then $a\cdot b,da\in I_{d}$. Therefore, $I_{d}$ is a dg ideal of $C\otimes E[n-1]$ and $P(C,E)$ is a dgca. If $a,b\in I_{d}/(\mathrm{Im}\,d)$, then $[a,b]\in I_{d}/(\mathrm{Im}\,d)$ by the Leibniz identity of $C$, so $I_{d}/(\mathrm{Im}\,d)$ is a graded Lie ideal of $\mathrm{Lie}(C,E)$ and $P(C,E)$ is a Lie algebra with the Lie bracket. The Leibniz identity follows from the Leibniz identity of $C$. So $P(C,E)$ is a Poisson algebra. \end{proof} By the above theorem, we get a graded Poisson algebra from a higher PVA and a dgca. \begin{ex.} We define the BFV analog of formal distribution Lie algebras. Define the algebra of power series \begin{equation} \mathbb{C}[[t_{1},t^{-1}_{1},\ldots,t_{n},t^{-1}_{n}]][\theta_{1},\ldots,\theta_{n}], \end{equation} where $t_{i}$ are even coordinates of degree 0 and $\theta_{i}$ are odd coordinates of degree 1. Define the "de Rham differential" as \begin{equation} df:=\sum_{i}\frac{\partial f}{\partial t^{i}}\theta_{i}. \end{equation} Let $C=(C^{n},Q)$ be a higher LCA of degree $n+1$.
For \begin{equation} V:=C\otimes\mathbb{C}[[t_{1},t^{-1}_{1},\ldots,t_{n},t^{-1}_{n}]][\theta_{1},\ldots,\theta_{n}][n]/((Q\alpha)\otimes f+\alpha\otimes df), \end{equation} the bracket \begin{equation} [a\otimes t_{1}^{p_{1}}\cdots t_{n}^{p_{n}}\theta^{J},b\otimes t_{1}^{q_{1}}\cdots t_{n}^{q_{n}}\theta^{K}] \end{equation} \begin{equation} =(a_{(0)}b)t_{1}^{p_{1}+q_{1}}\cdots t_{n}^{p_{n}+q_{n}}\theta^{J\cdot K}+\sum^{n}_{k=1}(a_{(1)}b)p_{k}t_{1}^{p_{1}+q_{1}}\cdots t_{k}^{p_{k}+q_{k}-1}\cdots t_{n}^{p_{n}+q_{n}}\theta^{J\cdot\{k\}\cdot K}, \end{equation} \begin{equation} J,K\subset\{1,\ldots,n\},\quad J\cdot K=\left\{\begin{array}{ll} \emptyset&(J\cap K\neq\emptyset),\\ J\cup K&(J\cap K=\emptyset), \end{array}\right. \end{equation} defines the graded Lie algebra structure. We define a formal distribution, \begin{align} a(Z_{1},...,Z_{n})&=a(z_{1},...,z_{n},\zeta_{1},...,\zeta_{n}) \notag \\ &=\sum_{m_{i}\in\mathbb{Z},J\subset\{1,...,n\}}z_{1}^{-1-m_{1}}\cdots z_{n}^{-1-m_{n}}\zeta^{\{1,...,n\}\backslash J}a t_{1}^{m_{1}}\cdots t_{n}^{m_{n}}\theta^{J}, \end{align} and the formal $\delta$-function, \begin{align} \delta(Z-W)&=\delta(z_{1}-w_{1})\cdots\delta(z_{n}-w_{n})\delta(\zeta_{1}-\xi_{1})\cdots\delta(\zeta_{n}-\xi_{n}) \notag \\ &=\sum_{m_{i}\in\mathbb{Z}}z_{1}^{-m_{1}-1}w_{1}^{m_{1}}\cdots z_{n}^{-m_{n}-1}w_{n}^{m_{n}}(\zeta_{1}-\xi_{1})\cdots(\zeta_{n}-\xi_{n}). \end{align} Then, we get \begin{equation} [a(Z),b(W)]=[a,b](W)\delta(Z-W)+\langle a,b\rangle (W)d(\delta(Z-W)). \end{equation} (For another example of a formal distribution Lie algebra using superfields, see \cite{KH06}.) Consider the case $n=2$. Let $C=(C^{n},Q)$ be a higher PVA of degree $2$. Then $P(C,\mathbb{C}[[t,t^{-1}]][\theta])$ is a graded Poisson algebra via \begin{equation} \{a t^{m},b t^{n}\}=(a_{(0)}b)t^{m+n}+(a_{(1)}b)mt^{m+n-1}\theta,\ \{a t^{m}\theta,b t^{n}\}=(a_{(0)}b)t^{m+n}\theta.
\end{equation} An extended higher Courant-Dorfman algebra of degree 2 is the same as a Courant-Dorfman algebra, and there is a PVA corresponding to a given higher PVA. We denote the PVA by $\tilde{C}$. We restrict $P(C,\mathbb{C}[[t,t^{-1}]][\theta])$ to the degree 0 part. Explicitly, \begin{equation} P(C,\mathbb{C}[[t,t^{-1}]][\theta])|_{\text{degree }0}=\{a t^{m_{1}},b t^{m_{2}}\theta\mid a\in C^{1},b\in C^{0},m_{1},m_{2}\in\mathbb{Z}\}. \end{equation} We can define an isomorphism of Poisson algebras between $P(C,\mathbb{C}[[t,t^{-1}]][\theta])|_{\text{degree }0}$ and the Poisson algebra arising from the associated Poisson vertex algebra $\tilde{C}\otimes\mathbb{C}[[t,t^{-1}]]/\mathrm{Im}(d+\partial_{t})\cdot \tilde{C}\otimes\mathbb{C}[[t,t^{-1}]]$ by sending $a t^{m_{1}}$ ($a\in C^{1}$) to $a t^{m_{1}}$, and $b t^{m_{2}}\theta$ ($b\in C^{0}$) to $b t^{m_{2}}$. This subalgebra corresponds to the physical current algebra of a BFV current algebra. \end{ex.} \begin{ex.} Let $(\mathcal{M},\omega,Q=\{\Theta,-\})$ be a degree $n$ dg symplectic manifold, let $C=C^{n-1}(C^{\infty}(\mathcal{M}))=\{a\in C^{\infty}(\mathcal{M}):|a|\leq n-1\}$, and consider a higher Courant-Dorfman algebra on $C$. Let $\Sigma_{n-1}$ be an $(n-1)$-dimensional manifold and $E=(\Omega^{\bullet}(\Sigma_{n-1}),D)$ be its de Rham complex. Then, $P(C,E)$ is equipped with a degree 0 Poisson bracket \begin{equation} \label{HP} [a\otimes\epsilon_{1},b\otimes\epsilon_{2}]=\{\{a,\Theta\},b\}\otimes \epsilon_{1}\epsilon_{2}+\{a,b\}\otimes (D\epsilon_{1})\epsilon_{2}, \end{equation} where $a,b\in C$ and $\epsilon_{1},\epsilon_{2}\in E$. This is an algebraic description of the BFV current algebras from dg symplectic manifolds \cite{IX13,A21}. BFV current algebras are Poisson brackets on $C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))$, where $T[1]\Sigma_{n-1}$ is the shifted tangent bundle of $\Sigma_{n-1}$. In order to get the currents, we have to take a proper Lagrangian submanifold of $Map(T[1]\Sigma_{n-1},\mathcal{M})$.
One way is the zero-locus reduction \cite{GSYT}. \begin{pr.}[{\cite[Proposition 1]{A21}}] We take a degree $-n$ graded Poisson algebra $P$ with a differential $Z$, and take a quotient $P/I_{Z}$, where $I_{Z}$ is the ideal of $P$ generated by $Z$-exact terms. Then, $P/I_{Z}$ is a degree $-n+1$ Poisson algebra with the derived bracket \begin{equation} \{[a],[b]\}=[\{a,Z(b)\}]. \end{equation} \end{pr.} Applying this to the BFV current algebras, we get the Poisson bracket on $C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))/I_{\tilde{D}+\tilde{Q}}$, where $\tilde{D}$ and $\tilde{Q}$ are the differentials on $Map(T[1]\Sigma_{n-1},\mathcal{M})$ induced by $D$ and $Q$. For $a\in C^{\infty}(\mathcal{M})$ and $\epsilon\in C^{\infty}(T[1]\Sigma_{n-1})$, we define $J_{\epsilon}\left(a\right)\in C^{\infty}(Map(T[1]\Sigma_{n-1},\mathcal{M}))$ by \begin{equation} J_{\epsilon}\left(a\right)(\phi)=\int_{T[1]\Sigma_{n-1}}\epsilon\cdot\phi^{*}(a)(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta, \end{equation} where $\epsilon$ is a test function on $T[1]\Sigma_{n-1}$, $\sigma,\theta$ are coordinates on $T[1]\Sigma_{n-1}$ of degree 0 and 1, $\phi\in Map(T[1]\Sigma_{n-1},\mathcal{M})$, and $\phi^{*}(a)$ is the pullback of $a$. Then the Poisson bracket is \begin{align} \label{CA} &\{J_{\epsilon_{1}}\left(a\right),J_{\epsilon_{2}}\left(b\right)\}(\phi) \notag \\ &=\int_{T[1]\Sigma_{n-1}}\epsilon_{1}\epsilon_{2}\cdot\phi^{*}(\{\{a,\Theta\},b\})(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta \notag \\ &+\int_{T[1]\Sigma_{n-1}}(D\epsilon_{1})\epsilon_{2}\cdot\phi^{*}(\{a,b\})(\sigma,\theta)d^{n-1}\sigma d^{n-1}\theta, \end{align} where $\epsilon_{1},\epsilon_{2}\in C^{\infty}(T[1]\Sigma_{n-1})$ are test functions on $T[1]\Sigma_{n-1}$. Comparing \eqref{HP} and \eqref{CA}, we see that taking the quotient corresponds to the zero-locus reduction.
\end{ex.} \chapter{Outlook} In this thesis, we gave higher analogs of Lie conformal algebras and Poisson vertex algebras. It is natural to ask whether they have the same applications as ordinary Lie conformal algebras and Poisson vertex algebras. For example, our higher PVAs may be used to analyze multi-variable Hamiltonian PDEs. Considering the algebraic description of more general currents would also be important. Another interesting problem is the non-commutative analog. In \cite{AFH21}, non-commutative versions of Courant-Dorfman algebras and Poisson vertex algebras, called double Courant-Dorfman algebras and double Poisson vertex algebras, are considered. Their higher generalizations could be given using our algebras. Another route to a non-commutative version is quantization, in analogy with vertex algebras.
In mathematics, a random graph is a graph generated by a random process. The first model of random graphs was popularized by Paul Erdős and Alfréd Rényi in a series of papers published between 1959 and 1968. The two basic Erdős–Rényi models The Erdős–Rényi model actually covers two models, formally distinct but closely related. In both models, the set of vertices is denoted in what follows ; the potentially present edges are the two-element subsets of ; the set of these edges is sometimes denoted It will nevertheless be denoted here, for typographical convenience and for consistency with the article on the Harris inequality. Thus the random graph is undirected, and has neither loops nor multiple edges. The binomial random graph In this model, often denoted each of the edges is present with probability , absent with probability , independently of the status of the other edges. The case was studied by Erdős as early as 1947. The number of edges of follows the binomial distribution with parameters and . The uniform random graph In this model, often denoted one chooses uniformly a subset of M edges among the possible edges. When a graph G with n vertices has M edges, the probability of G is given by This is the model mainly studied in the founding series of papers published by Erdős and Rényi between 1959 and 1968. The two graph-valued random processes One can start from a graph without edges, hence totally disconnected, and add an edge drawn uniformly at random, then another, without replacement. One thus obtains an increasing sequence (with respect to inclusion of the edge sets) of random graphs, which forms a discrete-time process with values in the set of graphs. Each term of the sequence is a uniform random graph as defined in the previous section.
One advantage of this construction is that it lets different random graphs, with different parameters M, coexist on the same probability space, so that their characteristics can be compared not on average or in distribution, but for each element ω of the probability space under consideration. This makes it possible to reason by coupling. One can also associate with each edge e of J a random variable , the weight of the edge, such that the family is a family of i.i.d. random variables, for instance uniform on the interval [0, 1]. One then denotes by the graph formed by the edges whose weight is less than p. For each edge, this happens with probability One thus obtains an increasing family of random graphs, which forms a continuous-time process with values in the set of graphs. This family is increasing with respect to inclusion of the edge sets: an edge e present in is also present in since Each term of the family of graphs is a binomial random graph as defined previously. Metaphor. One can picture the vertices of the graph as n islands on a lake, communicating via footbridges (the edges e), submerged at respective depths below the surface of the water. If the lake drains gradually, the footbridges emerge one after another, and connected components grouping more and more islands take shape. Links between the two models By virtue of the central limit theorem, or of Hoeffding's inequality, the binomial distribution is highly concentrated around its expectation. More precisely, the number of edges of a random graph with distribution is therefore very close to especially if the latter quantity is large compared with n: indeed, Moreover, the conditional distribution of given that is precisely For this reason, if is close to , or, equivalently, if it is generally accepted (and often proved) that the two models and have very similar properties.
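The continuous-time coupling described here is easy to sketch in code. The following Python snippet is ours and purely illustrative (the function names are not from the article): it draws one i.i.d. uniform weight per edge of the complete graph on n vertices and extracts the nested family of binomial random graphs.

```python
import random
from itertools import combinations

def coupled_binomial_graphs(n, levels, seed=0):
    """Assign an i.i.d. Uniform[0,1] weight U_e to each edge e of the
    complete graph K_n, then return, for every p in `levels`, the edge
    set {e : U_e < p}.  Each such set is a sample of G(n, p), and by
    construction the sets are nested as p grows: this realizes all the
    binomial random graphs on a single probability space."""
    rng = random.Random(seed)
    weights = {e: rng.random() for e in combinations(range(n), 2)}
    return {p: {e for e, u in weights.items() if u < p} for p in levels}

graphs = coupled_binomial_graphs(n=8, levels=[0.2, 0.5, 0.9])
# An edge ("footbridge") visible at level 0.2 stays visible at 0.5 and 0.9.
assert graphs[0.2] <= graphs[0.5] <= graphs[0.9]
```

Sorting the edges by weight recovers the discrete-time process: adding them one by one in increasing order of weight gives, after M steps, a uniform random graph with M edges.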
Going further, denote by the k-th value of the sequence once this sequence has been arranged in increasing order: the sequence is called the sequence of order statistics of the sequence When p takes the random value , then is exactly To corroborate the preceding observations, note that is very close to in the sense that, as a consequence of celebrated results of Donsker and Kolmogorov, the probability satisfies and being the tail distributions of the Rayleigh and Kolmogorov laws, respectively: in short, the supremum (as M varies) of the errors is of order 1/n. Order and increasing events A graph can be viewed as a subset of the set J of edges, so the probability space here is the set Ω of subsets of J, which can sometimes be identified with . This identification is particularly useful when one wants to apply the Harris inequality. Inclusion is a partial order on Ω. As usual, a real-valued map X defined on Ω is said to be increasing if A subset A of Ω is said to be increasing if Equivalently, a subset A of Ω is increasing if its indicator function is increasing. The decreasing property of a map or of a subset is defined analogously. One has the following inequality: Connectivity The connectivity threshold One says that is a sharp threshold for the connectivity property, sharpness referring to the fact that the property still holds if tends to infinity strictly more slowly than Counting the isolated vertices It is easier (more likely) to manage to cut the n – 1 connections between one vertex and its complement than the k(n – k) connections between a group of k vertices and its complement, because the function increases very quickly near 1, whence, as k increases, many more edges to cut and a much smaller probability of managing to cut them all.
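The Harris inequality mentioned above asserts that increasing events are positively correlated under the product measure of G(n, p). The following Monte Carlo sketch is ours (the parameter choices n = 8, p = 0.3 and the two events are arbitrary illustrations): both "the graph is connected" and "the graph contains a triangle" are increasing events, so their empirical frequencies should satisfy P(A ∩ B) ≥ P(A)·P(B).

```python
import random
from itertools import combinations

def sample_gnp(n, p, rng):
    """One sample of the binomial random graph G(n, p) as an edge set."""
    return {e for e in combinations(range(n), 2) if rng.random() < p}

def is_connected(n, edges):
    """Union-find connectivity test (an increasing event in the edges)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)}) == 1

def has_triangle(edges):
    """Triangle containment (also an increasing event)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return any(adj.get(a, set()) & adj.get(b, set()) for a, b in edges)

def harris_demo(n=8, p=0.3, trials=4000, seed=42):
    """Estimate P(A), P(B), P(A and B) for the two increasing events."""
    rng = random.Random(seed)
    ca = cb = cab = 0
    for _ in range(trials):
        g = sample_gnp(n, p, rng)
        a, b = is_connected(n, g), has_triangle(g)
        ca += a; cb += b; cab += (a and b)
    return ca / trials, cb / trials, cab / trials

pa, pb, pab = harris_demo()
# Harris/FKG: increasing events are positively correlated.
assert pab >= pa * pb
```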
As a corollary, with the choice of parameter p made above, the graph G(n, p) will fail to be connected "almost only" if it has isolated vertices, in the sense that the probability of being connected is very close to the probability of having no isolated vertex, which is approximately Indeed, one has the following result: This theorem is a striking illustration of the Poisson paradigm, according to which, when a large number of opportunities to observe a rare (unlikely) event present themselves, the total number of rare events actually observed follows a Poisson distribution. The double-exponential theorem Erdős and Rényi deduced from this a result more precise than the sharp-threshold property: Denote by the first time t at which the graph is connected: so that One can then view the double-exponential theorem as a result on the asymptotic expansion of : if is defined by the following relation: then the double-exponential theorem states that converges in distribution to the Gumbel distribution, which could be expressed, in a probabilistic version of Landau notation, as: The infinite random graph Erdős and Rényi generalized the binomial model to the case of a countably infinite graph, showing that one then obtains (almost surely) a graph with universality properties (in particular, containing every finite or countable graph as a subgraph); this graph has been rediscovered many times and is best known under the name of the Rado graph.
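The near-equivalence between "connected" and "no isolated vertex" can be probed numerically. In this sketch (ours; n = 150, c = 1 and the trial count are arbitrary choices) we sample G(n, p) at p = (log n + c)/n and compare the two events; the limiting probability predicted by the double-exponential theorem is exp(-exp(-c)).

```python
import math
import random
from itertools import combinations

def connectivity_vs_isolated(n=150, c=1.0, trials=200, seed=7):
    """Sample G(n, p) at p = (log n + c)/n and estimate P(connected),
    P(no isolated vertex), and how often the two events disagree."""
    p = (math.log(n) + c) / n
    rng = random.Random(seed)
    conn = noiso = differ = 0
    for _ in range(trials):
        parent = list(range(n))
        degree = [0] * n
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in combinations(range(n), 2):
            if rng.random() < p:
                degree[a] += 1
                degree[b] += 1
                parent[find(a)] = find(b)
        is_conn = len({find(v) for v in range(n)}) == 1
        no_iso = min(degree) > 0
        conn += is_conn
        noiso += no_iso
        differ += (is_conn != no_iso)
    return conn / trials, noiso / trials, differ / trials

pc, pn, pd = connectivity_vs_isolated()
# Double-exponential prediction for c = 1:
limit = math.exp(-math.exp(-1.0))
# Connectivity fails "almost only" through isolated vertices:
assert pd < 0.1
```

The fraction `pd` of samples where the two events disagree is small already at this modest n, while both `pc` and `pn` hover near the Gumbel-type limit.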
\section{Introduction} \label{sec:Intro} Fermions are the fundamental building blocks of the observable universe, which, at a certain level, is supposed to be understandable in terms of quantum field theories (QFTs) like QCD, the standard model of electroweak interactions, etc.. They are, however, less familiar to us theoretically than bosons, partly due to their lack of classical correspondences. An understanding of Nature therefore requires a more direct understanding of the behavior of fermions. Represented by anticommuting Grassmann numbers in the path integral formulation of a QFT, fermions are harder to handle in numerical simulations (like the lattice ones) than bosons. It is therefore desirable to integrate the fermionic degrees of freedom out (in the path integration sense) analytically. This turns out to be an easy task, at least formally, in most cases, which deal with a Lagrangian density that is only quadratic in the fermion fields. The result is usually a much more complicated effective action for the bosonic fields of the system, to be functionally integrated by various means like numerical calculation or simulation, approximate analytic computation, modeling, and possibly a mixture of all of them. The fermion loop effects in numerical simulations are at present in some sense less well understood than their bosonic counterparts in a theory. The problems become more severe in the presence of a finite chemical potential in lattice simulations \cite{LQCDmu}. This situation calls for more analytic efforts to understand the effects of fermionic quantum fluctuations in an interacting system, since it indicates that our understanding of the problem is still insufficient.
The traditional treatment of the finite density problems (at finite temperature) in statistical mechanics is based upon the grand canonical ensemble, in which the partition function is $Z= Tr e^{-\beta(\widehat H - \mu_{ch}\widehat N)}$ with $\beta$ the inverse temperature, $\widehat H$ the Hamiltonian and $\widehat N$ the particle number operator of the system. A global chemical potential $\mu_{ch}$ is introduced to select, among all possible particle numbers, the corresponding set of particle numbers that differ from each other by a finite quantity in the thermodynamic limit. Since only those extensive quantities that are proportional to the (infinite) volume of the system are relevant, the above-mentioned differences are irrelevant to bulk thermodynamic quantities. This makes the grand canonical ensemble equivalent to the canonical ensemble \cite{Huang}, in which the number of particles is kept fixed. The usually calculated quantity, called the {\em apparent particle number} here, $\overline N_{app} =\beta^{-1}\partial \ln Z / \partial \mu_{ch}$, is formally identical to what is called the {\em absolute particle number} $\overline N_{abs} = \int d^3 x Tr \widehat \rho(\mbox{\boldmath{$x$}},t=0) e^{-\beta(\widehat H-\mu_{ch}\widehat N)}/Z$. It can be realized that the identification between $\overline N_{app}$ and $\overline N_{abs}$ is not mathematically warranted in the thermodynamic limit, since the particle number $\widehat N$ is a macroscopic operator with eigenvalues proportional to the volume of the system. We thus expect the formal equivalence between $\overline N_{app}$ and $\overline N_{abs}$, and between many other physical observables so computed, to break down under certain conditions, especially when quantum fluctuation effects are taken into account.
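For clarity, the two averages just introduced may be displayed side by side (this is merely a restatement of the definitions above): \begin{equation} \overline N_{app}=\frac{1}{\beta}\frac{\partial \ln Z}{\partial \mu_{ch}}, \qquad \overline N_{abs}=\frac{1}{Z}\int d^{3}x\, Tr\, \widehat\rho(\mbox{\boldmath{$x$}},t=0)\, e^{-\beta(\widehat H-\mu_{ch}\widehat N)}. \end{equation} The right-hand sides agree formally, but $\widehat N$ is a macroscopic operator, which is why the two averages need not coincide once the thermodynamic limit and quantum fluctuations are taken into account.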
In order to characterize the deviation of the quantity $\overline N_{app}$ from $\overline N_{abs}$, a {\em dark component} for each physical local observable in a relativistic QFT is introduced. For example, the dark component of the fermion number density operator is defined as $\overline {\Delta \rho} = (\overline N_{abs} - \overline N_{app})/\Omega$ with $\Omega$ the volume of the system. The questions to be assessed are whether or not $\overline {\Delta \rho} = 0$ and, when $\overline {\Delta \rho}\ne 0$, what the origin of the dark component is. Such a possibility is important to study because some of the questions posed for the vacuum state of a relativistic system governed by a certain QFT are different from those asked for a condensed matter system, in which the quantities under study, like the average particle number density, are finite, with their {\em absolute} values playing no direct role in physical processes, and in which the spacetime resolution (energy) of the observation is usually low. The vacuum state of a relativistic system, especially the trivial one, is characterized by its nothingness, namely, all physical observables in the trivial vacuum state are by definition zero. In the non-trivial vacua of the system, certain physical quantities develop finite values, which need to be evaluated correctly. Quantities like the fermion (baryon) number density and the associated energy density should in principle manifest themselves in gravitational processes at the macroscopic level. In addition, global quantities have no direct physical meaning in a classical relativistic system according to the principle of relativity. It is expected that apparent quantities like $\overline N_{app}$ are not sufficient in the study of the vacuum state of a relativistic system. Rather, one should go back to absolute quantities like $\overline N_{abs}$ defined above.
Therefore, for a better marriage between relativity and quantum mechanics, instead of a global chemical potential $\mu_{ch}$, a localized quantity called the {\em primary statistical gauge field} $\mu^\alpha(x)$ seems to be necessary, which leads to a local theory for the finite density problem. The functional derivative of the logarithm of the new partition functional with respect to $\mu^\alpha(x)$ at a certain spacetime location does give the absolute particle number density, since it is a finite number. The principle of {\em locality} has far reaching consequences in the development of modern physics. Implied in Maxwell's equations for electrodynamics, it motivated the birth of relativity, in which it is raised to a principle that governs all physical laws in classical physics. Locality is implemented in most of the quantum field theories regarded as fundamental, like quantum electrodynamics and the quantized non-abelian Yang--Mills gauge theories of the standard model. Locality is not, however, fully implemented in quantum field theories, including the fundamental ones, for non-vanishing matter and energy density. Such theories contain inconsistencies, at least at the conceptual level, that have to be removed. Localizing a symmetry generates a gauge one. For the $U(1)$ invariance corresponding to the conservation of fermion number, a new local symmetry called the {\em statistical gauge invariance}, with the gauge field $\mu^\alpha(x)$, is induced after its localization. The introduction of a primary statistical gauge field is expected to produce a series of new problems and opportunities. One of the goals of this paper is to solve these problems and to explore such opportunities, so as to develop a consistent framework beyond the quasiparticle picture with which the problems related to the vacuum state of a relativistic fermion system can be systematically tackled. 
One of the most important problems in understanding a system governed by a QFT that contains interaction is to determine the phase structure of its vacuum. The vacuum is the lowest energy state of the system, which for interacting systems can be different from the trivial one. The non-trivial vacua of an interacting system covered by this study are the ones that contain macroscopic condensation of particles in various forms. Such a vacuum has zero overlap with the corresponding trivial one when the thermodynamic limit is taken. It can not be reached by a perturbative computation, which is local in nature and can only change a finite number of particles. A large set of non-trivial vacua of interest are expected to be describable in terms of a set of parameters that characterize the macroscopic condensation of bare particles. Some of these parameters are called order parameters, since they are indicators of a spontaneous breaking of certain global symmetries of the system. They are accompanied by massless Goldstone bosons that generate long range orders which stabilize the symmetry breaking states. Others, together with the order parameters, specify the macroscopic condensation of bare particles in the non-trivial vacuum in a more detailed way. For example, the vacuum of the light quark system (up and down quarks) is known to be condensed with quark--antiquark pairs. This phenomenon, which induces the spontaneous breaking of a chiral symmetry, is shown to happen in lattice QCD simulations \cite{LatticQCD} and is supported by experimental facts due to the success of the partial conservation of axial vector current (PCAC) relationship and the resulting current algebra \cite{PCACsuc}. For a system with a large mass gap (compared to the typical excitation mass scale of the system), the condensation of bare particles is unlikely to occur in its vacuum state. 
So, for the purpose of this paper, I consider light (compared to the typical excitation mass scale of the system) fermion systems. They can be approximately represented by a massless fermion system. An interaction amongst massless fermions and antifermions of the right sign and magnitude is expected to generate a non-vanishing number of fermion--antifermion pairs from the bare vacuum. Under certain conditions, the number of such pairs can become macroscopic (or proportional to the volume of the system) in the thermodynamic limit. In such a case, a phase transition is expected. Such a phase transition was shown to happen in the Nambu--Jona-Lasinio (NJL) model in the quasiparticle approximation (the meaning of which is going to be specified in the following sections). It is also shown in Refs. \cite{Ying1,Ying11} that fermion pairs (and antifermion pairs) can condense to lead to a different phase (the $\beta$ phase) that belongs to the same chiral symmetry breaking chain, namely $SU(2)_L\times SU(2)_R\to SU(2)_V$. In general, the condensation of fermion pairs in the vacuum of an interacting fermionic system belongs to one of three categories: 1) condensation of correlated {\em fermion and antifermion} pairs; 2) condensation of correlated {\em fermion pairs} (and possibly some correlated {\em antifermion pairs}); 3) condensation of correlated {\em antifermion pairs} (and possibly some correlated {\em fermion pairs}). The spin, flavor and color combination of the condensed pairs determines the nature of the symmetry breaking channel of the non-trivial vacuum on a finer basis. 
Since a macroscopic number of fermions and antifermions are pumped out of the bare vacuum in the non-trivial vacuum, it is natural to ask what the effects on its energy density are when one considers not only the contributions from the long distance and/or long time interval correlated quasiparticle excitations, but also, at least partially, the contributions from some of the transient and short distance quantum fluctuations within the system. To investigate the latter effects within a consistent QFT, certain new theoretical concepts and tools for describing and interpreting the related physics, on which this work puts its emphasis, prove necessary. The paper is divided into three major parts. The first part consists of sections \ref{sec:Intro} and \ref{sec:Summ}, which give an introduction and a summary. In the second part, which consists of sections \ref{sec:General}, \ref{sec:Instab} and \ref{sec:FDth}, a consistent general approach to the relativistic fermionic system at both zero and finite density is developed. The third part includes sections \ref{sec:Models} and \ref{sec:Vacua}, in which two 4--fermion interaction models for the strong interaction are introduced and studied using the method developed in part two; some novel properties of these models are revealed using the new tools. The more detailed arrangement of the paper is given in the following. In section \ref{sec:General}, the general framework used to handle the fermionic system is discussed. An 8--component ``real'' representation for the fermion fields is adopted. The distinction between the Minkowski and Euclidean spacetime formulations of the problem is emphasized. It is pointed out that in the Euclidean spacetime formulation of the problem, additional contributions due to certain quantum penetration of field configurations into classically forbidden regions, which the quasiparticle approximation in Minkowski spacetime cannot access, can be included in the effective action. 
I also motivate the need for a distinction between local and global observables in a relativistic QFT. The existence of the dark component is demonstrated. Section \ref{sec:Instab} is devoted to a tentative local approach to the finite density problem for a Lagrangian density that conserves the fermion number. Such a formalism is used to study the question of whether or not a state with non-vanishing fermion number density can have a lower energy (density) when the vacuum of the system is in a phase different from the trivial one. Spontaneous CP violating stationary points in the Euclidean spacetime are shown to exist in a phase, called the $\alpha$ phase of a massless fermion system, with a non-vanishing $\overline\psi\psi$ vacuum expectation value. It is argued that such a result is at least physically not acceptable. Based on these findings, a consistent theory is developed in section \ref{sec:FDth} for a relativistic fermion system in which a phase transition with particle condensation has occurred. A statistical blocking parameter $\epsilon$ is introduced. The question of the statistical gauge invariance of the system due to the original global $U(1)$ symmetry related to fermion number conservation is addressed. To demonstrate the relevance of the theory, two models for the strong interaction are introduced in section \ref{sec:Models}. Their phase structures are then studied. The vacuum of both of them has three different phases. One is the trivial (bare) vacuum, which is called the $O$ phase; the second is the above mentioned $\alpha$ phase with fermion--antifermion pair condensation; the third kind of phase, called the $\beta$ phase and the $\omega$ phase respectively, spontaneously breaks the global $U(1)$ symmetry related to the fermion number because of a condensation of fermion pairs and antifermion pairs. These phases are further analyzed in section \ref{sec:Vacua} by using the formalism developed in section \ref{sec:FDth}. 
A spontaneous CP violation is found to be present in the $\beta$ and $\omega$ phases. Also, the spontaneous creation of matter, which is relevant to cosmology, is shown to appear naturally in the $\beta$ and $\omega$ phases. Finally, the main results are summarized and discussed in section \ref{sec:Summ}, which also contains an outlook. \section{General Discussions} \label{sec:General} \subsection{Fermion representation} The fermion field is represented by an 8--component ``real'' spinor $\Psi$. It is essential for a consistent formulation of a relativistic finite density field theory in the functional (or path integration) approach developed in this work \cite{TFTQFT} (see also Appendix \ref{app:FF}) and for other aspects of QFTs \cite{Larsen}. In this representation, $\Psi$ satisfies the {\em reality} condition \begin{eqnarray} \overline\Psi(x) &=&\Psi^T(-x) \Omega_0\label{Psi}, \label{PsiBar} \end{eqnarray} where the $\Omega_0$ matrix is \begin{eqnarray} \Omega_0&=& O_1 C = \left (\begin{array}{cc} 0 & - C^{-1} \\ C & 0\end{array}\right ) \label{G} \end{eqnarray} with $C$ the charge conjugation operator. The matrix $O_1$ is one of the three Pauli matrices $O_{1,2,3}$ acting on the upper and lower 4 components of $\Psi$. For the case where the number of flavors $n_f$ of the fermions is less than three, the symmetry transformation of the 8--component $\Psi$ can be made the same as that of the 4--component representation $\psi$ by imposing a slightly different ``reality'' condition on $\Psi$, given by Eq. \ref{PsiBar} with $\Omega_0$ replaced by \begin{eqnarray} \Omega&=& \left (\begin{array}{cc} 0 & - C^{- 1} \rho^{-1} \\ C \rho & 0\end{array}\right ). \label{G1} \end{eqnarray} Here $\rho=1$ if $n_f=1$ and $\rho= i\tau_2$ if $n_f=2$. More detailed properties of this representation for the case of $n_f=2$ are discussed in Refs. \cite{Ying1,Ying11,TFTQFT,SSSconf}. They will not be repeated here. If $n_f\ge 3$, the reality condition Eq. \ref{G1} is no longer valid. 
In such a case, the general ``reality'' condition is given by the original Eq. \ref{PsiBar}, and the representation of $\Psi$ under the flavor $SU(n_f)$ transformation generated by \begin{eqnarray} T_a &=& \left \{ \begin{array}{cc} t_a O_3 & \hspace{1.5cm}\mbox{If $t_a$ is symmetric}\\ t_a & \hspace{1.8cm}\mbox{If $t_a$ is antisymmetric} \end{array} \right . \end{eqnarray} has to be adopted. Here $a=1,2,\ldots,n_f^2-1$ and $t_a$ is the generator for the symmetry transformation in the 4--component representation of $\psi$. The sets of matrices $T_a$ and $t_a$ ($a=1,2,\ldots,n_f^2-1$) belong to equivalent adjoint representations of the flavor $SU(n_f)$ transformation. It should be emphasized that the intention of introducing an 8--component representation for the fermions is different from the one related to the doubling of degrees of freedom in the closed time path approach \cite{CTPT,CTPT1} to non-equilibrium (also equilibrium) problems or in the thermal field dynamics \cite{TFD} approach to equilibrium problems, in two aspects: 1) this work is devoted to the study of zero temperature physics where, in the language of the closed time path approach, all the dynamical fields considered in this work lie on a single time axis running from negative infinity to positive infinity rather than on two time axes that form a closed loop; 2) there is no doubling of degrees of freedom for the fermions here, since the constraint Eq. \ref{PsiBar} is systematically implemented in the formalism. In case the present formalism is to be extended to finite temperature \cite{TFTQFT} or to non-equilibrium situations, a doubling of the components of the 8--component ``real'' representation has to be made in addition. 
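The matrix $\Omega_0 = O_1 C$ above is built from the charge conjugation operator $C$. As a numerical sanity check (an illustration, not part of the paper's formalism), the sketch below works in the standard Dirac representation, which the paper does not fix, with the common choice $C = i\gamma^2\gamma^0$, and verifies the defining property $C^{-1}\gamma^\mu C = -(\gamma^\mu)^T$ together with $C^2=-1$.

```python
import numpy as np

# Pauli matrices and 4x4 Dirac-representation gamma matrices.
I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g0 = block(I2, Z2, Z2, -I2)
g1 = block(Z2, s1, -s1, Z2)
g2 = block(Z2, s2, -s2, Z2)
g3 = block(Z2, s3, -s3, Z2)
gammas = [g0, g1, g2, g3]

# A common choice for the charge conjugation matrix in this representation.
C = 1j * g2 @ g0
Cinv = np.linalg.inv(C)

def conjugation_defect():
    """Max deviation of C^{-1} gamma^mu C + (gamma^mu)^T from zero over mu=0..3."""
    return max(np.abs(Cinv @ g @ C + g.T).max() for g in gammas)
```

The defect comes out at machine precision, confirming that this $C$ indeed intertwines $\gamma^\mu$ with $-(\gamma^\mu)^T$ as required for the reality condition.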
\subsection{Minkowski spacetime formulation} The generating functional can be written as \begin{eqnarray} e^{W[J,\overline\eta,\eta]} &=& \int \prod_i D[f_i] D[\Psi] e^{i\int d^4x \left ({1\over 2} \overline\Psi i S_F^{-1}[f]\Psi + {\cal L}_B[f] + \overline\Psi\eta + \overline\eta\Psi + \sum_k J_k f_k\right)}, \label{General_W} \end{eqnarray} where $J=\{J_1,J_2,\ldots,J_n\}$ are a collection of external fields coupled to the corresponding boson fields $f=\{f_1,f_2,\ldots,f_n\}$, ${\cal L}_B[f]$ is the Lagrangian density for the boson fields $f$, and $\eta$, $\overline\eta$ are external Grassmann fields coupled to the fermion fields $\overline\Psi$, $\Psi$. Here $f_i$ can be either real or complex. $W[J,\overline\eta,\eta]$ generates the Green functions of the fermion or the boson fields. The possible gauge fixing conditions, which can be implemented by multiplying $\prod_i D[f_i]$ by a set of $\delta$ functions or by introducing ghost fields \cite{GaugeTheory}, are unimportant to this work and are suppressed in the sequel. The bosonic part of the Lagrangian density will not be specified in this investigation. This allows the results of this work to be useful in a wide class of problems. For example, in the case of QCD, the boson fields are the 8 gluon fields ${\cal B}_a^\mu(x)$ $(a=1,2,\ldots,8)$, with the full Lagrangian density in the 8--component representation for quarks provided in Appendix \ref{app:QCD}. In the study of vacuum properties, the fermion degrees of freedom can be eliminated first: \begin{eqnarray} e^{W[J,\overline\eta,\eta]} &=& \int \prod_i D[f_i] e^{{1\over 2}LnDet \gamma_0 i S_F^{-1}[f] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x\left ({\cal L}_B[f] + \sum_k J_k f_k\right )}. \label{Eff_W1} \end{eqnarray} Here ``$Det$'' denotes a functional determinant. The proper vertex generating functional for the boson fields is then \begin{eqnarray} \Gamma[f] = W[J,0,0]-i\sum_k J_k f_k. 
\label{Vertex} \end{eqnarray} The stationary configuration $f$ determined by the equation \begin{eqnarray} {\delta \Gamma[f]\over \delta f_i} &=& 0\hspace{0.7cm}(i=1,2,\ldots,n) \label{EqofMotion} \end{eqnarray} with $f_i$ ($i=1,2,\ldots,n$) spacetime independent determines the phase structure of the vacuum. In general, $\Gamma[f]$ is difficult to compute directly. It is useful to define an effective action $S_{eff}$ for the boson fields as \begin{eqnarray} S_{eff}[f] &=& -i{1\over 2}LnDet \gamma_0 iS_F^{-1}[f] + i{1\over 2}LnDet \gamma_0 iS_F^{-1}[0] + \int d^4 x {\cal L}_B[f] \nonumber\\ &=& - i{1\over 2} Sp Ln S_F^{-1}[f] + i{1\over 2} Sp Ln S_F^{-1}[0] + \int d^4 x {\cal L}_B[f], \label{Seff} \end{eqnarray} where $Sp$ denotes the functional trace and a constant (infinite) term independent of the boson fields is subtracted. Eq. \ref{Eff_W1} becomes \begin{eqnarray} e^{W[J,\overline\eta,\eta]} &=& \int \prod_i D[f_i] e^{i S_{eff}[f] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \sum_k J_k f_k}. \label{Eff_W2} \end{eqnarray} Starting from $S_{eff}[f]$, either systematic improvement beyond the Gaussian approximation or numerical simulations like lattice computations can be made. A more detailed discussion of the local quantum fluctuations around the mean field is given in Appendix \ref{app:thedark}. A formal relation between $\Gamma[f]$ and $S_{eff}[f]$ is developed in Ref. \cite{Jackiw} and discussed in Appendix \ref{app:CJT}. When the fluctuation in $f$ is treated only at the one-loop level, the vertex functional $\Gamma[f]$ is (see Appendix \ref{app:CJT}) \begin{eqnarray} \Gamma[f] &=& iS_{eff}[f] - {1\over 2} Sp Ln D G^{-1}[f] ,\label{1-loop-gamma} \end{eqnarray} where $D$ is the bare propagator for $f$ and $\delta^2 S_{eff}/\delta f\delta f$ is symbolically denoted as $G^{-1}[f]$. 
Under such an approximation, the solution to the equation \begin{eqnarray} {\delta S_{eff}[f]\over \delta f_j} + {i\over 2} Sp G[f] {\delta G^{-1}[f] \over \delta f_j} &=& 0 \hspace{0.7cm}(j=1,2,\ldots,n) \label{Seff-stable} \end{eqnarray} determines the vacuum phase structure of the system. It will not be further discussed in this paper, since the results of this paper depend only on some of the bosonic fields $\{ f_i \}$, as a solution to Eq. \ref{EqofMotion}, being different from zero. The effective action $S_{eff}[f]$ can be expressed in terms of the spectrum of the operator $\gamma_0 iS^{-1}_F[f]$, which is Hermitian. The eigenequation of interest is \begin{eqnarray} \gamma^0 iS^{-1}_F[f] \Psi_\lambda = \lambda[f] \Psi_\lambda \label{EigenEq} \end{eqnarray} with $\Psi_\lambda$ the eigenvector. In the time independent case, $S_{eff}[f]$ in terms of $\lambda$ is \begin{eqnarray} S_{eff}[f] &=& -i{T\over 2} \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f]\over \lambda_{p^0,\xi}[0]} + \int d^4 x {\cal L}_B[f] ,\label{Seff2} \end{eqnarray} where $T$ is the temporal dimension of the system ($T\to\infty$), $p^0$ represents the energy of the eigenvector $\Psi_\lambda$ and $\xi$ is a collection of other quantum numbers that completely determine a single eigenvector $\Psi_\lambda$. The order in which the $p^0$ integration and the $\xi$ summation are carried out is important in general, since both are divergent before the subtraction. The symbol $$\displaystyle \sumint {dp^0\over 2\pi}(\ldots)$$ is understood to mean that neither the sum over $\xi$ nor the $p^0$ integration is done first; rather, they are carried out in a covariant way. The order between them depends on the situation, as discussed in the following and in Appendix \ref{app:FF}. Due to the logarithmic function, the integrand in the complex $p^0$ plane is multivalued; the physical contour ${\cal C}$ with the Feynman--Mathews--Salam causal structure \cite{TFTQFT} for the $p^0$ integration is shown in Fig. \ref{Fig:ConCon1}. 
The conventional computation of the energy density of the vacuum with a non-covariant cutoff, which is called the {\em quasiparticle approximation}, can be represented by a distortion of the contour to curve I of Fig. \ref{Fig:ConCl2}. It corresponds to a summation of the energies of individual stationary quasiparticle orbits in the negative energy Dirac sea with a fixed (time-independent) background configuration for $f$. \subsection{Euclidean spacetime formulation} The path integration representation of the generating functional $W[J,\overline\eta,\eta]$ on the right hand side of Eq. \ref{General_W} contains ambiguities associated with the non-specification of the initial and final field configurations. For the transition amplitude between a given pair of initial and final field configurations, the contributing intermediate states can in principle be different from the vacuum state of interest here. The lowest energy configuration corresponding to the vacuum is automatically projected out in a Euclidean spacetime computation with a sufficiently large Euclidean time for a large set of proper initial and final field configurations. In addition, the Minkowski effective action Eq. \ref{Seff} has extrema (or is stable) only at configurations of the boson fields that are of a classical nature. An important class of configurations corresponding to pure quantum mechanical effects, namely tunneling effects through a potential barrier or penetration into classically forbidden field configurations, is expected to be missing in a steepest--descent or Gaussian approximation. The Euclidean action, obtained by replacing the time variable $t$ by $-it$ (with $i^2=-1$), can be stable at either the time independent configurations, which are the same ones as in the Minkowski approach, or the configurations that correspond to tunneling or penetration that are quantum mechanical in origin. 
It is important to realize that the latter stable configurations are always absent from the set of stable configurations of the Minkowski effective action, despite the fact that they constitute an important contribution to the energy density of the system. The Euclidean spacetime formulation has been used to find important stable field configurations called instantons in non-Abelian gauge theories. They correspond to stable finite action field configurations due to the tunneling between gauge field configurations of different winding numbers. Other applications of finding the tunneling effects by using the Euclidean spacetime formulation, and their connection to the WKB method in quantum mechanics, can be found in e.g. Ref. \cite{Eucl-App}. They will not be elaborated here. The effects of interest in this study are the ones that survive the thermodynamic limit and are thus non-finite action effects, despite the fact that they can be decomposed into a collection of random finite action ones (see Appendix \ref{app:thedark}). The generating functional for the Green functions in the Euclidean spacetime formulation can be expressed as \begin{eqnarray} e^{W[J,\overline\eta,\eta]} &=& \int \prod_i D[f_i] e^{ S_{eff}^E[f] + {1\over 2} \overline\eta S^E_F[f] \eta + \int d^4x \sum_k J_k f_k} \label{Eff_W3} \end{eqnarray} which also serves as a definition of the Euclidean effective action $S_{eff}^E[f]$. The Euclidean propagator $S^E_F[f]$ is found by using the rules discussed in the following. For a fermion system, the simple replacement $t\to -it$ is not sufficient to obtain a consistent Euclidean spacetime formalism from the corresponding Minkowski one, due to the fact that a fermion is not a scalar particle. A more sophisticated set of transformations is needed. Formally, a Euclidean effective action for fermions can be obtained by making a continuous change of the metric \cite{Eucl-Ferm}. 
The result of the change for a Dirac particle can be summarized by \begin{eqnarray} g^E_{\mu\nu} = \{-,-,-,-\},\hspace{0.3cm} \gamma_0^E = i\gamma^5,&\hspace{0.2cm} & \gamma_5^E = -i\gamma_0,\hspace{0.3cm}\gamma^E_i = \gamma_i \label{Eucl-Rules} \end{eqnarray} and the effective action Eq. \ref{Seff} in the Euclidean spacetime formulation becomes \begin{eqnarray} S^E_{eff}[f] &=& {1\over 2} \left \{SpLn \left [i\gamma_5 (S^E_F[f])^{-1}\right] -SpLn\left[ i\gamma_5 (S^E_F[0])^{-1}\right] \right \} + \int d^4x_E {\cal L}_B[f^E],\label{E-Seff} \end{eqnarray} where the Euclidean propagator $S^E_F[f]$ is also obtained by using the substitution rules listed in Eqs. \ref{Eucl-Rules}. In terms of the eigenvalues $\lambda[f]$ of the Hermitian operator $i\gamma_5 (S^E_F[f])^{-1}$, which satisfy \begin{eqnarray} i\gamma_5(S^E_F[f])^{-1} \Psi_{\lambda} &=& \lambda[f] \Psi_{\lambda} ,\label{Eucl-EigEq} \end{eqnarray} the Euclidean correspondence of Eq. \ref{Seff2} is then \begin{eqnarray} S_{eff}[f] = -i{T\over 2} \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f]\over \lambda_{p^0,\xi}[0]} + \int d^4 x {\cal L}_B[f] ,\label{E-Seff2} \end{eqnarray} where the superscript ``$E$'' on top of a quantity in Euclidean spacetime is suppressed there and in the following whenever no confusion can occur. It can be demonstrated that Eq. \ref{E-Seff2} can be obtained from Eq. \ref{Seff2} by a distortion of the $p^0$ integration contour to curve II shown in Fig. \ref{Fig:ConCl2}. \subsection{Local and global observables} The physical observables in a classical theory consistent with relativity are local ones like the charge density, energy density, etc. Unlike in the non-relativistic world, global observables like the total charge and the energy of the system are not directly measurable. In a given frame, however, the global observables can be defined as a spatial integration of the local observables on an equal time hypersurface in the Minkowski spacetime. 
The meaning of the global observables can only be defined operationally. The particular value of a global observable has to be determined by the following procedure: first, a synchronization of the clocks of a group of observers at each spacetime point should be carried out; then each observer measures the corresponding density at his/her location in spacetime at the same time; finally, each observer's findings are summed (integrated) to obtain the value of the global observable. The global observables in quantum theories, including relativistic QFTs, are regarded as physical observables. Let us consider a certain density observable $\widehat \rho(x)$ in a QFT. The corresponding global observable $\widehat Q$ is defined simply as \begin{eqnarray} \widehat Q &=& \int_{\Sigma} d^3x \widehat \rho(x) \end{eqnarray} with $\Sigma$ the equal-time hypersurface in a certain reference frame. Then the matrix elements of $\widehat\rho(x)$ and $\widehat Q$, namely $\bra{f}\widehat\rho(x)\ket{i}$ and $\bra{f}\widehat Q\ket{i}$, define the corresponding observables. Since both the local observable $\widehat\rho(x)$ and the classically undefined global one $\widehat Q$ are defined in a relativistic QFT, one can naturally ask the following question: is there any difference between $O=\bra{\Omega}\widehat Q\ket{\Omega}$ and $O'= \int d^3 x \bra{\Omega}\widehat\rho(x)\ket{\Omega}$ measured in a state $\Omega$? It can be shown that this is a relevant question for a relativistic QFT. A measurement of the total charge or charge density of a system normally consists of supplying an external global field or local field coupled to the corresponding observable in a known way and then measuring the {\em response of the system}, like the force that the external field exerts on the system or the amount of increase of some properly defined ``potential'' of the system, which allows the observer to deduce the total charge or charge density measured. 
Let us consider the measurements, in the above sense, of the global and the local observables in the vacuum state of the system, expressed symbolically as \begin{eqnarray} O_0 &=& \lim_{J\to 0} \bra{0_J}\widehat Q\ket{0_J}= \lim_{J\to 0} \bra{0_J}\int_{\Sigma} d^3x \widehat \rho(x) \ket{0_J},\label{O0}\\ O'_0 &=& \int_{\Sigma} d^3x \lim_{\delta j(x)\to 0} \bra{0_{\delta j(x)}} \widehat \rho(x) \ket{0_{\delta j(x)}} ,\label{Op0} \end{eqnarray} where $J$ is a global external field coupled to $\widehat Q$, $\delta j(x)$ is a local external field taking a non-vanishing value only at $x$ that couples to $\widehat \rho(x)$, and $\ket{0_J}$ and $\ket{0_{\delta j(x)}}$ are the corresponding vacuum states. The first one, $O_0$, corresponds to a measurement of $\widehat Q$ directly; the second one corresponds to the integration of an (infinite) set of measurements of $\rho(x)$ on a space-like hypersurface. These two measurements can in principle be different in a system governed by a QFT because of the random local quantum fluctuations of the fields that are not suppressed in the thermodynamic limit. Such a potential difference guarantees the possible existence of the dark component in a QFT. The fact that localized random quantum fluctuations are not suppressed in an interacting QFT, especially in some of the non-trivial phases of the system, is discussed in more detail in Appendix \ref{app:thedark}, where it is also shown that the conventionally used quasiparticle picture in many-body theory and QFT is not sufficient to saturate the local observables due to the existence of the dark component. Such a deviation from the conventional physical picture based upon quasiparticles gets less and less significant as the spacetime resolution of our observation gets lower and lower compared to the typical size of the localized random fluctuations $f_a$ of the system. 
In such a case these fluctuations are more and more suppressed and the contribution of the dark component becomes smaller and smaller, resulting in 1) the emergence of a quasiparticle dominated picture for the system and 2) the validity of the results obtained based upon a global chemical potential $\mu_{ch}$ in the grand canonic ensemble in low resolution (energy) observations. \section{A Naive Local Quantum Finite Density Field Theory and Its Problems} \label{sec:Instab} \subsection{A tentative formalism for the local finite density fermionic field theory} In order to develop a theoretical framework consistent with the requirement of locality, a new quantity called the primary statistical gauge field $\mu^\alpha(x)$ is introduced in the following. The motivation for its introduction is discussed in the introduction and in the above sections. The other reason that it is treated as a local variable is the fact that the total fermion number has limited meaning in a relativistic system that was generated a finite time ago in the past. The total number of fermions in such a system is not an observable, since there are regions outside the horizon that are classically non-detectable even in principle due to the constraint of causality. \subsubsection{The asymptotic grand canonic ensemble} The generating functional corresponding to Eq. \ref{General_W} for a fermionic system with finite density can be formally written as \begin{eqnarray} e^{W[J,\overline\eta,\eta,\mu]} &=& \int \prod_i D[f_i] D[\Psi] e^{i\int d^4x \left ({1\over 2} \overline\Psi i S_F^{-1}[f]\Psi + \mu_\alpha j^\alpha + {\cal L}_B[f] + \ldots \right )} ,\label{General_W_FD} \end{eqnarray} where ``$\ldots$'' denotes the source terms and the fermion number current $j^\mu(x)$ is \begin{eqnarray} j^\mu(x) &=& {1\over 2}\overline\Psi(x) \gamma^\mu O_3 \Psi(x). 
\label{fermion-curr} \end{eqnarray} In the Minkowski spacetime, the quantity on the left hand side of the above equation is the transition amplitude of the system from properly weighted initial field configurations $\phi_i$ at $t=-\infty$ to final field configurations $\phi_f$ at $t=+\infty$, namely, \begin{eqnarray} e^{W[J,\overline\eta,\eta,\mu]} &=&\sum_{\{\phi_i,\phi_f\}} {\cal W}[\phi_f,\phi_i] \braket{\phi_f,t=+\infty}{\phi_i,t=-\infty}, \end{eqnarray} where ${\cal W}[\phi_f,\phi_i]$ is the weight functional discussed in the following. These initial and final fields are considered free fields. The interaction terms are adiabatically switched on at a certain large negative time $-T$ and switched off at a certain large positive time $T$. Such a technical manipulation does not affect the actual local physical observables, like the energy density, fermion number density, etc., at a time $t$ that is far from both $-T$ and $T$, due to locality. Since the particle content for free fields at $t=\pm\infty$ is clear, a straightforward statistical interpretation in terms of the number of particles in each single particle state of the system is allowed. It is natural to assume that the initial (final) states are in the grand canonic ensemble, with the weight ${\cal W}[\phi_f,\phi_i]$ for the summation determined by the factor $\lim_{\beta\to\infty} \exp[-\beta(E_0-\mu N)]$\footnote{We are interested in the zero temperature case here. The form of the weight functional for free fields at finite temperature can be found using the method given in, e.g., Refs. \cite{CTPT,CTPT1}, which involves two time axes: one runs from $-\infty$ to $+\infty$ on the real $t$ axis; the other lies below it in the complex $t$ plane. In the zero temperature limit, only the contribution from the real $t$ axis is nonzero, which leads to Eq. \ref{General_W_FD}. }, where $E_0$ is the total energy and $N$ is the total fermion number of the (free) system. 
$\mu$ agrees with the time component of the spacetime independent part of the primary statistical gauge field $\mu^\alpha(x)$. Such an ensemble is called the {\em asymptotic grand canonic ensemble} here. The asymptotic grand canonic ensemble differs from the grand canonic ensemble in that it allows for a local approach to the relativistic finite density problem and for the existence of the dark component discussed in the above sections and in Appendix \ref{app:thedark}. Its predictions, however, approach those of the grand canonic ensemble in non-relativistic situations in which the spacetime resolution or energy in a measurement is low. This point is demonstrated in Appendix \ref{app:thedark} at the mean field level. Two points must be noted. The first one is that although the time interval $2T$ between the (adiabatic) switching on and off of the interaction terms is taken to infinity in the final result, the thermodynamic limit, in which the spacetime box that contains the system approaches infinity, is taken first. Therefore $T$ is still infinitesimal compared to the temporal size of the system. The second one is that, as a result, for conserved quantities like the total fermion number, the extreme value picked out \cite{Huang} in the asymptotic grand canonic ensemble with a fixed value of $\mu$ is unmodified. The latter point is elaborated in the following. \subsubsection{The fermion number density in the asymptotic grand canonic ensemble} Before continuing the development of the formalism, it is important to reveal a property of the asymptotic grand canonic ensemble related to the $U(1)$ symmetry corresponding to the fermion number conservation. The conservation of $j^\alpha(x)$ due to the $U(1)$ symmetry can be derived from Noether's theorem at the classical level, namely, \begin{eqnarray} \partial_\mu j^\mu(x) &=& 0. \label{jB-cons} \end{eqnarray} Those interaction Lagrangian densities for which Eq.
\ref{jB-cons} remains true at the quantum level are considered, despite the fact that this symmetry is not explicitly related to a gauge symmetry for which a superselection sector in the Hilbert space of the system exists. The reason to consider such a class of models is that for a quark system, the fermion number is identical to the baryon number, which is conserved to high precision given the lack of any convincing observational evidence for proton decay at present. For a uniform system, Eq. \ref{jB-cons} implies that \begin{eqnarray} \left . {\partial \overline\rho\over \partial g_i}\right |_{\mu^\alpha} & = & 0 \hspace{1cm}(i=1,2,\ldots) \label{p-rho-pgi} \end{eqnarray} in the asymptotic grand canonic ensemble, with $\{g_i\}$ representing a set of interaction coupling constants and the derivative taken keeping $\mu^\alpha$ unchanged. For a uniform system, the mean local fermion density is a function both of the coupling constants $\{ g_i \}$ and of $\mu^\alpha$, which is now spacetime independent. Eq. \ref{p-rho-pgi} implies that {\em the mean local fermion density of the system $\overline\rho$ is only a function of $\mu^\alpha$; it is independent of the interaction coupling constants $\{g_i\}$.} This property allows us to find the relationship between $\overline\rho$ and $\mu^\alpha$ easily by considering a non-interacting system. It is discussed in Appendix \ref{app:FF}. The result for a massless fermion with $n_f$ flavors and $n_c$ colors is \begin{eqnarray} \overline\rho &=& {n_f n_c\over 3 \pi^2} \mu^3 ,\label{bar-rho-mu} \end{eqnarray} where $\mu \equiv \sqrt{\mu^2}$. The simplicity of Eq. \ref{bar-rho-mu} in the asymptotic grand canonic ensemble is due to the fact that the interaction Lagrangian density conserves fermion number.
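Eq. \ref{bar-rho-mu} can be cross-checked against direct mode counting for free massless fermions: filling all momenta up to $|{\bf p}|=\mu$ with the full spin, flavor and color degeneracy reproduces the closed form. A small numerical sketch (Python; the value of $\mu$ and the grid size are illustrative):

```python
import math

def rho_closed(mu, nf=2, nc=3):
    # Eq. (bar-rho-mu): mean fermion density for nf flavors and nc colors
    return nf * nc * mu**3 / (3 * math.pi**2)

def rho_mode_counting(mu, nf=2, nc=3, n=10_000):
    """Fill the Fermi sphere |p| <= mu directly: density =
    degeneracy * (sphere volume) / (2 pi)^3, via a midpoint rule."""
    dg = 2 * nf * nc                    # spin x flavor x color degeneracy
    dp = mu / n
    shell_sum = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        shell_sum += 4 * math.pi * p * p * dp   # d^3p shell volume
    return dg * shell_sum / (2 * math.pi)**3

mu = 0.3   # illustrative value, arbitrary units
```

The two evaluations agree to the accuracy of the quadrature, consistent with the statement that the density is fixed by $\mu^\alpha$ alone for the class of models considered.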
It implies that for an interacting massless fermion system, the local quantity $\overline\rho$ is non-zero as long as $\mu$ is non-zero, even when a phase transition in its vacuum state that generates a finite gap for the lowest excitation of the system has happened. Such a qualitative behavior is required following the discussion given in Appendix \ref{app:thedark}. Eq. \ref{bar-rho-mu} is however an exact relation in the asymptotic grand canonic ensemble. After an exploration of the implications of the $U(1)$ symmetry of the class of models under consideration in the asymptotic grand canonic ensemble, we are in a position to further develop the formalism for the investigation of the density fluctuations of the vacuum after a phase transition. Since Eq. \ref{General_W_FD} can be obtained from Eq. \ref{General_W} by the replacement \begin{eqnarray} i\rlap\slash\partial&\to& i\rlap\slash\partial+\rlap\slash\mu O_3 ,\label{part-repl} \end{eqnarray} the effective action $S_{eff}$ of the system is changed to \begin{eqnarray} S_{eff}[f,\mu] &=& -i{T\over 2} \left (\sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f;\mu]\over \lambda_{p^0,\xi}[0,\mu]} + \sum_{\xi} \int_{\cal C} {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[0;\mu]\over \lambda_{p^0,\xi}[0,0]} \right ) +\int d^4 x {\cal L}_B[f] \label{Seff2-FD} \end{eqnarray} with the eigenvalues $\lambda$ obtained from the following equation \begin{eqnarray} \gamma^0 iS^{-1}_F[f;\mu] \Psi_\lambda = \lambda[f;\mu] \Psi_\lambda ,\label{EigenEq2} \end{eqnarray} where $S_F^{-1}[f;\mu]$ is derived from the corresponding one in Eq. \ref{EigenEq} by making the substitution Eq. \ref{part-repl}. Since the question of interest in this study is related to a comparison of the energies of states with different fermion densities when the interaction is present, I consider the following effective action obtained from $S_{eff}$ by a Legendre transformation \begin{eqnarray} \widetilde S_{eff} &=& S_{eff} - \int d^4 x \mu_\alpha \overline j^\alpha.
\label{tS_eff} \end{eqnarray} It is a canonic functional of $\overline j^\alpha$ with $\mu^\alpha$ implicitly depending on it. For a uniform system like the vacuum, the second logarithmic term in Eq. \ref{Seff2-FD} is calculated in such a way as not to over-count the already known (and included) quantum fluctuations of the free fields. The result is given in Appendix \ref{app:FF} (Eq. \ref{W00-3}); it is \begin{eqnarray} -i{T\over 2} \sum_{\xi} \int_{\cal C} {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[0;\mu]\over \lambda_{p^0,\xi}[0,0]} &=& \int d^4 x \left (\mu \overline \rho - \overline \varepsilon\right ) \end{eqnarray} in the rest frame of the density. Together with Eq. \ref{bar-rho-mu}, Eq. \ref{tS_eff} for a uniform system takes the form \begin{eqnarray} \widetilde S_{eff} &=& -i{T\over 2} \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f;\mu]\over \lambda_{p^0,\xi}[0,\mu]} -{n_f n_c\over 4\pi^2} \int d^4 x\mu^4 +\int d^4 x {\cal L}_B[f]. \label{Seff2-FD2} \end{eqnarray} The effective potential to be minimized for a uniform system is then defined by \begin{eqnarray} V_{eff} &=& - \lim_{V_3T\to\infty} \widetilde S_{eff}/V_3T \nonumber \\ &=& \lim_{V_3\to\infty} i{1\over 2V_3} \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f;\mu]\over \lambda_{p^0,\xi}[0,\mu]} +{n_f n_c\over 4\pi^2} \mu^4 - {\cal L}_B[f] ,\label{Veff-FD} \end{eqnarray} where volume $V_3 = L^3$ with $L$ the spatial dimension of the system. \subsection{Euclidean Instability of the $\alpha$ Phase and Its CP Problem} The $\alpha$ phase for a massless fermion system can be shown to be realized by using the Nambu--Jona-Lasinio (NJL) model \cite{NJL}. In the 8--dimensional ``real'' representation for the fermion spinor, the NJL model with 2 flavors ($n_f=2$) and 3 colors ($n_c=3$) is given in Appendix \ref{app:NJL}. The NJL model has so far been studied extensively in the literature by assuming $\mu^\alpha=0$ in the vacuum. It is possible that such a plausible assumption is in fact incorrect. With Eq.
\ref{Veff-FD}, the question of whether or not the $\alpha$ phase is stable against fluctuations in $\mu^\alpha$ can be studied. In the $\alpha$ phase, Eq. \ref{Veff-FD} takes the following form \begin{eqnarray} V_{eff} &=& 6i\int_{\cal C} {d^4p\over (2\pi)^4} \ln\left [ \left ( 1- {\sigma^2\over p_+^2} \right ) \left ( 1- {\sigma^2\over p_-^2} \right ) \right ] + {1\over 4 G_0}\sigma^2 + {3\over 2\pi^2} \mu^4 ,\label{Veff-a-FD} \end{eqnarray} where $p_+^\mu = (p^0+\mu^0,\mbox{\boldmath $p$}+\mbox{\boldmath $\mu$})$ and $p_-^\mu= (p^0-\mu^0,\mbox{\boldmath $p$}-\mbox{\boldmath $\mu$})$. \subsubsection{Quasiparticle approximation and its problem} In the quasiparticle approximation, the $p^0$ integration contour is the one shown in Fig. \ref{Fig:ConCl2}. The localized quantum fluctuations of the order parameter $\sigma$ are not included when such a contour is followed. With the help of Appendix \ref{app:FF}, Eq. \ref{Veff-a-FD} can be shown to have the following form \begin{eqnarray} V_{eff} = {3\over 2\pi^2} \mu^4 + {1\over 4 G_0} \sigma^2 - {6\over \pi^2} \int_{k_F}^{{\Lambda_3}} d|\mbox{\boldmath $p$}| |\mbox{\boldmath $p$}|^2 \left ( \sqrt{|\mbox{\boldmath $p$}|^2+\sigma^2} - |\mbox{\boldmath $p$}| \right ) ,\label{Veff-FD-2} \end{eqnarray} where $k_F = \theta(\mu-\sigma)\sqrt{\mu^2-\sigma^2}$ with $\theta(x)$ the step function and ${\Lambda_3}$ the cutoff in the 3--momentum that defines the model. $V_{eff}$ is a monotonically increasing function of $\mu$, so its stationary point is at the position $\mu=0$, as expected. There is however an inconsistency related to the fermion number in the quasiparticle approximation to the $\alpha$ phase. Eq. \ref{bar-rho-mu} holds for any phase of a model Lagrangian density for which the fermion number is conserved.
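The monotonic growth of Eq. \ref{Veff-FD-2} with $\mu$, and hence the stationary point at $\mu=0$, can be checked numerically. A sketch (Python; the values of $\sigma$ and $G_0$ are illustrative, in units where $\Lambda_3=1$, and the tree term is written as $\sigma^2/4G_0$ of Eq. \ref{Veff-a-FD}, which is constant in $\mu$ in any case):

```python
import math

def veff_qp(mu, sigma, G0=1.0, lam3=1.0, n=4000):
    """Eq. (Veff-FD-2) evaluated with a midpoint rule, in units of the
    3-momentum cutoff lam3; the couplings are illustrative."""
    kf = math.sqrt(mu * mu - sigma * sigma) if mu > sigma else 0.0
    dp = (lam3 - kf) / n
    integ = 0.0
    for i in range(n):
        p = kf + (i + 0.5) * dp
        integ += p * p * (math.sqrt(p * p + sigma * sigma) - p)
    integ *= dp
    return (3 / (2 * math.pi**2)) * mu**4 + sigma**2 / (4 * G0) \
           - (6 / math.pi**2) * integ

sigma = 0.4
vals = [veff_qp(mu, sigma) for mu in (0.0, 0.2, 0.4, 0.6, 0.8)]
# vals grow monotonically with mu: the stationary point sits at mu = 0
```

Both the explicit $\mu^4$ term and the shrinking of the integration range $[k_F,\Lambda_3]$ push $V_{eff}$ up as $\mu$ grows, so the minimum stays at $\mu=0$ in this approximation.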
On the other hand, the average fermion number density for a uniform system in the quasiparticle approximation is \begin{eqnarray} \overline n = {2\over \pi^2}\theta(\mu-\sigma) (\mu^2-\sigma^2)^{3/2} ,\label{semi-c-n} \end{eqnarray} which differs from Eq. \ref{bar-rho-mu} and the results of Appendix \ref{app:thedark} even qualitatively. The reason, as is mentioned generally in section \ref{sec:General}, is that the quasiparticle approximation is related to a Gaussian approximation to the generating functional of the model in the Minkowski spacetime formulation. Such an approximation can only take into account the contributions from propagating modes or quasiparticles. The quasiparticles are not fundamental building blocks, namely, the bare particles, of the system; they are composite objects representing coherent excitations of an infinite number of pairs of bare particles and anti-particles. In the phase in which quasiparticles propagate, the excitations corresponding to the bare particles of the system are damped in spacetime when produced. They are not, however, absent in a time interval that is sufficiently short. They form an important component, which is called the dark component (see Appendix \ref{app:thedark}), that contributes to the fermion number density and other relevant local physical quantities. The contribution of such a dark component is expected to be taken into account, at least partially, if we formulate our semi-classical approximation in the Euclidean spacetime to sample the important contributions of purely quantum mechanical configurations. \subsubsection{Euclidean stationary points and the CP problem} In the Euclidean spacetime, Eq. \ref{Veff-a-FD} can be evaluated by using the $p^0$ integration contour shown in Fig. \ref{Fig:ConCl2} together with a {\em covariant} cutoff. This operation corresponds to the replacement rule given by Eq.
\ref{Eucl-Rules} plus $\mu^0\to -i\mu^0$, which is equivalent to the change $p^0\to ip^0$ and $\mu^0 \to \mu^0$ in the expression for the Minkowski effective action after the tracing over spin, isospin and color degrees of freedom is taken. The time component of $\mu^\alpha$ is kept unchanged since it is regarded as an external field and is related to the fermion density given by Eq. \ref{bar-rho-mu}. The result is \begin{eqnarray} V_{eff}/\Lambda^4 = -{3\over \pi^3} \int^1_0 dy y^3 \int_0^1 dx \sqrt{1-x^2} \ln{(y^2-\widetilde\mu^2+\widetilde\sigma^2)^2 + 4\widetilde\mu^2 y^2 x^2 \over (y^2-\widetilde \mu^2)^2 + 4\widetilde\mu^2 y^2 x^2} + {1\over 16\pi \alpha_0}\widetilde\sigma^2 + {3\over 2\pi^2} \widetilde \mu^4 ,\label{Eucl-Veff-FD} \end{eqnarray} where $\widetilde \sigma \equiv \sigma/\Lambda$, $\widetilde \mu \equiv \mu/\Lambda$ and $\alpha_0=G_0\Lambda^2/4\pi$, with $\Lambda$ the covariant cutoff in the Euclidean momentum space. In the $\alpha$ phase with non-zero $\sigma$, the dependence of $V_{eff}/\Lambda^4$ on $\mu$ is plotted in Fig. \ref{Fig:a-phase-mu}. The minimum of $V_{eff}$ is located not at $\mu=0$ but at some finite $0<|\mu_0|<\sigma$ for any finite $\sigma$. This is physically not acceptable. There are two problems related to such a vacuum with non-vanishing fermion density if the dominant phase of the physical strong interaction vacuum is considered to be the $\alpha$ phase. The first one is related to the strong CP problem. In such a background field as $\mu^\alpha$, the two CP conjugate eigenstates of a neutral particle have different energies, which implies that the occupation number for one eigenstate can be larger than that of the other in physical processes at sufficiently low energy. This in turn would cause CP violation phenomena much larger than what is observed in nature. Let us consider the only system, namely the $K^0/\overline K^0$ system, in which a CP violation was observed.
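The double integral of Eq. \ref{Eucl-Veff-FD} is straightforward to evaluate numerically; scanning $\widetilde\mu$ at fixed $\widetilde\sigma$ with such a routine is how the behavior plotted in Fig. \ref{Fig:a-phase-mu} can be probed. A rough sketch (Python; midpoint rules with illustrative grid sizes, and the $\mu$-independent tree term $\widetilde\sigma^2/(16\pi\alpha_0)$ dropped). In the free limit $\widetilde\sigma=0$ the logarithm vanishes and only the $3\widetilde\mu^4/(2\pi^2)$ term survives, which the routine reproduces:

```python
import math

def veff_euclid(sig, mu, ny=200, nx=200):
    """Fermion-loop part of Eq. (Eucl-Veff-FD) plus the mu^4 term, with
    sig and mu in units of the covariant cutoff; sigma^2 tree term omitted."""
    dy, dx = 1.0 / ny, 1.0 / nx
    total = 0.0
    for i in range(ny):
        y = (i + 0.5) * dy
        for j in range(nx):
            x = (j + 0.5) * dx
            num = (y * y - mu * mu + sig * sig)**2 + 4 * mu * mu * y * y * x * x
            den = (y * y - mu * mu)**2 + 4 * mu * mu * y * y * x * x
            total += y**3 * math.sqrt(1 - x * x) * math.log(num / den)
    total *= dy * dx
    return -(3 / math.pi**3) * total + (3 / (2 * math.pi**2)) * mu**4
```

For $\widetilde\sigma>0$ the loop term is negative and $\widetilde\mu$ dependent; a finer grid near $y\approx\widetilde\mu$ (where the integrand varies rapidly) is advisable when locating the minimum.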
A mechanism for the explanation of the CP violation in the $K^0/\overline K^0$ system was proposed long ago \cite{TDLee}. Its basic idea is to assume a $\mu^\alpha$ like potential throughout space that differentiates $K^0$ and $\overline K^0$. However, such a $\mu^0$ is estimated to have a value of order $10^{-8}$ eV. It is much smaller than the average value found here, which is of the order of $10^2$ MeV if $\Lambda$ is taken to be $1$ GeV. The second one is related to the dark matter problem. If there exists a uniform $\mu^0\sim 10^2$ MeV field in the universe at present, the dark component of the baryon number density would be of the order of the nuclear matter density, which is much larger than what is expected. Had such a scenario been correct, the universe would not have survived until today. Facing these two serious problems, a solution needs to be sought. There are at least two alternatives. The first one, which is likely to be correct, is that there is something missing in the computation procedure used so far. The second one is that our notion that the dominant phase of the universe is the $\alpha$ phase is wrong. The latter alternative is unlikely to be correct since there is a large body of empirical facts that support such a notion. With this consideration in mind, I turn next to an investigation of the first alternative. \subsection{A Fock space inspection of the vacuum structure of the $\alpha$ phase and the blocking effects} It is known that in the $\alpha$ phase of the NJL model treated in the mean field approximation, the vacuum can be related to the bare one by a unitary transformation before the thermodynamic limit $L^3\to\infty$ is taken.
It can be explicitly written as \begin{eqnarray} \ket{\mbox{vac}} = \prod_{{\bf p},h} e^{i\widehat O({\bf p},h)} \ket{0} \label{V-BV} \end{eqnarray} with ${\bf p}$ the three momentum, $h$ the helicity label and the operator $\widehat O$ expressed in terms of the creation operators ($a^\dagger_{{\bf p}h}$, $b^\dagger_{-{\bf p}h}$) and annihilation operators ($a_{{\bf p}h}$, $b_{-{\bf p}h}$) of the bare fermions as \begin{eqnarray} \widehat O({\bf p},h) &=& {i\over 2}\theta_{\bf p} \left [ a^\dagger_{{\bf p}h} b^\dagger_{-{\bf p}h} - b_{-{\bf p}h} a_{{\bf p}h} \right ], \label{O-oper} \end{eqnarray} where $\cos\theta_{\bf p}=|{\bf p}|/E_{\bf p}$ and $E_{\bf p} = \sqrt{|{\bf p}|^2 + \sigma^2}$. The annihilation operators for the quasiparticles $\alpha_{{\bf p}h}$ and the antiquasiparticles $\beta_{{\bf p}h}$ of the system are related to the bare ones through a Bogoliubov transformation \begin{eqnarray} \alpha_{{\bf p}h} &=& \cos({1\over 2}\theta_{\bf p}) a_{{\bf p}h} - \sin({1\over 2}\theta_{\bf p}) b^{\dagger}_{-{\bf p}h}, \label{alpha_oper}\\ \beta_{{\bf p}h} &=& \sin({1\over 2}\theta_{\bf p}) a^\dagger_{-{\bf p}h} + \cos({1\over 2}\theta_{\bf p}) b_{{\bf p}h}.\label{beta_oper} \end{eqnarray} {}From Eqs. \ref{V-BV} and \ref{O-oper} two properties can be seen: 1) the $\alpha$ phase vacuum is a superposition of states in the Fock space of bare fermions with increasing numbers of fermion and anti-fermion pairs and 2) after the thermodynamic limit $L^3\to\infty$ is taken the overlap between the bare vacuum $\ket{0}$ and the $\alpha$ phase vacuum $\ket{\mbox{vac}}$ tends to zero. This implies 1) that the occupation of fermions and anti-fermions in the $\alpha$ phase vacuum can have blocking effects on the creation of bare particle states and 2) that the true $\alpha$ phase vacuum cannot be reached by perturbative iterations starting from some preassumed state of the system without introducing some kind of macroscopic variables into the path integration formalism.
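The vanishing of the overlap $\braket{0}{\mbox{vac}}$ in the thermodynamic limit can be made concrete: by Eqs. \ref{V-BV} and \ref{O-oper} each mode contributes a factor $\cos({1\over 2}\theta_{\bf p})<1$ to the overlap, so its logarithm grows in magnitude with the number of modes. A toy numerical sketch (Python; the one-dimensional mode counting, the value of $\sigma$ and the momentum cutoff are illustrative only, and realistic three-dimensional degeneracies would only strengthen the suppression):

```python
import math

def log_overlap(sigma, L, pmax=10.0):
    """log <0|vac> = sum over modes of log cos(theta_p / 2), with
    cos(theta_p) = |p| / sqrt(p^2 + sigma^2); toy discrete momenta
    p = 2 pi n / L, n = 1, 2, ... up to an illustrative cutoff pmax."""
    dk = 2 * math.pi / L
    total = 0.0
    n = 1
    while n * dk <= pmax:
        p = n * dk
        cos_th = p / math.sqrt(p * p + sigma * sigma)
        total += math.log(math.sqrt(0.5 * (1.0 + cos_th)))  # log cos(theta/2)
        n += 1
    return total

sigma = 0.5
logs = [log_overlap(sigma, L) for L in (10.0, 20.0, 40.0)]
# log <0|vac> ~ -c L here: the overlap vanishes exponentially as L grows
```

Since the number of modes grows with the volume, the log overlap is extensive and negative, which is the stated orthogonality of $\ket{0}$ and $\ket{\mbox{vac}}$ in the thermodynamic limit.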
\section{A Consistent Local Quantum Finite Density Field Theory} \label{sec:FDth} \subsection{The statistical blocking parameter} \subsubsection{The motivating Fock space study} In this section I develop a theory that takes into account both the fermion--antifermion pair condensation and the resulting blocking effects. As mentioned in the previous section, the bare vacuum with zero fermion--antifermion pairs is not a suitable initial state from which to start the path integration computation, since it has zero overlap with the true vacuum state of the system after the phase transition. Instead, the starting state is of the following kind \begin{eqnarray} \ket{\phi_0} &=& \sum_{n,\xi} C^n_\xi \ket{ (f\overline f)_\xi^n} = \sum_{n,\xi}C^n_\xi \ket{n\xi} \label{init-state} \end{eqnarray} with $n$ the number of fermion--antifermion pairs and $\xi$ the other quantum numbers that completely specify the state. The diagonal matrix element of the evolution operator of the system can be expressed in terms of path integration over the dynamical fields of the system \begin{eqnarray} \braket{\phi_0,t=+\infty}{\phi_0,t=-\infty} &=& \lim_{\begin{array}{c} t_f\to\infty\\t_i\to-\infty \end{array}} \bra{\phi_0} e^{-i\widehat H (t_f-t_i)} \ket{\phi_0}\nonumber \\ &=& \sum_n \sum_{\xi\xi'} C^n_\xi C^{n *}_{\xi'} \braket{n\xi',t=+\infty}{n\xi,t=-\infty} + \ldots ,\label{evolution1} \end{eqnarray} where $\widehat H$ is the total Hamiltonian of the system and $\ldots$ represents the off diagonal contributions to the transition amplitude between states with different numbers of fermion--antifermion pairs. For a discussion of an eigenstate of the total Hamiltonian, like the vacuum state, the off diagonal contributions with macroscopically different initial and final numbers of fermion--antifermion pairs are expected to be effectively absent in the final result after the thermodynamic limit is taken.
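The suppression of contributions with macroscopically different pair numbers is an instance of a general mechanism: when a parameter couples to an extensive quantity, the corresponding sum or integral concentrates on the extremum of the exponent as the volume grows, and subleading details of the measure drop out (the same mechanism is invoked for the $\epsilon$ integration of Eq. \ref{n-to-eps} below). A toy Laplace-method sketch (Python; the ``action density'' $f$, the measure $M$ and the volumes are purely hypothetical):

```python
import math

def log_integral(V, n=20_000):
    """log of int_0^1 d(eps) M(eps) exp[V f(eps)] for a toy smooth
    'action density' f peaked at eps* = 0.6 and a hypothetical measure M."""
    f = lambda e: -(e - 0.6)**2
    M = lambda e: 1.0 + 5.0 * e          # measure detail: subleading
    de = 1.0 / n
    grid = [(i + 0.5) * de for i in range(n)]
    m = max(V * f(e) for e in grid)      # subtract max for stability
    s = sum(M(e) * math.exp(V * f(e) - m) for e in grid) * de
    return m + math.log(s)

# (1/V) log integral -> max f = 0 as the 'volume' V grows, independently
# of the detailed form of M: only the extremum of the exponent survives.
ratios = [log_integral(V) / V for V in (10.0, 100.0, 1000.0)]
```

The intensive quantity $(1/V)\log\int$ converges to the extremal value of the exponent, while the measure contributes only at subleading order in $1/V$.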
The transition amplitude $\braket{n\xi',t_f}{n\xi,t_i}$ with the external fields present is then written in terms of path integration, namely, \begin{eqnarray} \braket{n\xi',t=+\infty}{n\xi,t=-\infty} &=& N \int D[\Psi]\prod_i D[f_i] e^{i\int d^4x \left ({1\over 2} \overline\Psi i S_F^{-1}[f]\Psi + {\cal L}_B[f] + \overline\Psi\eta + \overline\eta\Psi + \sum_k J_k f_k\right)}, \label{General_W_FDB} \end{eqnarray} where $N$ is a constant. The formal manipulations, which express the above functional integration over fermion degrees of freedom by an effective action $S_{eff}$, remain mostly unchanged, except that $S_{eff}$ now depends on a new statistical parameter $\epsilon$ \begin{eqnarray} \braket{n\xi',t=+\infty}{n\xi,t=-\infty} &=& N' \int \prod_i D[f_i] e^{iS_{eff}[f,\mu,\epsilon] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \sum_k J_k f_k}. \label{Eff_W_FDB} \end{eqnarray} For a stationary situation, the effective action $S_{eff}[f,\mu,\epsilon]$ is given by Eq. \ref{Seff2}. The constraint that the initial and final states considered are configurations in which the bare fermion and antifermion states below energy $\epsilon$ are filled is implemented by a distortion of the contour for the $p^0$ integration from the one in Fig. \ref{Fig:ConCon1} to that of Fig. \ref{Fig:ConFDB0}. In the thermodynamic limit of $L^3\to \infty$, the sum over $n$ in Eq. \ref{evolution1} can be replaced by an integration over $\epsilon$, the statistical blocking parameter, namely \begin{eqnarray} \sum_n \to \int d\epsilon M(\epsilon) \label{n-to-eps} \end{eqnarray} with $M(\epsilon)$ the integration measure of the transformation. Eq.
\ref{evolution1} becomes \begin{eqnarray} \braket{\phi_0,t=+\infty}{\phi_0,t=-\infty} &=& \int d\epsilon M(\epsilon)\sum_{\xi\xi'} \widetilde C^\epsilon_\xi \widetilde C^{\epsilon *}_{\xi'} \braket{n\xi',t=+\infty}{n\xi,t=-\infty}\nonumber\\ &=& \int d\epsilon \int \prod_i D[f_i] e^{iS_{eff}[f,\mu,\epsilon] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \left (\sum_k J_k f_k - V_0(\mu,\epsilon) \right )} \label{General_W_3} \end{eqnarray} with $\exp[-i\int d^4x V_0(\mu,\epsilon)]$ the leading piece of the measure for the $\epsilon$ integration in the thermodynamic limit. When the volume $L^3$ of the system becomes increasingly large, the integrand of the $\epsilon$ integration becomes increasingly sharply peaked at the extremum positions of the argument of the exponential in the above equation. This is due to the fact that $\epsilon$ couples to macroscopic variables that are proportional to the volume; it has no quantum fluctuation in the thermodynamic limit. For this reason, the detailed form of the measure $M(\epsilon)$, apart from its leading piece in Eq. \ref{n-to-eps}, is irrelevant in the thermodynamic limit. The weighted sum is then \begin{eqnarray} \sum_{\{\phi_0\}} {\cal W}[\phi_0,\phi_0] \braket{\phi_0,t=+\infty}{\phi_0,t=-\infty} &=& \int \prod_i D[f_i] e^{iS_{eff}[f,\mu,\epsilon] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \left (\sum_k J_k f_k - V_0(\mu,\epsilon) \right )} \label{General_W_4} \end{eqnarray} with $\epsilon$ taking the value of one of the extrema of the argument of the exponential. The conventional method, which may turn out to be insufficient in symmetry breaking phases, corresponds to a special case, namely the $\epsilon=0$ one. The relevance of introducing $\epsilon$ will be discussed in the following sections. It can be seen that a possible finite $\epsilon$ in the final result is not reachable perturbatively. \subsubsection{The determination of $V_0(\mu,\epsilon)$} Eq.
\ref{General_W_4} tells us that the generating functional of an interacting fermionic system should be written as \begin{eqnarray} e^{W[J,\overline\eta,\eta,\mu,\epsilon]} &=& \int \prod_i D[f_i] e^{iS_{eff}[f,\mu,\epsilon] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \left (\sum_k J_k f_k - V_0(\mu,\epsilon) \right )}. \label{General_W_5} \end{eqnarray} It is normalized by the condition \begin{eqnarray} W[0,0,0,0,0] &=& 0. \label{W-normal} \end{eqnarray} For a stationary situation, the $\widetilde S_{eff}$ corresponding to Eq. \ref{tS_eff} can be expressed (see also Eq. \ref{Seff2}) as \begin{eqnarray} \widetilde S_{eff}[f,\mu,\epsilon] &=& -i{T\over 2}\left ( \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f,\mu]\over \lambda_{p^0,\xi}[0,\mu]} + \sum_{\xi} \int_{\cal C} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,\mu] - \sum_{\xi} \int_{{\cal C}_0} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,0] \right )\nonumber\\ &&+ \int d^4 x [{\cal L}_B[f]-V_0(\mu,\epsilon)], \label{Seff_FD2} \end{eqnarray} where ${\cal C}$ represents the $p^0$ integration contour shown in Fig. \ref{Fig:ConFDB0} and ${\cal C}_0$ represents the $p^0$ integration contour given by Fig. \ref{Fig:ConCon1}. $V_0(\mu,\epsilon)$ satisfies $V_0(\mu,0)= \int d^4 x \mu_\alpha \overline j^\alpha$ so that it agrees with Eq. \ref{tS_eff} in this special case. From Appendix \ref{app:FF}, it is shown that the left hand side of the following equation is finite for a uniform system, namely, \begin{eqnarray} -i{T\over 2}\left (\sum_{\xi} \int_{\cal C} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,\mu] \right. &-& \left . \sum_{\xi} \int_{{\cal C}_0} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,0] \right ) = \nonumber \\ && \int d^4 x \left [ \left (\mu_+ \overline\rho_{(+)} + \mu_- \overline\rho_{(-)} - \mu \overline\rho \right ) - \left ( \overline e_{(+)} + \overline e_{(-)} - \overline e \right ) \right ].
\end{eqnarray} The {\em basic assumption} of the theory is then the following {\em choice} for $V_0(\mu,\epsilon)$, namely, \begin{eqnarray} V_0(\mu,\epsilon) &=& \int d^4 x \left (\mu_+ \overline \rho_{(+)} + \mu_- \overline\rho_{(-)} - \mu \overline \rho \right ) \label{V_0_Form} \end{eqnarray} which completely specifies Eq. \ref{General_W_4}. For a uniform system, the corresponding effective potential in a Hartree--Fock approximation to be minimized is \begin{eqnarray} V_{eff} &=& \lim_{V_3\to\infty} i{1\over 2V_3} \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f;\mu]\over \lambda_{p^0,\xi}[0,\mu]} +{n_f n_c\over 4\pi^2} \left (\mu^4 + 2\epsilon^4 + 12 \mu^2\epsilon^2 \right ) - {\cal L}_B[f] ,\label{Veff-FDB} \end{eqnarray} with ${\cal C}$ denoting the $p^0$ integration contour chosen. In order to preserve the causal structure of the original Minkowski $p^0$ integration contour, the Euclidean effective action is obtained by distorting the Minkowski contour given in Fig. \ref{Fig:ConFDB0} to the one labeled ``II'' in Fig. \ref{Fig:ConFDB1}. \subsection{More on the primary statistical gauge field} \label{subsec:more} \subsubsection{Statistical gauge invariance, physical states and conservation of fermion number} In the process of introducing the primary statistical gauge field $\mu^\alpha$, the original global $U(1)$ symmetry corresponding to the fermion number conservation is replaced, in a certain sense, by a local symmetry.
This local symmetry originates from the fact that the eigenvalues $\lambda_{p^0,\xi}[f,\mu]$, which satisfy the eigenequation \begin{eqnarray} \gamma^0 iS_F^{-1}[f,\mu] \Psi_\lambda = \lambda[f,\mu] \Psi_\lambda, \label{EigenEq3} \end{eqnarray} are invariant under the following gauge transformation \begin{eqnarray} \Psi_\lambda(x) &\to & e^{i\phi(x) O_3} \Psi_\lambda(x)\label{G-tran1}\\ \mu^\alpha(x) &\to & \mu^\alpha(x) - \partial^\alpha\phi(x) \label{G-tran2} \end{eqnarray} with $\phi(x)$ an arbitrary function of spacetime that decreases to zero sufficiently fast at spacetime infinity. The introduction of a local field $\mu^\alpha(x)$ is expected to introduce infinitely many extra degrees of freedom, which should be eliminated in a certain way. Although the full effective action given by Eq. \ref{Seff_FD2}, which depends on $\mu^\alpha\mu_\alpha\ge 0$, is not invariant under the gauge transformations given by Eqs. \ref{G-tran1} and \ref{G-tran2}, the primary statistical gauge invariance of the quantum fluctuation, or the connected part, of $\widetilde S_{eff}$ requires further investigation. Let us find the connection of the primary statistical gauge invariance to the conservation of fermion number by quantizing the system governed by the Lagrangian density \begin{eqnarray} {\cal L}' &=& {\cal L} + \mu^\alpha j_\alpha \label{Lag-prim} \end{eqnarray} in Eq. \ref{General_W_5}. Here ${\cal L}$ is the original Lagrangian density before introducing $\mu^\alpha$ and $j_\alpha$ is the fermion number current density given by Eq. \ref{fermion-curr}. The fermion field $\Psi$ and the boson fields $\{f_i\}$ are quantized as usual \cite{TFTQFT}; the details shall not be repeated here. What needs to be found here is the conjugate variable $\pi_{u}^\alpha$ of $\mu^\alpha$. For that purpose, Eq. \ref{Lag-prim} can be treated as the Hamiltonian density, namely \begin{eqnarray} {\cal H}_{(\mu)} &=& {\cal L}'.
\label{Hamilt} \end{eqnarray} Before the quantization, it follows from the Hamiltonian dynamics that \begin{eqnarray} \partial_0{\pi}_{ui} &=& -{\partial {\cal H}_{(\mu)}\over\partial \mu_i} = j_i ,\label{qu-dot}\\ \partial_0{\mu}_{i} &=& {\partial {\cal H}_{(\mu)}\over\partial \pi_{ui}} = 0 ,\label{uu-dot} \end{eqnarray} where $i,j=1,2,3$ label the spatial components of 4-vectors. The quantization of $\mu_i$ is then implemented by the Dirac quantization condition \begin{eqnarray} \left [\widehat\pi_{ui}({\bf x},t), \widehat\mu_j({\bf x}',t) \right ] &=& - i \delta^{(3)}({\bf x}-{\bf x}')\delta_{ij}. \label{mui-quantization} \end{eqnarray} The {\em statistical ``electric field''} $\widehat\pi_{ui}$ commutes, at equal time, with all elementary fields in the Lagrangian density except the one listed above. Note that all quantities with a hat ``$\wedge$'' on top denote operators in the following. After the quantization, the set of gauge transformations in which $\phi(x)$ is independent of time is represented by \begin{eqnarray} \widehat\Psi &\to& U[\phi] \widehat \Psi U^\dagger[\phi] = e^{i\phi O_3}\widehat \Psi \label{G-tran3}\\ \widehat \mu_\alpha &\to& U[\phi] \widehat \mu_\alpha U^\dagger[\phi] = \widehat \mu_\alpha - \partial_\alpha\phi \label{G-tran4} \end{eqnarray} with \begin{eqnarray} U[\phi] &=& e^{-i\int d^3 x \left ( \widehat \rho + \nabla\cdot\widehat{\bf \pi}_{u} \right )\phi}, \label{Us-operator} \end{eqnarray} where the space integration is at any specific time at which a transformation of an operator is considered. The operator algebra between dynamical fields is then represented in a Hilbert space. Since the quantity $\mu^\alpha$ is introduced into the original theory, it is expected that this Hilbert space contains states that are not physical or are redundant.
The physical states are selected within the full Hilbert space by requiring that they satisfy \begin{eqnarray} \bra{\mbox{Phys}'}U[\phi]\ket{\mbox{Phys}} &=& Z e^{-i\Phi} \label{C-consv1} \end{eqnarray} under the time independent gauge transformation discussed above, with the common phase factor $\Phi$ restricted to those functions that are independent of time (as explained in the following). Taking the time derivative of Eq. \ref{C-consv1}, and using the dynamical equation Eq. \ref{qu-dot}, one obtains \begin{eqnarray} \bra{\mbox{Phys}'} \left ( \partial_0{\widehat\rho} + \nabla\cdot \widehat {\bf j}\right ) \ket{\mbox{Phys}} &=& \bra{\mbox{Phys}'}\partial^\mu \widehat j_\mu \ket{\mbox{Phys}} = 0 \label{C-consv3} \end{eqnarray} which is the conservation of the fermion number current in physical processes. The time independent and spatially localized gauge transformation considered is a non-trivial one. It selects amongst the states in the extended Hilbert space the physical ones. This can be understood if one considers the commutation relation between $\widehat \rho + \nabla \cdot \widehat\pi_{u}$ and the Hamiltonian of the system, \begin{eqnarray} \left [\widehat \rho + \nabla \cdot \widehat\pi_{u},\widehat H \right ] &=& i{d\over dt} \left (\widehat \rho + \nabla \cdot \widehat\pi_{u} \right ) = \partial_\mu \widehat j^\mu \label{H-Q-comm} \end{eqnarray} with $\widehat H$ the total Hamiltonian of the system. It is zero due to the conservation of the fermion number current. It means that $\widehat \rho + \nabla \cdot \widehat\pi_{u}$ is independent of time when the matrix elements between physical states are taken. The states in the extended Hilbert space can be divided into subspaces labeled by a complex (time independent) function of the spatial coordinates according to the matrix elements of $ \widehat \rho + \nabla \cdot \widehat\pi_{u}$ between themselves.
For the eigenstates of the Hamiltonian of the system, this can be written as\footnote{It is the quantum version of the ``classical'' constraint equation $\rho + \nabla \cdot \pi_{u} =\varsigma$.} \begin{eqnarray} \bra{\varphi^i_\varsigma} \left (\widehat \rho + \nabla \cdot \widehat\pi_{u} \right ) \ket{\varphi^j_\varsigma} &=& \delta_{E_i E_j} N_{ij} \varsigma \label{Q-eigenstate} \end{eqnarray} with $\varsigma$ the space dependent complex function, $\ket{\varphi^k_\varsigma}$ a state in the physical space that has energy $E_k$, $\delta_{EE'}$ taking the value zero if $E\ne E'$ and unity if $E'=E$ (assuming that $E$ is discrete before the thermodynamic limit is taken) and $N_{ij}$ independent of the spacetime coordinates\footnote{Eq. \ref{H-Q-comm} may not be restrictive enough. A somewhat more restrictive constraint can be suggested. It consists of decomposing $\widehat \rho + \nabla \cdot \widehat\pi_{u}$ into a superposition of holomorphic $(\widehat \rho + \nabla \cdot \widehat\pi_{u})_{(-)}$ and antiholomorphic $(\widehat \rho + \nabla \cdot \widehat\pi_{u})_{(+)}$ components \cite{JKG}. The physical states are those that are eigenstates of $(\widehat \rho + \nabla \cdot \widehat\pi_{u})_{(-)}$ with a common eigenvalue $\varsigma$. }. Eqs. \ref{H-Q-comm} and \ref{Q-eigenstate} mean that the physical states are the ones that have vanishing matrix elements of the commutator of the Hamiltonian and $\widehat \rho + \nabla \cdot \widehat\pi_{u}$. Therefore they form a superselection sector in the extended Hilbert space defined and labeled by a complex function $\varsigma$ of the spatial coordinates. Such a definition of the physical states for the primary statistical gauge theory is less restrictive than the one used in a dynamical gauge theory like QED \cite{GSBK}, where, due to the existence of the dynamical part of the gauge fields at the tree level, the physical states are restricted to the subspace with $\varsigma\equiv 0$ only.
In fact, for the $\beta$ and $\omega$ phases discussed in the following, in which the $U(1)$ symmetry corresponding to the fermion number conservation is spontaneously broken down, $\varsigma$ cannot be zero because, before taking into account the dynamical gauge fields (those of the photon), the massless Goldstone boson corresponding to the spontaneous symmetry breaking has to be considered a physical excitation. But if $\varsigma$ is chosen to be zero, the massless Goldstone boson belongs to the unphysical states \cite{Strocchi}. Such a situation actually opens up the possibility for the spontaneous CP violation to be discussed in the following. The choice made for the physical states is a constraint invariant under time evolution due to Eq. \ref{H-Q-comm}. It shows that the definition of the physical states remains valid at all times and that no transition to unphysical states or between the superselection sectors is possible in physical processes. \subsubsection{The statistical gauge degrees of freedom and the question of long range order} The primary statistical gauge field $\mu^\alpha$ is non-dynamical at the tree level. At the quantum level a dynamics for $\mu^\alpha$ is generated by the fermion quantum fluctuations. The relevant effective action for the primary statistical gauge field can be obtained from Eq. \ref{Seff_FD2} by a ``Legendre transformation'' to a form without the contribution of $V_0$, namely \begin{eqnarray} S_{eff}[f,\mu,\epsilon] &=& -i{T\over 2}\left ( \sumint {dp^0\over 2\pi} \ln{\lambda_{p^0,\xi}[f,\mu]\over \lambda_{p^0,\xi}[0,\mu]} + \sum_{\xi} \int_{\cal C} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,\mu] - \sum_{\xi} \int_{{\cal C}_0} {dp^0\over 2\pi} \ln\lambda_{p^0,\xi}[0,0] \right )\nonumber\\ &&+ \int d^4 x {\cal L}_B[f], \label{Seff_mu} \end{eqnarray} which is a canonical functional of $\mu^\alpha$. 
The quadratic term for slowly varying $\mu^\alpha$ (in spacetime, or at long distances) generated from the fermion determinant is of the form \begin{eqnarray} S^{(\mu)}_{eff} &=& {1\over 2}\int d^4x d^4x' \mu'_\alpha(x) \pi^{\alpha\beta}(x,x') \mu'_\beta(x') + {n_f n_c\over \pi^2} \int d^4 x \epsilon^2 {\mu'}^2 ,\label{Seff-mu1} \end{eqnarray} where $\mu'_\alpha = \mu_\alpha - \overline\mu_\alpha$, with $\overline\mu_\alpha$ the shifted value of $\mu_\alpha$, and the last term comes from the corresponding one in Eq. \ref{Seff_mu}. The first term is generated from the fermion determinant. If the electromagnetic interaction between the fermions is not considered, $\pi^{\alpha\beta}(x,x')$ is given by \begin{eqnarray} \pi_0^{\alpha\beta}(x,x') &=& i\bra{0} T j^\alpha(x) j^\beta(x') \ket{0} \nonumber \\ &=& Z^{(\mu)}(\partial_x^2 g^{\alpha\beta} - \partial_x^\alpha\partial_x^\beta ) \delta(x-x') + i\bra{0} j^\alpha(x)\ket{0} \bra{0} j^\beta(x') \ket{0} \label{normal-pi-mn} \end{eqnarray} in the normal phase; and by \begin{eqnarray} \pi_0^{\alpha\beta}(x,x') &=& i\bra{0} T j^\alpha(x) j^\beta(x') \ket{0} \nonumber \\ &=& \Pi^{(\mu)}\left (g^{\alpha\beta} -{\partial_x^\alpha\partial_x^\beta\over\partial^2} \right ) \delta(x-x')+ i\bra{0} j^\alpha(x)\ket{0} \bra{0} j^\beta(x') \ket{0}, \label{sup-pi-mn} \end{eqnarray} in the phase where the $U(1)$ symmetry corresponding to the fermion number conservation is spontaneously broken down, since there exists a massless pole in the matrix element of $j^\alpha$ (see Ref. \cite{Ying11} for a more detailed discussion). Here $Z^{(\mu)}$ and $\Pi^{(\mu)}$ are functions of $x$ and $x'$. 
In the normal phase, the effective action for slowly varying $\mu'_\alpha$ is \begin{eqnarray} S^{(\mu)}_{eff} = \int d^4 x \left [ -{Z^{(\mu)}\over 4} f_{\mu\nu} f^{\mu\nu} + {1\over 2} \left( i\bra{0}j^\alpha\ket{0}\bra{0}j^\beta\ket{0} + 2 g^{\alpha\beta} {n_f n_c\over \pi^2} \epsilon^2 \right ) \mu'_\alpha \mu'_\beta \right ] \label{Seff-mu2} \end{eqnarray} with $f^{\alpha\beta} = \partial^\alpha {\mu'}^\beta-\partial^\beta{\mu'}^\alpha$, since the eigenvalues $\lambda_{p^0,\xi}$ are invariant under the gauge transformation given by Eqs. \ref{G-tran1} and \ref{G-tran2}. In the phase where the $U(1)$ symmetry is spontaneously broken down, and before considering the electromagnetic interaction, \begin{eqnarray} S^{(\mu)}_{eff} &=& {1\over 2}\int d^4 x \left \{ \left [ {i}\bra{0}j^\alpha\ket{0}\bra{0}j^\beta\ket{0} + g^{\alpha\beta} \left ({\Pi^{(\mu)}} + 2{n_f n_c\over \pi^2} \epsilon^2 \right ) \right ] \mu'_\alpha \mu'_\beta + \ldots\right \}. \label{Seff-mu3} \end{eqnarray} In both situations, with slowly varying $\mu'_\alpha$, $Z^{(\mu)}$ and $\Pi^{(\mu)}$ are approximately constant. If the electromagnetic interaction is considered, then there is a $f_{\alpha\beta} f^{\alpha\beta}$ term in Eq. \ref{Seff-mu3} and the $\Pi^{(\mu)}$ term is absent, even in a spontaneous $U(1)$ symmetry breaking phase. 
This is because when the electromagnetic interaction between the fermions is considered, $\pi^{\alpha\beta}$ can be decomposed into connected and disconnected parts, $\pi^{(c)}_{\mu\nu} + i\bra{0} j_\mu \ket{0} \bra{0}j_\nu \ket{0} $, with the connected part given by \begin{eqnarray} \pi^{(c)}_{\mu\nu} &=& \pi^{(c)}_{0\mu\nu} + \left ( \pi^{(c)}_0 G_0 \pi^{(c)}_0\right )_{\mu\nu}+ \left ( \pi^{(c)}_0 G_0 \pi^{(c)}_0 G_0 \pi^{(c)}_0\right )_{\mu\nu} + \ldots \end{eqnarray} and \begin{eqnarray} \pi^{(c)}_{0\mu\nu} &=& (q^2 g_{\mu\nu} - q_\mu q_\nu) \pi_0^{(c)} \nonumber\\ G_0^{\mu\nu} &=& \left ( - g^{\mu\nu} + {q^\mu q^\nu\over q^2} \right ) {i\over q^2}, \end{eqnarray} where $G^{\mu\nu}_0$ is the propagator of the bare photon. $\pi^{(c)}_{\mu\nu} \to 0$ in the $q^\mu\to 0$ limit even when $\pi_{0\mu\nu}^{(c)}$ contains a massless pole (in $q^2$). In any case, it can easily be seen that the full two point proper vertex for $\mu'_\alpha$ is non-vanishing in the $q^\mu\to 0$ limit if either $\epsilon\ne 0$ or $\bra{0}j_\mu\ket{0}\ne 0$ or both in the vacuum state of the system. Therefore a long range order for $\mu'_0$ is possible only if both $\epsilon=0$ and $\mu=0$ in the vacuum. The spatial component $\mbox{\boldmath $\mu$}'$ of $\mu'_\alpha$ is short ranged in the phase where $\epsilon\ne 0$ and $\mu_0=0$. It is however long ranged in the phase where $\epsilon=0$. \subsubsection{Topological configurations} The discussion given above shows that after the introduction of a primary statistical gauge field $\mu_\alpha$, the representation Hilbert space of the operator algebra is extended. Such an extended Hilbert space includes not only the physical states, but also non-physical ones. The physical states are the ones that satisfy Eq. \ref{Q-eigenstate}, which is expected to be satisfied in all consistent computations of physical observables in which fermion number current conservation is preserved. 
The extension of the representing Hilbert space gives us additional leverage to project out the collective excitations and configurations not easily discernible in the conventional approach. Due to the statistical gauge invariance under the local transformation given by Eqs. \ref{G-tran1} and \ref{G-tran2}, the physically non-trivial configurations of the system depend, in the path integration sense, only on the flux density or statistical ``magnetic field'' \mbox{\boldmath $b$} defined as \begin{eqnarray} \mbox{\boldmath $b$} &=& \nabla\times \mbox{\boldmath$\mu$}. \label{b-def} \end{eqnarray} Therefore the general form of the generating functional Eq. \ref{General_W_5} can be written more precisely, by imposing a certain gauge fixing condition, as \begin{eqnarray} e^{W[J,\overline\eta,\eta,\mu,\epsilon]} &=& \int D[\mbox{\boldmath $b$}] J(\mbox{\boldmath $b$})\prod_i D[f_i] e^{iS_{eff}[f,\mu,\epsilon] + {1\over 2} \overline\eta S_F[f] \eta + i\int d^4x \left (\sum_k J_k f_k - V_0(\mu,\epsilon) \right )}, \label{General_W_6} \end{eqnarray} where $J(\mbox{\boldmath $b$})$ is the integration measure. Eq. \ref{General_W_6} is equivalent to Eq. \ref{General_W_5}. It is however useful for finding collective stationary configurations that are not easily found in a common approach to the effective action. For a configuration in which \mbox{\boldmath $b$} is non-vanishing only in a localized region, quantization of the flux appears, namely \begin{eqnarray} \int_{\Sigma} d{\mbox{\boldmath $S$}}\cdot {\mbox{\boldmath $b$}} &=& \oint_{\partial \Sigma} d{\mbox{\boldmath $l$}}\cdot{\mbox{\boldmath $\mu$}} = 2n\pi, \hspace{0.6cm}(n=0,\pm 1,\pm 2,\ldots) ,\label{flux-quant} \end{eqnarray} where $\Sigma$ is the surface that contains \mbox{\boldmath $b$} and the line integration is around the edge $\partial\Sigma$ of $\Sigma$. The quantization results, as is well known, from imposing the uniqueness condition on the eigenfunctions $\Psi_\lambda[f,\mu]$. 
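The quantization condition Eq. \ref{flux-quant} can be illustrated with a minimal numerical sketch (a side computation, not part of the derivation). The vortex profile used below, $\mbox{\boldmath $\mu$} = (n/r)\,\hat{\mbox{\boldmath $\varphi$}}$, is an assumed idealized configuration chosen only so that the circulation can be evaluated directly; for it, the line integral around any loop enclosing the core returns $2n\pi$:

```python
import math

def vortex_mu(x, y, n):
    """Assumed idealized vortex gauge potential with winding number n:
    mu = (n / r) * phi_hat, written in Cartesian components."""
    r2 = x * x + y * y
    return (-n * y / r2, n * x / r2)

def circulation(n, radius=1.0, steps=4000):
    """Approximate the line integral  oint dl . mu  around a circle of the
    given radius enclosing the vortex core (midpoint rule on chords)."""
    total = 0.0
    for k in range(steps):
        t0 = 2.0 * math.pi * k / steps
        t1 = 2.0 * math.pi * (k + 1) / steps
        tm = 0.5 * (t0 + t1)                          # midpoint of the segment
        x, y = radius * math.cos(tm), radius * math.sin(tm)
        dlx = radius * (math.cos(t1) - math.cos(t0))  # chord approximating dl
        dly = radius * (math.sin(t1) - math.sin(t0))
        mux, muy = vortex_mu(x, y, n)
        total += mux * dlx + muy * dly
    return total

# Flux quantization: the circulation equals 2*pi*n for integer winding n.
for n in (0, 1, -2):
    assert abs(circulation(n) - 2.0 * math.pi * n) < 1e-4
```

The computed circulation is independent of the radius of the loop, as expected for a configuration whose \mbox{\boldmath $b$} is confined to the core region.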
\subsubsection{A new macroscopic parameter and long range order} Using the primary statistical gauge field $\mu^\alpha$, a new macroscopic parameter that characterizes the vacuum state of the system can be introduced. It is defined as \begin{eqnarray} \widehat O_\Sigma &=& e^{ i\oint \displaystyle d\mbox{\boldmath $l$}\cdot \mbox{\boldmath $\mu$} }, \end{eqnarray} where $\Sigma$ is a large 2-dimensional surface area in space at a certain time and the line integration $\oint$ is along the edge of the area $\Sigma$. It is known \cite{JBKogut} that the vacuum expectation value of $\widehat O_\Sigma$ provides another macroscopic parameter for a more detailed characterization of the phase structure of the system. For example, in a disordered system in which the correlation between the complex phases of the fermions at different space points becomes short ranged, condensation of vortices or monopoles of the type of configurations with non-vanishing $n$ in Eq. \ref{flux-quant} can drive a Kosterlitz--Thouless \cite{KT-phase} type of phase transition. \section{Two Models for Strong Interaction and Their Vacuum Phase Structure} \label{sec:Models} The fundamental theory of the strong interaction is considered to be QCD, with its Lagrangian density given in Appendix \ref{app:QCD}. The currently available method of studying QCD from first principles is lattice QCD simulation. Although great progress has been made, lattice computations are limited by small lattice sizes and by available computer power. Model approaches, which are simpler than the full QCD calculation and have a large overlap with it in the low and intermediate energy regions, can be and have been used to obtain much of the physical picture expected in the system governed by the QCD Lagrangian density. 
Since the masses of the light quarks are much smaller than the typical scale of the hadronic spectrum of order 1 GeV, a certain subset of the behaviors of the light quark system can be simulated by an interacting massless fermion system that possesses the basic symmetries of the QCD Lagrangian density. It is expected that we can learn some of the important possible behaviors of the light quark system by using models, due to our (relatively) increased theoretical analytic power. The spontaneous breaking down of the chiral $SU(2)_L\times SU(2)_R$ symmetry in hadronic systems was historically discussed before the birth of the concept of quarks and the QCD Lagrangian density. This phenomenon was only later justified by QCD (lattice) calculations. I discuss in this paper the physical properties of the strong interaction vacuum related to the fluctuation of baryon number using two model Lagrangian densities that possess the chiral $SU(2)_L\times SU(2)_R$ symmetry of massless QCD and have 3 colors ($n_c=3$). For a full investigation of the possible phases of the vacuum of a relativistic massless fermion system, these models are so chosen that they allow not only the quark--antiquark condensation that is widely discussed in the literature but also the rarely studied phases that are induced by a condensation of quark--quark (or antiquark--antiquark) pairs. One of these possibilities is studied in detail in Refs. \cite{Ying1,Ying11}. In order to simplify the discussion, I consider two model Lagrangian densities that are half bosonized. Both of them can be viewed as being the descendants of two four-quark interaction models after introducing the auxiliary fields in a Fierz invariant way \cite{Ying11}. This section, which mainly serves the purpose of introducing the models, contains the determination of their vacuum phase structure using the conventional approaches. 
A more detailed study using the refined method developed in sections \ref{sec:Instab} and \ref{sec:FDth} is given in the next section. \subsection{Model I and the $\omega$ phase} The first model is defined by the following Lagrangian density \begin{eqnarray} {\cal L}_1 & = & {1\over 2} \overline \Psi\left [i{\rlap\slash\partial}-\sigma- i\vec{\pi}\cdot \vec{\tau}\gamma^5 O_3-\gamma^5 {\cal A}_c\chi^c O_{(+)}-\gamma^5 {\cal A}^c\overline\chi_c O_{(-)} \right ]\Psi -{1\over 4 G_0} (\sigma^2 + \vec{\pi}^2) + {1\over 2 G_{3'}} \overline\chi_c \chi^c ,\label{Model-L-1} \end{eqnarray} where $\sigma$, $\vec{\pi}$, $\overline\chi_c$ and $\chi^c$ are auxiliary fields with $(\chi^c)^{\dagger} = - \overline\chi_c$, and $G_0$, $G_{3'}$ are coupling constants of the model. ${\cal A}_c$ and ${\cal A}^c$ $(c=1,2,3)$ act on the color space of the quark; they are \begin{eqnarray} {\cal A}_{c_1c_2}^c = -\epsilon^{cc_1c_2}\hskip 0.5 in && {\cal A}_{c,c_1c_2} = \epsilon^{cc_1c_2}\label{Amtr} \end{eqnarray} with $\epsilon^{abc}$ ($a,b,c = 1,2,3$) the totally antisymmetric Levi--Civita tensor. Here the quark spinor is represented by the 8-dimensional Dirac spinor and $O_{(\pm)}$ are raising and lowering operators acting, respectively, in the upper and lower 4 components of $\Psi$. The effective action is given by Eq. \ref{Seff2}. With the auxiliary fields independent of spacetime, the effective potential has the following form \begin{eqnarray} V_{eff} &=& -\lim_{L,T\to\infty} {1\over L^3 T} S_{eff}\nonumber\\ &=& -{i\over 2} \lim_{L,T\to\infty} {1\over L^3 T} \sum_{\lambda_n} \ln{\lambda_n\over\lambda_n^{(0)}} + {1\over 4 G_0} \sigma^2 - {1\over 2 G_{3'}} \overline\chi_c\chi^c ,\label{V-eff-2} \end{eqnarray} where $\lambda_n$ and $\lambda_n^{(0)}$ are the eigenvalues of the two Hermitian operators defined in Eq. \ref{EigenEq} with and without the auxiliary fields shifted, respectively. 
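Before evaluating the fermion determinant, a small consistency check of the color matrices in Eq. \ref{Amtr} may be useful (a side computation, not part of the model definition): each ${\cal A}^c$ is antisymmetric in its color indices, and the trace normalization $\mbox{tr}({\cal A}^a{\cal A}_b)=2\delta^{a}_{b}$ follows from the Levi--Civita identity $\epsilon^{aij}\epsilon^{bij}=2\delta^{ab}$; the value $2\delta^a_b$ asserted below is computed from that identity, not quoted from the text:

```python
def eps(i, j, k):
    """Totally antisymmetric Levi-Civita symbol, indices 0, 1, 2."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

# Eq. (Amtr): (A^c)_{c1 c2} = -eps(c, c1, c2),  (A_c)_{c1 c2} = +eps(c, c1, c2)
A_up = [[[-eps(c, c1, c2) for c2 in range(3)] for c1 in range(3)] for c in range(3)]
A_dn = [[[ eps(c, c1, c2) for c2 in range(3)] for c1 in range(3)] for c in range(3)]

# Each A^c is antisymmetric in its color indices ...
for c in range(3):
    for c1 in range(3):
        for c2 in range(3):
            assert A_up[c][c1][c2] == -A_up[c][c2][c1]

def trace_prod(a, b):
    """tr(A^a A_b) = sum_{ij} (A^a)_{ij} (A_b)_{ji}."""
    return sum(A_up[a][i][j] * A_dn[b][j][i] for i in range(3) for j in range(3))

# ... and tr(A^a A_b) = 2 * delta_{ab}.
for a in range(3):
    for b in range(3):
        assert trace_prod(a, b) == (2 if a == b else 0)
```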
Since the auxiliary fields do not depend on spacetime, the eigenvalues $\lambda_n$ and $\lambda_n^{(0)}$ can be labeled by the 4-momentum of the corresponding eigenstates $\Psi_{\lambda_n}$. The result is \begin{eqnarray} V_{eff} &=& {i\over 2} \int {d^4p\over (2\pi)^4} \left ( 8 \sum_{i=1}^4 \ln{\lambda_i(p)\over\lambda_i^{(0)}(p)} + 4 \sum_{i=1}^4 \ln{{\lambda'}_i(p)\over{\lambda'}_i^{(0)}(p)} \right ) + {1\over 4 G_0} \sigma^2 - {1\over 2 G_{3'}} \overline \chi_c \chi^c\label{V-eff-2-1}, \end{eqnarray} where $\lambda_i(p)$ and ${\lambda'}_i(p)$ are the eigenvalues of states with color different from, and the same as, that of $\chi^c$ respectively, and the factors 8 and 4 correspond to their degeneracies. It has the following explicit form \begin{eqnarray} V_{eff} &=& 4i\int {d^4p\over (2\pi)^4} \ln\left [\left ( 1- {\sigma^2+\chi^2\over p^2} \right )^2-{\sigma^2\over p^2} \left ( 1-{\sigma^2-\chi^2\over p^2} \right )^2\right ] + {1\over 4 G_0}\sigma^2 + {1\over 2G_{3'}} \chi^2 ,\label{Veff-2-2} \end{eqnarray} where $\chi^2 \equiv -\overline\chi_c\chi^c$. In the Minkowski spacetime formulation, the contour for the $p^0$ integration is shown in Fig. \ref{Fig:ConCon1}. One of the most commonly used ones, the quasiparticle path, is shown in Fig. \ref{Fig:ConCl2}. It can be shown that the resulting effective potential is the change of the energy density of the quasiparticles, obtained by summing over the quasiparticle energies determined by the poles of $S_F[f]$, relative to that of the bare particles in the truncated Dirac sea. It does not have a covariant form, due to the fact that a non-covariant cutoff in the 3-momentum ${\bf p}$ space has to be introduced. The Euclidean path shown in Fig. \ref{Fig:ConCl2} can be used to obtain a covariant expression for the effective action. The Euclidean covariant cutoff scheme avoids the physically undesirable results that can arise in other covariant approaches \cite{Hatsuda}. 
As discussed in section 2.2, it can also be used to include quantum effects (see Appendix \ref{app:thedark}) that are beyond the quasiparticle approximation. The differences between these two paths are however not very important for the purposes of this subsection. Since different kinds of cutoffs are used in the two approaches, a comparison between them is difficult. Nevertheless, I shall adopt the Euclidean path shown in Fig. \ref{Fig:ConCl2}; a Euclidean path is in any case important for the discussions that follow. The resulting expression is \begin{eqnarray} V_{eff}(\sigma,\chi) &=& -4\int^\Lambda {d^4p\over (2\pi)^4} \ln\left [\left ( 1+{\sigma^2+\chi^2\over p^2} \right )^2+{\sigma^2\over p^2} \left ( 1+{\sigma^2-\chi^2\over p^2} \right )^2\right ] + {1\over 4 G_0}\sigma^2 + {1\over 2G_{3'}} \chi^2 ,\label{Veff-2-3} \end{eqnarray} where $\Lambda$ is the Euclidean cutoff introduced to define the model. A numerical evaluation shows that the minima of $V_{eff}(\sigma,\chi)$ are located on either the $\sigma$ axis ($\chi=0$) or the $\chi$ axis ($\sigma=0$). Explicit expressions for $V_{eff}(\sigma,0)$ and $V_{eff}(0,\chi)$ are found to be \begin{eqnarray} v_{eff}(\sigma,0) &=& 3 f({\sigma^2\over\Lambda^2}) + {1\over 16\pi\alpha_0} {\sigma^2\over\Lambda^2}\label{Veff10}\\ v_{eff}(0,\chi) &=& 2 f({\chi^2\over\Lambda^2}) + {1\over 16\pi\alpha_{3'}} {\chi^2\over\Lambda^2} ,\label{Veff01} \end{eqnarray} where the dimensionless effective potential $v_{eff}$ is defined by $V_{eff}\equiv \Lambda^4 v_{eff}$, $\alpha_{0} = G_0\Lambda^2/4\pi$ and $\alpha_{3'} = G_{3'}\Lambda^2/8\pi$, with \begin{eqnarray} f(x) &=& {1\over 8\pi^2}\left [-x + \ln\left (1+{1\over x}\right) x^2 - \ln(1+x) \right ]. \label{f-func} \end{eqnarray} The values of $\sigma^2$ and $\chi^2$ at the minima of Eqs. \ref{Veff10} and \ref{Veff01} determine the vacuum of the system in the one loop Hartree--Fock approximation for the fermions (i.e. Eq. \ref{1-loop-gamma} without the second bosonic one loop term). 
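The minimization of Eqs. \ref{Veff10} and \ref{Veff01} can be sketched as a crude numerical grid search (the grid range and step below are arbitrary illustrative choices). Since $f'(0)=-1/(4\pi^2)$ from Eq. \ref{f-func}, the slope of $v_{eff}$ at the origin changes sign when $3f'(0)+1/(16\pi\alpha_0)=0$, i.e. at $\alpha_0=\pi/12$, and likewise with $2f'(0)$ at $\alpha_{3'}=\pi/8$; the sketch confirms that a non-trivial minimum appears only above these couplings:

```python
import math

def f(x):
    """Eq. (f-func), with f(0) = 0 understood as the limiting value."""
    if x == 0.0:
        return 0.0
    return (-x + x * x * math.log(1.0 + 1.0 / x)
            - math.log(1.0 + x)) / (8.0 * math.pi ** 2)

def v_sigma(x, alpha0):
    """v_eff(sigma, 0) of Eq. (Veff10) as a function of x = sigma^2/Lambda^2."""
    return 3.0 * f(x) + x / (16.0 * math.pi * alpha0)

def v_chi(x, alpha3p):
    """v_eff(0, chi) of Eq. (Veff01) as a function of x = chi^2/Lambda^2."""
    return 2.0 * f(x) + x / (16.0 * math.pi * alpha3p)

def minimizer(v, xmax=1.0, steps=4000):
    """Location of the minimum of v on a uniform grid over [0, xmax]."""
    return min((xmax * k / steps for k in range(steps + 1)), key=v)

# Below the critical coupling the minimum sits at the origin (the O phase);
# above it a non-trivial minimum, a dynamically generated gap, appears.
assert minimizer(lambda x: v_sigma(x, 0.9 * math.pi / 12)) == 0.0
assert minimizer(lambda x: v_sigma(x, 1.2 * math.pi / 12)) > 0.0
assert minimizer(lambda x: v_chi(x, 0.9 * math.pi / 8)) == 0.0
assert minimizer(lambda x: v_chi(x, 1.2 * math.pi / 8)) > 0.0
```

For couplings just above the critical values the non-trivial minimum lies at small $x$, i.e. at a gap well below the cutoff $\Lambda$.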
The phase structure of the model is shown in the $\alpha_0$--$\alpha_{3'}$ plane in Fig. \ref{Fig:bound1}. Three kinds of phases for the vacuum are possible. The first phase, which is called the $O$ phase, is the bare vacuum. The second phase, or the $\alpha$ phase, has a non-vanishing vacuum expectation value of $\overline\Psi\Psi$; the chiral $SU(2)_L\times SU(2)_R$ symmetry is spontaneously broken down to a $SU(2)_V$ flavor symmetry. The third phase, called the {\em $\omega$ phase} of the vacuum, has non-vanishing diquark and antidiquark condensation characterized by a non-vanishing $\chi^2$; chiral symmetry is unbroken in this phase. The phase transitions across the boundary between the $O$ and the $\alpha$ phases ($\alpha_{0}=\pi/12$ and $\alpha_{3'}<\pi/8$) and the one between the $O$ and the $\omega$ phases ($\alpha_0<\pi/12$ and $\alpha_{3'} = \pi/8$) are of second order. The phase transition between the $\alpha$ and the $\omega$ phases ($\alpha_0>\pi/12$ and $\alpha_{3'} > \pi/8$) is of first order. Meissner effects for the electromagnetic field are expected in the $\omega$ phase. The basic physics involved is discussed in more detail in Refs. \cite{Ying1,Ying11,GDHp} for model II below. I shall relegate such a discussion for this model to other work. The Minkowski propagator for the quarks in the $\omega$ phase can be found by an inversion of the operator $S_F^{-1}[f]$ in the action, namely \begin{eqnarray} S_F &=& i\left [ i{\rlap\slash\partial}-\gamma^5 {\cal A}_c\chi^c O_{(+)}-\gamma^5 {\cal A}^c\overline\chi_c O_{(-)} \right ]^{-1}. 
\label{Prop-gamma} \end{eqnarray} In momentum space, it can be expressed explicitly as \begin{eqnarray} S_F(p) &=& \left \{\begin{array}{cc} \left ({\rlap\slash p}-\gamma^5 {\cal A}_c\chi^c O_{(+)}-\gamma^5 {\cal A}^c\overline\chi_c O_{(-)} \right ){\displaystyle i\over \displaystyle p^2-\chi^2} &\hspace{0.8cm} \mbox{For quark type I} \\ & \\ {\displaystyle i{\rlap\slash p}\over \displaystyle p^2} &\hspace{0.8cm}\mbox{For quark type II} \end{array} \right ., \label{Prop-gamma1} \end{eqnarray} where a quark of {``type I''} is one that has color different from $\chi^c$ (or $\overline\chi_c$) and a quark of {``type II''} is one that has the same color as $\overline\chi_c$ (or conjugate to that of $\chi^c$). It can be seen that a quark of type II has no gap for its excitation, while an excitation of a quark of type I has a finite gap $\sqrt {\chi^2}$. \subsection{Model II and the $\beta$ phase} The second model Lagrangian density is \begin{eqnarray} {{\cal L}}_2&=& {1\over 2}\overline\Psi \left [ i{\rlap\slash\partial} - \sigma - i\vec{\pi}\cdot\vec{\tau}\gamma^5 O_3 + \left (\phi^c_\mu\gamma^\mu\gamma^5 {\cal A}_c +\vec{\delta}_\mu^c\cdot\vec{\tau}\gamma^\mu{\cal A}_c \right ) O_{(+)} - \left (\overline\phi_{\mu c}\gamma^\mu\gamma^5{\cal A}^c +\vec{\overline\delta}_{\mu c}\cdot\vec{\tau}\gamma^\mu{\cal A}^c \right) O_{(-)}\right ]\Psi \nonumber \\ &&-{1\over 4\widetilde G_0}(\sigma^2 + \vec{\pi}^2) - {1\over 2\widetilde G_{3}} (\overline\phi_{\mu c}\phi^{\mu c} + \vec{\overline\delta}_{\mu c}\cdot \vec{\delta}^{\mu c}),\label{Model-L-2} \end{eqnarray} where $\sigma$, $\vec{\pi}$, $\phi^c_\mu$, $\overline\phi_{c\mu}$, $\vec{\delta}^c_\mu$ and $\vec{\overline\delta}_{c\mu}$ are auxiliary fields. It is symmetric under the chiral $SU(2)_L\times SU(2)_R$ group transformation. This model is discussed in detail in Refs. \cite{Ying1,Ying11}. It is written down here for reference in later sections. 
The Euclidean effective potential for this model, used for a determination of the vacuum phase structure in a Hartree--Fock approximation, is found to be \begin{eqnarray} V^{eff} &=& -{1\over 2}\int^\Lambda {d^4p\over (2\pi)^4}\left (8\sum_{i=1}^4 \ln{\lambda_i(p) \over\lambda^{(0)}_i(p)} + 4\sum_{i=1}^4 \ln{\lambda'_i(p)\over{\lambda'}_i^{(0)} (p)}\right ) + {1\over 4\widetilde G_0}\sigma^2 + {1\over 2\widetilde G_3}\overline\phi_\mu\phi^\mu\nonumber\\ &=& -4\int^\Lambda {d^4p\over (2\pi)^4} \ln\left [ (1 +{\sigma^2+\phi^2\over p^2})^2 + {\sigma^2\over p^2} (1+{\sigma^2\over p^2})(1+{\sigma^2-\phi^2\over p^2}) -4(1+{\sigma^2\over p^2}){(\phi\cdot p)^2\over p^4}\right ]\nonumber\\ && + {1\over 4 \widetilde G_0}\sigma^2 + {1\over 2\widetilde G_3}\phi^2.\label{EffV3} \end{eqnarray} A numerical evaluation of Eq.~\ref{EffV3} shows that the absolute minimum of $V^{eff}(\sigma^2,\phi^2)$ is located at either $\sigma^2\ne 0$ and $\phi^2 = 0$ or $\sigma^2 = 0$ and $\phi^2 \ne 0$ in the spontaneous symmetry breaking phases. The phase diagram for this model is presented in Fig. \ref{Fig:bound2}. Three kinds of phases for the vacuum are possible. The first phase, which is identical to the $O$ phase discussed above, is the bare vacuum. The second phase, which is the same as the $\alpha$ phase introduced above, has a non-vanishing vacuum expectation value of $\overline\Psi\Psi$; the chiral $SU(2)_L\times SU(2)_R$ symmetry is spontaneously broken down to a $SU(2)_V$ flavor symmetry in this phase. The third phase is labeled the {\em $\beta$ phase}. It has non-vanishing diquark and antidiquark condensation characterized by a non-vanishing $\phi^2$; the chiral symmetry is spontaneously broken down to a flavor symmetry, in the same way as in the $\alpha$ phase. The phase transition across the boundary between the $O$ and the $\alpha$ phases ($\alpha_0=\pi/12$ and $\alpha_3 < \pi/4$) is of second order. 
The phase transition across the boundary between the $\alpha$ and the $\beta$ phases ($\alpha_0>\pi/12$ and $\alpha_3>\pi/4$) is of first order. There is a second order phase transition across the boundary between the $O$ and the $\beta$ phases ($\alpha_0<\pi/12$ and $\alpha_3 = \pi/4$). In the $\beta$ phase, the propagators for the quarks are found \cite{Ying1} to be \begin{eqnarray} S_F(p) &=& \left \{ \begin{array}{cc} \left ( 1 - iO_2 {\displaystyle {\rlap\slash p}\over \displaystyle p^2}{{\rlap\slash\phi}}^c\gamma^5{\cal A}_c \right ) i {\displaystyle (p^2-{\phi}^2){\rlap\slash p} - 2 p\cdot{\phi}{{\rlap\slash\phi}} \over \displaystyle (p^2-\phi^2)^2 - 4(p\cdot\phi)^2} \hspace{0.8cm} &\mbox{For quark of type I}\\ &\\ i{\displaystyle {\rlap\slash p}\over \displaystyle p^2} \hspace{0.8cm} & \mbox{For quark of type II} \end{array} \right ., \label{SFp2} \end{eqnarray} where $\phi^2 =\overline\phi_{\mu c}\phi^{\mu c}= - \phi^c_\mu\phi^{\mu c}$. In order to simplify the computation, a special choice for the complex phases of the non-vanishing auxiliary fields $\phi_\mu^c$ and $\overline\phi_{c \mu}$ is made, namely, $\overline\phi_{c \mu}= -\phi_\mu^c$. \section{The Vacuum Structure of the $\alpha$, $\beta$ and $\omega$ Phases} \label{sec:Vacua} The possible phases of the two model Lagrangian densities were studied in the previous section using the conventional approach to the effective potential in the Hartree--Fock approximation. A more detailed characterization of these vacua and their properties is given in the following, using the general framework developed in section \ref{sec:FDth}. \subsection{The $\alpha$ phase} The effective potential for the $\alpha$ phase in the full theory given in section \ref{sec:FDth} can be obtained by computing the right hand side (r.h.s.) of Eq. \ref{Veff-a-FD} using the $p^0$ integration contour ${\cal C}$ in Fig. 
\ref{Fig:ConFDB0}, namely \begin{eqnarray} V_{eff}(\mu,\epsilon) &=& 6i\int_{\cal C} {d^4p\over (2\pi)^4} \ln\left ( 1- {\sigma^2\over p_+^2} \right ) \left ( 1- {\sigma^2\over p_-^2} \right ) + {1\over 4 G_0}\sigma^2 + {3\over 2\pi^2} \mu^4. \label{Veff-a-FDF} \end{eqnarray} The effective potential has only a trivial minimum located at $\mu=\epsilon=0$ when the integration contour is chosen to be the quasiparticle one shown in Fig. \ref{Fig:ConFDB1}. This contour, which selects only the quasiparticle contributions, violates the conservation of baryon number explicitly, as demonstrated in section \ref{sec:Instab}. It is necessary to turn the contour ${\cal C}$ into the complex plane to find the Euclidean stationary points of the field configurations in a way that preserves the causal structure of Fig. \ref{Fig:ConFDB0}. Such a contour is also displayed in Fig. \ref{Fig:ConFDB1}. A numerical study shows that the minima of $V_{eff}$ for a given non-zero value of $\sigma$ are located either at $\mu\ne 0$ and $\epsilon=0$ or at $\mu=0$ and $\epsilon\ne 0$. Fig. \ref{Fig:NJ-sec} shows $V_{eff}$ along three different directions in the $\mu$--$\epsilon$ plane. The absolute minimum of $V_{eff}$ is located at $\epsilon=\epsilon_{0}\ne 0$ and $\mu=0$. This is the result I regard as natural, since it avoids the unobserved CP violation associated with the $\alpha$ phase found in section \ref{sec:Models}. It also agrees with the physical picture for the $\alpha$ phase, which is condensed with correlating quark--antiquark pairs. With a non-vanishing $\epsilon$, let us first assess the nature of the primary statistical gauge field excitation. The effective action for the primary statistical gauge field ${\mu'}^\alpha$ at long distances is given by (see Eq. \ref{Seff-mu2}) \begin{eqnarray} S^{(\mu)}_{eff} = \int d^4 x \left [ -{Z^{(\mu)}\over 4} f_{\mu\nu} f^{\mu\nu} + \left ({6\over \pi^2} \epsilon^2 \right ) {\mu'}^2 \right ]. 
\label{Seff-mu4} \end{eqnarray} Since $\epsilon^2\ne 0$ in the $\alpha$ phase, the $\mu'_\alpha$ excitation is massive (or short ranged) in the static limit and is now stable against quantum fluctuations. This also agrees with observation, since no corresponding long range force or large CP violation is observed under present conditions. Due to the presence of a finite $\epsilon$, the response of the system to external (or internal) excitations is different from the familiar one we have learned from the conventional approaches. When Dirac's view of the bare vacuum of fermions is taken, namely, that the bare vacuum corresponds to a state in which the negative energy states are filled and the positive energy states empty, the vacuum where $\epsilon\ne 0$ is the state in which all single particle states with energy $E\le-\epsilon$ and $0\le E<\epsilon$ are filled whereas the other single particle states are empty. Since, for a uniform system, the value of the mass $\sigma$ for the quasiparticle is always larger than $\epsilon$ in the models considered, the presence of a finite $\epsilon$ in the vacuum of the $\alpha$ phase appears to have no effect if the quasiparticle can propagate long enough without suffering from further scattering. For a uniform system, the presence of $\epsilon$ only provides a virtual possibility that the local fluctuations of the fields can feel. Even so, it has genuine physical effects on local observables according to the discussion in Appendix \ref{app:thedark}. For a non-uniform system, in which the energy of the operator in Eq. \ref{EigenEq2} could be smaller than $\epsilon$, the presence of $\epsilon$ may have a real effect on the dynamical processes. For example, in a chiral soliton, in which the energy of the lowest orbits for the valence fermions moves with the size of the soliton, the presence of $\epsilon$ will limit the range over which the soliton's size can change, which gives such solitons an extra stability. 
To put the above argument in a more concrete context, let us consider a situation in which the lowest energy valence fermion state lies within the region $-\epsilon \le E < 0$ for one size and shape; a change in the size or shape that moves $E$ upward can then be continued freely only until $E=0$, since the $0\le E <\epsilon$ states are filled and the next available state is the one with energy $E=\epsilon$, which can only be reached by a discontinuous change in the size and shape of the soliton. If the nucleon can be regarded as a chiral soliton, this mechanism can prevent it from dissolving inside a nucleus if the lowest energy valence (constituent) quark states inside the nucleon lie between $(-\epsilon,0)$. Other implications of a non-vanishing $\epsilon$ are worth studying in future work; perhaps the most interesting ones concern particle production and dissipation processes in non-equilibrium situations like heavy ion collisions. \subsection{The $\beta$ and $\omega$ phases} According to the physical picture discussed so far, the baryon number content of the $\beta$ and $\omega$ phases ought to be different from that of the $\alpha$ phase due to the fact that, instead of quark and antiquark pairs, quark pairs and antiquark pairs (or diquarks and antidiquarks) are condensed in the vacuum. This is reflected in the fact that in these two vacua the expectation values of $\chi^c$ (together with its conjugate field $\overline\chi_c$) and $\phi^c_\mu$ (together with its conjugate field $\overline\phi_{c\mu}$) are non-vanishing. In the $\beta$ and $\omega$ phases, where $\sigma=0$, the effective potentials for models I and II are found, using Eq. 
\ref{Veff-FDB}, to be of the following forms \begin{eqnarray} V_{eff} &=& -{4\over (2\pi)^4}\int_{\cal C} d^4p \ln \left (1 + {\chi^4-2\chi^2 p_+\cdot p_-\over p^2_+ p^2_-} \right ) + {1\over 2 G_{3'}} \chi^2 \nonumber \\ & & \hspace{4.5cm} + {3\over 2\pi^2} \left (\mu^4 + 2\epsilon^4 + 12 \mu^2\epsilon^2 \right ) \hspace{2cm} \mbox{For Model I}\label{Veff-M1}\\ V_{eff} &=& -{4\over (2\pi)^4}\int_{\cal C}d^4p \ln \left (1 + {\phi^4-2\phi^2 p_+\cdot p_-\over p^2_+ p^2_-} - 4{(p\cdot\phi)^2\over p_+^2 p_-^2} \right ) + {1\over 2 G_{3}} \phi^2 \nonumber \\ & & \hspace{4.5cm} + {3\over 2\pi^2} \left (\mu^4 + 2\epsilon^4 + 12 \mu^2\epsilon^2 \right ) \hspace{2cm} \mbox{For Model II}\label{Veff-M2} \end{eqnarray} Numerical evaluations show that the local minima of the above effective potentials are located either at $\mu\ne 0$ and $\epsilon=0$ or at $\mu=0$ and $\epsilon\ne 0$. The corresponding $V_{eff}$ in the $\mu$--$\epsilon$ plane along three different directions are plotted in Figs. \ref{Fig:SPS-sec} and \ref{Fig:SPV-sec} respectively. The absolute minima of both effective potentials are located at $\mu \ne 0$ and $\epsilon=0$. That is, in the $\beta$ and $\omega$ phases, where quark pairs or diquarks condense, the vacua of the systems are ones with a finite density of baryons, with the baryon density given by Eq. \ref{bar-rho-mu}. Such a phase also spontaneously violates the CP invariance of the system's original Lagrangian density, due to the existence of a non-vanishing CP odd order parameter $\mu^\alpha$ in these phases. In addition, a pattern in which baryonic matter and antimatter are separated in space is energetically favored in the $\beta$ and $\omega$ phases of the vacuum. The superselection sector of the Hilbert space in which the physical states stay for such a CP violating phase can then be determined. Assuming that the statistical ``electric'' field $\pi_u$ is finite, Eq. 
\ref{Q-eigenstate} tells us that $\varsigma=\overline\rho$ as the spacetime position approaches infinity. Due to the translational invariance of the vacuum state, it is natural to require that $\varsigma=\overline\rho$ at all spacetime positions. In this way the physical states labeled by $\varsigma$ in the CP violating phase of the system are determined. One of the interesting properties of the $\beta$ and $\omega$ phases is that they are expected to have off diagonal long range order (ODLRO) \cite{ODLO} due to the spontaneous breakdown of the $U(1)$ symmetry corresponding to baryon number conservation. Macroscopic quantum phenomena manifest themselves in phases possessing ODLRO\footnote{ODLRO is absent in the normal phases of matter, where the classical picture is supposed to emerge for macroscopic systems due to decoherence, characterized by a diagonalization of the effective density matrix of the system of interest during the time evolution (see, for example, Refs. \cite{Decoh}).}. The quantum nature of these phases leads to behaviors of the system not expected from our daily experience. Some of these behaviors are observed in the superfluid state of $^4$He. Could it allow a quantum mechanical jump (transition) from an initially zero baryon density $\beta$ or $\omega$ phase of the vacuum state to the lowest energy state of the system, in which relatively large regions form, some containing baryonic matter and others antibaryonic matter? Such a transition is forbidden in the classical picture since it violates relativistic causality. It is allowed in quantum measurement processes according to the standard interpretation of quantum mechanics, in which a collapse of the wave function of the system occurs after a measurement. It remains to be understood in future, more detailed research.
If it is permitted, then each of these regions can have a finite size at the moment of the transition or, in other words, a size larger than its event horizon. This property of the $\beta$ and $\omega$ phases could provide a mechanism for baryogenesis in a matter--antimatter symmetric early universe \cite{Omnes}. It was shown to be impossible for such a baryogenesis mechanism to be compatible with observations if the process of baryon--antibaryon separation is classical \cite{Steigman}. The discovery of the possible $\beta$ and $\omega$ phases for the strong interaction vacuum in this study might provide a theoretical basis for a reconsideration of the idea of a matter--antimatter symmetric universe. Other, more detailed consequences of this picture are beyond the scope of this paper and worth exploring. The ${\mu'}_0$ degree of freedom is non-propagating in the $\beta$ and $\omega$ phases because a non-vanishing (uniform) baryon number density is present in these two phases. According to Eq. \ref{Seff-mu3} \begin{eqnarray} S^{(\mu)}_{eff} &=& {i\over 2}\int d^4 x \left [ (\overline\rho^2) {\mu'}_0^2 + \ldots\right ]. \label{Seff-mu5} \end{eqnarray} It shows that the ${\mu'}_0$ fluctuation is damped by a non-oscillating Gaussian factor whose width is proportional to the inverse of $\overline\rho^2$, the square of the baryon number density in these phases. The spatial component of $\mu^\alpha$ in the $\beta$ and $\omega$ phases is however long ranged. This can also be realized through an inspection of Eq. \ref{Seff-mu3}. The question of whether or not there is a condensation of statistical monopoles in these two phases, which is discussed in general terms in subsection \ref{subsec:more}, should be studied in future, more detailed works.
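To see the non-oscillating nature of this damping explicitly (a one-line sketch, keeping only the quadratic term of Eq. \ref{Seff-mu5} in the path-integral weight), one has \begin{displaymath} e^{iS^{(\mu)}_{eff}} \sim \exp \left ( -{1\over 2}\, \overline\rho^2 \int d^4 x \, {\mu'}_0^2 \right ), \end{displaymath} so the ${\mu'}_0$ fluctuation is suppressed by a real Gaussian factor whose width in field space is set by $1/\overline\rho^2$.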
\subsection{The excitations of the primary statistical gauge field} \label{subsec:abslro} In the above discussion it can be seen that in the $\alpha$, $\beta$ and $\omega$ phases, either $\epsilon$ or $\mu$ (which is equivalent to $\bra{0} j \ket{0}$) is non-vanishing. From the discussion presented in subsection \ref{subsec:more}, it can be concluded that there is no long range order for the time component of the primary statistical gauge field $\mu^\alpha$; that is, $\mu^0$ corresponds to at most a massive excitation in the non-trivial vacua discussed in this paper. The situation for the spatial component of $\mu^\alpha$ differs between the $\alpha$ phase and the $\beta$ or $\omega$ phases. In the $\alpha$ phase, the excitation related to $\mu^\alpha$ is short ranged. In the $\beta$ and $\omega$ phases, the excitations related to \mbox{\boldmath{$\mu$}} are long ranged. These excitations can therefore generate a statistical ``magnetic force'' between different particles within the $\beta$ or $\omega$ phases of the vacuum. The consequences of such a statistical ``magnetic force'' for the evolution of the system are worth studying. \section{Discussion and Outlook} \label{sec:Summ} It is found that at least three intertwining new theoretical elements need to be brought into a consistent treatment of the problem. The first one is the general existence of the so called {\em dark component} in an interacting system, originating from the transient and short distance quantum fluctuations of the system; it is measured by the difference between the absolute value and the apparent value of conserved quantities of the system, like the baryon number density, energy density, etc. The second one is related to the recognition of the existence of the so called {\em blocking effects} in the non-trivial phases of a system.
The third one is related to the necessity of introducing a {\em primary statistical gauge field} coupled to the fermion (baryon) number current density of the system. By introducing these three elements into the formulation of the problem, the door to going beyond the physical pictures limited by the approximate concept of quasiparticles is opened, which allows us to explore new physical possibilities. A systematic path integration formalism for the investigation of the quantum aspects of an interacting fermion system (or sector), sampled by Euclidean spacetime stationary configurations in which a condensation of pairs (fermion pairs, antifermion pairs, and fermion--antifermion pairs) is present, is developed based upon the asymptotic grand canonical ensemble. Two statistical parameters, namely the primary statistical gauge field $\mu^\alpha$ and the statistical blocking parameter $\epsilon$, are introduced to allow a finer characterization of the vacuum structure of the system. In addition, it is shown that the asymptotic grand canonical ensemble reduces to the grand canonical ensemble as the spacetime resolution of observation is sufficiently lowered. Such a behavior is a necessary condition for the usefulness of describing the macroscopic properties of an interacting system in terms of particles in certain domains of energy, and for the smooth approach to the well established results for non-relativistic condensed matter systems at low energies. Combined with the Euclidean approach to the effective action, some of the quantum effects that survive the thermodynamic limit can be included. The present approach, which uses quantum field theoretical language, is consistent with thermodynamics and can be extended to the finite temperature case \cite{TFTQFT}.
Firstly, the dark component for local observables like the fermion number density does exist in interacting theories, in which the direct association of the field theoretic definition of fermion number density with the number of ``free particles'' per unit volume becomes obscure, especially when a phase transition inside the system has occurred. This conclusion is also applicable to other local observables like the energy density, which may have implications for the dark matter problem in cosmology, since it implies that apparently matterless space at the macroscopic level is capable of revealing matter effects in low energy gravitational processes when $\mu$ is below the baryonic particle production threshold, even after the energy density of the $\mu=0$ state is subtracted. This is because gravitational fields couple {\em locally} to the source matter fields, which contain the dark component generated by random quantum fluctuations. Further research in that direction, in the context of understanding cosmological baryogenesis and the dark matter problem, is worth exploring. In addition, the picture that a nucleon is made of three valence quarks (quasiparticles) needs to be modified when there are additional close-by virtual phases of the hadronic vacuum state with slightly higher energy density than the actual one. The implications of such a finding can be explored in observables related to a nucleon. Some of them are studied in Refs. \cite{PCACsuc,GDHp}; other related problems concerning a nucleon include the understanding of the origin of the Gottfried sum rule violation in deep inelastic scattering, confirmed in a recent measurement \cite{GFsum}, and the small-$x$ behavior of nucleon structure functions in deep inelastic scattering \cite{F2-pap,g1-pap}.
Secondly, it is important to take into account the fermionic blocking effects due to the presence of a macroscopic population of bare particles in the non-trivial vacuum phases of a system. The blocking effects are included in the theory by introducing the statistical blocking parameter $\epsilon$, which is non-zero for certain vacuum phases of a system. The effects of the blocking cannot be generated progressively using a perturbative expansion starting from a field configuration with $\epsilon=0$. In the $\alpha$ phase of the strong interaction vacuum discussed here, $\epsilon=0$ configurations are inconsistent ones, since it is known that the $\alpha$ phase of the strong interaction vacuum is macroscopically populated with current quarks and antiquarks, which changes the available states for a current quark due to the Pauli principle. The discovery of the blocking effects has hitherto unnoticed implications related to, e.g., the stability of a nucleon in a nucleus and in nuclear matter, the mechanism for particle production in a heavy ion collision, new ways of (quasi)particle dissipation in a strongly interacting system, etc. Thirdly, the statistical gauge degrees of freedom of the system represent certain collective modes of the system that have a dynamics of their own. There are two components of the primary statistical gauge field: the first one is the classical configuration, which serves as a background field; the second one is the local fluctuations around the classical configuration. The classical configuration, which is a spacetime independent background $\mu^\alpha$ in the case studied here, plays the role of the chemical potential in the conventional non-relativistic approach. It determines the asymptotic grand canonical ensemble of the system. It can also have non-trivial topological configurations, corresponding to different quantized statistical ``magnetic fluxes'', which can determine the phase structure of the system on a finer basis.
Once present, the statistical ``magnetic field'' affects the dynamical evolution of the system, which can result in 1) material pattern formation and 2) providing the seed for the galactic magnetic field in the early universe. Whether or not such an idea is actually relevant to understanding what happened in the early universe can be studied in further works. The local fluctuations around the extended background configuration represent the corresponding dynamical excitations of the system. The necessary condition for the existence of long range statistical gauge correlation in various possible phases of the system is discussed. Statistical gauge degrees of freedom are also introduced in condensed matter physics in the context of the half-filled Hubbard model, which serves as one of the preferred models expected to describe the phenomenon of ``high temperature superconductivity'' in certain materials \cite{Affleck,Kotliar}. The motivation for introducing the statistical gauge degrees of freedom there is quite different from that in this work. Here, the statistical gauge degrees of freedom are introduced {\em a priori}, based upon locality and Lorentz invariance, with the intention of describing relativistic fermionic systems. Since the approach in this paper is applicable to all fermionic systems, it is quite interesting to see whether the non-relativistic reduction of the problem can lead to some form of statistical gauge degrees of freedom for condensed matter systems at low energies, including the ones that the Hubbard model describes. Nevertheless, many techniques for treating the statistical gauge degrees of freedom in condensed matter physics are expected to be applicable, or at least adaptable, here. One of the differences at the formal level between the approach here and the ones used in condensed matter physics manifests itself in the different criterion for the selection of physical states within the full representing Hilbert space of the problem.
The physical states in the statistical gauge theories developed so far in condensed matter physics are invariant under infinitesimal local gauge transformations, which is realized by the requirement that the operator form of the ``Gauss law'' annihilates all physical states in the superselection sector of the Hilbert space \cite{Fradkin}. Such a strict enforcement of the statistical gauge invariance on the physical state vectors of the system is neither necessary nor desirable for statistical gauge invariant systems, since it would exclude all finite density states from the physical sector of the system if one requires that the statistical ``electric field'' (denoted by $\pi_u$ here) be finite. Such a situation is certainly unacceptable. Although this embarrassment can be circumvented in 2+1 dimensions \cite{Fradkin}, the same is not expected to be easily achieved in higher dimensions. For the statistical gauge transformations, the statistical gauge invariance can be implemented by imposing less restrictive conditions on the physical superselection sector of the Hilbert space. Instead of requiring that the physical states be invariant under the gauge transformation, the gauge invariance of observables can be implemented by requiring that all states in a physical superselection sector of the Hilbert space change by a (coordinate dependent) {\em common phase}. The mathematical form of such a requirement is represented by Eq. \ref{Q-eigenstate}. The physical superselection sectors are then determined by the common function $\varsigma$. With such a generalization of the ``Gauss law'' for the theory, both the requirement of the finiteness of the statistical ``electric field'' $\pi_u$ and the fact that finite density states are actually physical states can be met consistently. The formalism is then applied to two half bosonized model Lagrangian densities.
Four possible phases for the vacuum state of the interacting relativistic chiral symmetric systems are found. The first phase, called the $O$ phase, corresponds to the bare vacuum state of the system. Fermion--antifermion pairs condense in the second phase of its vacuum, named the $\alpha$ phase. The $\alpha$ phase has the following properties: 1) the chiral $SU(2)_L\times SU(2)_R$ symmetry of the Lagrangian density of the system is spontaneously broken down to an $SU(2)_V$ symmetry; 2) the baryon number density is zero; 3) statistical blocking effects exist; 4) the statistical gauge correlation is short ranged due to the presence of the statistical blocking effects. The third and fourth possible phases of the vacuum are called the $\omega$ phase and the $\beta$ phase respectively. Fermion pairs and antifermion pairs condense in the $\omega$ and $\beta$ phases. It is found that in these two phases of the vacuum: 1) the original chiral $SU(2)_L\times SU(2)_R$ symmetry of the Lagrangian density of the system remains unbroken in the $\omega$ phase and is spontaneously broken down to an $SU(2)_V$ symmetry in the $\beta$ phase; 2) the baryon number density is different from zero, or can be locally generated by spontaneously separating fermion rich and antifermion rich regions; 3) the $U(1)$ symmetry corresponding to electromagnetism is spontaneously broken down to generate ``massive photon'' excitations (see \cite{Ying11}); 4) there are no statistical blocking effects in these two phases; 5) the spatial components of the statistical gauge excitation are long ranged, while the quantum fluctuation in the time component of the primary statistical gauge field is Gaussian damped; 6) off diagonal long range order exists in these two phases, giving rise to macroscopic quantum behavior of the system, which is suppressed in the normal phase of the system.
The implications of the findings presented in this paper for physical processes of strong interaction phenomena that are currently being observed, are going to be observed, or are in need of theoretical explanation remain to be investigated in the future. \vspace{0.8cm} \noindent {\bf Acknowledgment} This work is supported in part by grants from the Post Doctoral Fund of China, the Young Researcher Fund of Fudan University and a Research Fund from the State Education Commission of China.
\section{Introduction} A dichotomy for the Hardy--Littlewood maximal operators was noticed for the first time by Bennett, DeVore and Sharpley in the context of the space of functions of bounded mean oscillation. In \cite{BDVS} the authors discovered the principle that for any function $f \in BMO(\mathbb{R}^d)$, $d \geq 1$, its maximal function $Mf$ either is finite almost everywhere or equals $+\infty$ on the whole $\mathbb{R}^d$. Later on, however, it turned out that this property is not directly related to the $BMO$ concept. Fiorenza and Krbec \cite{FK} proved that for any $f \in L^1_{loc}(\mathbb{R}^d)$ the following holds: if $Mf(x_0) < \infty$ for some $x_0 \in \mathbb{R}^d$, then $Mf(x)$ is finite almost everywhere. In turn, in \cite{AK} Aalto and Kinnunen have shown in a very elegant way that this implication remains true if one replaces the Euclidean space by any metric measure space with a doubling measure. Finally, some negative results in similar contexts also appeared in the literature. For example, in \cite{LSW} C.-C. Lin, Stempak and Y.-S. Wang observed that such a principle does not take place for local maximal operators. The aim of this article is to shed more light on the above-mentioned issue by examining the occurrence of the dichotomy property for the two most common maximal operators of Hardy--Littlewood type, non-centered $M$ and centered $M^c$, associated with metric measure spaces for which the doubling condition fails to hold. By a metric measure space $\mathbb{X}$ we mean a triple $(X, \rho, \mu)$, where $X$ is a set, $\rho$ is a metric on $X$ and $\mu$ is a non-negative Borel measure. Throughout the paper we will additionally assume (without any further mention) that $\mu$ is such that $0 < \mu(B) < \infty$ holds for each open ball $B$ determined by $\rho$.
In this context we introduce the $\textit{Hardy--Littlewood}$ $\textit{maximal operators}$, non-centered $M$ and centered $M^c$, by \begin{displaymath} Mf(x) = \sup_{B \ni x} \frac{1}{\mu(B)} \int_B |f| \, d \mu , \qquad x \in X, \end{displaymath} and \begin{displaymath} M^cf(x) = \sup_{r > 0} \frac{1}{\mu(B_r(x))} \int_{B_r(x)} |f| \, d\mu, \qquad x \in X, \end{displaymath} respectively. Here by $B$ we mean any open ball in $(X,\rho)$, while $B_r(x)$ stands for the open ball centered at $x \in X$ with radius $r>0$. We also require the function $f$ used above to belong to the space $L^1_{loc}(\mu)$ which means that $\int_B |f| \, d \mu < \infty$ for any ball $B \subset X$. We say that $M$ possesses the $\textit{dichotomy property}$ if for any $f \in L^1_{loc}(\mu)$ exactly one of the following cases holds: either $\mu(E_\infty(f)) = 0$ or $E_\infty(f) = X$, where $E_\infty(f) = \{x \in X \colon Mf(x) = \infty\}$. Similarly, $M^c$ possesses the dichotomy property if for any $f \in L^1_{loc}(\mu)$ we have either $\mu(E^c_\infty(f)) = 0$ or $E^c_\infty(f) = X$, where $E^c_\infty(f) = \{x \in X \colon M^cf(x) = \infty\}$. Notice that, equivalently, the dichotomy property can be formulated in the following way: if $Mf(x_0) < \infty$ (respectively, $M^cf(x_0) < \infty$) for some $f \in L^1_{loc}(\mu)$ and $x_0 \in X$, then $Mf$ (respectively, $M^cf$) is finite $\mu$-almost everywhere. Observe that for any $f \in L^1_{loc}(\mu)$ we have $E^c_\infty(f) \subset E_\infty(f)$. Moreover, if the space is doubling (which means that $\mu(B_{2r}(x)) \lesssim \mu(B_r(x))$ holds uniformly in $x \in X$ and $r > 0$), then $E^c_\infty(f) = E_\infty(f)$. Nevertheless, at first glance, there is no clear reason why the two properties mentioned in the previous paragraph would be somehow interdependent in general, since $Mf$ and $M^cf$ may be incomparable if $(X, \rho, \mu)$ is not doubling. 
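As a toy illustration (ours, not part of the paper's argument), both suprema can be computed exactly on a finite weighted subset of $\mathbb{Z}$, where every ball is a discrete interval; the weight and the function below are arbitrary choices made for the sketch:

```python
# Toy sketch (illustrative only): discrete analogues of the non-centered (M)
# and centered (M^c) Hardy-Littlewood maximal averages on a finite weighted
# subset of the integers with the Euclidean metric.

def maximal_averages(points, weight, f):
    """Return (Mf, Mcf) as dicts point -> value."""
    pts = sorted(points)
    Mf, Mcf = {}, {}
    # Non-centered: supremum over all discrete intervals containing x.
    for x in pts:
        best = 0.0
        for i in range(len(pts)):
            for j in range(i, len(pts)):
                if pts[i] <= x <= pts[j]:
                    seg = pts[i:j + 1]
                    mass = sum(weight(p) for p in seg)
                    avg = sum(abs(f(p)) * weight(p) for p in seg) / mass
                    best = max(best, avg)
        Mf[x] = best
    # Centered: balls B_r(x) = {y : |y - x| < r}; on a finite point set only
    # finitely many radii give distinct balls.
    for x in pts:
        best = 0.0
        for r in [abs(p - x) + 1 for p in pts]:
            ball = [p for p in pts if abs(p - x) < r]
            mass = sum(weight(p) for p in ball)
            best = max(best, sum(abs(f(p)) * weight(p) for p in ball) / mass)
        Mcf[x] = best
    return Mf, Mcf

points = range(-5, 6)
Mf, Mcf = maximal_averages(points, weight=lambda p: 1.0, f=lambda p: p)
# Centered balls form a subfamily of all balls, so M^c f <= M f pointwise.
assert all(Mcf[x] <= Mf[x] + 1e-12 for x in points)
```

On a finite set both maximal functions are finite everywhere, so the sketch can only illustrate the pointwise inequality $M^cf \le Mf$, which is what lies behind the inclusion $E^c_\infty(f) \subset E_\infty(f)$ noted above.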
In other words, we have no obvious indications at this point that the existence or absence of the dichotomy property for one operator implies its existence or absence for the other one. Therefore, natural problems arise: ``can each of the four possibilities actually take place for some metric measure space?" and ``can we additionally demand that this space be non-doubling?". Thus, one of the two major results in this article is to prove the following theorem that gives affirmative answers to these two questions. \begin{theorem} For each of the four possibilities regarding whether $M$ and $M^c$ possess the dichotomy property or not, there exists a non-doubling metric measure space for which the associated maximal operators behave just the way we demand. \end{theorem} \begin{proof} Examples 1, 2, 3 and 4 in Sections 2 and 3 together constitute the proof of this theorem, illustrating all the desired situations. \end{proof} It is worth noting at this point that, in addition to indicating appropriate examples, our goal is also to ensure that they are constructed as simply as possible. Thus, in all examples presented later on $X$ is either $\mathbb{R}^d$ or $\mathbb{Z}^d$, $d \geq 1$, while $\rho$ is the standard Euclidean metric $d_e$ or the supremum metric $d_\infty$. Finally, in the discrete setting $\mu$ is defined by assigning a value $\mu(\{x\}) > 0$ to each point $x \in X$, while in the continuous situation $\mu$ is determined by a suitable strictly positive weight $w$. For the convenience of the reader, the results obtained in Examples 1, 2, 3 and 4 have been summarized in Table 1 below. \begin {table}[H] \vspace*{0.3cm} \caption {Occurrence of the dichotomy property (DP) for $M$ and $M^c$ associated with spaces described in Examples 1, 2, 3 and 4.} \label{T} \begin{center} \begin{tabular}{ | c | C{0.7cm} | C{0.7cm} | c | c | c |} \hline & $X$ & $\rho$ & $\mu$ & DP for $M$ & DP for $M^c$ \\ \hline Ex.
$1$ & $\mathbb{R}$ & $d_e$ & $e^{x^2} dx$ & \cmark & \xmark \\ \hline Ex. $2$ & $\mathbb{R}$ & $d_e$ & $e^{-x^2} dx$ & \cmark & \cmark \\ \hline Ex. $3$ & $\mathbb{Z}^2$ & $d_\infty$ & $\mu(n,m) = \left\{ \begin{array}{rl} 4^{|m|} & \textrm{if } n = 0, \\ 1 & \textrm{otherwise. } \end{array} \right.$ & \xmark & \cmark \\ \hline Ex. $4$ & $\mathbb{Z}^2$ & $d_\infty$ & $\mu(n,m) = \left\{ \begin{array}{rl} 4^{|m|} & \textrm{if } n = 0, \\ 2^{n^2} & \textrm{if } n < 0 \textrm{ and } m = 0, \\ 1 & \textrm{otherwise. } \end{array} \right. $ & \xmark & \xmark \\ \hline \end{tabular} \end{center} \end{table} One more comment is in order here. While the doubling condition for measures is often assumed in the literature to provide that most of the classical theory works, some statements can be verified under the less strict condition that the space is geometrically doubling or satisfies both geometric doubling and upper doubling properties (see \cite{H} for the details). In our case, although the metric measure spaces appearing in Table 1 are non-doubling, the corresponding metric spaces are geometrically doubling. This means that the general result for the class of doubling spaces, concerning the existence of the dichotomy property for maximal operators, cannot be repeated in the context of geometrically doubling spaces. Finally, Example 5 in Section 4 illustrates the situation where the space is geometrically doubling and upper doubling at the same time, while the associated operator $M$ does not possess the dichotomy property. \section{Real line case} In this section we study the dichotomy property for the Hardy--Littlewood maximal operators $M$ and $M^c$ associated with the space $(\mathbb{R}, d_e, \mu)$, where $\mu$ is arbitrary. 
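The non-doubling claims recorded in Table 1 are easy to verify numerically; as an illustrative aside (ours, not the authors'), the following sketch checks that the measure of Example 3 on $\mathbb{Z}^2$ admits no doubling constant at the origin:

```python
# Illustrative check (not from the paper): the Example 3 measure on Z^2 with
# the supremum metric d_infty is non-doubling, because the doubling ratio
# mu(B_{2r}(0,0)) / mu(B_r(0,0)) is unbounded as r grows.

def mu(n, m):
    # the weight of Example 3: 4^|m| on the column n = 0, and 1 elsewhere
    return 4 ** abs(m) if n == 0 else 1

def ball_mass(cx, cy, r):
    # mass of the open d_infty ball of integer radius r centered at (cx, cy)
    return sum(mu(cx + n, cy + m)
               for n in range(-r + 1, r)
               for m in range(-r + 1, r))

ratios = [ball_mass(0, 0, 2 * r) / ball_mass(0, 0, r) for r in (2, 4, 8, 16)]
# The ratios grow roughly like 4^r, so no doubling constant can exist.
assert ratios[0] < ratios[1] < ratios[2] < ratios[3]
```

Changing `mu` to the weight of Example 4 gives the analogous check for that row of the table.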
Let us note here that we consider one-dimensional spaces separately, since they have some specific properties, mainly due to their linear order (for example, in this case $M$ always satisfies the weak type $(1,1)$ inequality with constant $2$). Our first task is to prove the following. \begin{proposition} Consider the space $(\mathbb{R}, d_e, \mu)$, where $\mu$ is an arbitrary Borel measure. Then $M$ possesses the dichotomy property. \end{proposition} \noindent The proof of Proposition 1 is preceded by some additional considerations. Let $r(B)$ be the radius of a given ball $B$. For $f \in L^1_{loc}(\mu)$ we denote \begin{displaymath} L_f = L_f(\mu) = \Big\{ x \in \mathbb{R} \colon \lim_{r \rightarrow 0} \sup_{B \ni x \colon r(B)=r} \frac{1}{\mu(B)} \int_{B} | f(y) - f(x) | \, d\mu(y) = 0 \Big\}, \end{displaymath} and \begin{displaymath} L^c_f = L^c_f(\mu) = \Big\{ x \in \mathbb{R} \colon \lim_{r \rightarrow 0} \frac{1}{\mu(B_{r}(x))} \int_{B_{r}(x)} | f(y) - f(x) | \, d\mu(y) = 0 \Big\}. \end{displaymath} Notice that there is a small nuisance here, because $f$ is actually an equivalence class of functions, while $L_f $ and $L^c_f$ clearly depend on the choice of its representative. Nevertheless, for any two representatives $f_1$ and $f_2$ of a fixed equivalence class we have $\mu(L_{f_1} \triangle L_{f_2})=0$ and $\mu(L^c_{f_1} \triangle L^c_{f_2})=0$ (where $\triangle$ denotes the symmetric difference of two sets) and this circumstance is sufficient for our purposes. The conclusion of the following lemma is a simple modification of the well known fact about the set of Lebesgue points of a given function. Although the proof is rather standard, we present it for completeness (cf. \cite[Theorem 3.20]{Fo}). \begin{lemma} Consider the space $(\mathbb{R}, d_e, \mu)$ and let $f \in L^1_{loc}(\mu)$. Then $\mu(\mathbb{R} \setminus L_f) = 0$. 
\end{lemma} \begin{proof} For a function $g \in L^1_{loc}(\mu)$ let us introduce the sets $L_{g,N}$, $N \in \mathbb{N}$, defined by \begin{displaymath} L_{g,N} = \Big\{ x \in \mathbb{R} \colon \limsup_{r \rightarrow 0} \sup_{B \ni x \colon r(B)=r} \frac{1}{\mu(B)} \int_{B} | g(y) - g(x) | \, d\mu(y) \leq \frac{1}{N} \Big\}. \end{displaymath} Note that $L_f = \bigcap_{N = 1}^{\infty} L_{f,N}$. Therefore, it suffices to prove that for each $N \in \mathbb{N}$ there exists a Borel set $A_N$ such that $(-N, N) \setminus L_{f,N} \subset A_N$ and $\mu(A_N) \leq 1/N$. Fix $N$ and consider $f_N = f \cdot \chi_{(-N-1, N+1)}$. Thus $f_N \in L^1(\mu)$ and $L_{f_N,N}$ coincides with $L_{f,N}$ on $(-N, N)$. We take a continuous function $g_N$ satisfying $\|f_N - g_N \|_{L^1(\mu)} \leq 1 / (9N^2)$ (notice that continuous functions are dense in $L^1(\mu)$ by \cite[Proposition 7.9]{Fo}) and define two auxiliary sets \begin{displaymath} E_N^1 = \{x \in \mathbb{R} \colon |(f_N - g_N)(x)| > \frac{1}{3N} \}, \qquad E_N^2 = \{x \in \mathbb{R} \colon M(f_N - g_N)(x) > \frac{1}{3N} \}. \end{displaymath} Observe that $\mu(E^1_N) \leq 1 / (3N)$ and $\mu(E^2_N) \leq 2 / (3N)$. Now we fix $x_0 \in (-N, N) \setminus (E_N^1 \cup E_N^2)$ and take $0 < \epsilon <1$ such that $|g_N(y) - g_N(x_0)| \leq 1 / (3N)$ for $|y - x_0| < \epsilon$. If $B$ contains $x_0$ and satisfies $r(B)< \epsilon/2$, then by using the estimate \begin{displaymath} | f(y) - f(x_0) | \leq | f_N(y) - g_N(y) | + |g_N(y) - g_N(x_0)| + |g_N(x_0) - f_N(x_0)|, \end{displaymath} which is valid for all $y \in B$, we obtain \begin{displaymath} \frac{1}{\mu(B)} \int_{B} | f(y) - f(x_0) | \, d\mu(y) \leq M(f_N - g_N)(x_0) + \frac{1}{3N} + |f_N(x_0) - g_N(x_0)| \leq \frac{1}{N}, \end{displaymath} and therefore $A_N = E_N^1 \cup E_N^2$ satisfies the desired conditions. \end{proof} \begin{remark*} Of course, the definitions of $L_f$ and $L_f^c$ can also be adapted to the situation of an arbitrary metric measure space $(X, \rho, \mu)$.
In this case we have $\mu(X \setminus L_f) = 0$ (respectively, $\mu(X \setminus L_f^c) = 0$) for a given function $f \in L^1_{loc}(\mu)$ provided that the associated maximal operator $M$ (respectively, $M^c$) is of weak type $(1,1)$ and continuous functions are dense in $L^1(\mu)$. This is the case, for example, when dealing with $L_f^c$ and the space $(\mathbb{R}^d, \rho, \mu)$, $d \geq 1$, where $\rho$ is the metric induced by a fixed norm (in particular, $\rho = d_e$ and $\rho = d_\infty$ are included) and $\mu$ is arbitrary. We explain some details more precisely in Section 4. \end{remark*} Now we are ready to prove Proposition 1. \begin{proof} Assume that $\mu(E_\infty(f)) > 0$. Then we can take $x \in L_f$ such that $Mf(x) = \infty$. There exist balls $B_n$, $n \in \mathbb{N}$, containing $x$ and satisfying \begin{displaymath} \frac{1}{\mu(B_n)} \int_{B_n} |f(y)| \, d\mu(y) > n. \end{displaymath} Fix $\epsilon > 0$ such that \begin{displaymath} \frac{1}{\mu(B)} \int_{B} | f(y) - f(x) | \, d\mu(y) < 1, \end{displaymath} if $r(B) \leq \epsilon$, and denote $\delta = \min\{ \mu((x-\epsilon/2, x]), \mu([x, x + \epsilon/2))\}$. We obtain that $B_n \not\subset (x-\epsilon/2, x + \epsilon/2)$ if $n \geq |f(x)| + 1$ and, as a result, $\mu(B_n) \geq \delta$ for such $n$. Now let us fix an arbitrary point $x' > x$ (the case $x' < x$ can be considered analogously). We denote $\gamma = \mu((x, x'+1)) < \infty$ and $B_n' = B_n \cup (x, x'+1)$, $n \in \mathbb{N}$. Observe that if $n \geq |f(x)| + 1$, then the set $B_n'$ forms a ball containing $x'$ and therefore \begin{displaymath} Mf(x') \geq \frac{1}{\mu(B_n')} \int_{B_n'} |f(y)| \, d\mu(y) \geq \frac{\mu(B_n)}{\mu(B_n')} \, \frac{1}{\mu(B_n)} \int_{B_n} |f(y)| \, d\mu(y) \geq \frac{\delta n}{\delta + \gamma}. \end{displaymath} This, in turn, implies $Mf(x') = \infty$, since $n$ can be arbitrarily large.
\end{proof} At the end of this section we show an example of a space $(\mathbb{R}, d_e, w(x) dx)$, where $w$ is a suitable weight (and $w(x) dx$ is non-doubling), for which the centered Hardy--Littlewood maximal operator does not possess the dichotomy property. \begin{example} Consider the space $(\mathbb{R}, d_e, \mu)$ with $d \mu = e^{x^2} dx$. Then $M$ possesses the dichotomy property, while $M^c$ does not. \end{example} Indeed, it suffices to prove only the second part, since $M$ possesses the dichotomy property by Proposition 1. Consider $f(x) = x \cdot \chi_{(0, \infty)} (x)$. We shall show that $M^cf(x) = \infty$ if and only if $x \geq 0$. For $x \in \mathbb{R}$ and $r>0$ let us introduce the quantity \begin{displaymath} A_rf(x) = \frac{1}{\mu(B_r(x))} \int_{B_r(x)} |f(y)| \, e^{y^2} \, dy. \end{displaymath} At first, observe that $\lim_{r \rightarrow \infty} A_rf(0) = \infty$. Indeed, fix $N \in \mathbb{N}$ and take $r_0 > N$ such that \begin{displaymath} \int_{(N, r)} e^{x^2} \, dx \geq \frac{1}{3} \int_{(-r, r)} e^{x^2} \, dx, \end{displaymath} for each $r \geq r_0$. Therefore, for such $r$, we obtain \begin{displaymath} A_rf(0) = \frac{1}{\mu(B_r(0))} \int_{B_r(0)} f(x) \, e^{x^2} \, dx \geq \frac{N}{\mu(B_r(0))} \int_{(N, r)} e^{x^2} \, dx \geq \frac{N}{3}, \end{displaymath} and thus $M^cf(0) = \infty$. Next, it is easy to see that for any $x > 0$ we have $A_rf(x) \geq A_{r+x}f(0)$ for $r \geq x$. This fact, in turn, gives $M^cf(x) = \infty$ for any $x \geq 0$. Now we show that $M^cf(x) < \infty$ if $x$ is strictly negative. Fix $x < 0$ and $r > 0$. We can assume that $r > |x|$, since for the smaller values of $r$ we have $A_rf(x) = 0$. Observe that it is possible to choose $r_0 > |x|$ such that for each $r \geq r_0$ \begin{displaymath} e^{(x+r)^2} \leq 2 \, |x| \, e^{r^2}. \end{displaymath} If $r < r_0$, then $A_rf(x) \leq f(x + r_0)$.
On the other hand, if $r \geq r_0$, then \begin{displaymath} A_rf(x) \leq \frac{1}{\mu(B_r(x))} \int_{B_r(x)} f(y) \, e^{y^2} \, dy \leq \frac{e^{(x+r)^2}}{2 \, \mu((x-r, -r))} \leq \frac{e^{(x+r)^2}}{2 \, |x| \, e^{r^2}} \leq 1, \end{displaymath} which implies $M^cf(x) < \infty$. \section{Multidimensional case} Throughout this section we work with spaces that do not necessarily have a linear structure. First of all, we would like to establish that in certain circumstances $M^c$ must possess the dichotomy property. Of course, for our purpose, we should ensure that the introduced criterion is relatively easy to apply and yields positive results also for some non-doubling spaces. Fortunately, it turns out that it is possible to find a condition that successfully meets all these requirements. The following proposition is embedded in the context of Euclidean spaces, but it is worth keeping in mind that, in fact, it concerns all spaces $(X, \rho, \mu)$ for which $\mu(X \setminus L_f^c) = 0$ holds for each $f \in L^1_{loc}(\mu)$. \begin{proposition} Consider the space $(\mathbb{R}^d, d_e, \mu)$, $d \geq 1$, and assume that \begin{equation}\label{C} \exists y_0 \in \mathbb{R}^d \colon \limsup_{r \rightarrow \infty} \frac{\mu(B_{r+1}(y_0))}{\mu(B_{r}(y_0))} = \tilde{C} = \tilde{C}(y_0) < \infty. \end{equation} Then the associated maximal operator $M^c$ possesses the dichotomy property. \end{proposition} Observe that condition (\ref{C}) is related to certain global properties of a given metric measure space $\mathbb{X}$ and thus its occurrence (or not) should be independent of the choice of the point $y_0$ specified above. Indeed, it can be easily shown that if the inequality in (\ref{C}) holds for some $y_0$, then it is also true if we replace $y_0$ by an arbitrary point $y \in X$. Secondly, as it turns out according to Theorem 2 in Section 4, the converse also holds in the case $\mathbb{X} = (\mathbb{R}^d, d_e, \mu)$.
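Let us illustrate condition (\ref{C}) with a direct computation (included here only as a sanity check) based on the weight from Example 1, that is, $d\mu = e^{x^2} dx$ on $\mathbb{R}$. For $r \geq 1$ we have \begin{displaymath} \frac{\mu(B_{r+1}(0))}{\mu(B_{r}(0))} \geq \frac{\int_{(r+1/2, \, r+1)} e^{x^2} \, dx}{\int_{(-r, r)} e^{x^2} \, dx} \geq \frac{e^{(r+1/2)^2}}{2 \cdot 2 r \, e^{r^2}} \geq \frac{e^{r}}{4r}, \end{displaymath} so the ratio in (\ref{C}) tends to infinity with $r$ and (\ref{C}) fails. In view of the equivalence just mentioned, this is consistent with the fact that $M^c$ does not possess the dichotomy property in that space.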
Namely, we shall prove that if $M^c$ possesses the dichotomy property, then (\ref{C}) holds for some $y_0 \in \mathbb{R}^d$. Notice that we state only one of the implications in Proposition 2 above because it is enough to prove Theorem 1. On the other hand, the opposite implication allows us to say that the formulated condition is sufficient and necessary at the same time and, since looking for such conditions is interesting in itself, we discuss it in a separate section. \begin{proof} Let $f \in L^1_{loc}(\mu)$ and assume that $\mu(E_\infty^c(f)) > 0$. We take $x_0 \in L_f^c$ such that $M^cf(x_0) = \infty$. Hence for each $n \in \mathbb{N}$ we have a ball $B_n = B_{r_n}(x_0)$ satisfying \begin{displaymath} \frac{1}{\mu(B_n)} \int_{B_n} |f(y)| \, d\mu(y) > n. \end{displaymath} Fix $\epsilon >0$ such that \begin{displaymath} \frac{1}{\mu(B_r(x_0))} \int_{B_r(x_0)} |f(y)-f(x_0)| \, d\mu(y) \leq 1, \end{displaymath} for $r \leq \epsilon$ and denote $\delta = \mu(B_\epsilon(x_0))$. If $n \geq |f(x_0)| + 1$, then $r_n > \epsilon$ and, as a result, we have $\mu(B_n) \geq \delta$. This fact easily implies that $\lim_{n \rightarrow \infty} r_n = \infty$, since $f$ is locally integrable. Now we fix any point $x \in \mathbb{R}^d$. There exists $r_0>0$ such that \begin{displaymath} \mu(B_{r+1}(y_0)) \leq 2 \tilde{C} \, \mu(B_{r}(y_0)), \end{displaymath} for each $r \geq r_0$. We choose $n_0 \geq |f(x_0)| + 1$ large enough to ensure that $n \geq n_0$ implies $r_n - |y_0 - x_0| \geq r_0$. Consider the balls $B_n' = B_{r_n + |x_0-x|}(x)$ for $n \in \mathbb{N}$. If $n \geq n_0$, then \begin{displaymath} \mu(B_n') \leq \mu(B_{r_n + |x_0-x| + |y_0 - x|}(y_0)) \leq (2\tilde{C})^m \mu(B_{r_n - |x_0 - y_0|}(y_0)) \leq (2\tilde{C})^m \mu(B_n), \end{displaymath} where $m > |x_0-x| + |y_0 - x| + |x_0 - y_0|$ is a positive integer independent of $n$.
Finally, by using the fact that $B_n \subset B_n'$, we get \begin{displaymath} M^cf(x) \geq \frac{1}{\mu(B_n')} \int_{B_n'} |f(y)| \, d\mu(y) \geq \frac{\mu(B_n)}{\mu(B_n')} \frac{1}{\mu(B_n)} \int_{B_n} |f(y)| \, d\mu(y) \geq \frac{n}{(2\tilde{C})^m}, \end{displaymath} which gives $M^cf(x) = \infty$, since $n$ can be arbitrarily large. \end{proof} \begin{remark*} Notice that the conclusion of Proposition 2 remains true if we take the metric $d_\infty$ instead of $d_e$ provided that this time the balls determined by $d_\infty$ are used in (\ref{C}). There are also no obstacles to obtaining discrete counterparts of the above statements. Namely, one can replace $\mathbb{R}^d$ by $\mathbb{Z}^d$, $d \geq 1$, and obtain the desired result for the space $(\mathbb{Z}^d, \rho, \mu)$, where $\rho = d_e$ or $\rho = d_\infty$ and $\mu$ is arbitrary. \end{remark*} Now, with Propositions 1 and 2 in hand, we can easily give an example of a non-doubling space for which both $M$ and $M^c$ possess the dichotomy property. \begin{example} Consider the space $(\mathbb{R}, d_e, \mu)$ with $d\mu(x) = e^{-x^2} dx$. Then both $M$ and $M^c$ possess the dichotomy property. \end{example} Indeed, $M$ possesses the dichotomy property by Proposition 1, while $M^c$ possesses the dichotomy property by Proposition 2, since $\lim_{r \rightarrow \infty} \mu(B_{r+1}(0)) / \mu(B_r(0)) = 1$. \newline At this point, a natural question arises: will we get the same result for Gaussian measures in higher dimensions? The following proposition settles this problem affirmatively. \begin{proposition} Consider the space $(\mathbb{R}^d, d_e, \mu)$ with $\mu(\mathbb{R}^d) < \infty$. Assume that $\mu$ is determined by a strictly positive weight $w$ satisfying \begin{equation}\label{D} 0 < c_n \leq w(x) \leq C_n < \infty, \qquad x \in B_n(0), \, n \in \mathbb{N}, \end{equation} for some numerical constants $c_n$ and $C_n$, $ n \in \mathbb{N}$.
Then the associated maximal operators, $M$ and $M^c$, both possess the dichotomy property. \end{proposition} \begin{proof} It suffices to prove that $M$ possesses the dichotomy property, since $\mu(\mathbb{R}^d) < \infty$ implies that \eqref{C} is satisfied with $\tilde{C} = 1$ (regardless of which point $y_0 \in \mathbb{R}^d$ we choose). Take $f \in L_{loc}^1(\mu)$. We shall show that $\mu(\mathbb{R}^d \setminus L_f) = 0$. For a fixed $n \in \mathbb{N}$ let us consider the measure $\mu_n$ determined by $w_n$ satisfying \begin{displaymath} w_n(x) = \left\{ \begin{array}{rl} w(x) & \textrm{if } x \in B_n(0), \\ 1 & \textrm{otherwise. } \end{array} \right. \end{displaymath} Observe that condition \eqref{D} implies that $\mu_n$ is doubling. Let $f_n = f \chi_{B_n(0)}$. We have \begin{displaymath} \mu(B_n(0) \setminus L_f) = \mu_n(B_n(0) \setminus L_{f_n}(\mu_n)) \leq \mu_n(\mathbb{R}^d \setminus L_{f_n}(\mu_n))= 0, \end{displaymath} because $f_n \in L^1_{loc}(\mu_n)$. This yields $\mu(\mathbb{R}^d \setminus L_f) = 0$, since $n$ can be arbitrarily large. Assume that $\mu(E_\infty(f)) > 0$ and take $x_0 \in L_f$ such that $Mf(x_0) = \infty$. For each $n \in \mathbb{N}$ we have a ball $B_n \ni x_0$ for which \begin{displaymath} \frac{1}{\mu(B_n)} \int_{B_n} |f(y)| \, d\mu(y) > n. \end{displaymath} Fix $\epsilon >0$ such that \begin{displaymath} \frac{1}{\mu(B)} \int_{B} |f(y)-f(x_0)| \, d\mu(y) \leq 1, \end{displaymath} whenever $B \subset B_\epsilon(x_0)$. If $n \geq |f(x_0)| + 1$, then $B_n \not\subset B_\epsilon(x_0)$. Thus, combining condition \eqref{D} with the fact that $r(B_n) \geq \epsilon / 2$ for that $n$, we conclude that $\mu(B_n) \geq \delta$, where $\delta = \delta(x_0, \epsilon)$ is strictly positive and independent of $n$. Now we fix any point $x \in \mathbb{R}^d$ and take $n \geq |f(x_0)| + 1$. Let $B_n'$ be any ball containing $x$ and $B_n$.
Then we get \begin{displaymath} Mf(x) \geq \frac{1}{\mu(B_n')} \int_{B_n'} |f(y)| \, d\mu(y) \geq \frac{1}{\mu(\mathbb{R}^d)} \int_{B_n} |f(y)| \, d\mu(y) \geq \frac{\delta n}{\mu(\mathbb{R}^d)}, \end{displaymath} which gives $Mf(x) = \infty$, since $n$ can be arbitrarily large. \end{proof} Up to now we have furnished examples illustrating two of the four possibilities related to the problem of whether or not $M$ and $M^c$ possess the dichotomy property. Notice that in both considered situations the indicated space was $\mathbb{R}$ with the usual metric and measure determined by a suitable weight. Unfortunately, as was indicated in Proposition 1, such examples cannot be used to cover the remaining two cases, since this time we want $M$ not to possess the dichotomy property. Therefore, a natural step is to try to use $\mathbb{R}^2$ instead of $\mathbb{R}$. This idea turns out to be right. However, for simplicity, the other two examples will be initially constructed in the discrete setting $\mathbb{Z}^2$. Also, for purely technical reasons, the metric $d_e$ is replaced by $d_\infty$. Nevertheless, after presenting Examples 3 and 4, we include some additional comments in order to convince the reader that it is also possible to obtain the desired results for the appropriate metric measure spaces of the form $(\mathbb{R}^2, d_e, \mu)$. While dealing with $\mathbb{Z}^2$, for the sake of clarity, we will write $B_r(n,m)$ and $\mu(n,m)$ instead of $B_r((n,m))$ and $\mu(\{(n,m)\})$, respectively. \begin{example} Consider the space $(\mathbb{Z}^2, d_\infty, \mu)$, where $\mu$ is defined by \begin{displaymath} \mu(n,m) = \left\{ \begin{array}{rl} 4^{|m|} & \textrm{if } n = 0, \\ 1 & \textrm{otherwise. } \end{array} \right. \end{displaymath} Then $M^c$ possesses the dichotomy property, while $M$ does not.
\end{example} First, observe that $M^c$ possesses the dichotomy property by Proposition 2 (or, more precisely, by the remark following Proposition 2), since \begin{displaymath} \lim_{r \rightarrow \infty} \frac{\mu(B_{r+1}(0,0))}{\mu(B_{r}(0,0))} = 4. \end{displaymath} To verify the second part of the conclusion let us consider the function $f$ defined by \begin{displaymath} f(n,m) = \left\{ \begin{array}{rl} 2^n & \textrm{if } n > 0 \textrm{ and } m=0, \\ 0 & \textrm{otherwise. } \end{array} \right. \end{displaymath} We will show that $Mf(1,0) = \infty$ and $Mf(-1,0) < \infty$ (in fact, it should be clear to the reader that $(1,0)$ and $(-1, 0)$ may be replaced by any other points $(n_1,m_1)$ and $(n_2,m_2)$ such that $n_1$ is strictly positive and $n_2$ is strictly negative). Consider the balls $B_N = B_N(N,0)$ for $N \in \mathbb{N}$. Observe that \begin{displaymath} Mf(1,0) \geq \frac{1}{\mu(B_N)} \sum_{(n,m) \in B_N} f(n,m) \, \mu(n,m) \geq \frac{f(N,0) \, \mu(N,0) }{(2N-1)^2} = \frac{2^N}{(2N-1)^2}, \end{displaymath} which implies $Mf(1,0) = \infty$. On the other hand, consider any ball $B$ containing $(-1,0)$ and denote \begin{displaymath} K = K(B) = \max\{n \in \mathbb{Z} \colon (n,0) \in B\}. \end{displaymath} If $K \leq 0$, then $\sum_{(n,m) \in B} f(n,m) \, \mu(n,m) = 0$. In turn, if $K > 0$, then $B$ must contain at least one of the points $(0,-\lfloor K/2 \rfloor )$ and $(0,\lfloor K/2 \rfloor)$. Consequently, we have \begin{displaymath} \frac{1}{\mu(B)} \sum_{(n,m) \in B} f(n,m) \, \mu(n,m) \leq \frac{2 f(K,0)}{4^{\lfloor K/2 \rfloor}} \leq 4, \end{displaymath} which implies $Mf(-1,0) < \infty$. \begin{example} Consider the space $(\mathbb{Z}^2, d_\infty, \mu)$, where $\mu$ is defined by \begin{displaymath} \mu(n,m) = \left\{ \begin{array}{rl} 4^{|m|} & \textrm{if } n = 0, \\ 2^{n^2} & \textrm{if } n < 0 \textrm{ and } m = 0, \\ 1 & \textrm{otherwise. } \end{array} \right.
\end{displaymath} Then neither $M$ nor $M^c$ possesses the dichotomy property. \end{example} To verify that $M$ does not possess the dichotomy property we can use exactly the same function $f$ as in Example 3. It is easy to see that $Mf(1,0) = \infty$ and $Mf(-1,0) < \infty$ hold as before. Next, in order to show that $M^c$ does not possess the dichotomy property, let us take the function $g$ defined by \begin{displaymath} g(n,m) = \left\{ \begin{array}{rl} 2^{n^2} & \textrm{if } n > 0 \textrm{ and } m = 0, \\ 0 & \textrm{otherwise. } \end{array} \right. \end{displaymath} Consider the balls $B_N^+ = B_N(1,0)$ and $B_N^- = B_N(-1,0)$ for $N \in \mathbb{N}$. Observe that for large values of $N$ we have \begin{displaymath} \frac{1}{\mu(B_N^+)} \sum_{(n,m) \in B_N^+} g(n,m) \, \mu(n,m) \geq \frac{g(N,0)}{2 \mu(-N+2, 0)} = 2^{N^2 - (N-2)^2 - 1}, \end{displaymath} and \begin{displaymath} \frac{1}{\mu(B_N^-)} \sum_{(n,m) \in B_N^-} g(n,m) \, \mu(n,m) \leq \frac{2 g(N-2,0)}{\mu(-N, 0)} = 2^{-N^2 + (N-2)^2 + 1}. \end{displaymath} Consequently, this easily leads to the conclusion that $M^cg(1,0) = \infty$ and $M^cg(-1,0) < \infty$. \newline Finally, as we mentioned earlier, we will try to outline a sketch of how to adapt Examples 3 and 4 to the situation of $\mathbb{R}^2$ with the Euclidean metric. First of all, note that the key idea of Example 3 was to construct a measure which creates a kind of barrier separating (in the proper sense) the points $(n,m)$ with positive and negative values of $n$, respectively. Exactly the same effect can be obtained if we define $w$ so that it behaves like $e^{|y|}$ in the strip $|x| < \frac{1}{2}$ and like $1$ outside of it. However, because of some significant differences between the shapes of the balls determined by $d_e$ and $d_\infty$, respectively, one should be a bit more careful when looking for the proper function $f$ such that $Mf(x,y) = \infty$ if $x > 1$ and $Mf(x,y) < \infty$ if $x < -1$.
Observe that any ball $B$ such that $(-1,0) \in B$ and $(N,0) \in B$ must contain at least one of the points $(0, - \sqrt{N})$ and $(0, \sqrt{N})$. Therefore, if $B_N$ is such that $N$ is the largest positive integer $n$ satisfying $(n,0) \in B_N$, then it would be advantageous to ensure that the integral $\int_{B_N} f(x,y) w(x,y) \, dx \, dy$ is no more than $C e^{\sqrt{N}}$, where $C > 0$ is some numerical constant. On the other hand, we want this quantity to tend to infinity with $N$ faster than $N^2$. These two conditions are fulfilled simultaneously if, for example, $f(x,y)$ behaves like $x^2$ in the region $\{(x,y) \in \mathbb{R}^2 \colon x > 0, \, |y| < \frac{1}{2}\}$, and equals $0$ outside of it. Finally, to arrange the situation of Example 4, it suffices to define $w$ in such a way that it is comparable to $e^{|y|}$ if $|x| < \frac{1}{2}$, to $e^{x^2}$ if $x < 0$ and $|y| < \frac{1}{2}$, and to $1$ elsewhere. Also, apart from those described above, there are no further difficulties in finding the appropriate functions $f$ and $g$ that break the dichotomy condition for $M$ and $M^c$, respectively. \section{Necessary and sufficient condition} The last section is mainly devoted to describing the exact characterization of situations in which $M^c$ possesses the dichotomy property, for metric measure spaces of the form $(\mathbb{R}^d, d_e, \mu)$, $d \geq 1$, where $\mu$ is arbitrary. Namely, our goal is to prove the following. \begin{theorem} Consider the metric measure space $(\mathbb{R}^d, d_e, \mu)$, $d \geq 1$, where $\mu$ is an arbitrary Borel measure. Then $M^c$ possesses the dichotomy property if and only if (\ref{C}) holds. \end{theorem} We show the proof only for $d=2$, since in this case all the significant difficulties are well exposed and, at the same time, we omit a few additional technical details that arise when $d \geq 3$.
In turn, the case $d=1$ is much simpler than the others, so we do not focus on it. When dealing with $\mathbb{R}^2$, we will write $B_r(x,y)$ instead of $B_r((x,y))$ for short, just like we did in the previous section in the context of $\mathbb{Z}^2$. \begin{proof} First of all, let us recall that one of the implications has already been proven in Proposition 2. Thus, it is enough to show that (\ref{C}) is necessary for $M^c$ to possess the dichotomy property. Take $(\mathbb{R}^2, d_e, \mu)$ and assume that (\ref{C}) fails to occur. Thus, for the point $(0,0)$ there exists a strictly increasing sequence of positive numbers $\{a_k\}_{k \in \mathbb{N}}$ such that \begin{displaymath} \mu(B_{a_k+1}(0,0)) \geq 2^{2k} \, \mu(B_{a_k}(0,0)) \end{displaymath} holds for each $k \in \mathbb{N}$. In addition, we may require that $a_1 \geq 8$ and $a_{k+1} \geq a_k + 2$. For $n \in \mathbb{N}$ we introduce the auxiliary sets $S_{k+, j}^{(n)}$, $j \in \{1, \dots, 2^n\}$, defined by \begin{displaymath} S_{k+, j}^{(n)} = \Big \{(x,y) \in B_{a_k+1}(0,0) \colon \phi(x,y) \in \big[ \frac{2 \pi (j-1)}{2^n}, \frac{2 \pi j}{2^n} \big) \Big\}, \end{displaymath} where $\phi(x,y) \in [0, 2\pi)$ is the angle that $(x,y)$ takes in polar coordinates. Take $n = 1$ and choose $j_1 \in \{1, 2\}$ such that the set \begin{displaymath} \Lambda_1 = \{k \in \mathbb{N} \colon \mu(S_{k+, j_1}^{(1)}) \geq \frac{1}{2} \mu(B_{a_{k}+1}(0,0)) \} \end{displaymath} is infinite. Next, take $n = 2$ and choose $j_2 \in \{1, 2, 3, 4\}$ satisfying $\lceil j_2/2 \rceil = j_1$ (where $\lceil \, \cdot \, \rceil$ is the ceiling function) and such that \begin{displaymath} \Lambda_2 = \{ k \in \Lambda_1 \colon \mu(S_{k+, j_2}^{(2)}) \geq \frac{1}{4} \mu(B_{a_{k}+1}(0,0)) \} \end{displaymath} is infinite.
Continuing this process inductively we obtain a sequence $\{j_n\}_{n \in \mathbb{N}}$ satisfying $\lceil j_{n+1} / 2 \rceil = j_n$, $n \in \mathbb{N}$, and, by invoking the diagonal argument, a strictly increasing subsequence $(a_{k_n})_{n \in \mathbb{N}}$ such that for each $n \in \mathbb{N}$ we have \begin{displaymath} \mu(S_{k_n+, j_n}^{(n)}) \geq \frac{1}{2^n} \mu(B_{a_{k_n}+1}(0,0)). \end{displaymath} From now on, for simplicity, we will write $B_n$ and $S_{n+, j_n}$ instead of $B_{a_{k_n}}(0,0)$ and $S_{k_n+, j_n}^{(n)}$, respectively. Observe that the resulting sequence $\{j_n\}_{n \in \mathbb{N}}$ determines a unique angle $\phi_0 \in [0, 2\pi)$ which indicates a ray around which, loosely speaking, a significant part of $\mu$ is concentrated. For the sake of clarity we assume that $\phi_0 = 0$ and therefore $\{j_n\}_{n \in \mathbb{N}}$ equals either $(1, 1, 1, \dots)$ or $(2, 4, 8, \dots)$. Denote $B_{n-} = B_{1/2}(-a_{k_n}+2, 0)$, $n \in \mathbb{N}$, and consider the function $f$ defined by \begin{displaymath} f = \sum_{n=1}^\infty \frac{2^n \mu(B_n)}{\mu(B_{n-})} \chi_{B_{n-}}. \end{displaymath} Of course, $f \in L^1_{loc}(\mu)$. We will show that $M^cf(x_0,y_0) = \infty$ for $(x_0,y_0) \in B_{1/2}(0,0)$ and $M^cf(x_0,y_0) < \infty$ for $(x_0,y_0) \in B_{1/2}(3,0)$. Fix $(x_0,y_0) \in B_{1/2}(0,0)$ and observe that $B_{n-} \subset B_{a_{k_n}-1}(x_0,y_0) \subset B_n$ and therefore \begin{displaymath} \frac{1}{\mu(B_{a_{k_n}-1}(x_0,y_0))} \int_{B_{a_{k_n}-1}(x_0,y_0)} f \, d\mu \geq \frac{1}{\mu(B_n)} \int_{B_{n-}} f \, d\mu = 2^n, \end{displaymath} which implies $M^cf(x_0,y_0) = \infty$. In turn, fix $(x_0,y_0) \in B_{1/2}(3,0)$ and consider $r > 0$ such that $B_r(x_0,y_0)$ intersects at least one of the sets $B_{n-}$, $n \in \mathbb{N}$. Notice that this requirement forces $r>2$. We denote \begin{displaymath} N = N(r) = \max\{ n \in \mathbb{N} \colon B_r(x_0,y_0) \cap B_{n-} \neq \emptyset \}.
\end{displaymath} One can easily see that this implies $r > a_{k_N}$ and hence $(a_{k_N},0) \in B_{r-2}(x_0,y_0)$. It is possible to choose $N_0 = N_0(x_0,y_0) \geq 2$ such that if $N \geq N_0$, then $(a_{k_N},0) \in B_{r-2}(x_0,y_0)$ implies $S_{N+,j_N} \subset B_r(x_0,y_0)$. Let $\tilde{N} = \max\{r>0 \colon N(r)<N_0\}$. If $2 < r \leq \tilde{N}$, then \begin{displaymath} \frac{1}{\mu(B_r(x_0,y_0))} \int_{B_r(x_0,y_0)} f \, d\mu \leq \frac{1}{\mu(B_2(x_0,y_0))} \int_{B_{\tilde{N}}(x_0,y_0)} f \, d\mu = C, \end{displaymath} where $C$ is a numerical constant independent of $r$. On the other hand, if $r > \tilde{N}$, then \begin{displaymath} \frac{1}{\mu(B_r(x_0,y_0))} \int_{B_r(x_0,y_0)} f \, d\mu \leq \frac{2^{N+1} \mu(B_N)}{\mu(S_{N+,j_N})} \leq 2, \end{displaymath} where the last inequality follows from $\mu(S_{N+,j_N}) \geq 2^{-N} \mu(B_{a_{k_N}+1}(0,0)) \geq 2^{2 k_N - N} \mu(B_N) \geq 2^{N} \mu(B_N)$, and this implies $M^cf(x_0,y_0) < \infty$. \end{proof} \begin{remark*} Note that this time the proof relies on certain properties of Euclidean geometry and therefore it cannot be repeated in a more general context. The only clearly visible way to generalize it is to replace the Euclidean metric. Indeed, one can, for example, put a metric $\rho$ induced by any norm on $\mathbb{R}^d$ in place of $d_e$ and get the desired result by following the same path with only a few minor modifications. Notice that in this case, of course, the balls in (\ref{C}) are taken with respect to $\rho$. Thus, among other things, we must take into account how the shape of these balls is related to the direction determined by the angle $\phi_0$ specified in the proof. Finally, the weak type $(1,1)$ inequality for $M^c$ associated with $(\mathbb{R}^d, \rho, \mu)$, which is needed to provide $\mu(\mathbb{R}^d \setminus L^c_f) = 0$ in Proposition 2, can be deduced from a stronger version of the Besicovitch Covering Lemma (see \cite[Theorem 2.8.14]{F}). \end{remark*} We conclude our studies with an example which indicates that a possible necessary and sufficient condition for $M$ must be of a completely different form.
Namely, while condition (\ref{C}) concerned only the growth at infinity of a given measure, the parallel condition for non-centered operators should deal with both global and local aspects of the considered spaces. Thus, this problem, probably more difficult, is an interesting starting point for further investigation. \begin{example} Consider the space $(\mathbb{R}^2, d_e, \mu)$ with $\mu = \lambda_1 + \lambda_2$, where $\lambda_1$ is $1$-dimensional Lebesgue measure on $A = [0,1] \times \{0\}$ and $\lambda_2$ is $2$-dimensional Lebesgue measure on the whole plane. Then there exists $f \in L^1(\mu)$ with compact support such that $E_\infty(f) = A$. \end{example} Indeed, denote $S_n = [0,1] \times (2^{-n^2}, 2^{-n^2+1})$ and consider the function \begin{displaymath} f = \sum_{n=1}^{\infty} 2^n \chi_{S_n}. \end{displaymath} Observe that $f$ equals $0$ outside the square $[0,1] \times [0,1]$ and $\|f\|_{1} = \sum_{n=1}^{\infty} 2^n \cdot 2^{-n^2} \leq 2$. Let us fix $x_0 \in [0,1]$ and consider the balls $B_n = B_{2^{-n^2 + \epsilon_n}}(x_0, 2^{-n^2})$, $n \in \mathbb{N}$, where $\epsilon_n > 0$ are such that $\mu(B_n) \leq 2^{-2n^2+2}$. Observe that $(x_0,0) \in B_n$ for each $n$. If $n \geq 2$, then $\mu(B_n \cap S_n) \geq 2^{-2n^2 - 1}$ and, consequently, \begin{displaymath} \frac{1}{\mu(B_n)} \int_{B_n} f \, d\mu \geq \frac{2^n \cdot 2^{-2n^2 - 1}}{2^{-2n^2+2}} = 2^{n-3}, \end{displaymath} which implies $Mf(x_0,0) = \infty$. On the other hand, consider $(x_0, y_0) \notin A$. In this case, there exist $\epsilon > 0$ and $L > 0$ such that $d_e((x_0,y_0),(x,y)) < \epsilon$ implies $f(x,y) \leq L$ and, as a result, we obtain $Mf(x_0,y_0) \leq \max\{L, 2/\lambda_2(B_{\epsilon/2}(x_0,y_0))\} < \infty$. \section*{Acknowledgement} This article was largely inspired by the suggestions of Professor Krzysztof Stempak. I would like to thank him for insightful comments and continuous help during the preparation of the paper.
\section{Introduction} Perovskite materials are quite ubiquitous and exhibit a variety of interesting and intriguing phenomena such as superconductivity or charge ordering (or their co-existence), colossal magnetoresistance, ferroelectricity, spin-dependent-transport, and the interplay among magnetic, structural, and transport properties \cite{tvr1,hotta,khomskii1}. Many oxides that have the formula $ABO_3$ assume a perovskite structure where two adjacent $BO_6$ octahedra share an oxygen, which leads to cooperative octahedral distortions. Simple systems that manifest such cooperative electron-phonon phenomena are the barium bismuthates ($BaBiO_3$). Here, only the 6s electrons are involved in transport and these electrons produce only a single normal mode distortion, namely, the breathing-mode. In pure $BaBiO_3$, the $BO_6$ octahedra alternately dilate and contract with $Bi-O$ bonds of adjacent octahedra differing by about $10\%$, which is indicative of strong electron-phonon interaction (EPI) \cite{tvr1}. Thus the relevant physics is dominated by a one-band three-dimensional cooperative breathing-mode (CBM). There is also compelling evidence of strong EPI in manganites (from extended X-ray absorption fine structure\cite{bianc} and pulsed neutron diffraction\cite{louca} measurements) and in cuprates (through angle-resolved photoemission spectroscopy\cite{damascelli}). In copper oxides, as pointed out in Refs.~\onlinecite{sawatzky,berciu}, the dynamics of the Zhang-Rice singlet \cite{zhang} can be described by a one-band system with orbitals centered on copper sites. Furthermore, the onsite energy is modulated by the movement of oxygen closer or further from the neighboring copper ion. Thus the breathing-mode is relevant to describe the linear modulation of the onsite energy. Consequently the copper-oxide planes represent a one-band two-dimensional CBM system.
In the context of the two-band Jahn-Teller manganite systems as well, when C-type antiferromagnetism manifests [as in $La_{1-x}Sr_{x}MnO_3$ for $0.65 \le x \le 0.9$ \cite{dabrowski}], the $d_{z^2}$ orbitals participate in the C-chain ordering. A ferromagnetic C-chain can be looked upon as a one-band (i.e., $d_{z^2}$ orbital band) and one-dimensional (1D) CBM system that is however Jahn-Teller coupled to neighboring C-chains whose spin alignment is antiparallel. Understanding the CBM phenomena, in systems such as the bismuthates, the cuprates, and the manganites, is still an open question. The main purpose of this paper is to study the CBM physics in the simpler case of a one-band 1D system by taking account of the {\em quantum phonons} [see Fig.~\ref{fig:chain}(b)]. In fact, a controlled analytic treatment of the many-polaron effects produced by quantum phonons in a one-band 1D Holstein model [see Fig.~\ref{fig:chain}(a)] \cite{holstein} (which is a simpler non-cooperative EPI system) has been reported not long ago\cite{sdadys,sdys}. However, definite progress has been made in numerically treating the Holstein model at half-filling (by employing a variety of techniques) \cite{fehske2011,fradkin2,capone,zheng1,perroni,hamer} and, to a limited extent, away from half-filling \cite{fehske3}. Owing to its cooperative nature, the EPI leads to non-local distortion effects which can change the very nature of long range order. While a weak interaction is amenable to a Migdal-type perturbative treatment, the strong interaction (even for a one-band system) necessitates a non-perturbative approach \cite{sdadys}. As a step towards modeling CBM distortions in real systems (such as the bismuthates, the cuprates, and the manganites), the present work builds on our previous work on the Holstein model \cite{sdadys} to obtain the effective Hamiltonian for a one-band 1D CBM system \cite{alex3,alex4,alex5}.
Upon inclusion of cooperative effects in the strong-EPI regime, we show that the system changes its dominant transport mechanism from one of nearest-neighbor (NN) hopping to that of next-nearest-neighbor (NNN) hopping while the effective NN electron-electron interaction becomes significantly more repulsive due to incompatibility of NN breathing mode distortions. Away from half-filling in rings with an even number of sites (while the Holstein system without cooperative effects remains a Luttinger liquid at all interaction strengths), our model (at strong interaction) produces a commensurate charge-density-wave (CDW) state which is surprisingly conducting and whose period is independent of density. Furthermore, using scaling of the fidelity susceptibility (FS), we demonstrate that the CDW transition is a second-order quantum phase transition (QPT). \begin{figure}[t] \includegraphics[width=3.0in]{fig1.eps} \noindent\caption[]{Molecular chains with $d_{z^2}$ orbital hopping sites (filled circles) and oxygen sites (empty circles) in (a) Holstein model, (b) one-band CBM system.} \label{fig:chain} \end{figure} The paper is organized as follows. We derive an effective polaronic Hamiltonian starting from a CBM model in Sec.~II. Next, we present the relevant formulae for the density-density correlation function and the structure factor in Sec.~III. We then analyze the strong-coupling limiting case of our CBM model in Sec.~IV. The nature of the QPT and the long range order in our CBM model are discussed in Sec.~V. Finally, we close in Sec.~VI with our conclusions. \section{Effective Polaronic Hamiltonian} To bring out the essential physics, we begin with a 1D model of spinless electrons hopping in a one-band system of $d_{z^2}$ orbitals which are coupled to the oxygens in between, via CBM as shown in Fig.~\ref{fig:chain}(b) \cite{sawatzky}.
The Hamiltonian is expressed as $H = H_t + H_{ep} + H_l$ where the hopping term $H_t$, using standard notation, is given by \begin{eqnarray} H_t = - t \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) , \label{eq:Ht} \end{eqnarray} with $c_j$ ($c^{\dagger}_{j}$) being the destruction (creation) operator of an electron in a $d^j_{z^2}$ orbital (at site $j$). The EPI term $H_{ep}$ is expressed as \begin{eqnarray} H_{ep} = - g \omega_0 \sqrt{2 M \omega_0} \sum_{j} n_j q_j , \end{eqnarray} where $g$ is the electron-phonon coupling (EPC), $n_i = c^{\dagger}_{i} c_{i} $, $q_i = u_i - u_{i-1}$ represents the expansion of the oxygens around the $d^i_{z^2}$ orbital, and the right-hand-side (RHS) oxygen displacement $u_i =(a^{\dagger}_i + a_i)/{\sqrt{2 M \omega_0}}$. Furthermore, the lattice term $H_l$ representing simple harmonic oscillators is of the form \begin{eqnarray} H_{l} = \frac{K}{2} \sum_{j} u_j^2 + \frac{1}{2M} \sum_{j} p_j^2 = \omega_0 \sum_{j} a^{\dagger}_{j} a_{j} . \end{eqnarray} The main difference between the Holstein model and the above cooperative Hamiltonian is that in the Holstein model electrons at different sites are coupled to different on-site molecular distortions whereas in the present system the electrons on adjacent sites are coupled to the displacement of the same in-between oxygen. Thus in our system, to produce an effective polaronic Hamiltonian, we need to devise a modification of the usual Lang-Firsov transformation \cite{LF} so as to take into account the cooperative nature of the distortions. To this end, we use the following canonical transformation $\tilde{H} = \exp(S) H \exp(-S)$ where $S$ now contains the {\em difference in densities on adjacent sites} \begin{eqnarray} S = g \sum_j ( a_{j} - a^{\dagger}_{j}) (n_j - n_{j+1}) .
\end{eqnarray} Then, one obtains $\tilde{H} = H_0 + H_1$ where \begin{eqnarray} H_0 =&& \omega_0 \sum_{j} a^{\dagger}_{j} a_{j} - 2 g^2 \omega_0 \sum_{j} n_j + 2 g^2 \omega_0 \sum_{j} n_j n_{j+1} \nonumber \\ && - t e^{-3 g^2} \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) , \end{eqnarray} and \begin{eqnarray*} H_1 = \sum_{j}H_{1j} = - t e^{-3 g^2} \sum_{j} [ c^{\dagger}_{j} c_{j+1} \{ {\cal{T}}_{-}^{j \dagger} {\cal{T}}^{j}_{+} -1 \} + {\rm H.c.}] , \end{eqnarray*} with ${\cal{T}}^{j}_{\pm} = \exp[\pm g( 2 a_{j} - a_{j-1} - a_{j+1})]$. {\it On account of the cooperative nature of the EPI we obtain an additional term $ 2 g^2 \omega_0 \sum_{j} n_j n_{j+1}$} involving NN repulsion in $H_0$ and the perturbation $H_1$ now involves phonons at three sites as opposed to phonons at only two sites as in the non-cooperative case. We consider the case $ t\exp[-3 g^2] \ll \omega_0$ and perform second-order perturbation theory similar to that in Refs.~\onlinecite{sdadys, sahinur}. The eigenstates of $H_0$ relevant for perturbation theory are $|n,m \rangle \equiv |n \rangle_{el} \otimes |m\rangle_{ph}$ where NN occupied electronic states are projected out and $|0,0\rangle$ is the ground state (GS) with no phonons. The corresponding eigenenergies are $E_{n,m}=E_{n}^{el}+E_{m}^{ph}$. The treatment to perform second-order perturbation theory is an extension of the method followed in Refs.~\onlinecite{sdadys, sahinur} and yields the following effective Hamiltonian for polarons: \begin{eqnarray} H^{(2)} = \sum_{i,j} \sum_{m} \frac{\langle 0|_{ph} H_{1i} |m\rangle_{ph} \langle m|_{ph} H_{1j} |0\rangle_{ph}} {E_{0}^{ph} - E_{m}^{ph}} . \label{H^2} \end{eqnarray} {Here (as shown by using a Schrieffer--Wolff transformation in Appendix A of Refs.
\onlinecite{sahinur,sahinur_arxiv}),} it must be mentioned that, when $ t\exp[-3 g^2] \ll \omega_0$, $H_0 + H^{(2)}$ represents the exact Hamiltonian up to second-order in perturbation (even for finite anti-adiabatic values $t/\omega_0 \lesssim1$); the small parameter [$t/(g \omega_0)$] of the perturbation will be derived below. In the above equation (\ref{H^2}), unlike in Ref.~\onlinecite{sdadys}, the term $H_{1j}$ produces phonons at sites $j$, $j-1$, and $j+1$. Hence, we get non-zero contributions only when the index $i = j-2,~ j-1,~ j, ~ j+1$, or $j+2$. Then after some tedious algebra one obtains \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\! -H^{(2)} \frac{\omega_0}{t^2 e^{-6 g^2}} = \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\! \sum_j \left \{ [ n_j (1-n_{j+1}) + (1-n_j) n_{j+1} ] \right . \nonumber \\ &&[F_3(4,1,1)+2F_2(4,1)+F_1(4)+ 2F_1(1)+F_2(1,1)] \nonumber \\ &&+ [c^{\dagger}_{j-1}(1-2n_j) c_{j+1} + {\rm H.c.} ] [2F_1 (2)+F_2 (2,2)] \nonumber \\ &&+ 2 [c^{\dagger}_{j-2} c_{j-1} c^{\dagger}_{j+1} c_{j} + {\rm H.c.} ] F_1 (1) \nonumber \\ &&\left . + 2 [c^{\dagger}_{j-1} c_{j-2} c^{\dagger}_{j+1} c_{j} + {\rm H.c.} ] F_1 (-1) \right \} , \label{eq:H2} \end{eqnarray} where \begin{eqnarray*} F_n(\alpha_1, ... , \alpha_n ) \equiv \sum_{m_1=1}^{\infty} ... \sum_{m_n=1}^{\infty} \frac {(\alpha_1 g^2)^{m_1} ... (\alpha_n g^2)^{m_n}} {m_1! ... m_n!(m_1+ ... + m_n)}, \end{eqnarray*} which for large values of $g^2$ becomes $F_n \approx \exp ( g^2 \sum_{i=1}^{n} \alpha_i )/( g^2 \sum_{i=1}^{n} \alpha_i )$ for $ \sum_{i=1}^{n} \alpha_i \ge 1$. In the above Eq.~\eqref{eq:H2}, the last two terms are a direct consequence of the cooperative nature of the EPI and are negligible for large $g^2$. More importantly, the relative importance of the various coefficients is noticeably different from the case where no cooperative effect exists (as explained below). 
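The quoted large-$g^2$ asymptotics of $F_n$ is easy to verify numerically. The following sketch (ours, purely for illustration; the function names are not from the paper) sums the series for $F_1$ and $F_2$ and compares with $\exp(g^2\sum_i\alpha_i)/(g^2\sum_i\alpha_i)$:

```python
import math

def F1(alpha, g2, mmax=300):
    # F_1(alpha) = sum_{m>=1} (alpha*g^2)^m / (m! * m)
    x = alpha * g2
    term, total = 1.0, 0.0
    for m in range(1, mmax + 1):
        term *= x / m          # term = x^m / m!
        total += term / m
    return total

def F2(a1, a2, g2, mmax=120):
    # F_2(a1, a2) = sum_{m1,m2>=1} (a1 g^2)^m1 (a2 g^2)^m2 / (m1! m2! (m1+m2))
    x1, x2 = a1 * g2, a2 * g2
    total, t1 = 0.0, 1.0
    for m1 in range(1, mmax + 1):
        t1 *= x1 / m1
        t2 = 1.0
        for m2 in range(1, mmax + 1):
            t2 *= x2 / m2
            total += t1 * t2 / (m1 + m2)
    return total

def asymptotic(alphas, g2):
    # exp(g^2 * sum(alpha)) / (g^2 * sum(alpha)), valid for sum(alpha) >= 1
    s = g2 * sum(alphas)
    return math.exp(s) / s

g2 = 9.0   # g = 3, deep in the strong-coupling regime
print(F1(4.0, g2) / asymptotic([4.0], g2))        # ratio close to 1
print(F2(4.0, 1.0, g2) / asymptotic([4.0, 1.0], g2))
```

At $g^2 = 9$ both ratios are already within a few percent of unity, consistent with the stated approximation.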
For large $g^2$, the effective polaronic Hamiltonian simplifies to: \begin{eqnarray} H^{C}_{eff} = && -\left [ 2 g^2 \omega_0 +\frac{t^2}{3 g^2 \omega_0} \right ] \sum_j n_{j+1} (1-n_{j}) \nonumber \\ && - t e^{-3 g^2} \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) \nonumber \\ && - \frac{t^2 e^{-2 g^2} }{4 g^2 \omega_0} \sum_j [c^{\dagger}_{j-1}(1-2n_j) c_{j+1} + {\rm H.c.} ] . \label{eq:Hpol} \end{eqnarray} Notice that the coefficient of the NN hopping is significantly smaller than the coefficient of the NNN hopping for large $g^2$ and not-too-small $t/\omega_0$! This is a {\em key feature resulting from cooperative effects.} The above effective Hamiltonian may be contrasted with the following Hamiltonian $H_{eff}$ for the case where there is no cooperative EPI [i.e., $H_{ep} = -\sqrt{2}g \omega_0 \sum_i n_i (a^{\dagger}_i + a_i)$] [see Ref.~\onlinecite{sdadys} and Fig.~\ref{fig:chain}(a)]: \begin{eqnarray} H_{eff} = && -2 g^2 \omega_0 \sum_j n_j -\frac{t^2}{2 g^2 \omega_0} \sum_j n_{j+1} (1-n_{j}) \nonumber \\ && - t e^{-2 g^2} \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) \nonumber \\ && - \frac{t^2 e^{-2 g^2} }{2 g^2 \omega_0} \sum_j [c^{\dagger}_{j-1}(1-2n_j) c_{j+1} + {\rm H.c.} ] . \label{eq:Hpolold} \end{eqnarray} We now provide an explanation of the above results. In Eq.~\eqref{eq:Hpolold}, the coefficient of the $ \sum_j n_{j+1} (1-n_{j})$ term can be understood as resulting from a hopping process where an electron at site $j+1$ hops to a neighboring site $j$ and back, but the lattice has no time to distort (relax) {\em locally} at site $j$ ($j+1$) and thus yields the second-order perturbation energy $-t^2$/(energy change) (see Fig.~2 of Ref.~\onlinecite{sahinur}). 
On the other hand, the coefficient of $\sum_j [c^{\dagger}_{j-1}(1-2n_j) c_{j+1} + {\rm H.c.} ] $ results when, in the intermediate state, site $j$ does not distort/relax during hopping and thus yields $ t\exp[-2 g^2] \times \frac{t}{2 g^2 \omega_0}$ where $ t\exp[-2 g^2] $ is due to time $\left (\frac{\hbar}{te^{-2g^2}} \right )$ taken to distort the site $j+1$ (see Fig.~2 of Ref.~\onlinecite{sahinur}). In the above non-cooperative case, the NN hopping dominates over the NNN hopping in the small polaron limit. Using the above logic we see that the higher order terms in perturbation theory, for both cooperative and non-cooperative cases, are dominated by the process where an electron hops back and forth between the same two sites. The dominant term to $k$th order is approximately given for even $k$ by \begin{eqnarray} \omega_0 \left [ \frac{t}{ g \omega_0} \right ]^k \sum_j n_{j+1} (1-n_{j}) , \end{eqnarray} while for odd $k$ by \begin{eqnarray} t e^{-\gamma g^2} \left [ \frac{t}{ g \omega_0} \right ]^{k-1} \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) , \end{eqnarray} where $\gamma$ is 2 for the non-cooperative case and 3 for the cooperative one. Since each term in the perturbation theory should be smaller than $\omega_0$, we see that the small parameter in our perturbation theory is $t/(g \omega_0)$ \cite{sdys}. Here, a few observations are in order. Firstly, the cooperative effects, unlike in the Holstein model's case, raise the potential of the site next to an occupied site and thus make it unfavorable for hopping. Consequently, in Eq.~\eqref{eq:Hpol} as compared to Eq.~\eqref{eq:Hpolold}, the exponent is larger for the NN hopping and also the denominators of the coefficients are similarly larger for the hopping-generated NN interaction and for the NNN hopping. Next, Lau {\it et al.} \cite{sawatzky} obtain the same energy expression for {\it a single polaron} as that given by Eq.~\eqref{eq:Hpol} (when $n_j=0$). 
Additionally, in Ref.~\onlinecite{tvr}, the authors explain the ferromagnetic insulating behavior in low-doped manganites by using the non-cooperative hopping-generated NN interaction [i.e., second term on RHS of Eq.~\eqref{eq:Hpolold}] after modifying the hopping term for double-exchange effects. From Eq.~\eqref{eq:Hpol}, we see that the cooperative phenomenon must be taken into account, as it reduces the ferromagnetism-generating interaction strength by a factor of $1.5$. Lastly, the authors of Refs. \onlinecite{alex1,alex2} study the formation of bipolarons using Fr\"ohlich polarons with spin degrees of freedom; although they use a Lang-Firsov transformation followed by a Schrieffer-Wolff transformation (which is similar to our type of perturbation theory), they nevertheless do not consider the dominant NNN hopping effects which are central to our treatment. In the next few sections, we will analyze the effective polaronic Hamiltonian given by Eq.~\eqref{eq:Hpol} and show that there is a period-doubling QPT from a Luttinger liquid to a conducting CDW when the coupling $g$ increases at fixed adiabaticity $t/\omega_0$: the transition is a consequence of enhanced NNN hopping and pronounced NN repulsion. We employ a modified Lanczos algorithm \cite{gagliano} [and use antiperiodic (periodic) boundary conditions for even (odd) number of fermions] to study the QPT in the system. In all our numerical calculations involving the effective polaronic Hamiltonian, we used the series $F_n(\alpha_1, ... , \alpha_n )$ given in Eq. (\ref{eq:H2}) and not the approximate coefficients in Eq. (\ref{eq:Hpol}). \section{Density-density correlation function and structure factor} In this section, to characterize correlations and analyze QPT, we present the relevant formulae for the density-density correlation function and the structure factor. 
The two-point correlation function for density fluctuations of electrons at a distance $l$ apart is given by \begin{equation} W(l)=\frac{4}{N}\sum_j\left[\langle n_jn_{j+l}\rangle-\langle n_j\rangle\langle n_{j+l}\rangle\right] , \label{wl} \end{equation} with filling-fraction (FF) $\langle n_j\rangle =\frac{N_p}{N}$ where $N$ is the total number of sites and $N_p$ is the total number of electrons in the system. Then the structure factor, which is the Fourier transform of $W(l)$, is given by \begin{equation} S(k)=\sum_le^{ikl}W(l) , \end{equation} where the wavevector $k=\frac{2n\pi}{N}$ with $n=1,2,\ldots,N$. Now, we observe that \begin{equation} S(\pi)=\left(\sum_{l_{\rm even}}-\sum_{l_{\rm odd}}\right)W(l) , \nonumber \end{equation} with \begin{equation} \sum_{l_{\rm even}}W(l)=\frac{2\langle(\hat N_e-\hat N_o)^2\rangle}{N} , \nonumber \end{equation} and \begin{equation} \sum_{l_{\rm odd}}W(l)=-\frac{2\langle(\hat N_e-\hat N_o)^2\rangle}{N} ,\nonumber \end{equation} where $\hat N_e=\sum_{j_{\rm even}}n_j$ $(\hat N_o=\sum_{j_{\rm odd}}n_j)$ is the number operator which gives the total number of electrons at even (odd) sites. Hence, we obtain the simple expression \begin{equation} S(\pi)=\frac{4\langle(\hat N_e-\hat N_o)^2\rangle}{N} . \label{eq:spi} \end{equation} We will now analyze the situation where only one sub-lattice is occupied and obtain some exact results. When we consider odd values of $l$, we note that \begin{equation} \langle n_jn_{j+l}\rangle=0. \nonumber \end{equation} Hence, from Eq.~\eqref{wl} we get \begin{equation} W(l_{\rm odd})=-\frac{4N^2_p}{N^2} . \label{eq:wlodd} \end{equation} Next, we observe that the GS becomes an eigenstate of the operators $\hat N_e$ and $\hat N_o$ with the eigenvalues $N_p$ ($0$) and $0$ ($N_p$) respectively if the even-site (odd-site) sub-lattice is occupied. 
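The relations above can be checked against an explicit single-sublattice ensemble. In the sketch below (ours; the values $N=12$, $N_p=3$ are illustrative), Eq.~\eqref{wl} is evaluated with the constant FF $\langle n_j\rangle = N_p/N$, reproducing Eq.~\eqref{eq:wlodd} and the value $S(\pi) = 4N_p^2/N$ implied by Eq.~\eqref{eq:spi}:

```python
from itertools import combinations

# Uniform ensemble over all placements of Np electrons on the even sites
# of an N-site ring (a stand-in for a single-sublattice-occupied state).
N, Np = 12, 3
configs = [set(c) for c in combinations(range(0, N, 2), Np)]

def W(l):
    # Eq. (wl) with the constant filling fraction <n_j> = Np/N
    ff = Np / N
    total = 0.0
    for j in range(N):
        nn = sum(j in c and (j + l) % N in c for c in configs) / len(configs)
        total += nn - ff * ff
    return 4.0 * total / N

S_pi = sum((-1) ** l * W(l) for l in range(N))
print(W(1), W(3), W(5))   # each equals -4*Np^2/N^2 = -0.25
print(S_pi)               # equals 4*Np^2/N = 3.0
```

Since $\hat N_e - \hat N_o = N_p$ for every configuration in the ensemble, $S(\pi)$ saturates exactly as predicted.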
Consequently, we get \begin{equation} \left[S(\pi)\right] = \left[S(\pi)\right]_{\rm max}=\frac{4N^2_p}{N} , \label{eq:spimax} \end{equation} where $\left[S(\pi)\right]_{\rm max}$ is the maximum value that $S(\pi)$ can attain. { To analyze the QPTs, we can treat the rescaled value of $S(\pi)$ as the order parameter $S^*(\pi)$ defined as follows: \begin{equation} S^*(\pi)= \frac{S(\pi)-\left[S(\pi)\right]_{\rm min}} {\left[S(\pi)\right]_{\rm max}-\left[S(\pi)\right]_{\rm min}} , \end{equation} where $\left[S(\pi)\right]_{\rm min}$ is the minimum value of $S(\pi)$; consequently, $S^*(\pi)$ varies from 0 to 1 during the phase transition. } In the next section, we will study the limiting case of large EPC values where the NNN hopping is the only relevant transport mechanism in the CBM model leading to the t$_2$-V model [see Eq. (\ref{eq:t2v})]. \section{Analysis of the $t_2-V$ model -- a limiting case of the CBM model} { The effective Hamiltonian for the CBM model contains three terms, namely, NN hopping, NNN hopping, and NN repulsion [as can be seen from Eq.~\eqref{eq:Hpol}]. There are two possible extreme cases of the CBM model corresponding to small and large values of the EPC $g$. For small values of $g$ ($\sim 1$), NN hopping dominates over NNN hopping; consequently, Eq.~\eqref{eq:Hpol} reduces to \begin{equation} H_{tV} \equiv - t \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) + V \sum_{j} n_j n_{j+1}, \label{eq:tv} \end{equation} which is the well-studied t-V model \cite{gagliano,haldane} with $t/V \ll 1$ at the small values of $g$ ($\sim1 $) considered. On the other hand, for large values of $g$, NNN hopping dominates over NN hopping and Eq.~\eqref{eq:Hpol} can be simplified to \begin{eqnarray} H_{t_2V} \equiv && - t_2 \sum_j (c^{\dagger}_{j-1}(1-2n_{j}) c_{j+1} + {\rm H.c.} ) \nonumber \\ && + V \sum_j n_{j} n_{j+1} , \label{eq:t2v} \end{eqnarray} which we shall call the t$_2$-V model; here, since the EPC is large (i.e., $g \gtrsim 3$), $t_2/V \ll 1$. 
However, owing to the novelty of the model, we shall study it [i.e., Eq. (\ref{eq:t2v})] for arbitrary values of $t_2/V$ in rings with an even number of sites. Next, for $t_2/V \ll 1$, we note that the system always has alternate sites (i.e., one sub-lattice) occupied for less than half-filling, while above half-filling the other sub-lattice gets filled. This can be explained, for less than half-filling, as follows. At large repulsion, we shall compare the energy for the following two situations: \begin{enumerate} \item When there are $m_A>0$ ($m_B>0$) electrons in sub-lattice A (B). \item When all the $m_A+m_B=N_p$ electrons are in one sub-lattice only. \end{enumerate} In case $1$, each electron in sub-lattice B has $m_{B}-1$ sites blocked in B by other electrons in B and at least [at most] $m_{A}+1$ [$2m_{A}$] sites blocked in B by electrons in sub-lattice A; one can similarly argue for the electrons in sub-lattice A. Thus in sub-lattice B(A), each electron can hop to at most $\frac{N}{2}-m_{A(B)}-m_{B(A)}$ unblocked sites and at least $\frac{N}{2}-2 m_{A(B)} -m_{B(A)}+1$ unblocked sites. In case $2$, each electron has $m_A+m_B-1$ sites blocked by the other electrons in the same sub-lattice. Hence, each electron has exactly $\frac{N}{2}-m_A-m_B+1$ unblocked sites to hop to. At large repulsion, since case $2$ gives the electrons a larger number of unblocked sites to hop to, we see that the total energy is the lowest when all the electrons are present in the same sub-lattice. As for the other extreme situation $V=0$, for an even number of electrons, the model has both sub-lattices equally occupied. In the t$_2$-V model, {\it the ground state energy has a slope discontinuity, with the energy increasing up to a critical value, after which it is constant for FFs $\frac{1}{4}$, $\frac{1}{3}$, and $\frac{1}{2}$ [as shown in Fig.~\ref{fig:tp_GSESPI} (a)]}. 
We now show that as the interaction strength increases, {\it at a critical value of $V/t_2$, Ising $Z_2$ symmetry (i.e., both sub-lattices being equally populated) is broken and only a single sub-lattice is occupied}. As depicted in Fig.~\ref{fig:tp_GSESPI} (b), the structure factor $S(\pi)$ jumps from zero to its maximum value [given by Eq.~\eqref{eq:spimax} for FFs $\frac{1}{4}$, $\frac{1}{3}$, and $\frac{1}{2}$] indicating an {\it explosive} first-order QPT from a Luttinger liquid to a CDW. \begin{figure}[t] \includegraphics[height=8.5cm,width=4.5cm,angle=-90]{fig2.eps} \caption{(Color online) Plots of (a) ground state energy (E) and (b) structure factor value $S(\pi)$ as a function of interaction strength (V) for the t$_2$-V model in rings with N sites, Np electrons, and hopping t$_2$ = 1. } \label{fig:tp_GSESPI} \end{figure} Next, we observe that the numbers of electrons in the even and odd sub-lattices are separately conserved quantities for the t$_2$-V model. Therefore, the GS of the system is an eigenstate of both $\hat N_e$ and $\hat N_o$ with eigenvalues $N_e$ and $N_o$, respectively. Hence, Eq.~\eqref{eq:spi} simplifies to \begin{equation} S(\pi)=\frac{4(N_e - N_o)^2}{N} . \end{equation} Then, when $Z_2$ symmetry is respected, for an even number of electrons $N_p = 2N_e = 2N_o$, we have $S(\pi)=0$ and for odd value of $N_p$ we have $S(\pi)= 4/N$. We find that at a critical interaction strength, as shown in Fig.~\ref{fig:tp_wlsk}, the following dramatic changes occur: (i) the structure factor $S(\pi)$ jumps from $0$ to its maximum value $4 N_p^2/N$; (ii) $W(l {\rm odd})$ also jumps to its large $V$ value of $-\frac{4N^2_p}{N^2}$; and (iii) $W(l {\rm even})$ (for $l \neq 0$) too jumps and its final value at half-filling is 1. For a fixed $t_2$ and $N$, the critical value $V^N_C$ of $V$ increases monotonically as $N_p$ decreases. For $N=16$, $N_p = 2$, and $t_2=1$, we get $V^{16}_{C} \approx 4$. 
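The discontinuous jump in $S(\pi)$ can also be reproduced by brute-force diagonalization on a very small ring. The sketch below is our own check, not the modified Lanczos code used in the paper; the choices $N=8$, $N_p=3$ (odd, so plain periodic boundary conditions apply) and $V=50$ are purely illustrative. It builds the t$_2$-V Hamiltonian in the occupation-number basis with Jordan-Wigner signs and evaluates $S(\pi)$ via Eq.~\eqref{eq:spi}:

```python
import numpy as np
from itertools import combinations

# t2-V model on a ring: N sites, Np (odd) spinless fermions, hopping t2 = 1.
N, Np, t2 = 8, 3, 1.0
basis = [sum(1 << s for s in c) for c in combinations(range(N), Np)]
index = {m: i for i, m in enumerate(basis)}

def jw_sign(mask, site):
    # Jordan-Wigner string: (-1)^(number of occupied sites below `site`)
    return -1 if bin(mask & ((1 << site) - 1)).count("1") % 2 else 1

def hamiltonian(V):
    H = np.zeros((len(basis), len(basis)))
    for m in basis:
        i = index[m]
        # diagonal NN repulsion: V * sum_j n_j n_{j+1} (periodic ring)
        H[i, i] = V * sum((m >> j) & (m >> ((j + 1) % N)) & 1 for j in range(N))
        # NNN hopping: -t2 c^dag_{j-1} (1 - 2 n_j) c_{j+1}; h.c. added below
        for j in range(N):
            a, b = (j + 1) % N, (j - 1) % N      # move an electron a -> b
            if (m >> a) & 1 and not (m >> b) & 1:
                amp = -t2 * (1 - 2 * ((m >> j) & 1))
                m1 = m & ~(1 << a)               # annihilate at a
                s = jw_sign(m, a) * jw_sign(m1, b)
                H[index[m1 | (1 << b)], i] += amp * s
    return H + H.T - np.diag(np.diag(H))         # add h.c.; keep diagonal once

def S_pi(V):
    _, v = np.linalg.eigh(hamiltonian(V))
    gs = v[:, 0]
    # (N_e - N_o)^2 is diagonal in the occupation basis (0x55/0xAA = sublattices)
    d2 = np.array([(bin(m & 0x55).count("1") - bin(m & 0xAA).count("1")) ** 2
                   for m in basis], dtype=float)
    return 4.0 * float(gs * gs @ d2) / N

print(S_pi(0.0))     # both sub-lattices populated: S(pi) stays small
print(S_pi(50.0))    # broken phase: S(pi) = 4*Np^2/N = 4.5
```

Since $\hat N_e$ and $\hat N_o$ are conserved, $S(\pi)$ takes its saturated value exactly once the ground state enters the single-sublattice sector.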
From finite size scaling for half-filling, using $V^N_{C} - V^{\infty}_C \propto 1/N^2$ and system size $N \le 20$, we obtain $V^{\infty}_C \approx 2.83$. We see from the above analysis that, at a critical repulsion, the system undergoes a discontinuous transition to a {\em conducting commensurate CDW} state away from half-filling while at half-filling one obtains a Mott insulator. Usually commensurate CDWs are insulating (see Ref. \onlinecite{Gruner}) whereas our model surprisingly predicts a conducting commensurate CDW. Furthermore, quite unlike the Peierls transition, the {\it period of the CDW is independent of density}! \begin{figure}[b] \centering \includegraphics[height=8.5cm,width=8.5cm,angle=-90]{fig3.eps} \caption{(Color online) Plots of the density-density correlation function W(l) and structure factor S(k) for the t$_2$-V model on a N-site ring with Np electrons, interaction strength V, and hopping t$_2$ = 1. {Structure factor S(k) in (b) and (d) correspond to plots of W(l) in (a) and (c), respectively.} } \label{fig:tp_wlsk} \end{figure} } \section{Analysis of the CBM model} To analyze the QPT at various FFs of a system governed by the effective polaronic Hamiltonian given in Eq.~\eqref{eq:Hpol}, we performed our calculations at values of the adiabaticity $t/\omega_0 < 1$ and $g \ge 1$ such that the small parameter $t/(g\omega_0) < 1$ and $t e^{-3 g^2} \ll \omega_0$. Here we report only for the conservative case $t/\omega_0 = 0.1$ since the results at other values of $t/\omega_0 < 1$ are qualitatively similar (as shown in appendix \ref{app:tw}). As the value of $g$ increases (in the regime of study $1 \le g \le 3.5$), NN and NNN hoppings compete and the system gradually crosses over from a large-V ~t-V model to a large-V ~t$_{2}$-V model; thus, at values of $g \sim 1$ we expect the system to be a Luttinger liquid while at $g \sim 3$ we should get a CDW. 
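This competition can be made quantitative by evaluating the large-$g^2$ coefficients directly. The snippet below (our illustration) compares the NNN/NN hopping ratios of the cooperative case [Eq.~\eqref{eq:Hpol}] and the non-cooperative case [Eq.~\eqref{eq:Hpolold}] at $t/\omega_0 = 0.1$:

```python
import math

# Large-g^2 coefficients of the effective Hamiltonians:
# cooperative, Eq. (eq:Hpol):        NN = t e^{-3g^2},  NNN = t^2 e^{-2g^2}/(4 g^2 w0)
# non-cooperative, Eq. (eq:Hpolold): NN = t e^{-2g^2},  NNN = t^2 e^{-2g^2}/(2 g^2 w0)

def coeffs_cooperative(t, w0, g):
    return t * math.exp(-3 * g * g), t * t * math.exp(-2 * g * g) / (4 * g * g * w0)

def coeffs_holstein(t, w0, g):
    return t * math.exp(-2 * g * g), t * t * math.exp(-2 * g * g) / (2 * g * g * w0)

w0, t = 1.0, 0.1                    # adiabaticity t/w0 = 0.1, as in the text
for g in (1.0, 2.0, 3.0):
    nn_c, nnn_c = coeffs_cooperative(t, w0, g)
    nn_h, nnn_h = coeffs_holstein(t, w0, g)
    print(g, nnn_c / nn_c, nnn_h / nn_h)
# cooperative ratio = (t/(4 g^2 w0)) e^{+g^2}: grows with g and exceeds 1
# between g = 2 and g = 3, whereas the non-cooperative ratio t/(2 g^2 w0)
# stays well below 1 for all g >= 1.
```

This reproduces the key feature stated earlier: only with cooperative EPI does NNN hopping overtake NN hopping at strong coupling.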
In the next sub-sections we will demonstrate that the system indeed undergoes a Luttinger liquid to a {\it conducting} CDW transition with the QPT being second-order in nature. The QPT discussed in this work is quite different from the metallic Luttinger liquid to {\it insulating} CDW transition studied by many authors \cite{poilblanc,ortolani,hohenadler} in a system with only NN hopping and long-range Coulomb interaction. \subsection{Study of density-density correlation function, structure factor, and order parameter} First, we calculated $W(l)$ and $S(k)$ at FFs $\frac{1}{4}$ and $\frac{1}{3}$ numerically and the results are displayed in Fig.~\ref{fig:wlsk}. Upon tuning the EPC $g$, the density-density correlation function $W(l)$ gradually changes its nature from decaying to oscillatory thereby exhibiting long range order; it then attains the value given by Eq.~\eqref{eq:wlodd} at all odd values of $l$ corresponding to the state of only one sub-lattice being occupied [see Figs.~\ref{fig:wlsk}(a) and (c)]. Furthermore, the structure factor value $S(\pi)$ increases upon increasing $g$ and attains the maximum value given by Eq.~\eqref{eq:spimax} [see Figs.~\ref{fig:wlsk} (b) and (d)]. These observations assert that the system undergoes QPT from a Luttinger liquid to a conducting commensurate CDW state away from half-filling with period-doubling. Thus, at a critical value of $g$, the Ising $Z_2$ symmetry (i.e., both sub-lattices being equally populated) is broken. Quite surprisingly, our model predicts a conducting commensurate CDW without an excitation gap. Furthermore, similar to the t$_2$-V model, here too the period of CDW is independent of density and is, in fact, twice the lattice constant. 
\begin{figure}[t] \centering \includegraphics[height=8.5cm,width=8.5cm,angle=-90]{fig4.eps} \caption{(Color online) Density-density correlation function $W(l)$ in the CBM model at $\frac{t}{\omega_0}=0.1$ and $N=24$ for $(a)$ $\frac{1}{4}$-filling; and $(c)$ $\frac{1}{3}$-filling. Structure factor $S(k)$ at $(b)$ $\frac{1}{4}$-filling; and $(d)$ $\frac{1}{3}$-filling corresponding to plots of $W(l)$ in $(a)$ and $(c)$ respectively. } \label{fig:wlsk} \end{figure} We now compare the $S(\pi)$ versus $g$ behavior of our model and the Holstein model in Fig.~\ref{fig:Hols_vs_Coop}. We see that, while our CBM model appears to undergo a QPT, the Holstein model does not seem to do so. We observe that the coefficient of NNN hopping for the Holstein model (see Eq.~\eqref{eq:Hpolold}) becomes much smaller than that of the NN hopping as EPC $g$ increases. Hence, the Holstein model, for sufficiently large values of $g$, behaves like the t-V model; whereas our model can be approximated by the t$_2$-V model at large $g$. We know that the t-V model does not undergo a QPT away from half-filling \cite{haldane}. Therefore, the Holstein model too will not undergo a QPT at a non-half FF (which is consistent with the results of Ref.~\onlinecite{sdadys}). Thus, the $Z_2$ symmetry breaking QPT (at non-half filling) in our model is a unique feature which has no analog in either the Holstein model or the t-V model. Furthermore, at half-filling, the t-V model undergoes a QPT when $V=2t$ \cite{gagliano,haldane} while the Holstein model undergoes a QPT at $g > 1$ \cite{sdadys,hamer}; on the other hand our CBM model, for the range of EPC $g$ considered (i.e., $g \ge 1$), is always deep inside the CDW phase since the coefficient of NN repulsion is much larger than the hopping terms. 
\begin{figure}[b] \includegraphics[height=8.5cm,width=8cm,angle=-90]{fig5.eps} \caption{(Color online) Structure factor value $S(\pi)$ at $\frac{t}{\omega_0}=0.1$ for $(a)$ $\frac{1}{4}$-filling and $N=16$; and $(b)$ $\frac{1}{3}$-filling and $N=18$ in our CBM model and the Holstein model.} \label{fig:Hols_vs_Coop} \end{figure} Plots of the order parameter $S^*(\pi)$ displayed in Fig.~\ref{fig:order_parameter} also reveal signatures of QPT at FFs $\frac{1}{4}$ and $\frac{1}{3}$ and at different system sizes. Moreover, we observe that the increase in $S^*(\pi)$ becomes sharper as the system size increases. From the figures it appears that there is either a continuous or weakly first-order QPT for both the FFs. \begin{figure}[t] \includegraphics[height=9cm,width=8cm,angle=-90]{fig6.eps} \caption{(Color online) Order parameter $S^*(\pi)$ in the CBM model at $\frac{t}{\omega_0}=0.1$ for $(a)$ $\frac{1}{4}$-filling; and $(b)$ $\frac{1}{3}$-filling.} \label{fig:order_parameter} \end{figure} \subsection{Ground State Fidelity; Fidelity Susceptibility and its Scaling Behavior} Although the order parameter $S^*(\pi)$ depicts a QPT, the nature of the transition (whether it is first-order, second-order, or KT-like) is not clear. Therefore, we take recourse to the study of the ground state fidelity (GSF) and FS to characterize the nature of the QPT. The GSF is defined as the overlap between GSs at two different but near values of the control parameter (say $g$ and $g+\delta$) as follows: \begin{equation} F(g,\delta)=|\langle\Psi_0(g)|\Psi_0(g+\delta)\rangle| , \label{eq:fidelity} \end{equation} where $|\Psi_0 \rangle$ is the GS of the system and $\delta$ is a small quantity \cite{zanardi}. It is clear from Eq.~\eqref{eq:fidelity} that $F(g,\delta)$ depends on $\delta$. 
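This $\delta$-dependence of $F$, and the $\delta$-independent combination that the FS (defined next) extracts from it, can be illustrated on a toy two-level problem (our example, unrelated to the CBM model; for $H(g)=\sigma_x + g\sigma_z$ one finds $\chi_F(0)=1/4$ in closed form):

```python
import numpy as np

def ground_state(g):
    # Toy family H(g) = sigma_x + g*sigma_z; any smooth family would do.
    H = np.array([[g, 1.0], [1.0, -g]])
    _, v = np.linalg.eigh(H)       # eigenvalues ascending; column 0 is the GS
    return v[:, 0]

def fidelity(g, delta):
    # |<psi0(g)|psi0(g+delta)>|; abs() removes the arbitrary eigenvector sign
    return abs(ground_state(g) @ ground_state(g + delta))

def chi_F(g, delta):
    # finite-delta estimate of the fidelity susceptibility 2*(1-F)/delta^2
    return 2.0 * (1.0 - fidelity(g, delta)) / delta**2

for d in (1e-2, 1e-3, 1e-4):
    print(fidelity(0.0, d), chi_F(0.0, d))
# F moves away from 1 as delta grows, while chi_F converges to the
# delta-independent value 1/4 at g = 0.
```

The same finite-difference estimator is what one evaluates in practice for the many-body ground states discussed below.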
On the other hand, the FS \cite{you}, defined below as the second derivative of GSF \cite{zanardi}, \begin{equation} \chi_F(g) \equiv \partial^2_\delta F(g,\delta)|_{\delta=0}=2\lim_{\delta\rightarrow0}\frac{1-F(g,\delta)}{\delta^2} , \label{eq:susceptibility} \end{equation} is independent of $\delta$. The GS of the system, after transition, becomes two-fold degenerate as either of the two sub-lattices, namely even and odd, can have the larger occupancy. Now, any linear superposition of the two degenerate states is also a GS. Therefore, the calculated GSF [i.e., the absolute value of the overlap of GS $|\Psi_0 \rangle$ at two close by values of the control parameter ($g$ and $g+\delta$)] becomes arbitrary. To eliminate arbitrariness in the estimate of GSF, we start with $\Psi_0(g)$ as our initial guess in the modified Lanczos algorithm to get the GS $\Psi_0(g+\delta)$. Next, we point out a mapping that will enable us to perform fidelity calculations in systems with sizes larger than the usual sizes accessible to the modified Lanczos technique. At $\frac{N_p}{N}$-filling in our CBM model, when NN repulsion is much larger than both the NN and the NNN hoppings, our model can be reduced to the following model at $\frac{N_p}{N-N_p}$-filling but without NN repulsion (for similar analyses, see the treatment of the t-V model in Ref.~\onlinecite{dias} and the mapping of the t-V$_1$-V$_2$ model in Ref.~\onlinecite{sahinur_arxiv}): \begin{eqnarray} H^{RC}_{eff} &= & - t e^{-3 g^2} \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) \nonumber \\ && - \frac{t^2 e^{-2 g^2} }{4 g^2 \omega_0} \sum_j [c^{\dagger}_{j-1}(1-n_j) c_{j+1} + {\rm H.c.} ] . \label{eq:Hpol_reduced} \end{eqnarray} In the NNN hopping term, because of large NN repulsion, we have ignored the contribution of the sequential hopping depicted in Fig. 2(c) of Ref.~\onlinecite{sahinur}. The above prescription reduces the dimension of the Hilbert space significantly from $^{N}C_{N_p}$ to $^{N-N_p}C_{N_p}$. 
From Eq.~\eqref{eq:Hpol_reduced}, we also observe that the new effective Hamiltonian contains only kinetic terms. Hence, the GS in the CDW phase has to be conducting away from half-filling. In Fig.~\ref{fig:fid_deltaG}, we depict $F(g,\delta)$ and $\chi_F(g)$ as a function of $g$ at FFs $\frac{1}{4}$ and $\frac{1}{3}$. The dip in $F(g,\delta)$ at the critical point increases with the increase in $\delta$. This happens because the distance between the two ground states in parameter space increases with $\delta$. However, $\chi_F(g)$ for different small values of $\delta$ coincide as $\chi_F(g)$ is independent of $\delta$ [as can be seen from Eq.~\eqref{eq:susceptibility}]. \begin{figure}[t] \includegraphics[height=8.5cm,width=8cm,angle=-90]{fig7.eps} \caption{(Color online) GSF $F(g, \delta)$ in the CBM model at $\frac{t}{\omega_0}=0.1$ for $(a)$ $\frac{1}{4}$-filling and $N=32$; and $(c)$ $\frac{1}{3}$-filling and $N=30$. FS $\chi_F(g)$ for $(b)$ $\frac{1}{4}$-filling; and $(d)$ $\frac{1}{3}$-filling correspond to the GSF-plots in $(a)$ and $(c)$ respectively. For the sake of clarity, only selected points are shown for $\delta = 0.02$.} \label{fig:fid_deltaG} \end{figure} Additionally, Fig.~\ref{fig:fid_sys_size} shows $F(g,\delta =0.05)$, $\chi_F(g)$ and $\chi_{F_{\rm max}}$ (or the peak FS) for different system sizes at FFs $\frac{1}{4}$ and $\frac{1}{3}$. The dip (peak) in $F(g,\delta)$ $[\chi_F(g)]$ at the extremum point increases with the system size $N$. Furthermore, for a finite system, $\chi_{F_{ \rm max}}$ scales like \cite{gu,gu_review} \begin{equation} \chi_{F_{ \rm max}}\propto N^{\mu}. \end{equation} The logarithmic scale plot of the peak FS value $\chi_{F_{ \rm max}}(N)$ with $N$ shows a linear behavior (see Fig.~\ref{fig:fid_sys_size}$(e)$) which confirms a power law divergence of $\chi_{F_{ \rm max}}(N)$ at the extremum point $g_{\rm max}$. 
At large $N$, we obtain $\chi_{F_{ \rm max}}(N)\sim N^{2.001}$ at $\frac{1}{4}$-filling; whereas at $\frac{1}{3}$-filling we get $\chi_{F_{ \rm max}}(N)\sim N^{1.868}$. The superextensive power law divergence of $\chi_{F_{\rm max}}$ along with the dynamical critical exponent value $z \sim 1$ rule out a KT-like transition (see appendix \ref{app:sus} for details). \begin{figure}[b] \includegraphics[height=8cm,width=8cm,angle=-90]{fig8a-d.eps} \\ \hspace{.5cm} \includegraphics[height=8cm,width=3cm,angle=-90]{fig8e.eps} \caption{(Color online) GSF $F(g, \delta)$ in the CBM model at $\frac{t}{\omega_0}=0.1$ and $\delta = 0.05$ for $(a)$ $\frac{1}{4}$-filling; and $(c)$ $\frac{1}{3}$-filling. FS $\chi_F(g)$ for $(b)$ $\frac{1}{4}$-filling; and $(d)$ $\frac{1}{3}$-filling correspond to the GSF-plots in $(a)$ and $(c)$. (e) Plot of the peak values of FS $\chi_{F_{\rm max}}$(N) versus $N$, on a logarithmic scale, at $\frac{1}{4}$-filling and $\frac{1}{3}$-filling and the corresponding power-law fits.} \label{fig:fid_sys_size} \end{figure} In order to examine the possibility of a second-order QPT, we consider the following scaling relation \cite{gu,gu_review} for $\chi_F(g)$: \begin{equation} \frac{(\chi_{F_{ \rm max}}(N)-\chi_F(g,N))}{\chi_F(g,N)}=f[N^{\frac{1}{\nu}}(g-g_{\rm max})], \label{eq:scale} \end{equation} where $\nu$ is the critical exponent of the correlation length. Interestingly, a plot of $[\chi_{F_{ \rm max}}(N)-\chi_F(g,N)]/{\chi_F(g,N)}$ versus $N^{\frac{1}{\nu}}(g-g_{\rm max})$, as depicted in Fig.~\ref{fig:sus_scale}, shows a nice scaling relation of $\chi_F(g,N)$ with $\nu$ taking the values $1.33\pm 0.01$ and $1.41\pm 0.01$ for the best fits to the universal curves at FFs $\frac{1}{4}$ and $\frac{1}{3}$ respectively. The superextensive power law divergence and the scaling behavior of $\chi_F$ demonstrate that the QPT is second-order in nature. 
\begin{figure}[t] \includegraphics[height=8cm,width=4.5cm,angle=-90]{fig9.eps} \caption{(Color online) Scaling behavior of FS $\chi_F(g, N)$ in the CBM model at $\frac{t}{\omega_0}=0.1$ for $(a)$ $\frac{1}{4}$-filling yielding $\nu = 1.33\pm 0.01$ and for $(b)$ $\frac{1}{3}$-filling producing $\nu = 1.41\pm 0.01$.} \label{fig:sus_scale} \end{figure} Furthermore, as pointed out in Refs.~\onlinecite{gu,gu_review}, average FS $\chi_F(g)/N$ around the critical point $g_c$ scales like \begin{equation} \frac{\chi_F(g)}{N}\propto \frac{1}{\arrowvert g_c-g\arrowvert^\alpha}, \end{equation} in the thermodynamic limit, with $\alpha$ being a critical exponent. The three exponents $\alpha$, $\mu$, and $\nu$ are related as\cite{gu,gu_review} \begin{equation} \alpha= \nu(\mu-1). \label{eq:exponents} \end{equation} The values of the critical exponent $\alpha$, on using Eq.~\eqref{eq:exponents}, turn out to be $\alpha\simeq 1.33$ and $\alpha\simeq 1.22$ for FFs $\frac{1}{4}$ and $\frac{1}{3}$ respectively. On using finite size scaling, we find the critical point $g_c$ values to be $2.785$ and $2.594$ for FFs $\frac{1}{4}$ and $\frac{1}{3}$ respectively [based on positions of dips (peaks) of GSF (FS) in Fig.~\ref{fig:fid_sys_size}]. \section{Conclusions} We derived an effective Hamiltonian for molecular chains involving CBM at strong EPI. The spinless fermion model considered here should be relevant to perovskite systems with large on-site Coulomb repulsion. Our analysis shows that our system has an effective Hamiltonian of the form \begin{eqnarray} H_{t-t_2-V} = &&- t \sum_{j} ( c^{\dagger}_{j} c_{j+1} + {\rm H.c.}) \nonumber \\ &&- t_2 \sum_j (c^{\dagger}_{j-1}(1-2n_{j}) c_{j+1} + {\rm H.c.} ) \nonumber \\ && + V \sum_{j} n_j n_{j+1} , \label{eq:tt2v} \end{eqnarray} with $t_2 \ll t$ for small $g$ ($\sim 1$), whereas $t_2 \gg t$ for large $g$ ($\gtrsim 3$); furthermore $V$ is significantly larger than both $t$ and $t_2$ for all values of EPC ($1 \le g \le 3.5$) considered. 
{ Thus, NN and NNN hoppings compete and the system passes from a large-V ~t-V model (with a Luttinger liquid GS) to a large-V ~t$_2$-V model (with a period-doubling CDW GS) as $g$ increases;} our fidelity analysis shows that the QPT is second-order in nature. In the past, a density independent charge ordering has indeed been observed in manganite systems (see Fig.~2 in Ref.~\onlinecite{Littlewood}). However, since the dimensionality and number of bands are different, our findings are not directly related to these reported results. Although the reported calculations were performed for a conservative value of the adiabaticity $t/\omega_0 = 0.1$, we find that our results are qualitatively similar in the whole anti-adiabatic regime of $t/\omega_0 < 1$ (as shown in appendix A). Furthermore, we provide one more model system where the utility of GSF and FS in studying the nature of QPT is clearly demonstrated. \section{Acknowledgments} One of the authors (S. Y.) would like to thank P. B. Littlewood, S. Kos, D. E. Khmelnitskii, T. V. Ramakrishnan, Diptiman Sen, and M. Q. Lone for valuable discussions and KITP for hospitality.
Essi Kausalainen: performance B
Event (Taidetapahtuma: art event) • Tennispalatsi • Intended for everyone • Free admission • No signup required
Photo: HAM/Maija Toivanen
The name of the performance comes from the first letter of the words 'body', 'bone', 'being' and 'bee'. At the centre of everything is an impossible textile body (too large, too fragile, too heavy, too porous) installed in the exhibition space and inhabited by the performers' voices in the form of nearly imperceptible sound and three live performances. The group of performers consists of two singers, a visual artist and a child playing a horn. The horn-playing child leads the audience to the textile body and begins a dialogue with it. The three voices offer three perspectives on the body's experience in a moment in which the observer's understanding of the world gets flipped. The outlines of the body grow softer and the world starts to leak in. In a state of confusion, the observer holds on to language and attempts to make themselves whole again with words. The work is part of the Come Back as a Flower exhibition running at HAM.
Performance times:
Sat 23 October 2021 at 15:00
Wed 17 November 2021 at 17:00
Sat 8 January 2022 at 15:00
The duration of the performance is roughly 15–20 minutes. Languages: English and Finnish.
The New Zealand cricket team, nicknamed the Black Caps, are the national cricket team representing New Zealand. They played their first Test in 1930 against England in Christchurch, New Zealand, becoming the fifth country to play Test cricket. From 1930 New Zealand had to wait until 1956, more than 26 years, for its first Test victory, against the West Indies at Eden Park in Auckland. They played their first ODI in the 1972–73 season against Pakistan in Christchurch. The current Test, one-day and Twenty20 captain is Kane Williamson, who replaced Brendon McCullum after McCullum announced his retirement in late December 2015. The national team is organised by New Zealand Cricket.
There are two municipalities of Switzerland named Teufen: Teufen in the canton of Appenzell Ausserrhoden, and Freienstein-Teufen in the district of Bülach in the Canton of Zurich.
\section{INTRODUCTION} Star formation in our galaxy occurs primarily within giant molecular clouds (GMC) - highly nonuniform complexes of molecular gas containing a total mass of $\sim 10^5\;M_\odot$ within a radius of $\sim 20$ pc. These complexes have hierarchical structure that can be characterized in terms of clumps and dense cores surrounded by an interclump gas of density $\sim 5$--$25$ cm$^{-3}$. Clumps have characteristic densities of $\sim 10^3$ cm$^{-3}$ and radii ranging between $0.2$--$2$ pc, the largest of which are comprised of as many as $\sim 1000$ small ($R \sim 0.1 - 0.2$ pc), dense ($\sim 10^4$--$10^5$ cm$^{-3}$) cores whose mass function has been measured to range from $\sim 1 - 100 M_\odot$ (with a peak $\sim 10 M_\odot$) by Jijina et al. (1999), and more recently, from $\sim 0.2 - 20 M_\odot$ (with a characteristic mass of $\approx 2 M_\odot$) by Lada et al. (2008). It is the gravitational collapse of these cores within a clump that results in the formation of stars. The dynamics of core collapse is therefore a fundamental component of the star formation process. The study of self-similar collapse flows has provided an important cornerstone of the current theory of star formation (e.g., Shu et al. 1987), with numerous works appearing in the literature. The original self-similar collapse calculations (Larson 1969ab; Penston 1969ab; Shu 1977) considered isothermal spherical flows. Since then, many generalizations of the collapse have been made. The leading order effects of rotation have been studied, both for the inner pressure free region (Ulrich 1976; Cassen \& Moosman 1981) and for the entire core (Terebey, Shu, \& Cassen 1984). The leading order effects of magnetic fields have also been included (Galli \& Shu 1993ab; Li \& Shu 1996, 1997). More recently, the collapse of magnetized singular isothermal toroids has been studied (Allen, Shu, \& Li 2003; Allen, Li, \& Shu 2003). 
While much of the focus has been given to spherically symmetric flows, filamentary and elongated structures in molecular clouds are commonly observed (e.g., Houlahan \& Scalo 1992; Harjunpaa et al. 1999; Jijina 1999), and based on the results of numerical simulations (e.g., Curry 2000; Jappsen et al 2005), appear to be an important aspect of the star formation process. A complete understanding of molecular cloud dynamics thus requires the inclusion of cylindrical geometries. Indeed, several authors have applied self-similar techniques toward the study of how cylindrical structures collapse (Inutsuka \& Miyama 1992; Kawachi \& Hanawa 1998; Hennebelle 2003; Tilley \& Pudritz 2003; Shadmehri 2005). A central element of these studies involves the form of the equation of state used. While an isothermal equation of state clearly provides a reasonable and important starting point for the general study of core collapse, observational evidence points to softer equations of state. Specifically, non-thermal linewidths $\Delta v$ in molecular cloud cores show a correlation with density of the form $(\Delta v)^2 \sim \rho^{-\beta}$, with $\beta > 0$ (e.g., Larson 1981; Jijina et al. 1999). If one interprets the linewidth $\Delta v$ as the effective transport speed in the medium, then the corresponding effective equation of state has the form $P \sim \rho^\Gamma$, where $\Gamma = {1-\beta}$ and hence $0 < \Gamma \le 1$. The self-similar collapse of filamentary structures with polytropic equations of state has been explored by Kawachi \& Hanawa (1998) and Shadmehri (2005), with the latter work also including the effects of magnetic fields. These works, however, considered static equations of state which imposed global constraints on the pressure and density. It is likely that the physical processes which govern a gas will change as that gas undergoes gravitational collapse. 
We therefore extend the analyses of previous works on cylindrical flows by considering a dynamic equation of state (Fatuzzo, Adams, \& Myers 2004). Specifically, we consider the case in which the dynamic equation of state for the collapsing gas is different than the effective (static) equation of state that produces the initial equilibrium configuration. Here, the static equation of state (as set by $\Gamma$) refers to the pressure law that enforces the initial (pre-collapse) configuration for the gas, whereas the dynamic equation of state (as set by $\gamma$) refers to the pressure law that describes how the thermodynamic variables of the gas change as the material is compressed during collapse. This process is governed by the entropy evolution equation (introduced in \S 2.1), so that specific entropy is conserved along a given streamline. However, since the physics that determines the density profiles of the pre-collapse states can be different from the physics that governs the thermodynamics of the collapse flow, we allow $\gamma \ne \Gamma$. Our analysis therefore allows for a more robust model from which to gain insight into the dynamics of filamentary/elongated molecular structures. An important aspect of this work is its focus on the mass infall rate $\dot M$. Collapse flows are often self-similar and have no characteristic mass scale. However, the mass infall rate is one of the most important physical quantities in the star formation problem, and determines, in part, the total system luminosity and the total column density of the infalling envelope. These quantities, in turn, largely account for the spectral appearance of protostellar objects (e.g., Adams, Lada, \& Shu 1987; Adams 1990) since most of the luminosity is derived from material falling through the gravitational potential well of the star. 
Although the circumstellar disk stores some of the energy in rotational motion, the system luminosity is (usually) a substantial fraction of the total available luminosity \begin{equation} L_0 \equiv {G M_s {\dot M} \over R_\ast } \, , \label{eq:luminosity} \ee where $M_s$ is the total mass of the star/disk system, ${\dot M}$ is the mass infall rate, and $R_\ast$ is the stellar radius. The stellar radius, which helps determine the depth of the potential well, is itself a function of the mass infall rate (Stahler, Shu, \& Taam 1980). The paper is organized as follows. We formulate the collapse problem via self-similar methods for the general case of a collapsing cylindrical cloud of gas in \S2. In \S3, we determine the range of parameter space that yields collapse solutions by considering the limiting cases $v/x \to 0$ and $x/v \to 0$, and illustrate how the ensuing collapse is affected by the initial state of the gas. Guided by these results, we explore in \S4 the collapse of initial states that are out of exact hydrostatic equilibrium by being overdense, and apply our results to the collapse of elongated cores in \S 5. We consider cases for which solutions go smoothly through the singular surface in \S6, including the $\gamma = 1$ case which leads to a different class of collapse solutions. We present our conclusions in \S7. \section{FORMULATION OF THE COLLAPSE PROBLEM} \subsection{Basic Governing Equations} We adopt a cylindrical coordinate system described by the variables $r, z$ and $\phi$. To keep the problem tractable, we assume dependence only in $r$ (i.e., the cylindrical structure is infinite in length), with the self-gravitating gas described locally by its density $\rho(r,t)$, pressure $P(r,t)$ and (radial) velocity $u(r,t)$, and globally by the mass per unit length $M_L(r,t)$ contained within a radius $r$. 
The gravitational collapse of this fluid is governed by conservation of mass \begin{equation} {\partial M_L \over \partial t} + u {\partial M_L \over \partial r} = 0 \qquad {\rm and} \qquad {\partial M_L \over \partial r} = 2 \pi r \rho , \ee or equivalently, by the equation of continuity, \begin{equation} {\partial \rho \over \partial t} + {1 \over r} {\partial \over \partial r} (r \rho u) = 0 \, , \label{eq:fullcont} \ee and the force equation \begin{equation} {\partial u \over \partial t} + u {\partial u \over \partial r} = - {1 \over \rho} {\partial P \over \partial r} - {2 G M_L \over r} \, . \ee To complete the description required for the evolution of the gas to be solved, the pressure must be specified through a choice of the equation of state. For example, the relation $P = s^2 \rho$ is used to describe an isothermal gas (where $s$ is the sound speed), whereas a polytropic equation of state $P = {\cal K} \rho^\gamma$ allows for a more general treatment of the problem. Of course, adopting an equation of state represents a simplification to the real system as it embodies the numerous physical processes which govern the true state of the gas -- and how they add/remove energy to/from the gas -- into one simple relation. It is quite likely that these processes will change significantly during the collapse, so that the equation of state which governs the gas during the initial stages of collapse will almost certainly have a different form from that which governs the collapse during the later stages of the collapse. We therefore introduce a dynamic polytropic equation of state in our formalism which allows the relation between pressure and density to evolve during the collapse (e.g., Fatuzzo, Adams \& Myers 2004). 
Specifically, we assume that entropy is conserved along a streamline, and use the conservation of entropy equation \begin{equation} \left({\partial \over \partial t} + u {\partial \over \partial r}\right) \log \left[P / \rho^\gamma \right] \, = 0 \, , \label{eq:entropyone} \ee to follow the evolution of the pressure. We thus refer to $\gamma$ as the index of the dynamic equation of state. It is important to note that equation (\ref{eq:entropyone}) describes how a given parcel of gas changes its thermodynamic variables along a streamline, therefore allowing for an equation of state which evolves during the collapse. In contrast, relating the pressure to the density through a fixed (static) equation of state (e.g., $P = {\cal K} \rho^\gamma$) implies a global constraint on those variables. \subsection{The Similarity Transformation} As shown in \S2.1, the cylindrical collapse problem is represented mathematically by a set of coupled partial differential equations in time $t$ and radial position $r$. In this section, we find a similarity transformation that reduces this set of PDE's to a set of ordinary differential equations in a new similarity variable $x$ which we define below. In particular, we look for a similarity transformation of the general form $$ x = A t^a r \, , \qquad \rho = B t^b \alpha(x) \, , \qquad M_L = C t^c m(x) \, , $$ \begin{equation} u = D t^d v(x) \, , \qquad {\rm and} \qquad P = E t^e p(x) \, . \ee Here, both the coefficients ($A$, $B$, $C$, $D$, $E$) and the indices $(a, b, c, d, e)$ are constants. The reduced fluid fields ($\alpha$, $m$, $v$, $p$) are dimensionless functions of the (single) dimensionless similarity variable $x$. In this work, the time benchmark $t=0$ corresponds to the instant of the onset of collapse. The general similarity transformation calculation leads to four equations to specify the five indices $a,b,c,d,e$. 
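These four conditions can be checked symbolically. A minimal sympy sketch (ours, not part of the original derivation) encodes the requirement that every term of each fluid equation carry the same power of $t$ under the similarity ansatz, and confirms that exactly one index is left free:

```python
import sympy as sp

a, b, c, d, e = sp.symbols('a b c d e')

# Require every term of each fluid equation to scale with the same power of t
# under x ~ t^a r, rho ~ t^b, M_L ~ t^c, u ~ t^d, P ~ t^e
# (each d/dr brings a factor t^a; each d/dt lowers the power of t by one):
conditions = [
    sp.Eq(b - 1, a + b + d),   # continuity: d(rho)/dt  vs  (1/r) d(r rho u)/dr
    sp.Eq(d - 1, e - b + a),   # force:      du/dt      vs  (1/rho) dP/dr
    sp.Eq(d - 1, c + a),       # force:      du/dt      vs  G M_L / r
    sp.Eq(c + a, b - a),       # mass:       dM_L/dr    vs  2 pi r rho
]
sol = sp.solve(conditions, [b, c, d, e])

assert sol[b] == -2
assert sp.simplify(sol[c] + 2*a + 2) == 0     # c = -(2a + 2)
assert sp.simplify(sol[d] + a + 1) == 0       # d = -(a + 1)
assert sp.simplify(sol[e] + 2*(a + 2)) == 0   # e = -2(a + 2)
```

As expected, the linear system fixes $b$, $c$, $d$, $e$ in terms of the remaining free index $a$.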
We leave the constant $a$ arbitrary for the moment and write the rest of the variables in terms of its value, i.e., $$ a=a \, , \qquad b = - 2 \, , \qquad c = - (2a + 2) \, , $$ \begin{equation} d= - (a + 1) \, , \qquad {\rm and} \qquad e = - 2 (a + 2) \, . \ee Similarly, for the coefficients we obtain $$ A = A \, , \qquad B = (2 \pi G)^{-1} \, , \qquad C = (A^2 G)^{-1} \, , $$ \begin{equation} D = A^{-1} \, , \qquad {\rm and} \qquad E = (2 \pi G A^2)^{-1} \, , \ee where $G$ is the gravitational constant. We thus obtain reduced equations of motion in the form \begin{equation}(ax + v) {dm \over dx} = (2a + 2) m \, , \label{eq:contmass} \ee \begin{equation} {dm \over dx} = x \alpha \, , \label{eq:contmass2} \ee \begin{equation} (ax + v) {1 \over \alpha} {d \alpha \over dx} + {dv \over dx} = \left(2 - {v\over x}\right) \, , \label{eq:contrho} \ee \begin{equation} (ax + v) {dv \over dx} + {1 \over \alpha} {dp \over dx} = - {2 m \over x} + (a+1) v \, , \label{eq:force} \ee \begin{equation} (ax + v) {d \over dx} \log [ p/\alpha^\gamma ] = 2 (2 + a - \gamma) \, . \label{eq:entropy} \ee Note that our similarity transformation is not unique -- one can always rescale the coefficients $\{ A, B, C, D, E \}$ by a set of dimensionless numbers and obtain new equations of motion with different numerical coefficients. The first two equations of motion can be immediately combined to obtain an expression for the reduced mass $m(x)$, i.e., \begin{equation} m = {(ax + v) \over (2a + 2) } x \alpha \, . \label{eq:massint} \ee Likewise, multiplying equation (\ref{eq:contmass}) by the constant \begin{equation} q \equiv {2 \over 2a + 2} (2 + a - \gamma) . 
\label{eq:qdef} \ee and subtracting the product from equation (\ref{eq:entropy}) yields a differential equation which can be integrated to obtain an expression for the reduced pressure \begin{equation} p = \, {\cal C}_1 \, \alpha^{\gamma} \, m^q \, = {\cal C}_1 \, \alpha^{\gamma+q}\, \left[{(ax+v)\over (2a+2)}\,x\right]^q\, , \label{eq:psolution} \ee where ${\cal C}_1$ is a positive integration constant. Given the solutions for the reduced pressure $p(x)$ and reduced mass $m(x)$, Eqs. (\ref{eq:contrho}) and (\ref{eq:force}) are the relevant equations of motion to determine the remaining unknown functions $\alpha(x)$ and $v(x)$. Using Cramer's rule, we derive an equivalent set of equations \begin{equation} {d\alpha\over dx} = {{\cal A}\over {\cal D}} \qquad {\rm and} \qquad {dv \over dx} = {{\cal V}\over {\cal D}}\,, \ee where \begin{equation} {\cal D} = {\left(ax+v\right)^2\over\alpha}-{\gamma p \over\alpha^2}\,, \ee \begin{equation} {\cal V} = -{\left(ax+v\right)^2\over 1+a}-{2p\left(2+a-\gamma\right)\over\alpha^2} +\left(ax+v\right)\left(1+a\right){v\over\alpha} -{\gamma p\over\alpha^2}\left(2-{v\over x}\right)\,, \ee and \begin{equation} {\cal A} = \left(ax+v\right)\left(2-{v\over x}\right)+{\alpha\left(ax+v\right)\over 1+a}+{2p\left(2+a-\gamma\right)\over\alpha \left(ax+v\right)} -(1+a)v\;. \ee The physical state of the gas is defined through a choice of the parameters ($a, \gamma, C_1$), and its collapse is described by the solutions to Eqs. (14) -- (20) for the specified set of reduced field variables $v_i = v(x_i)$ and $\alpha_i = \alpha(x_i)$. It is mathematically possible to consider ``complete'' solutions that span the entire available range $-\infty < x < \infty$ (first obtained by Hunter 1977; see also Whitworth \& Summers 1985), and as such, span both negative and positive times. 
However, it is rather unlikely that molecular clouds will evolve toward their centrally condensed initial configurations in a self-similar manner subject to (only) the physics included in these equations of motion. For example, before the onset of collapse (for $t < 0$), molecular clouds may evolve through the processes of ambipolar diffusion (at least for small mass scales), shocks, turbulent dissipation, cooling flows, condensation instabilities, and cloud-cloud collisions. In addition, the cloud will most likely initiate collapse before a completely self-similar equilibrium state has been attained; the collapse will only become self-similar asymptotically in time (i.e., the self-similar collapse solutions of this paper are intermediate asymptotic solutions to the realistic problem of the collapse of a finite cloud with finite central density). As such, the self-similar solutions of this paper for the protostellar collapse phase ($t > 0$) cannot (in general) be extended to the pre-stellar phase ($t < 0$), as they would likely encounter a critical point and become singular. We therefore limit this discussion to solutions with $0 < x < \infty$, sometimes called ``semi-complete'' solutions. \section{GENERAL SOLUTIONS TO THE COLLAPSE PROBLEM} In order to determine what set of parameters ($a,\gamma, C_1$) yield solutions that describe the collapse of a molecular cloud filament, we obtain analytic solutions to the equations of motion in the limit that $t \to 0$. As these solutions describe the early stages of the collapse, the gas velocity $u$ must be negative and bounded. For a positive value of $a$, this constraint requires that $u = D t^{-(a+1)} v(x)$ not be singular at $t = 0$, which in turn requires that $v(x) \propto x^\beta \propto t^{\beta a}$, where $\beta \ge (a+1)/a$. Clearly then, the ratio $v/x \to 0$ as $t \to 0$ for solutions relevant to our discussion. In this limit, Eq. (9) reduces to a form that can be easily integrated, and along with Eqs. 
(14) and (16), leads to analytic solutions for the reduced density, mass, and pressure of the form \begin{equation} \alpha = \lambda\, x^{2/a}\;, \ee \begin{equation} m = {a\over 2a+2}\, \lambda\, x^{(2a+2)/a}\;, \ee and \begin{equation} p = C_1\,\lambda^{-aq}\,\left[{a\over 2a+2}\right]^q\,\alpha^{2+a} \,, \ee where $\lambda$ is a positive constant. The reduced velocity is governed by the limiting form of Eq. (12) \begin{equation} a {dv\over dx} - (a+1) {v\over x} = V_0\, x^{2/a}\,, \ee where \begin{equation} V_0 = -2\lambda\left[{a\over 2a+2}\right]\,\left[1 + C_1 \left({2+a \over a}\right) \left({2a+2\over a}\,\lambda^a\right)^{(\gamma-1)/(1+a)}\right]\,. \ee Eq. (24) yields a power-law form solution \begin{equation} v = V_0\, x^{(2+a)/a}\,, \ee indicating that collapse solutions can be found for $a > 0$ (since $V_0 < 0$). However, the real density becomes time-independent and scales with radius as $\rho\propto r^{2/a}$ in the limit that $v/x \to 0$. Since the density approaches zero at small radii, solutions for $a > 0$ cannot represent the collapse of filament-like structures (whose initial density profiles are peaked at $r = 0$), thereby ruling out this range of parameter space in our work. Interestingly, the ratio $v/x$ also approaches zero as $t \to 0$ for the case that $a < 0$ (since $x \to \infty$). The reduced density, mass and pressure given by Eqs. (21) -- (23) therefore also describe the state of the gas at the onset of collapse for this case. However, the real density (which still scales as $\rho\propto r^{2/a}$) now becomes singular as $r \to 0$. Of course, the equations of motion presented in this work are simply mathematical idealizations to the real, physical problem, for which filaments have a finite density core at small radii $r < r_C$ and a large but finite outer boundary $r_{out}$. This ``real'' system is expected to follow the self-similar solutions for intermediate length-scales.
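As a quick consistency check (ours, not in the original), one can verify symbolically that the power-law form of Eq. (26) indeed solves the limiting velocity equation (24):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, V0 = sp.symbols('a V_0')

v = V0 * x**((2 + a)/a)                                   # trial solution, Eq. (26)
residual = a*sp.diff(v, x) - (a + 1)*v/x - V0*x**(2/a)    # Eq. (24)

# exact check at representative parameter values (a = -5/4 is the case used in Sec. 3)
assert residual.subs({a: sp.Rational(-5, 4), V0: -1, x: 3}) == 0
assert residual.subs({a: sp.Rational(-7, 4), V0: -2, x: 5}) == 0
```

Substituting $v = V_0\,x^{(2+a)/a}$ into Eq. (24) gives $V_0 x^{2/a}[(2+a) - (a+1)] = V_0 x^{2/a}$, so the residual vanishes identically.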
Indeed, previous numerical work (Foster \& Chevalier 1993) indicates that the collapse of an isothermal core approaches the expected self-similar form when the core has $r_{out}/r_C > 20$. Likewise, Fatuzzo, Adams \& Myers (2004) found similar results, with cores that have initial inward velocities more readily approaching the self-similar collapse forms. For the case that $a$ is negative, additional constraints are provided by the requirement that the reduced mass $m$ remain positive, and that the reduced pressure $p$ increases as $\alpha$ increases. Together, these physical conditions require $-2 < a < -1$, as can be clearly seen by the forms of Eqs. (22) and (23). In addition, collapse solutions (for which $V_0 < 0$) require that \begin{equation} \lambda > \lambda_{crit} \equiv \left[{a \over 2a+2}\right]^{1/a}\, \left[-C_1{(2+a) \over a}\right]^{(a+1)/(a-a\gamma)}\,. \ee We note that $\lambda_{crit}$ is not defined if $\gamma = 1$. Indeed, the $\gamma = 1$ case yields a unique class of collapse solutions, and will be considered separately in \S 6.2. Since the real pressure is defined in terms of the constant $A$ through the similarity transforms, we can set $p = \alpha^{a+2}$ without loss of generality. That is, for a real pressure \begin{equation} P = {\cal K} \rho^\Gamma\,, \ee setting \begin{equation} A = \left[{\cal K} (2\pi G)^{1-\Gamma}\right]^{-1/2}\,, \ee and $\Gamma = (2+a)$ yields the desired form for the reduced equation of state ($p = \alpha^{a+2}$), and sets the integration constant \begin{equation} C_1 = \left[{2a+2\over a}\right]^q \lambda^{aq}\,, \ee and, in turn, \begin{equation} \lambda_{crit} = \left[-{a^2\over (2a+2)(2+a)}\right]^{1/a}\,. \ee To obtain full solutions, one must numerically integrate Eqs. (17) -- (20) from a specified initial set of values $\alpha_i = \alpha(x_i)$ and $v_i = v(x_i)$. We note that one cannot set initial conditions which correspond to the onset of collapse since $x\to \infty$ as $t\to 0$.
As such, initial conditions are set through an arbitrary choice of $x_i >> 1$ along with a corresponding value of $\alpha_i$ for which $\lambda > \lambda_{crit}$ and a small but negative value of $v_i$. Representative solutions are illustrated in Figs. (1) and (2) for the case that $a = -1.25$ ($\Gamma = 0.75$) and $\gamma = 0.25$ (with $C_1$ given by Eq. [30]), obtained by integrating inward from $x_i = 5 \times 10^3$ and $\alpha_i = 5.78 \times 10^{-7}$ (for which $\lambda = 1.5 \lambda_{crit}$). The solid curves represent the solutions obtained by setting the value of $v_i$ equal to its power-law counterpart, as defined by Eq. (26). The dashed lines represent solutions obtained by setting the value of $v_i$ equal to 0.1, 0.3, 3 and 10 times the power-law value. It is clear from our results that the reduced density is not sensitive to the initial velocity, as all five cases yield values that differ by less than $0.1$ percent. In addition, it is clear that the velocity evolves toward its power-law form as given by Eq. (26). For completeness, we note that in the limit that $x/v \to 0$, the reduced mass approaches a constant value -- $m \to m_0$. In turn, the reduced pressure takes the form $p \propto \alpha^\gamma$. Taken together, the two limiting solutions presented here thus illustrate how the equation of state smoothly transforms from an initial index of $\Gamma = a+2$ (at early times and/or large radii) to a dynamic index $\gamma$ (at late times and/or small radii). This result is clearly illustrated in Fig. 3, which plots $p$ versus $\alpha$ for the solutions presented in Figs. (1) and (2). The five initial velocities yield results that are within $0.5$ percent of each other. As can be easily seen from Fig. (3), the equation of state index evolves from a value of $\Gamma = 0.75$ (in the lower left corner of the figure) to a value $\gamma = 0.25$ (in the upper right corner), with the transition occurring within a narrow region around $x \approx 1$.
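The inward integration described above is straightforward to reproduce. The sketch below (our own re-implementation, not the authors' code; scipy is assumed) integrates the reduced equations for the representative case $a = -1.25$, $\gamma = 0.25$, $\lambda = 1.5\,\lambda_{crit}$, starting from the power-law state of Eqs. (21) and (26); it recovers the quoted $\alpha_i \approx 5.78 \times 10^{-7}$ and tracks the large-$x$ power laws:

```python
from scipy.integrate import solve_ivp

a, gam = -1.25, 0.25                                  # Gamma = a + 2 = 0.75
q = 2 * (2 + a - gam) / (2*a + 2)                     # Eq. (15)
lam_crit = (-a**2 / ((2*a + 2) * (2 + a)))**(1/a)     # Eq. (31)
lam = 1.5 * lam_crit
C1 = ((2*a + 2) / a)**q * lam**(a*q)                  # Eq. (30)

def rhs(x, y):
    """d(alpha)/dx = A/D and dv/dx = V/D, Eqs. (17)-(20)."""
    alpha, v = y
    w = a*x + v
    p = C1 * alpha**(gam + q) * (w * x / (2*a + 2))**q     # Eq. (16)
    D = w**2/alpha - gam*p/alpha**2
    V = (-w**2/(1 + a) - 2*p*(2 + a - gam)/alpha**2
         + w*(1 + a)*v/alpha - gam*p/alpha**2 * (2 - v/x))
    A_ = (w*(2 - v/x) + alpha*w/(1 + a)
          + 2*p*(2 + a - gam)/(alpha*w) - (1 + a)*v)
    return [A_/D, V/D]

# power-law initial state at large x (Eqs. 21, 25, 26)
xi = 5.0e3
alpha_i = lam * xi**(2/a)                             # ~5.78e-7, as quoted in the text
V0 = -2*lam * (a/(2*a + 2)) * (1 + C1*((2 + a)/a)
        * ((2*a + 2)/a * lam**a)**((gam - 1)/(1 + a)))
vi = V0 * xi**((2 + a)/a)

sol = solve_ivp(rhs, (xi, 10.0), [alpha_i, vi],
                t_eval=[1000.0, 10.0], rtol=1e-9, atol=1e-30)
alpha_1000, v_1000 = sol.y[0][0], sol.y[1][0]
# the solution tracks the large-x power laws of Eqs. (21) and (26) closely
```

Pushing the lower limit of integration below $x \approx 1$ should reproduce the break between the pressure-supported and near-free-fall regimes seen in Figs. (1) and (2).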
Finally, we note that the equations of motion which define our problem may well yield several different classes of solutions -- both physical (e.g., wind solutions) and unphysical. However, solutions which pertain to the collapse of filamentary structures in the absence of shocks (but see \S 6) are all qualitatively similar, and therefore characterized by solutions such as those presented in Figs. (1) -- (3). \section{COLLAPSE OF OVERDENSE STATES} The analysis presented in the previous section illustrates that collapse solutions of filamentary structures exist when $-2 < a < -1$ and $\lambda > \lambda_{crit}$. It is easy to show that when $\lambda = \lambda_{crit}$, Eqs. (21) -- (23) yield self-similar solutions which describe a gas in hydrostatic equilibrium (for which $v = 0$). These solutions are fully specified by the equation of state index $\Gamma$ (which we use hereafter to specify the state of the gas instead of $a$), and reduce to \begin{equation} \alpha_E = \left[\Gamma\,{(2-2\Gamma)\over (2-\Gamma)^2}\right]^{1\over 2-\Gamma} \,x^{-2\over 2-\Gamma}\;, \ee \begin{equation} m_E = \left[\Gamma\,{(2-2\Gamma)^{\Gamma-1}\over (2-\Gamma)^\Gamma} \right]^{1\over 2-\Gamma}\, x^{2-2\Gamma \over 2-\Gamma}\,, \ee and $p = \alpha^\Gamma$ (assuming that $A$ is set through the relation given by Eq. [29]). In turn, the real density and mass per unit length profiles are given by the time-independent expressions \begin{equation} \rho_E = \left[{{\cal K}\over 2\pi G}\right]^{{1\over 2-\Gamma}} \, \left[\Gamma \,{(2-2\Gamma) \over (2-\Gamma)^2} \right]^{1\over 2-\Gamma} \, r^{{-2\over 2-\Gamma}}\;, \ee and \begin{equation} M_{L,E} = \left[{\cal K} (2\pi G)^{1-\Gamma}\right]^{{1\over 2-\Gamma}} \, \,G^{-1}\, \left[\Gamma \,{(2-2\Gamma)^{\Gamma-1}\over (2-\Gamma)^\Gamma} \right]^{1\over 2-\Gamma}\, r^{2-2\Gamma \over 2-\Gamma}\,.
\ee In this section, we consider the collapse of gas structures which are initially at rest, but are out of hydrostatic equilibrium by being everywhere overdense by a factor $\Lambda$. As such, the initial gas (at time $t = 0$) is described by \begin{equation} \alpha_i = \Lambda \alpha_E \qquad {\rm and} \qquad v_i = 0\;, \ee where $\alpha_E$ is the corresponding density profile if the gas were in hydrostatic equilibrium. Essentially, this class of solutions corresponds to the solutions discussed in \S 3, where $\lambda = \Lambda \lambda_{crit}$, but with the velocity in the $x \to \infty$ limit given by Eq. (26), as shown below. We note that hydrostatic equilibrium solutions of the form \begin{equation} \alpha_E = \bar\Lambda x^{2/a} \qquad {\rm and} \qquad m_E = \bar\Lambda\, {a\over 2a+2}\, x^{(2+2a)/a}\,, \ee exist when $\gamma = 1$. However, since the reduced pressure takes the form $p = C_1 \alpha m$, the corresponding balance between the pressure gradient and gravity, as described by Eq. (12), \begin{equation} C_1 {d\over dx} (\alpha m) = - 2 {\alpha m \over x}\,, \ee is then always maintained regardless of the value of $\bar \Lambda$. The collapse scenarios considered in this section therefore cannot occur if the dynamic index $\gamma = 1$, regardless of the value of the static index $\Gamma$. For completeness, we explore a different class of collapse solutions with $\gamma = 1$ in \S 6.2. With an initial (overdense) state defined in terms of $\Gamma$ and $\Lambda$, as per Eqs. (32) and (36), and the dynamic equation of state defined in terms of $\gamma$, the ensuing collapse solutions can then be obtained by numerically integrating Eqs. (17) -- (20). The adopted relation $p = \alpha^\Gamma$ is maintained for the initial (overdense) state by setting \begin{equation} C_1 = \Lambda^{{(\Gamma-2)(\gamma-\Gamma)\over 1-\Gamma}} \left({2-\Gamma\over \Gamma}\right)^{{\gamma-\Gamma\over 1-\Gamma}}\,.
\ee In practice, however, this initial state corresponds to a time $t_i = 0$, and in turn, an initial value of $x_i = \infty$. We therefore first obtain higher-order analytical expressions for the reduced density and velocity from those presented in \S 3 via a series expansion of the reduced equations in the limit $x_0 >> 1$. Doing so yields the following values for the reduced density and velocity: \begin{equation} \alpha_0 = \Lambda\,\alpha_E (x_0)\, \left[1-{\Gamma\over 2-\Gamma} \,\Delta_0 \, x_0^{-2\over 2-\Gamma}\;\right]\,, \ee and \begin{equation} v_0 = -\Delta_0 \,x_0^{-\Gamma\over 2-\Gamma}\; \left[1+{1\over 3}\left\{\Lambda \beta_0 + {2\Gamma^3\over (2-\Gamma)^2} \,{1\over (\Lambda\beta_0)^{1-\Gamma}} -{\Gamma\over 2-\Gamma}\,\Delta_0\right\}\,x_0^{-2\over 2-\Gamma}\right]\,, \ee where \begin{equation} \beta_0 = \left[\Gamma {(2-2\Gamma) \over (2-\Gamma)^2}\right]^{1\over 2-\Gamma}\,, \ee and \begin{equation} \Delta_0 = \Lambda\,\left({2-\Gamma\over 1-\Gamma}\right)\,\beta_0 \,\left[1-\Lambda^{-(2-\Gamma)}\right]\,. \ee Full solutions can then be easily obtained by numerically integrating inward from $x_0$. We present solutions for the reduced density and velocity in Figs. 4 -- 7 for an overdensity parameter of $\Lambda = 1.5$ and several different sets of ($\gamma, \Gamma$). The reduced density clearly exhibits the broken power-law profile typical of self-similar collapse solutions. We note that the spectral indices which characterize the reduced density solutions are not sensitive to the choice of $\gamma$, and hence have similar forms when $x << 1$. In contrast, these solutions depend sensitively on the value of $\Gamma$, both in terms of shape when $x >> 1$ and in overall normalization. Similar behavior is also clearly observed for the reduced velocity. As noted in the introduction, self-similar collapse flows have no characteristic mass scale.
Instead, the collapse flow feeds material onto the central star/disk system at a well-defined mass infall rate $\dot M$. In these flows, the infalling material always approaches free-fall conditions on the inside (in the limit $r \to 0$) and the reduced mass determines the size of the infall rate. Specifically, $m(x) \rightarrow m_0$ as $x\rightarrow 0$, and in turn, the mass accretion rate per unit length becomes \begin{equation} \dot M_L = 2 (1-\Gamma) {\cal K}\, (2\pi G)^{1-\Gamma} \,G^{-1}\,m_0 \,t^{1-2\Gamma} \,. \ee Note that if $\Gamma = 0.5$, $\dot M_L$ is constant in time, whereas softer (stiffer) equations of state result in temporally increasing (decreasing) mass accretion rates. In contrast, the mass accretion rate for spherically symmetric flows is given by the expression \begin{equation} \dot M = (4-3\Gamma)\, {\cal K}^{3/2} \, (4\pi G)^{3(1-\Gamma)/2}\, G^{-1}\, m_0 \, t^{3(1-\Gamma)}\;, \ee and is constant in time for an initially isothermal gas (Fatuzzo, Adams \& Myers 2004). We plot the value of $m_0$ as a function of $\Gamma$ for $\Lambda = 1.5$ and three different values of $\gamma$ in Fig. 8, and as a function of $\Lambda$ for several sets of ($\Gamma$, $\gamma$) in Fig. 9 (the data point in this figure at $\Lambda = 1.08$ is discussed in \S 6.1). As expected, larger infall rates occur for initial states that are more overdense. In addition, stiffer static equations of state result in larger mass accretion rates. In contrast, the dynamic equation of state (as defined by $\gamma$) has little effect on the value of $m_0$. The insensitivity of the collapse dynamics to the value of the dynamic index $\gamma$ should not be surprising given the inside-out nature of the collapse (as shown explicitly in \S5). Specifically, the break at $x \approx 1$ exhibited in the density and velocity profiles shown in Figs. 
4 -- 7 denotes the boundary between gas that is in some part supported by pressure ($x >> 1$) and gas that has lost that support and is therefore approaching free-fall ($x << 1$). In terms of real quantities, this boundary occurs at a radius \begin{equation} r_B \approx { 1\over A t^a} = {\cal K}^{1/2}\, (2\pi G)^{(1-\Gamma)/2} \, t^{2-\Gamma}\,, \ee that moves outward in time. Since the speed at which this boundary moves is governed by the static equation of state, the collapse dynamics therefore depend sensitively on $\Gamma$. In contrast, the dynamic gas pressure has little effect on the collapse dynamics when $x < 1$ ($r < r_B$), and hence, on the overall collapse. \section{APPLICATION TO ELONGATED CORES} Dense star-forming cores in molecular clouds are on average slightly elongated, with some cores exhibiting aspect ratios as large as 5 (Jijina et al. 1999). We apply our results to the evolution of fairly elongated cores, making the highly idealized assumption that their collapse can be reasonably approximated by our formalism. In reality, how such cores collapse likely falls within the limiting cases of cylindrical symmetry (explored here) and spherical symmetry (explored in Fatuzzo, Adams \& Myers 2004). Interpreting the observed non-thermal linewidths $\Delta v$ in molecular cores as the effective transport speed in the medium, one finds \begin{equation} \Delta v = \left[{\partial P\over \partial \rho}\right]^{1/2} = \left[{\cal K}\, \Gamma \right]^{1/2} \,\rho^{-(1-\Gamma)/2}\,, \ee for an assumed polytropic equation of state $P = {\cal K} \rho^\Gamma$. 
Adopting the fiducial values $\Gamma = 0.5$ and $\Delta v = 1$ km/s for a density of 5,000 cm$^{-3}$ (e.g., Larson 1981) then yields a value of ${\cal K} = 2.6$ g$^{1/2}$ cm$^{1/2}$ s$^{-2}$, and in turn, density and velocity profiles given by \begin{equation} \rho(r,t) = 2.4 \times 10^{-21} \,{\rm g}\, {\rm cm}^{-3} \left({t \over 1\, {\rm Myr}}\right)^{-2} \alpha(x)\,, \ee \begin{equation} u(r,t) = 2.3 \,{\rm km/s} \, \left({t \over 1 \,{\rm Myr}}\right)^{1/2} \, v(x)\,, \ee where \begin{equation} x = 0.42 \,\left({r\over 1 {\rm pc}}\right)\, \left({t\over 1 \,{\rm Myr}}\right)^{-3/2}\,. \ee In Figs. 10 and 11, we plot the density and velocity profiles for the collapse from an initial state defined by $\Lambda = 1.5$ and $\Gamma = 0.5$, and a dynamic index $\gamma = 0.5$. The solid curve in Fig. 10 depicts the initial density profile, and the dotted curves represent the ensuing profiles (from top to bottom) at times $t = 10^4$, $10^5$, and $10^6$ yrs. Likewise, the dotted curves in Fig. 11 show the velocity profiles (from bottom to top) at times $t = 10^4$, $10^5$, and $10^6$ yrs. These results clearly illustrate the inside-out nature of the collapse, with gas inside the transition boundary \begin{equation} r_B \approx 2.4\, {\rm pc}\, \left({t\over 1 \,{\rm Myr}}\right)^{3/2}\,, \ee falling inwardly away from the overlying gas layers. From the onset of collapse, it takes a time $t_* \sim 10^5$ yrs for the formation of a young stellar object to occur, and an additional $t_d \sim 5\times 10^5$ yrs before stellar outflows disperse the surrounding core material. Thus, our model predicts that cores associated with young stellar objects would be characterized by densities $\sim 5\times 10^4$ cm$^{-3}$ and radii $\sim 0.1$ pc, in good agreement with observations (Jijina et al. 1999). 
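The fiducial normalizations above can be reproduced numerically. The following sketch (ours, not part of the paper's toolchain) assumes cgs units, a mean particle mass of $2 m_{\rm H}$ when converting the number density to a mass density, and the similarity scalings $x = A t^a r$ with $a = \Gamma - 2$ and $A = {\cal K}^{-1/2}(2\pi G)^{-(1-\Gamma)/2}$, as inferred from the expression for $r_B$ above:

```python
import math

# Approximate cgs constants
G   = 6.674e-8    # cm^3 g^-1 s^-2
m_H = 1.674e-24   # g
pc  = 3.086e18    # cm
Myr = 3.156e13    # s

Gamma = 0.5
dv    = 1.0e5                 # Delta v = 1 km/s, in cm/s
rho0  = 2.0 * m_H * 5.0e3     # assumed mean particle mass ~2 m_H at n = 5000 cm^-3

# Delta v^2 = K * Gamma * rho^(Gamma - 1)  =>  K = dv^2 rho^(1 - Gamma) / Gamma
K = dv**2 * rho0**(1.0 - Gamma) / Gamma     # ~2.6 g^1/2 cm^1/2 s^-2

# Similarity scalings inferred from the quoted coefficients:
A = K**-0.5 * (2.0 * math.pi * G)**(-(1.0 - Gamma) / 2.0)

x_coeff  = A * Myr**(Gamma - 2.0) * pc          # ~0.42 at r = 1 pc, t = 1 Myr
rho_coef = 1.0 / (2.0 * math.pi * G * Myr**2)   # ~2.4e-21 g cm^-3
u_coeff  = Myr**0.5 / A / 1.0e5                 # ~2.3 km/s
rB_pc    = Myr**(2.0 - Gamma) / A / pc          # ~2.4 pc at t = 1 Myr
print(K, x_coeff, rho_coef, u_coeff, rB_pc)
```

With these (assumed) conventions, all five quoted coefficients are recovered to within rounding.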
In addition, the mass accretion rate per unit length is \begin{equation} \dot M_L = {\cal K} \left({2\pi\over G}\right)^{1/2} m_0 = 1.2\times 10^3 M_\odot \,{\rm Myr}^{-1}\, {\rm pc}^{-1}\, m_0\,. \ee An elongated core with a length of $l = 0.5$ pc would accrete a mass of $M_{acc} \approx 10 M_\odot$ in a time $t_*$ for a slightly overdense initial state (since $m_0 \approx 0.15$ in this case, as can be seen from the solid curve in Fig. 9), and $M_{acc} \approx 37 M_\odot$ for $\Lambda = 1.5$ ($m_0 = 0.6$). These results are consistent with the low star formation efficiencies of $2 - 10$ \% deduced from observations, but seem to favor initial states that are only slightly overdense. \section{SMOOTH SINGULAR SOLUTIONS} \subsection{General Formalism} The solutions presented in \S 3 did not cross through the singular surface, defined in the three-variable space $x, \alpha,$ and $v$ as the surface on which ${\cal D} = 0$. While several different forms of solutions can pass through the singular surface (see, e.g., Lou \& Cao 2008), we focus here on those solutions which pass through it smoothly. That is, we consider solutions for which the derivatives $v' = dv/dx$ and $\alpha' = d\alpha/dx$ exist on the singular surface. A necessary but not sufficient condition for the existence of these solutions is that ${\cal V}$ (and hence ${\cal A}$) also become zero, which occurs along a unique curve (referred to as the critical curve) on the singular surface for each set of values ($a, C_1, \gamma$). Alternatively, for solutions which represent the collapse from an initially static, overdense state for which $p = \alpha^\Gamma$, a unique critical curve exists for each set of ($\Lambda, \Gamma, \gamma$) values. 
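As a quick arithmetic cross-check of the accretion normalization of \S5 (our sketch; cgs constants approximate): setting $\Gamma = 1/2$ in the general expression $\dot M_L = 2(1-\Gamma)\,{\cal K}\,(2\pi G)^{1-\Gamma}\,G^{-1}\,m_0\,t^{1-2\Gamma}$ gives the time-independent form ${\cal K}(2\pi/G)^{1/2} m_0$, whose numerical value and the quoted accreted masses follow directly:

```python
import math

G, M_sun, pc, Myr = 6.674e-8, 1.989e33, 3.086e18, 3.156e13  # cgs
K = 2.6  # g^1/2 cm^1/2 s^-2, fiducial value from Section 5

# Gamma = 1/2: dM_L/dt = K (2 pi / G)^(1/2) m_0, converted to M_sun / Myr / pc
Mdot_cgs = K * math.sqrt(2.0 * math.pi / G)     # per unit m_0, in g cm^-1 s^-1
Mdot = Mdot_cgs * Myr * pc / M_sun              # ~1.2e3 per unit m_0

l, t_star = 0.5, 0.1                            # core length (pc), t_* (Myr)
M_high = Mdot * 0.60 * t_star * l               # ~37 M_sun for Lambda = 1.5
M_low  = Mdot * 0.15 * t_star * l               # ~9-10 M_sun, slightly overdense
print(Mdot, M_high, M_low)
```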
As a matter of illustration, we plot the ($\alpha_c, x_c$) and ($v_c, x_c$) projections of the critical curve in Figure 12 for the parameters $\Lambda = 1.5$, $\Gamma = 0.5$, $\gamma = 0.5$, with the upper panel presenting the reduced density profile, and the lower panel presenting the reduced velocity profile (dotted curves). The solid curves represent the solution for the corresponding collapse dynamics, which clearly does not cross the critical curve (or the singular surface). In order to properly treat solutions which pass through the singular surface smoothly, we obtain analytical solutions through a Taylor series expansion about the critical curve of the form $x = x_c + \delta$, $v = v_c + \delta v_1$, and $\alpha = \alpha_c + \delta \alpha_1$, where $\delta << 1$, and $v_1$ and $\alpha_1$ are the first derivatives in $v$ and $\alpha$ evaluated on the critical curve. Since the system ODEs are second order, only the first two terms in the expansion are required. Values for $v_1$ and $\alpha_1$ are obtained by substituting these terms into equations (17) and expanding in $\delta$. Doing so then yields the relations \begin{equation} \alpha_1 = \left(2-{v_c\over x_c} - v_1\right)\,{\alpha_c\over v_c-(2-\Gamma)x_c} \,, \ee and \begin{equation} -(1+\gamma) v_1^2 + \left[4\Gamma-1+2(1-\gamma){v_c\over x_c}\right] v_1 + C_v = 0\,, \ee where \begin{equation} C_v = {\alpha_c\over 1-\Gamma}\left(\Gamma-{v_c\over x_c}\right) +2 \left(\Gamma+1-{\Gamma\over\gamma}\right){v_c\over x_c} -\gamma {v_c^2\over x_c^2} - {2\Gamma^2\over \gamma}\,. \ee The first derivative $v_1$ is then the real, positive solution to the above quadratic equation. Full solutions can then be obtained by numerically integrating inward ($\delta < 0$) and outward ($\delta > 0$) from the corresponding Taylor series solutions. As an example, we consider the collapse from an initial state defined by the parameters $\Lambda = 1.08$, $\Gamma = 0.25$ and $\gamma = 0.5$. Numerically integrating Eqs. 
(17) inward from an initial value of $x_0 = 10^4$ (as described in \S4), one finds that the equations become singular as $x\rightarrow 0.336$, i.e., as the solutions approach the singular surface. Since a real and positive solution $v_1$ exists for Eq. (54) at $x_c = 0.336$ for this case, this solution can pass smoothly through the singular surface (by crossing through the critical curve), and can be matched to the Taylor series solution on the other side of the critical curve. Further numerical integration shows that this solution also crosses smoothly through the singular surface at a second point $x_c = 0.0322$, as illustrated in Figure 13. The solid curve in the upper (lower) panel presents the reduced density (velocity) profile obtained for our solution, and the dotted curves represent the ($\alpha_c, x_c$) and ($v_c, x_c$) projections of the critical curve for the case being considered. It is interesting to note that $\alpha$ approaches $\alpha_c$ as $x \rightarrow 0$. The value of $m_0$ was also determined and is depicted in Fig. 9 by the solid square data point, and is consistent with the extrapolation of the dot-dash curve corresponding to the same ($\Gamma$, $\gamma$) values. \subsection{The $\gamma$ = 1 case} As noted in \S4, density profiles that are out of hydrostatic equilibrium by being everywhere overdense (i.e., are of the form $\alpha_i = \Lambda \alpha_E$, $v_i = 0$) do not exist for the $\gamma = 1$ case, regardless of the form for the initial equation of state as defined by $a$ and $C_1$. This result is somewhat surprising given the apparent insensitivity of the solutions presented in \S4 to the choice of $\gamma$, but is borne out through numerical investigation. However, a different class of collapse solutions which passes smoothly through the singular surface does exist for this case, and is presented here for completeness. 
A full analysis of all solutions with $\gamma = 1$ (e.g., wind solutions, shock solutions) is outside the scope of this work and will be presented elsewhere. To keep our analysis as general as possible, we describe the gas in terms of $a$ and $C_1$ throughout this subsection. It is easy to show that when $\gamma = 1$, the critical curve on the singular surface becomes a straight line defined by the linear relation $v_c = k x_c$ and a constant reduced density \begin{equation} \alpha_c = {(2+2a)(a+k)\over C_1}\,, \ee where $k$ is a (real) solution to the quadratic equation \begin{equation} \left[{2\over C_1}-1\right] k^2 + \left[3+{4a\over C_1}\right] k +\left[4a+2a^2+{2a^2\over C_1}\right] = 0\,, \ee derived by imposing the condition that ${\cal D} = {\cal V} = {\cal A} = 0$. Given our focus on collapse solutions, we plot the negative roots of Eq. (57) in Figure 14 as a function of $a$ for values of $C_1 = 1.2, 1.5, 1.8, 2.1$ and $2.4$. Solutions that pass through the critical curve can be obtained by numerically integrating inward and outward from both sides, using the analytical solutions obtained via the Taylor series expansion presented in \S6.1 (where $v_c = k x_c$). We note that the values of $v_1$ and $\alpha_1$ obtained by solving Eqs. (53) - (55) are now independent of $x_c$ -- a consequence of the ODEs which describe the gas dynamics (Eqs. 17) being invariant to the scaling transformation $x \rightarrow \eta x$, $v\rightarrow \eta v$, $\alpha\rightarrow\alpha$, $p\rightarrow \eta^2 p$, and $m\rightarrow \eta^2 m$ for the case $\gamma = 1$. We therefore set $x_c = 1$ in our analysis without loss of generality. Through numerical exploration, we find a class of solutions that have the limiting form $\alpha \propto x^{2/a}$ and $v\propto x^{(1+a)/a}$ in the limit $x\to \infty$, and $\alpha\propto x^{-2/C_1}$ and $v\propto x^{(2-C_1)/C_1}$ in the limit $x\to 0$, where $1 < C_1 < 2$. 
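For concreteness, the quadratic for $k$ and the critical density $\alpha_c$ can be evaluated directly. The following sketch (our evaluation, not from the paper) uses the parameters $a = -1.05$, $C_1 = 1.5$ of the example shown in Fig. 15:

```python
# Roots of the quadratic in k for a = -1.05, C1 = 1.5 (the Fig. 15 case),
# together with the corresponding critical reduced density alpha_c.
a, C1 = -1.05, 1.5
qa = 2.0 / C1 - 1.0                      # coefficient of k^2
qb = 3.0 + 4.0 * a / C1                  # coefficient of k
qc = 4.0 * a + 2.0 * a**2 + 2.0 * a**2 / C1

disc = qb**2 - 4.0 * qa * qc
k = (-qb - disc**0.5) / (2.0 * qa)       # negative (collapse) root
alpha_c = (2.0 + 2.0 * a) * (a + k) / C1 # constant reduced density on the line
print(k, alpha_c)                        # k ~ -1.59, alpha_c ~ 0.18
```

The negative root gives an inward critical velocity $v_c = k x_c < 0$, as appropriate for collapse, with a finite, positive $\alpha_c$.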
To illustrate this type of collapse flow, we plot solutions for the case $a = -1.05 $ and $C_1 = 1.5$ in Figure 15. The upper panel presents the reduced density profile, and the lower panel presents the reduced velocity profile. The dotted curves in these panels represent the ($\alpha_c, x_c$) and ($v_c, x_c$) projections of the critical line. Solutions for the real density and real velocity, scaled so that $x = 1$ corresponds to $t$ = 1 Myr and $r$ = 1 pc, are presented in Figs. 16 and 17 for times 0.01, 0.1, 1 and 10 Myrs. Clearly, this case represents a class of collapse solutions characterized by a nonzero initial inward velocity that slows with time at a fixed radius after the gas becomes isothermal.\footnote{ This class of solutions is analogous to the Type II solutions presented in Lou \& Cao (2008) for a spherical geometry with $\gamma = 4/3$.} Indeed, the gas at large radii/early times ($x \to \infty$) is moving inward with nearly constant velocity, though the magnitude of the velocity differs from one part of the flow to another (i.e., each "parcel" of gas moves with a nearly constant velocity, such that $du/dt \approx 0$, but the velocity field for the gas depends on $r$ and $t$). The density increases as the gas falls inward ($x \to 0$), leading to a rise in pressure that eventually becomes effective at slowing the inward collapse. The break in density at $x\sim 1$ in this case results from the slowing down of the infalling gas, rather than from the inside-out collapse associated with typical self-similar collapse solutions. This class of solutions could, in part, idealize the dynamics of a cylindrical structure collapsing after the loss of magnetic support via ambipolar diffusion, where it becomes isothermal as turbulence dissipates during the collapse. 
\section{CONCLUSION} The main goal of our work was to obtain self-similar solutions which describe the collapse of an initially stationary, cylindrically symmetric gas that is overdense from hydrostatic equilibrium by a factor $\Lambda$. Hydrostatic equilibrium profiles are easily found for an assumed static equation of state $p = \alpha^\Gamma$, as specified through the choice of the index $\Gamma$. Unlike previous works, we allow the equation of state to evolve during the collapse, under the condition that entropy is conserved along a streamline. Doing so allows the equation of state to change from its initial form (as defined by $\Gamma$), to a different polytropic form as defined by the dynamic index $\gamma$. Physical solutions describing this type of collapse require $0 < \Gamma < 1$ and $\gamma\ne 1$. We present solutions for which the system equations do not become singular, as well as solutions which pass smoothly through the singular surface. Our solutions clearly exhibit the broken power law profiles typical of self-similar collapse flows. We find that the spectral indices which characterize the reduced density solutions are not sensitive to the choice of $\gamma$, and hence have similar forms when the similarity variable $x = At^a r << 1$. In contrast, these solutions depend sensitively on the value of $\Gamma$, both in terms of shape when $x >> 1$ and in overall normalization. Similar behavior is also clearly observed for the reduced velocity solutions. The insensitivity of the collapse dynamics to the value of $\gamma$ results from the inside-out nature of the collapse. Specifically, the break at $x \approx 1$ exhibited in the density and velocity profiles (as shown in Figs. 4 -- 7) occurs as a result of the gas being in some part supported by pressure ($x >> 1$) evolving to a state approaching free-fall ($x << 1$) as a result of the loss of that pressure support. 
This break-point also denotes a transition from an initial (static) equation of state ($p = \alpha^\Gamma$) to a dynamic equation of state ($p \propto \alpha^\gamma$), with the transition occurring within a narrow region around $x \approx 1$. In terms of real variables, this transition occurs at a boundary which moves outward through the gas as governed by the static equation of state. As such, the gas dynamics depend sensitively on the value of $\Gamma$ (which, of course, also governs the density profile $\rho(r)$ of the initial state). In contrast, the gas pressure within this boundary, as described by the dynamic equation of state (and in turn $\gamma$), has little effect on the overall collapse dynamics. Although self-similar collapse flows have no characteristic mass scale, the collapse flow feeds material onto the central star/disk system at a well-defined mass infall rate $\dot M$. Since the infalling material always approaches free-fall conditions on the inside (in the limit $r \to 0$), the reduced mass determines the size of the infall rate, and is therefore an important parameter in the collapse problem. Our analysis shows that, as expected, larger infall rates occur for initial states that are more overdense. In addition, stiffer static equations of state result in larger mass accretion rates. In contrast, the dynamic equation of state (as defined by $\gamma$) has little effect on the value of $m_0$. Our results indicate that collapse from an overdense state initially at rest cannot occur if $\gamma = 1$, regardless of the form for the initial equation of state. This result is somewhat surprising given the apparent insensitivity to $\gamma$ exhibited by all other solutions we obtain. However, we do find a different class of collapse solutions which passes smoothly through the singular surface when $\gamma = 1$. These solutions are analogous to the Type II solutions presented in Lou \& Cao (2008) for a spherical geometry with $\gamma = 4/3$. 
\bigskip {} \bigskip \centerline{\bf Acknowledgments} We thank the anonymous referee for useful comments. We also thank Fred Adams for many useful discussions. BB was supported by the Greaves Fund at Northern Kentucky University. MF was supported by the Hauck Foundation at Xavier University. \newpage \centerline{\bf REFERENCES} \medskip \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Adams, F. C. 1990, ApJ, 363, 578 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Adams, F. C., Lada, C. J., \& Shu, F. H. 1987, ApJ, 321, 788 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Allen, A., Li, Z.-Y., \& Shu, F. H. 2003, ApJ, 599, 363 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Allen, A., Shu, F. H., \& Li, Z.-Y. 2003, ApJ, 599, 351 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Cassen, P., \& Moosman, A. 1981, Icarus, 48, 353 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Curry, C., \& McKee, C. F. 2000, ApJ, 528, 734 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Fatuzzo, M., Adams, F. C., \& Myers, P. C. 2004, ApJ, 615, 813 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Foster, P. N., \& Chevalier, R. A. 1993, ApJ, 416, 303 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Galli, D., \& Shu, F. H. 1993a, ApJ, 417, 220 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Galli, D., \& Shu, F. H. 1993b, ApJ, 417, 243 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Harjunpaa, P., Kaas, A. A., Carlqvist, P., \& Gahm, G. F. 1999, A\&A, 349, 912 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Hennebelle, P. 2003, A\&A, 397, 381 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Houlahan, P., \& Scalo, J. M. 1992, ApJ, 393, 172 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Hunter, C. 1977, ApJ, 218, 834 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Inutsuka, S., \& Miyama, S. M. 
1992, ApJ, 388, 392 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Jappsen, A. K., Klessen, R. S., Larson, R. B., Li, Y., \& Mac Low, M.-M. 2005, A\&A, 435, 611 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Jijina, J., Myers, P. C., \& Adams, F. C. 1999, ApJ Suppl., 125, 161 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Kawachi, T., \& Hanawa, T. 1998, PASJ, 50, 577 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Lada, C. J., Muench, A. A., Rathborne, J., Alves, J. F., \& Lombardi, M. 2008, ApJ, 672, 410 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Larson, R. B. 1969a, MNRAS, 145, 271 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Larson, R. B. 1969b, MNRAS, 145, 297 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Larson, R. B. 1981, MNRAS, 194, 809 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Li, Z.-Y., \& Shu, F. H. 1996, ApJ, 472, 211L \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Li, Z.-Y., \& Shu, F. H. 1997, ApJ, 475, 237 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Lou, Y.-Q., \& Cao, Y. 2008, MNRAS, 384, 611 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Penston, M. V. 1969a, MNRAS, 144, 425 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Penston, M. V. 1969b, MNRAS, 145, 457 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Shadmehri, M. 2005, MNRAS, 356, 1429 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Shu, F. H. 1977, ApJ, 214, 488 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Shu, F. H., Adams, F. C., \& Lizano, S. 1987, ARA\&A, 25, 23 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Stahler, S. W., Shu, F. H., \& Taam, R. E. 1980, ApJ, 241, 637 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Terebey, S., Shu, F. H., \& Cassen, P. 1984, ApJ, 286, 529 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Tilley, D. A., \& Pudritz, R. E. 
2003, ApJ, 593, 426 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Ulrich, R. K. 1976, ApJ, 210, 377 \par\parshape 2 0.0truecm 16.25truecm 2truecm 14.25truecm Whitworth, A., \& Summers, D. 1985, MNRAS, 214, 1 \newpage \begin{figure} \figurenum{1} {\centerline{\epsscale{0.90} \plotone{fig1.ps} }} \figcaption{Self-similar solutions for the reduced density describing the collapse of a gas defined by the parameters $a = -1.25$ and $\gamma = 0.25$. All solutions are integrated inward from an initial set of values $x_i = 5 \times 10^3$ and $\alpha(x_i) = 5.78 \times 10^{-7}$, and the five values of $v_i$ as discussed in the text. All five solutions are within $0.1$ percent of each other, illustrating that the self-similar density profiles are not sensitive to the initial values of the reduced velocity. } \end{figure} \newpage \begin{figure} \figurenum{2} {\centerline{\epsscale{0.90} \plotone{fig2.ps} }} \figcaption{Self-similar solutions for the reduced velocity describing the collapse of a gas defined by the parameters $a = -1.25$ and $\gamma = 0.25$. All solutions are integrated inward from an initial set of values $x_i = 5 \times 10^3$ and $\alpha(x_i) = 5.78 \times 10^{-7}$, and the five values of $v_i$ as discussed in the text. } \end{figure} \newpage \begin{figure} \figurenum{3} {\centerline{\epsscale{0.90} \plotone{fig3.ps} }} \figcaption{Log $p$ - log $\alpha$ profiles for the solutions presented in Figs. 1 and 2, illustrating the transition during the collapse from a static equation of state ($p = \alpha^{a+2} = \alpha^{0.75}$) as shown in the lower left corner to a dynamic equation of state ($ p \propto\alpha^\gamma \propto\alpha^{0.25}$) as shown in the upper right corner. 
} \end{figure} \begin{figure} \figurenum{4} {\centerline{\epsscale{0.90} \plotone{fig4.ps} }} \figcaption{Self-similar solutions for the reduced density describing the collapse of an initially static gas overdense from its hydrostatic equilibrium state by a factor of $\Lambda = 1.5$. The four profiles correspond to the four different choices of static and dynamic equations of state, as defined by the specified values of $\gamma$ and $\Gamma$. } \end{figure} \newpage \begin{figure} \figurenum{5} {\centerline{\epsscale{0.90} \plotone{fig5.ps} }} \figcaption{Self-similar solutions for the reduced velocity describing the collapse of an initially static gas overdense from its hydrostatic equilibrium state by a factor of $\Lambda = 1.5$. The four profiles correspond to the four different choices of static and dynamic equations of state, as defined by the specified values of $\gamma$ and $\Gamma$. } \end{figure} \newpage \begin{figure} \figurenum{6} {\centerline{\epsscale{0.90} \plotone{fig6.ps} }} \figcaption{Same as Figure 4, but for a different set of values for $\gamma$ and $\Gamma$. We note that the solid curve and short-dashed curve lie over each other in this Figure, and are therefore hard to distinguish. } \end{figure} \newpage \begin{figure} \figurenum{7} {\centerline{\epsscale{0.90} \plotone{fig7.ps} }} \figcaption{Same as Figure 5, but for a different set of values for $\gamma$ and $\Gamma$. } \end{figure} \newpage \begin{figure} \figurenum{8} {\centerline{\epsscale{0.90} \plotone{fig8.ps} }} \figcaption{The value of the reduced mass at the origin $m_0$ as a function of the static equation of state index $\Gamma$ for $\Lambda = 1.5$ and three different values of the dynamic equation of state index $\gamma$. 
} \end{figure} \newpage \begin{figure} \figurenum{9} {\centerline{\epsscale{0.90} \plotone{fig9.ps} }} \figcaption{The value of the reduced mass at the origin $m_0$ as a function of the overdensity parameter $\Lambda$ for the specified values of the equation of state indices $\Gamma$ and $\gamma$. The solid square data point corresponds to the results obtained for a solution which passes smoothly through the singular surface, as discussed in \S6.1. } \end{figure} \newpage \begin{figure} \figurenum{10} {\centerline{\epsscale{0.90} \plotone{fig10.ps} }} \figcaption{Density profiles illustrating the inside-out collapse of a cylindrical cloud initially at rest and overdense by a factor $\Lambda = 1.5$, and for which $\Gamma = \gamma = 0.5$. The solid curve represents the initial ($t = 0$) density profile $\rho = \Lambda \rho_E$, and the dotted curves show the profiles (from top to bottom) at times $t = 10^4$, $t = 10^5$ and $t = 10^6$ years. } \end{figure} \newpage \begin{figure} \figurenum{11} {\centerline{\epsscale{0.90} \plotone{fig11.ps} }} \figcaption{Velocity profiles illustrating the inside-out collapse of a cylindrical cloud initially at rest and overdense by a factor $\Lambda = 1.5$, and for which $\Gamma = \gamma = 0.5$. The dotted curves show the profiles (from bottom to top) at times $t = 10^4$, $t = 10^5$ and $t = 10^6$ years. } \end{figure} \newpage \begin{figure} \figurenum{12} {\centerline{\epsscale{0.90} \plotone{fig12.ps} }} \figcaption{The dotted curves denote the density (upper panel) and velocity (lower panel) projections of the critical curve for the parameters $\Gamma = \gamma = 0.5$ and $\Lambda = 1.5$. The solid curve in each panel denotes the corresponding self-similar solution, also shown in Figs. 4 and 5. This solution does not pass through the critical curve. 
} \end{figure} \newpage \begin{figure} \figurenum{13} {\centerline{\epsscale{0.90} \plotone{fig13.ps} }} \figcaption{The dotted curves denote the density (upper panel) and velocity (lower panel) projections of the critical curve for the parameters $\Gamma = 0.25$, $\gamma = 0.5$, and $\Lambda = 1.08$. The solid curve in each panel denotes the corresponding self-similar solution, which passes through the critical curve at $x_c = 0.336$ and at $x_c = 0.0322$. } \end{figure} \newpage \begin{figure} \figurenum{14} {\centerline{\epsscale{0.90} \plotone{fig14.ps} }} \figcaption{The negative (real) roots of Eq. (57) as a function of $a$ for values of $C_1 = 1.2, 1.5, 1.8, 2.1$ and $2.4$, as denoted in the figure. } \end{figure} \newpage \begin{figure} \figurenum{15} {\centerline{\epsscale{0.90} \plotone{fig15.ps} }} \figcaption{The dotted curves denote the density (upper panel) and velocity (lower panel) projections of the critical line for the parameters $a = -1.05$, $C_1 = 1.5$. The solid curve in each panel denotes the corresponding self-similar solution which passes through the critical line at the point $x_c = 1$. } \end{figure} \newpage \begin{figure} \figurenum{16} {\centerline{\epsscale{0.90} \plotone{fig16.ps} }} \figcaption{Density profiles associated with the self-similar solutions presented in Fig. 15, scaled by setting $x_c = 1$ when $r = 1$ pc and $t = 1$ Myr. The solid curves show the profiles (from top to bottom) at times $t = 0.01$, $0.1$, $1$ and $10$ Myrs. } \end{figure} \newpage \begin{figure} \figurenum{17} {\centerline{\epsscale{0.90} \plotone{fig17.ps} }} \figcaption{Velocity profiles associated with the self-similar solutions presented in Fig. 15, scaled by setting $x_c = 1$ when $r = 1$ pc and $t = 1$ Myr. The solid curves show the profiles (from top to bottom) at times $t = 0.01$, $0.1$, $1$ and $10$ Myrs. } \end{figure} \end{document}
Q: R Foreach Iterator - Walkforward How can I create a "walkforward" iterator using the iterators package? How can an iterator be created where each nextElem returns a fixed moving window? For example, let's say we have a 10x10 matrix. Each iterator element should be a group of rows. The first element is rows 1:5, the second 2:6, then 3:7, 4:8, etc. How can I turn x into a walkforward iterator: x <- matrix(1:100, 10) EDIT: To be clear, I would like to use the resulting iterator in a parallel foreach loop. foreach(i = iter(x), .combine=rbind) %dopar% myFun(i) A: You could use an iterator that returns overlapping sub-matrices as you describe, but that would use much more memory than is required. It would be better to use an iterator that returns the indices of those sub-matrices. Here's one way to do that: iwalk <- function(n, m) { if (m > n) stop('m > n') it <- icount(n - m + 1) nextEl <- function() { i <- nextElem(it) c(i, i + m - 1) } obj <- list(nextElem=nextEl) class(obj) <- c('abstractiter', 'iter') obj } This function uses the icount function from the iterators package so that I don't have to worry about details such as throwing the "StopIteration" exception. That's a technique that I describe in the "Writing Custom Iterators" vignette. If you were using the doMC parallel backend, you could use this iterator as follows: library(doMC) nworkers <- 3 registerDoMC(nworkers) x <- matrix(1:100, 10) m <- 5 r1 <- foreach(ix=iwalk(nrow(x), m)) %dopar% { x[ix[1]:ix[2],, drop=FALSE] } This works nicely with doMC since each of the workers inherits the matrix x. However, if you're using doParallel with a cluster object or the doMPI backend, it would be nice to avoid exporting the entire matrix x to each of the workers. 
In that case, I would create an iterator function to send the overlapping sub-matrices of x to each of the workers, and then use iwalk to iterate over those sub-matrices: ioverlap <- function(x, m, chunks) { if (m > nrow(x)) stop('m > nrow(x)') i <- 1 it <- idiv(nrow(x) - m + 1, chunks=chunks) nextEl <- function() { ntasks <- nextElem(it) ifirst <- i ilast <- i + ntasks + m - 2 i <<- i + ntasks x[ifirst:ilast,, drop=FALSE] } obj <- list(nextElem=nextEl) class(obj) <- c('abstractiter', 'iter') obj } library(doParallel) nworkers <- 3 cl <- makePSOCKcluster(nworkers) registerDoParallel(cl) x <- matrix(1:100, 10) m <- 5 r2 <- foreach(y=ioverlap(x, m, nworkers), .combine='c', .packages=c('foreach', 'iterators')) %dopar% { foreach(iy=iwalk(nrow(y), m)) %do% { y[iy[1]:iy[2],, drop=FALSE] } } In this case I'm using iwalk on the workers, not the master, which is why the iterators package must be loaded by each of the workers.
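To make the index arithmetic concrete, here is a small language-neutral sketch (in Python, not R) of the decomposition that ioverlap() plus the nested iwalk() perform. The chunk_sizes() helper mimics what I understand idiv() to do (split a count into near-equal pieces) — that exact split is an assumption, but the coverage property checked at the end holds for any split:

```python
def chunk_sizes(total, chunks):
    # near-equal division, analogous to iterators::idiv(total, chunks=chunks)
    base, extra = divmod(total, chunks)
    return [base + (j < extra) for j in range(chunks)]

def overlapping_chunks(n, m, chunks):
    # yield (ifirst, ilast, ntasks): the rows one worker receives and the
    # number of length-m windows it can produce, 1-based as in R
    i = 1
    for ntasks in chunk_sizes(n - m + 1, chunks):
        yield i, i + ntasks + m - 2, ntasks
        i += ntasks

n, m, workers = 10, 5, 3
windows = []
for ifirst, ilast, ntasks in overlapping_chunks(n, m, workers):
    # what iwalk(nrow(y), m) would produce on this worker's sub-matrix
    for s in range(ifirst, ifirst + ntasks):
        windows.append((s, s + m - 1))

print(windows)  # [(1, 5), (2, 6), (3, 7), (4, 8), (5, 9), (6, 10)]
```

Each worker gets only the `ntasks + m - 1` rows its windows touch, and together the per-worker windows reproduce exactly the full walkforward sequence.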
A QUARTO BOOK Copyright © 2015 Quarto Inc. First published in the United States by Running Press, A Member of the Perseus Books Group All rights reserved under the Pan-American and International Copyright Conventions Printed in China _This book may not be reproduced in whole or in part, in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system now known or hereafter invented, without written permission from the publisher._ Books published by Running Press are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com. Library of Congress Control Number: 2014952838 E-book ISBN 978-0-7624-5670-3 9 8 7 6 5 4 3 2 1 Digit on the right indicates the number of this printing Conceived, designed, and produced by Quarto Publishing plc The Old Brewery, 6 Blundell Street London N7 9BH QUAR.KCCA Senior editor: Victoria Lyle Art editor and designer: Jackie Palmer Pattern checker: Rachel Vowles Photographers: Liz Coleman and Phil Wilkins Illustrator: Kuo Kang Chen Design assistant: Martina Calvio Indexer: Helen Snaith Art director: Caroline Guest Creative director: Moira Clinch Publisher: Paul Carslake Color separation in Hong Kong by Cypress Colours (HK) Ltd Printed in China by 1010 Printing Limited Running Press Book Publishers 2300 Chestnut Street Philadelphia, PA 19103-4371 Visit us on the web * * * * * * CONTENTS Welcome! 
Hat Selector **KNIT PATTERNS** Dinosaur Bobble Hat Strawberry Pumpkin Sports Cap Spring Chick Punk Mohawk Bunny Turkey Flower Cap I Heart You Extraterrestrial Reindeer Antlers Party Hat Witch Cupcake Banana Santa Hat Elf Top Hat **CROCHET PATTERNS** Pom Pom Hat Little Lion Feline Fox Baby Bear Dog Shark Attack Santa Paws Candy Corn Unicorn Cowboy Hat **TECHNIQUES** Materials and Equipment Knitting Techniques Crochet Techniques Additional Techniques Abbreviations Reading Charts Yarns Used Index Starring WELCOME! It's very easy for me to pinpoint where my love of cats and knitting came from: my admiration for my grandmother, or "Nani," as we called her. She lived in a two-story craftsman-style house in Nashville, with lots of cats and lots of yarn. We'd visit her regularly, and she would regale us with stories of her childhood growing up in Germany. During those tales she'd often be knitting (continental style), and I was fascinated by the fabric and textures she created, sometimes knitting up a sweater in a weekend. She even sold her designs to local shops and celebrities, as one of many ways to provide for her family of eight. Those weekend visits were full of inspiration for me—often alternating between armfuls of kittens and the many craft supplies my father and Nani encouraged. As a young adult, I moved to London to study fashion design. Through courses at my university, I became interested in fiber arts. I love the magic that can be created with string and two needles, as well as the history behind this craft. * * * Sara's cat Dorothy is always first in line to try out the new hats. * * * When I started Scooter Knits on Etsy in March of 2009, I was fulfilling a dream six years in the making. My university experience had given me the courage to share things I made with the world. The first cat hat I made was in August of 2009 when I adopted my first kitten, Dorothy. 
After sewing her a tiny T-shirt and making her a tiny house, a tiny hat seemed like the natural next step for this self-professed cat lady. I never imagined when I listed my first cat hat on Etsy that it would become the focus of Scooter Knits, but five years later I'm so grateful it did. My hope is that you will have as much fun making these hats as I've had designing them. I've captured so many great moments of my cats wearing their hats over the years, and I hope you do too! For me, my cats are a special part of my life—not the aloof pets some think them to be, but intelligent animals with unique personalities. This book is an ode to the many sides of our feline friends. Use the patterns in this book to showcase your cat's personality in family photos, on Christmas cards, for Halloween, etc! We lost my Nani in August of 2014, one month before her 92nd birthday. She was intellectual, articulate, loving, talented, quick to put you in your place—an inspiration to me in innumerable ways. This book is dedicated to her. Sara x * * * **BE KIND TO YOUR FELINE FRIENDS** Some cats are born to model millinery (Gus, Bluebell, and Luna, you know who I'm talking about...) and others aren't. If your cat doesn't want to wear your yarn creation, don't force it to. **WORKING THE CATWALK** The models showing off in this book were volunteered because they have exactly the right temperament to wear a hat that looks like a banana, for example. **HAT-WEARING ETIQUETTE** As Lady Mary Crawley might have said, "A cat should never wear a hat outdoors." Do not let your cat outside in a hat, nor leave it unsupervised while wearing one. * * * **BEHIND THE SCENES!** Leeroy reflects on a hard day's work. Link cat naps between shots. * * * HAT SELECTOR * * * METHOD: KNIT SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. 
(6 CM) SUPPLIES • 25 YD (23 M) BULKY WEIGHT YARN IN A (GREEN) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (ORANGE) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE 5 (3.75 MM) KNITTING NEEDLES • SIZE F5 (3.75 MM) CROCHET HOOK • YARN NEEDLE * * * FOR THE CAT THAT GOES RAWR! THIS SIMPLE DESIGN IS GREAT FOR HALLOWEEN, OR ANYTIME! DINOSAUR **BASE** Using yarn A and size 7 (4.5 mm) needles, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **LEEROY MODELS THE DINOSAUR HAT WHILE PLAYING WITH HIS DINOSAUR MODELS.** **SPIKES** **(Make 3)** Using yarn B and size 5 (3.75 mm) needles, cast on 8 sts. **Rows 1–3:** Knit. **Row 4:** K2tog, k4, k2tog. (6 sts) **Rows 5–7:** Knit. **Row 8:** K2tog, k2, k2tog. (4 sts) **Row 9:** Knit. **Row 10:** [K2tog] twice. (2 sts) **Row 11:** K2tog. Fasten off, leaving a 6 in. (15 cm) tail. **ASSEMBLY** Turn the spikes so that the cast on and bind off tails are at the bottom. You will have three dinosaur shaped spikes. The lower edge with both tails is the edge you sew to the hat base. Starting at the center front of the base, stitch the lower edge of the first spike into place. Weave in both ends to underside of hat and secure. 
Repeat with other spikes, following center of hat and stitching into lower edge. * * * METHOD: KNIT SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 20 YD (18 M) WORSTED WEIGHT YARN IN A (BLUE) • 5 YD (4.5 M) WORSTED WEIGHT YARN IN B (RED) • SIZE 7 (4.5 MM) KNITTING NEEDLES • YARN NEEDLE • POM POM MAKER (OPTIONAL) * * * DOES YOUR CAT GET FRISKY ABOUT COLDER WEATHER? THEN HE MIGHT APPRECIATE THIS CLASSIC WINTER STYLE, WITH A CATTY TWIST! BOBBLE HAT **BASE** Using yarn A, cast on 3 sts, leaving a 10 in. (25 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 10 in. (25 cm) tail. **LUNA CHECKS THE WEATHER TO SEE IF IT'S WORTH LEAVING HER COZY DEN IN ORDER TO PLAY OUTSIDE.** **BRAIDED TRIM** Cut 3 x 36 in. (91 cm) pieces from both yarn A and B (6 strands total). Holding strands together, knot approximately 1 in. (2.5 cm) from top. Holding pairs of strands together, braid a 3-ply braid until trim measures 26 in. (66 cm). Knot, and trim tassel lengths. Stitch to the front edge of base, using cast on and bind off tails to sew into place. Center trim to the center of the hat. You should be left with approximately 7½ in. (19 cm) ties on both sides. Weave tails into the underside of the hat and snip. Make a 1 in. (2.5 cm) pom pom from yarn A and B, and attach to the center of the base. Now enjoy, take lots of pictures, and give your sweet cat a treat for being such a handsome model! 
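Every knit base in this book follows the same shaping arithmetic: cast on 3 stitches, add 2 stitches every other row (kfb at each end) up to the full width, then mirror the decreases (k2tog at each end) back down to 3. If you like to double-check stitch counts before adapting a base to a larger or smaller cat, the tally can be sketched in a few lines of Python; this is purely an illustration, not part of the original patterns, and the helper name `base_stitch_counts` is invented for the example.

```python
# Illustrative tally of the stitch counts in the standard hat base:
# cast on 3 sts; "Kfb, k to last st, kfb" adds 2 sts per increase row;
# "K2tog, k to last 2 sts, k2tog" removes 2 sts per decrease row.

def base_stitch_counts(cast_on=3, shaping_rows=6):
    """Stitch count after each increase row, then each decrease row."""
    counts = [cast_on]
    sts = cast_on
    for _ in range(shaping_rows):   # Row 2, then "five more times"
        sts += 2                    # kfb at each end
        counts.append(sts)
    for _ in range(shaping_rows):   # Row 34, then "five more times"
        sts -= 2                    # k2tog at each end
        counts.append(sts)
    return counts

print(base_stitch_counts())
# [3, 5, 7, 9, 11, 13, 15, 13, 11, 9, 7, 5, 3]
```

With `shaping_rows=5` the peak is 13 stitches, matching the slightly narrower bases such as the Strawberry and Pumpkin, which repeat the increase rows only "four more times."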
**SOMETHING HAS CAUGHT GUS'S EYE...** * * * METHOD: KNIT SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (RED) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (GREEN) • 3 YD (2.7 M) WORSTED WEIGHT YARN IN C (WHITE) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE 5 (3.75 MM) DPNS • SIZE F5 (3.75 MM) CROCHET HOOK • YARN NEEDLE * * * A KITSCHY SUMMER DESIGN! YOU CAN LEAVE THE SEEDS OFF THE DESIGN AND IT WILL DOUBLE AS A TOMATO. STRAWBERRY **BASE** Using yarn A and size 7 (4.5 mm) needles, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows four more times. (13 sts) **FIRST EAR HOLE** **Row 11:** K2, bind off next 9 sts, k last st. **Row 12:** K2, cast on 9 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 29:** K2, bind off next 9 sts, k last st. **Row 30:** K2, cast on 9 sts, k2. **Row 31:** Knit. **Row 32:** K2tog, k to last 2 sts, k2tog. (11 sts) Rep last 2 rows four more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **STEM** Using yarn B and two size 5 (3.75 mm) dpns, cast on 4 sts and knit a row. Do not turn needle. Slide the 4 sts to other end of needle, bring yarn around from the back, and knit the 4 sts again. This forms the i-cord technique. Knit in i-cord until the stem measures 2 in. (5 cm). You may find it helpful to pull on the cast on edge after every few rows to help the shape. Once complete, snip a 10 in. (25 cm) tail and pull through all 4 loops on the needle. Weave bind off tail through to bottom of stem. You will use this tail to stitch the stem to the base. 
**LEAVES** **(Make 3)** Using yarn B and two size 5 (3.75 mm) dpns, cast on 5 sts. **Rows 1–2:** Knit. **Row 3:** K2tog, k1, k2tog. (3 sts) **Row 4:** Knit. **Row 5:** K2tog, k1. (2 sts) **Row 6:** K2tog. Fasten off, leaving a 6 in. (15 cm) tail. Weave through sides of leaf to cast on edge. **ASSEMBLY** Using the bind off tail from the stem, stitch it onto the center of the base. Stitch securely around the cast on edges of the stem. Once complete, pull the cast on and bind off stem tails to the underside of the hat and secure. Attach leaves to the base of the hat, around the bottom of the stem, by stitching through the cast on edge of the leaves. Use the longer tail to stitch. Weave all ends to the underside of hat and secure. To finish the hat, thread yarn needle with yarn C. Make short, well-placed stitches on the base of the hat to represent the seeds. Do not do too many—use the photo for reference if necessary. Secure yarn to the underside of the hat. * * * METHOD: KNIT SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (ORANGE) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (GREEN) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE 5 (3.75 MM) DPNS • SIZE F5 (3.75 MM) CROCHET HOOK • YARN NEEDLE * * * FOR YOUR LITTLE PURR PUMPKIN! KNIT THIS IN A VARIETY OF FALL COLORS FOR ALL THE LITTLE PUMPKINS IN YOUR PATCH. PUMPKIN **BASE** Using yarn A and size 7 (4.5 mm) needles, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows four more times. (13 sts) **FIRST EAR HOLE** **Row 11:** K2, bind off next 9 sts, k last st. **Row 12:** K2, cast on 9 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 29:** K2, bind off next 9 sts, k last st. **Row 30:** K2, cast on 9 sts, k2. **Row 31:** Knit. **Row 32:** K2tog, k to last 2 sts, k2tog. 
(11 sts) Rep last 2 rows four more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **STEM** Using yarn B and two size 5 (3.75 mm) dpns, cast on 4 sts and knit a row. Do not turn needle. Slide the 4 sts to other end of needle, bring yarn around from the back, and knit the 4 sts again. This forms the i-cord technique. **LEEROY BASKS IN THE FALL SUNLIGHT IN THE PUMPKIN HAT.** Knit i-cord until the stem measures 2 in. (5 cm). You may find it helpful to pull on the cast on edge after every few rows to help the shape. Once complete, snip a 10 in. (25 cm) tail and pull through all 4 loops on the needle. Weave bind off tail through to bottom of stem. You will use this tail to stitch the stem to the base. **ASSEMBLY** Using the bind off tail from the stem, stitch it onto the center of the base. Stitch securely around the cast on edges of the stem. Once complete, pull the cast on and bind off stem tails to the underside of the hat and secure. **TENSIONS ARE HIGH AS LUNA WATCHES THE CLOSING MOMENTS OF THE BASEBALL GAME.** * * * METHOD: KNIT SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 15 YD (14 M) WORSTED WEIGHT YARN IN A (RED) • 15 YD (14 M) WORSTED WEIGHT YARN IN B (WHITE) • 15 YD (14 M) WORSTED WEIGHT YARN IN C (BLUE) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * KNIT THIS QUICK PROJECT UP IN YOUR FAVORITE SPORTS TEAM COLORS! SPORTS CAP **BASE** Using yarn A, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. 
**Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 1 row. Change to yarn B. Knit 14 rows. Change to yarn C. Knit 1 row. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) FAUX FUR YARN IN A (YELLOW) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (BLACK) • 10 YD (9 M) WORSTED WEIGHT YARN IN C (ORANGE) • SIZE 8 (5 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * MAKE SURE TO CHOOSE A TEXTURED YARN FOR THIS PROJECT—IT MAKES ALL THE DIFFERENCE IN ACHIEVING AN ULTRA-FLUFFY CHICK! **GRACIE STRIKES A POSE TO MODEL THE SPRING CHICK HAT.** SPRING CHICK **BASE** Using yarn A, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. 
Repeat with other 25 in. (64 cm) tail. **EYES** **(Make 2)** Using yarn B and crochet hook, make a magic ring. **Rnd 1:** 1ch, 6sc in ring, sl st in first ch. **Rnd 2:** 1ch, [2sc in first st, 1sc] 3 times, sl st in first sc. Fasten off, leaving a 7 in. (18 cm) tail. **BEAK** Using yarn C, cast on 12 sts, leaving a 10 in. (25 cm) tail. **Rows 1–2:** Knit. **Row 3:** K2tog, k8, k2tog. (10 sts) **Row 4:** Knit. **Row 5:** K2tog, k6, k2tog. (8 sts) **Row 6:** Knit. **Row 7:** K2tog, k4, k2tog. (6 sts) **Rows 8–9:** Knit. **Row 10:** K2tog, k2, k2tog. (4 sts) **Row 11:** Knit. **Row 12:** [K2tog] twice. (2 sts) Bind off, leaving a 6 in. (15 cm) tail. **ASSEMBLY** **BEAK** Lay the beak so that both the cast on and bind off tails are upright. This is the top of the beak. Place beak, top side up, along the front of the base, in the middle, about ½ in. (1 cm) from edge (either side can be the front of the hat). Using the 10 in. (25 cm) tail and a yarn needle, stitch along the edges of the beak that are lying on the hat. Also use the 6 in. (15 cm) tail. Attach securely. **EYES** Place eyes so that they are just above the top of the beak, evenly spaced. Attach securely with yarn tails and yarn needle. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (BLACK) • 15 YD (14 M) BULKY WEIGHT YARN IN B (PINK) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * FOR THE ULTIMATE ANARCHIST IN YOUR FAMILY, KNIT UP A PUNK ROCKER FAUX HAWK IN SHOCKING PINK! PUNK MOHAWK **BASE** Using yarn A, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) 
**MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. Using crochet hook, work 25ch on each side using the 25 in. (64 cm) tails left at the beginning and end of your work. This creates the ties for your cat hat. **FAUX HAWK** Using yarn B, clip fifteen to twenty 2 in. (5 cm) pieces. You will attach these pieces to the center garter stitch ridge of your cat hat base. **ASSEMBLY** You will attach the faux hawk pieces in a manner similar to attaching fringe to a scarf. To attach, slide crochet hook underneath the first stitch in the center garter stitch ridge on the base of the hat. Take a 2 in. (5 cm) piece of yarn, fold it in half, and pull the loop through the garter row using the crochet hook. Take the ends of the faux hawk yarn and pull them through the loop so that it is securely knotted. The faux hawk yarn should stand upright using this method. Repeat, cutting more 2 in. (5 cm) pieces if necessary, and continue attaching them down the center garter stitch row. To create a fuller faux hawk, clip more yarn and attach it to both sides of the center garter stitch ridge. When finished, trim faux hawk to a uniform length (about 1 in./2.5 cm). **JASPER PACES THE SIDEWALK WITH PRIDE IN HIS PUNK MOHAWK HAT.** * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 40 YD (37 M) MEDIUM WEIGHT YARN IN CREAM • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * IF YOUR EARS HANG LOW, THIS BUNNY HAT WILL BE THE PERFECT FIT! BUNNY **BASE** Cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. 
(15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **EARS** **(Make 2)** Cast on 13 sts, leaving a 10 in. (25 cm) tail (you will use this tail to attach the ear to the base of the hat later). **Row 1:** Knit. **Row 2:** Purl. Rep last 2 rows once more. **Row 5:** K1, k2tog, k7, k2tog, k1. (11 sts) **Row 6:** Purl. **Row 7:** Knit. **Row 8:** Purl. Rep last 2 rows three more times. **Row 15:** K1, k2tog, k5, k2tog, k1. (9 sts) **Row 16:** Purl. **Row 17:** Knit. **Row 18:** Purl. **Row 19:** K1, k2tog, k3, k2tog, k1. (7 sts) **Row 20:** Purl. **Row 21:** K1, k2tog, k1, k2tog, k1. (5 sts) **Row 22:** Purl. Bind off, leaving a 6 in. (15 cm) tail. Using yarn needle, weave in bind off tail. Adjust the ear length in this pattern by increasing (or decreasing) between Rows 7 and 14. **ASSEMBLY** To attach ears, use cast on tail and yarn needle to stitch one ear to the middle section of the base, centered above ear opening and with knit side uppermost. Secure both ends around ear opening (so that ear is slightly curved down). Stitch the ear along the cast on edge, placing it on the first garter row above the ear opening. Repeat with second ear and other ear opening. **WHO IS THIS HANDSOME FURBALL? IT'S HUCK MODELING THE BUNNY HAT.** * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. 
(6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) BULKY WEIGHT YARN IN A (BROWN) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (RED) • 10 YD (9 M) WORSTED WEIGHT YARN IN C (BLACK) • 15 YD (14 M) WORSTED WEIGHT YARN IN D (ORANGE) • 10 YD (9 M) WORSTED WEIGHT YARN IN E (WHITE) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * THE CUTEST LITTLE TURKEY YOU EVER DID SEE! A REAL CENTERPIECE FOR ANY THANKSGIVING CELEBRATION! TURKEY **BASE** Using yarn A, cast on 3 sts, leaving a 6 in. (15 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 6 in. (15 cm) tail. To create ties, cut two pieces of yarn B measuring 30 in. (76 cm) each. With crochet hook and one piece of yarn, pull a loop through the end of the base and work 25ch. Pull yarn through last loop tightly and trim. Weave in starting end to underside of the hat. Repeat on other side. **OUTER EYES** **(Make 2)** Using yarn E and crochet hook, make a magic ring. **Rnd 1:** 1ch, 12sc in ring, sl st in first ch. **Rnd 2:** 1ch, [2sc in first st, 1sc] 6 times, sl st in first sc. Cut yarn, leaving a 10 in. (25 cm) tail, and pull tightly through loop. **LYRIC PLAYFULLY MODELS THE TURKEY HAT.** **INNER EYES** **(Make 2)** Using yarn C and crochet hook, make a magic ring. **Rnd 1:** 1ch, 6sc in ring, sl st in first ch. **Rnd 2:** 1ch, [2sc in first st, 1sc] 3 times, sl st in first sc. Cut yarn, leaving a 7 in. (18 cm) tail, and pull tightly through loop. **BEAK** Using yarn D, cast on 12 sts, leaving a 10 in. (25 cm) tail. **Rows 1–2:** Knit. 
**Row 3:** K2tog, k8, k2tog. (10 sts) **Row 4:** Knit. **Row 5:** K2tog, k6, k2tog. (8 sts) **Row 6:** Knit. **Row 7:** K2tog, k4, k2tog. (6 sts) **Rows 8–9:** Knit. **Row 10:** K2tog, k2, k2tog. (4 sts) **Row 11:** Knit. **Row 12:** [K2tog] twice. (2 sts) Bind off, leaving a 6 in. (15 cm) tail. **ASSEMBLY** **BEAK** Lay the beak so that both the cast on and bind off tails are upright. This is the top of the beak. Place beak, top side up, along the front of the base, in the middle, about ½ in. (1 cm) from edge (either side can be the front of the hat). Using the 10 in. (25 cm) tail and a yarn needle, stitch along the edges of the beak that are lying on the hat. Also use the 6 in. (15 cm) tail. Attach securely. **EYES** Place eyes so that they are touching the top of the beak, evenly spaced. Attach securely with yarn tails and yarn needle. **GOBBLE** Using yarn B and crochet hook, pull a loop through base at the center of the stitched on edge of the beak. Work 7ch through the beak and base using photo as guide, 3ch, and cut yarn. Weave end through ch sts and to underside of hat. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 40 YD (37 M) WORSTED WEIGHT YARN IN A (RED) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (BLUE) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE 5 (3.75 MM) KNITTING NEEDLES • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * FOR ALL THOSE COOL CATS OUT THERE IN STYLISH HATS: PUT A FLOWER IN YOUR CAP FOR A SPECIAL FELINE FLOURISH! FLOWER CAP **BASE** Using yarn A and size 7 (4.5 mm) needles, cast on 3 sts, leaving a 6 in. (15 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. 
**SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 6 in. (15 cm) tail. To create ties, cut two pieces of yarn A measuring 30 in. (76 cm) each. With crochet hook and one piece of yarn, pull a loop through the end of the base and work 25ch. Pull yarn through last loop tightly and trim. Weave in starting end to underside of the hat. Repeat on other side. **THESE COMPLEMENTARY FLOWER CAPS ARE MODELED BY LYRIC AND LINK.** **BRIM** Using yarn A and size 5 (3.75 mm) needles, pick up 12 sts along front of base, starting in front of one ear hole and picking them up evenly until you work your way past the second ear hole. Knit 1 row. **Row 2:** Kfb of each st. (24 sts) **Rows 3–5:** Knit. **Row 6:** Kfb, k to last st, kfb. (26 sts) Bind off using yarn B. Thread ends down the sides of the brim and secure to the underside of the base. **VARIATION** Use yarn B to knit base, yarn A for brim, and bind off brim with yarn B. **FLOWER** Using yarn B and crochet hook, make a magic ring. **Rnd 1:** 1ch, 6sc in ring, sl st in first ch. **Rnd 2:** [2sc in first st, 1sc] 3 times. Change to yarn A. **Rnd 3:** Sc in each st, sl st in first sc. Cut yarn and pull through loop. Weave in end to center. Attach to the right of the brim. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (BROWN) • 5 YD (4.5 M) WORSTED WEIGHT YARN IN B (RED) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • 1 X 6 IN. (15 CM) PIECE OF PIPE CLEANER (PREFERABLY IN A CORRESPONDING COLOR TO YOUR YARN) • YARN NEEDLE * * * SHOW YOUR CAT SOME LOVE WITH THIS ADORABLE HEART HAT. I HEART YOU **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. 
**Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. Using crochet hook, work 25ch on each side using the 25 in. (64 cm) tails left at the beginning and end of your work. This creates the ties for your cat hat. **HEART** Using yarn B and two dpns, cast on 4 sts, leaving a 6 in. (15 cm) tail. Knit a row. Do not turn needle. Holding piece of pipe cleaner in place, slide the 4 sts to other end of needle, bring yarn around from the back, encasing the pipe cleaner, and knit the 4 sts again. This forms the i-cord technique. Using the i-cord method, knit around pipe cleaner until only ¼ in. (0.5 cm) of pipe cleaner remains exposed on both ends. Cut yarn, leaving a 6 in. (15 cm) tail, and use yarn needle to pull tail through sts on needle. Bend the i-cord into a heart shape. **WHO CAN RESIST THE LOVING LOOK OF PERCY?** **ASSEMBLY** Using a knitting needle as a guide, poke both ends of the heart through the center of the hat base. Bring the exposed ends back up through the base and twist them around themselves securely. Using a yarn needle, stitch the base of the heart to the base of the hat. Stitch so that the pipe cleaner is no longer exposed and the heart sits upright. Pull both tails through to underside of hat, and knot the ends to anchor the heart. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. 
(6 CM) SUPPLIES • 35 YD (32 M) WORSTED WEIGHT YARN IN A (GREEN) • SCRAP OF WORSTED WEIGHT YARN IN B (BLACK) • SIZE 7 (4.5 MM) KNITTING NEEDLES • SIZE 3 (3.25 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • 1 X 3 IN. (7.5 CM) PIECE OF PIPE CLEANER (PREFERABLY IN A CORRESPONDING COLOR TO YOUR YARN) • YARN NEEDLE • POLYESTER FIBERFILL * * * IT'S AN ENCOUNTER OF THE THIRD KIND! KNITTING AROUND A FLEXIBLE PIPE CLEANER ALLOWS YOUR CAT'S THIRD EYE TO PEER IN WHATEVER DIRECTION YOU CHOOSE. EXTRATERRESTRIAL **BASE** Using yarn A and size 7 (4.5 mm) needles, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **EYE** Using yarn A and two size 3 (3.25 mm) dpns, cast on 4 sts, leaving a 15 in. (38 cm) tail. Knit a row. Do not turn needle. Holding piece of pipe cleaner in place, slide the 4 sts to other end of needle, bring yarn around from the back, encasing the pipe cleaner, and knit the 4 sts again. This forms the i-cord technique. Using the i-cord method, knit around pipe cleaner until it is enclosed, leaving ½ in. (1 cm) exposed at the bottom. **Next row:** Kfb of each st. (8 sts) Divide sts onto three dpns and join to work in the round. **Next rnd:** Kfb of each st. (16 sts) Knit 3 rnds. **Next rnd:** K2tog around. 
(8 sts) Lightly stuff with polyester fiberfill. **Next rnd:** K2tog around. (4 sts) Cut yarn and pull through remaining loops. Weave in end. Using yarn B and yarn needle, run several long stitches on center of eyeball to create a pupil. Weave in ends. **ASSEMBLY** Center the eye on the base. Poke the exposed pipe cleaner through the base, using a knitting needle as a guide. Bring the exposed end back up through the base and twist it around itself securely. Using a yarn needle and the cast on tail of the i-cord, stitch the eye to the base along the cast on edge of the i-cord. Pull both tails through to underside of the hat, and knot the ends to anchor the eye. * * * METHOD: KNIT SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN • SIZE 7 (4.5 MM) DPNS • 2 X 4 IN. (10 CM) PIECES OF PIPE CLEANER (PREFERABLY IN A CORRESPONDING COLOR TO YOUR YARN) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * ANTLERS ARE THE PERFECT HOLIDAY GIFT, GUARANTEED TO INSPIRE SOME HOLIDAY CHEER IN YOUR FAVORITE FELINE! REINDEER ANTLERS **JASPER HELPS WITH THE WRAPPING IN HIS FESTIVE OUTFIT.** **BASE** Using two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. Using crochet hook, work 25ch on each side using the 25 in. (64 cm) tails left at the beginning and end of your work. This creates the ties for your cat hat. 
**ANTLERS** **(Make 2)** Using two dpns, cast on 4 sts, leaving a 6 in. (15 cm) tail. Knit a row. Do not turn needle. Holding one piece of pipe cleaner in place, slide the 4 sts to other end of needle, bring yarn around from the back, encasing the pipe cleaner, and knit the 4 sts again. This forms the i-cord technique. This technique, when worked with the pipe cleaner in the center, will cover the pipe cleaner and form a bendable antler. Work in this method until pipe cleaner is covered, leaving a ½ in. (1 cm) of pipe cleaner exposed at the bottom for securing antler to the base. To finish, cut 8 in. (20 cm) tail and pull through sts on needle using a yarn needle. Weave tail through antler, leaving it at the base to stitch antler in place. Next, pick up 2 sts ¼ in. (0.5 cm) from top of antler and knit 5 rows with i-cord technique. Weave ends in, pulling through antler to base and clipping so that the ends are not exposed. **ASSEMBLY** With smaller points facing inward, attach antlers as follows. Position antler ¼ in. (0.5 cm) from the ear opening and center the antler on the hat. Poke the exposed pipe cleaner through the base, using a knitting needle as a guide. Bring the exposed end back up through the base and twist it around itself securely. Using a yarn needle, stitch the base of the antler to the base of the hat. Stitch so that the pipe cleaner is no longer exposed and the antler sits upright. Pull both tails through to underside of hat, and knot the ends to anchor the antler. Repeat with second antler. * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. 
(5 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (PINK) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (WHITE) • 10 YD (9 M) WORSTED WEIGHT YARN IN C (MINT) • SIZE 7 (4.5 MM) DPNS • SIZE F5 (3.75 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL • POM POM MAKER (OPTIONAL) * * * THIS STRIPED HAT IS PURRFECT FOR CAT CELEBRATIONS! PARTY HAT **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K3, bind off next 10 sts, k last st. **Row 14:** K2, cast on 10 sts, k3. (The 3 st side is the front of the hat.) **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K3, bind off next 10 sts, k last st. **Row 32:** K2, cast on 10 sts, k3. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **HAT** Using yarn B, cast on 30 sts leaving a 20 in. (50 cm) tail. Divide onto three dpns (10 sts per needle) and join to work in the round. **Rnds 1–2:** Knit. **Rnd 3:** [K2tog, k6, k2tog] 3 times. (24 sts) **Rnds 4–7:** Knit. **Rnd 8:** [K2tog, k4, k2tog] 3 times. (18 sts) **Rnds 9–10:** Knit. Change to yarn A. **Rnds 11–14:** Knit. **Rnd 15:** [K2, k2tog, k2] 3 times. (15 sts) **Rnds 16–19:** Knit. **Rnd 20:** [K1, k2tog, k2] 3 times. (12 sts) Change to yarn C. **Rnds 21–24:** Knit. **Rnd 25:** [K1, k2tog, k1] 3 times. (9 sts) **Rnds 26–28:** Knit. **Rnd 29:** [K2tog] 4 times, k1. (5 sts) **Rnd 30:** [K2tog] twice, k1. (3 sts) Cut yarn and pull through sts on needle. Pull closed, and weave end into inside of hat. **ASSEMBLY** Make a 1 in. 
(2.5 cm) pom pom using all three yarn colors. Secure to top of hat. Weave ends into the inside of the hat and secure in place. Lightly stuff hat with polyester fiberfill. Do not overfill. Using the cast on tail from the hat, stitch the hat onto the middle section of the base. Stitch into the cast on edge of the hat, until hat is securely in place. Weave end through to underside of base and tie securely. **MOOCH GETS IN THE PARTY MOOD WITH THIS STRIPED POM POM HAT.** * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 40 YD (37 M) MEDIUM WEIGHT YARN IN A (PURPLE) • 15 YD (14 M) MEDIUM WEIGHT YARN IN B (GREEN) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * YOUR CAT WILL BEWITCH AND BEGUILE IN THIS HAT, ESPECIALLY ON HALLOWEEN! HAVE FUN WITH THE COLORS ON THIS PROJECT: USE A SPARKLY YARN FOR AN ESPECIALLY ENCHANTING LOOK. WITCH **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **HAT** Using yarn B, cast on 30 sts, leaving a 15 in. (38 cm) tail. Distribute sts evenly across three dpns (10 sts per needle) and join to work in the round. 
**Rnds 1–3:** Knit. Change to yarn A. **Rnd 4:** Knit. **Rnd 5:** [K1, k2tog, k4, k2tog, k1] 3 times. (24 sts) **Rnds 6–8:** Knit. **FALL UNDER THE SPELL OF LINK, THE ENCHANTING BIRMAN.** **Rnd 9:** [K1, k2tog, k2, k2tog, k1] 3 times. (18 sts) **Rnds 10–12:** Knit. **Rnd 13:** [K1, k2tog x 2, k1] 3 times. (12 sts) **Rnds 14–16:** Knit. **Rnd 17:** K2tog around. (6 sts) **Rnds 18–20:** Knit. Cut yarn, leaving a 6 in. (15 cm) tail, and pull through loops on needle. If desired, run a hidden stitch ½ in. (1 cm) from the top of the hat and tug, to give the hat a rumpled look. Knot yarn on underside of hat to keep effect in place. Using polyester fiberfill, lightly stuff hat. Do not overfill. Center the hat on the middle of the base and stitch into place using yarn B tail. Stitch along cast on edge of hat. Once finished, secure to underside of hat base. * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN IN A (LIGHT PINK) • 15 YD (14 M) WORSTED WEIGHT YARN IN B (HOT PINK) • 15 YD (14 M) WORSTED WEIGHT YARN IN C (WHITE) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * AN ODE TO A FAVORITE TREAT! HAVE FUN WITH THE TOPPINGS ON YOUR "CUPCAKE!" CUPCAKE **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. 
(64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **CUPCAKE WRAPPER** Using yarn B and two dpns, cast on 6 sts. Knit 30 rows. Bind off. Sew ends together. Weave tails to the bottom of the band. **CUPCAKE** Using yarn C, pick up 30 sts along edge of band. Place 10 sts on each of three dpns. Knit 5 rnds. **Rnd 6:** [K3, k2tog] to end. (24 sts) **Rnds 7–8:** Knit. **Rnd 9:** [K2, k2tog] to end. (18 sts) **Rnd 10:** Knit. **Rnd 11:** [K1, k2tog] to end. (12 sts) **Rnd 12:** Knit. **Rnd 13:** K2tog around. (6 sts) Cut yarn and pull through loops on needles. **SPRINKLES** Using yarn A, thread yarn needle and sew short stitches on the white "frosting" in random spots for a sprinkle effect. Secure yarn to inside of cupcake. Stuff cupcake with polyester fiberfill. Stitch cupcake to the base of the hat, slightly off center, using the tails from the cupcake wrapper. * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 40 YD (37 M) BULKY WEIGHT YARN IN A (YELLOW) • 3 YD (2.7 M) BULKY WEIGHT YARN IN B (BROWN) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * **GUS GOES TO EXTREME LENGTHS TO CAMOUFLAGE HIMSELF WHILE ON THE PROWL.** WHAT DOES YOUR CAT GO BANANAS FOR? THESE KITTIES ARE BANANAS FOR BANANA HATS! BANANA **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. 
**Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **BANANA** Using yarn A, cast on 20 sts, leaving a 15 in. (38 cm) tail. Divide sts evenly onto three dpns and join to work in the round. **Rnds 1–6:** Knit. **Rnd 7:** K14, k2tog, k2, k2tog. (18 sts) **Rnds 8–12:** Knit. **Rnd 13:** K5, [k2tog] 2 times, k9. (16 sts) **Rnd 14:** K4, [k2tog] 2 times, k8. (14 sts) **Rnd 15:** K3, [k2tog] 2 times, k7. (12 sts) **Rnds 16–18:** Knit. **Rnd 19:** K2, [k2tog] 2 times, k6. (10 sts) **Rnd 20:** Knit. **Rnd 21:** K1, [k2tog] 2 times, k1, [k2tog] 2 times. (6 sts) Change to yarn B. Knit 4 rnds. Cut yarn and pull through loops on needles. **ASSEMBLY** You should have a "flat side" to the banana—that side is the front. Stuff with polyester fiberfill, being mindful that the front should lie flat and the rest should curve. Use the blunt edge of a knitting needle to help stuff the stem of the banana if necessary. Attach to the middle of the base, centering the banana, and making sure the front of the banana is facing the front of the base (any side of the base can be the front). Using cast on tail, securely attach the banana to the hat base by stitching along the cast on edge of the banana. When finished, tie off to underside of base. **IF YOU'VE BEEN GOOD GIRLS AND BOYS, LINK MAY DELIVER YOUR CHRISTMAS PRESENTS.** * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. 
(5 CM) SUPPLIES • 25 YD (23 M) BULKY WEIGHT YARN IN A (RED) • 10 YD (9 M) BULKY WEIGHT YARN IN B (WHITE) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POM POM MAKER (OPTIONAL) * * * A CLASSIC SANTA HAT, SLIGHTLY SLOUCHY AND KNIT IN RICH COLORS. ITS NOSTALGIC LOOK IS GREAT FOR SEASONAL PHOTOS! SANTA HAT **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **HAT** Using yarn A, cast on 30 sts, leaving a 25 in. (64 cm) tail. Divide sts evenly over three dpns (10 sts per needle). Being careful not to twist sts, join to work in the round. **Rnds 1–4:** Knit. **Rnd 5:** [K2tog, k6, k2tog] 3 times. (24 sts) **Rnds 6–9:** Knit. **Rnd 10:** [K2tog, k4, k2tog] 3 times. (18 sts) **Rnds 11–12:** Knit. **Rnd 13:** [K2tog, k2, k2tog] 3 times. (12 sts) **Rnds 14–15:** Knit. **Rnd 16:** [K2tog] 6 times. (6 sts) **Rnd 17:** Knit. **Rnd 18:** [K2tog] 3 times. (3 sts) Cut yarn, leaving a 10 in. (25 cm) tail. Pull through loops on needle and securely close top of hat. To slouch hat, use the bind off tail and weave yarn through sts until yarn is about 1 in. (2.5 cm) from top. Tug until desired slouch is achieved. Knot yarn on inside to secure slouch. 
**ASSEMBLY** Stitch the hat to the base using the long cast on tail from the hat. Center the hat, and stitch evenly through the cast on edge. The hat should reach the edge of the front and back of the base. Make a 1 in. (2.5 cm) pom pom with yarn B. Attach to the top of the hat. **TRIM** Using yarn B and crochet hook, begin on right side of base and work single crochet evenly along front edge of base. Weave in ends to the underside of hat base and secure. **WHO BETTER TO WISH YOUR FRIENDS AND FAMILY SEASON'S GREETINGS THAN DAISY?** * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 40 YD (37 M) BULKY WEIGHT YARN IN A (GREEN) • 10 YD (9 M) BULKY WEIGHT YARN IN B (RED) • SIZE 7 (4.5 MM) DPNS • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * **VIVI IS COUNTING DOWN THE DAYS UNTIL CHRISTMAS!** SANTA'S LITTLE HELPER LOOKS PURRFECT IN THIS FESTIVE HAT! ELF **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 25 in. (64 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows five more times. (15 sts) **FIRST EAR HOLE** **Row 13:** K2, bind off next 11 sts, k last st. **Row 14:** K2, cast on 11 sts, k2. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 31:** K2, bind off next 11 sts, k last st. **Row 32:** K2, cast on 11 sts, k2. **Row 33:** Knit. **Row 34:** K2tog, k to last 2 sts, k2tog. (13 sts) Rep last 2 rows five more times. (3 sts) Bind off, leaving a 25 in. (64 cm) tail. To create ties, use crochet hook and 25 in. (64 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 25 in. (64 cm) tail. **HAT** Using yarn A, cast on 30 sts, leaving a 15 in. (38 cm) tail. Divide sts evenly onto three dpns (10 sts per needle). Join to begin working in the round. 
**Rnds 1–3:** Knit. Change to yarn B. **Rnd 4:** Knit. **Rnd 5:** [K3, k2tog] around. (24 sts) **Rnd 6:** Knit. Change to yarn A. **Rnd 7:** [K2, k2tog] around. (18 sts) **Rnd 8:** Knit. **Rnd 9:** [K1, k2tog] around. (12 sts) **Rnds 10–11:** Knit. **Rnd 12:** K2tog around. (6 sts) **Rnd 13:** Knit. **Rnd 14:** K2tog around. (3 sts) Cut yarn and pull through loops. Lightly stuff hat with polyester fiberfill. Stitch to middle section of base using cast on tail from hat. **MOOCH IS READY TO RAZZLE DAZZLE IN HIS TOP HAT.** * * * METHOD: KNIT SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 15 YD (14 M) WORSTED WEIGHT YARN IN A (BLACK) • 45 YD (41 M) SPARKLY SPORT WEIGHT YARN IN B (BROWN) • SIZE 7 (4.5 MM) DPNS • POLYESTER FIBERFILL • YARN NEEDLE • SIZE 7 (4.5 MM) CROCHET HOOK • ½ IN. (1 CM) WIDE BLACK SATIN RIBBON (OPTIONAL) * * * A CLASSIC TOP HAT FOR FANCY FELINES EVERYWHERE! A BIT OF SPARKLE MAKES THIS SUITABLE FOR SPECIAL OCCASIONS OR RINGING IN A NEW YEAR! TOP HAT **BASE** Using yarn A and two dpns, cast on 3 sts, leaving a 20 in. (51 cm) tail. **Row 1:** Knit. **Row 2:** Kfb, k to last st, kfb. (5 sts) Rep last 2 rows three more times. (11 sts) **FIRST EAR HOLE** **Row 9:** K1, bind off next 9 sts. **Row 10:** K1, cast on 9 sts, k1. **MIDDLE SECTION** Knit 16 rows. **SECOND EAR HOLE** **Row 27:** K1, bind off next 9 sts. **Row 28:** K1, cast on 9 sts, k1. **Row 29:** Knit. **Row 30:** K2tog, k to last 2 sts, k2tog. (9 sts) Rep last 2 rows three more times. (3 sts) Bind off, leaving a 20 in. (51 cm) tail. To create ties, use crochet hook and 20 in. (51 cm) tail, pull a loop through each stitch on bind off edge (3 loops), yo, pull one loop through, work 25ch, pull end through loop tightly, and snip extra yarn. Repeat with other 20 in. (51 cm) tail. 
**TOP HAT** **TOP OF HAT** To get a flat top for the hat, you will be creating a flap that will need to be stitched into place later, so that the hat has a smooth top. Using yarn B and two dpns, cast on 1 st, leaving a 4 in. (10 cm) tail. **Row 1:** [K1, p1, k1] all into st. (3 sts) **Row 2:** Purl. **Row 3:** Kfb of all sts. (6 sts) **Row 4:** Purl. **Row 5:** Kfb of all sts. (12 sts) Divide sts onto three dpns (4 sts per needle) and join to work in the round. **Next rnd:** Knit. **Next rnd:** [Kfb, k1] around. (18 sts) **Next rnd:** Knit. **Next rnd:** [Kfb, k2] around. (24 sts) **Next rnd:** Knit. **Next rnd:** [Kfb, k3] around. (30 sts) **Next rnd:** Purl. **TUBE OF HAT** Knit 15 rnds. Bind off, leaving a 20 in. (51 cm) tail. Using the cast on tail, stitch the flap created on the top of the hat into place, so that it lies flat. Next, using the 20 in. (51 cm) tail, begin stitching the top hat in place on the base of the cat hat. Center the top hat on the middle section of the base, stitching into the bind off edge of the top hat. Once you have stitched halfway around the hat, lightly stuff the hat with polyester fiberfill. Overstuffing will give the wrong shape. Continue stitching around the bind off edge of the hat until it is securely in place. Weave in ends of hat securely on underside of the base and snip. **BRIM** Using yarn B and two dpns, cast on 6 sts, leaving a 4 in. (10 cm) tail. Knit every row until band measures 9 in. (23 cm). Bind off, leaving a 20 in. (51 cm) tail. Stitch ends of the brim together using cast on tail. Weave in tail and snip. Slide the brim over the top hat, to the bottom of the top hat. Using the 20 in. (51 cm) tail, stitch the brim onto the base of the hat, following along the bind off edge of the top hat. When finished, pull end through the underside of the cat hat base, weave in securely, and snip. 
Once stitched into place, use a length of yarn B and a yarn needle to create some simple stitches that will anchor the brim into the appropriate shape. The brim will naturally roll, though you want the front to be flat. You can place a few small stitches on each side of the front brim, to secure the roll and keep the front flat. Weave in any ends to underside of the base and snip. Measure ribbon around the bottom of the hat, add 1 in. (2.5 cm), and cut. Fold one end over ½ in. (1.25 cm), then fold over ½ in. (1.25 cm) again. Using matching thread and a needle, stitch this fold into place. Wrap the ribbon around the bottom of hat, secure raw end under the fold, and stitch in place to create the band. * * * METHOD: CROCHET SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 20 YD (18 M) WORSTED WEIGHT YARN IN A (PINK) • 5 YD (4.5 M) WORSTED WEIGHT YARN IN B (GREEN) • SIZE H8 (5 MM) CROCHET HOOK • YARN NEEDLE • POM POM MAKER (OPTIONAL) * * * THIS PEPPY POM POM HAT IS THE PERFECT ACCESSORY FOR ANY HIGH-SPIRITED KITTY. POM POM HAT **BASE** Using yarn A, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **FIRST EAR HOLE** **Row 10:** 1sc, 9ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 11–20:** 11sc, 1ch. **SECOND EAR HOLE** **Row 21:** 1sc, 9ch, 1sc in last st, 1ch. **Row 22:** 11sc, 1ch. **Row 23:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 24:** 9sc, 1ch. **Row 25:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 26:** 7sc, 1ch. **Row 27:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. 
(5sc) **Row 28:** 5sc, 1ch. **Row 29:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 30:** 3sc, 1ch. **Row 31:** 1sc in last st. **RACHEL IS PROUD TO WEAR HER COLORFUL, FLUFFY POM POM CAP.** To create ties, work 25ch, snip yarn, and pull through loop. Trim tail. Work 25ch at the beg of base, using 25 in. (64 cm) tail. **FRONT TRIM** Holding yarns A and B together, work 30sc across front of hat base. Clip yarn, pull through loop, and fasten securely on underside of the hat. Make a 1 in. (2.5 cm) pom pom from yarns A and B and attach to center of hat. This is an easy design to adapt. Use your favorite sports team colors or holiday colors to make it your own. **FOR THE LITTLE LION WHO THINKS HE'S A BIG ONE!** * * * METHOD: CROCHET SKILL LEVEL: BEGINNER Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN IN A (LIGHT ORANGE) • 30 YD (27 M) WORSTED WEIGHT YARN IN B (DARK ORANGE) • SIZE G6 (4 MM) CROCHET HOOK • SIZE E4 (3.5 MM) CROCHET HOOK • YARN NEEDLE * * * LITTLE LION **BASE** Using yarn A and size G6 (4 mm) hook, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **Row 12:** 13sc, 1ch. **Row 13:** 2sc in first st, 11sc, 2sc in last st, 1ch. (15sc) **FIRST EAR HOLE** **Row 14:** 1sc, 13ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 15–26:** 15sc, 1ch. **SECOND EAR HOLE** **Row 27:** 1sc, 13ch, 1sc in last st, 1ch. **Row 28:** 15sc, 1ch. **Row 29:** 1sc, skip next st, 11sc, skip next st, 1sc, 1ch. 
(13sc) **Row 30:** 13sc, 1ch. **Row 31:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 32:** 11sc, 1ch. **Row 33:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 34:** 9sc, 1ch. **Row 35:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 36:** 7sc, 1ch. **Row 37:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 38:** 5sc, 1ch. **Row 39:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 40:** 1sc in last st. Work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **MANE** Using yarn B and size E4 (3.5 mm) crochet hook, work the mane evenly along the front edge of the hat base. Starting just above the right tie: **Row 1:** [Sl st in next sp, 8ch, sl st in same sp] rep across hat, making 40 loops. Turn work. You will now work on top of the hat base, creating a second row of slightly larger loops. **Row 2:** [Sl st in next sp, 12ch, sl st in same sp] rep across hat, making 40 loops and ending on the side the mane started on. **ASSEMBLY** Pull all ends to underside of hat and secure in place. **WATCH OUT! POPPY IS ON THE PROWL.** * * * METHOD: CROCHET SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN IN A (ORANGE) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (WHITE) • 3 YD (2.7 M) BULKY WEIGHT YARN IN C (BLACK) • SIZE G6 (4 MM) CROCHET HOOK • SIZE E4 (3.5 MM) CROCHET HOOK • YARN NEEDLE * * * A CRAFTY HAT FOR YOUR CUNNING CAT! CHICKENS BEWARE! FELINE FOX **BASE** Using yarn A and size G6 (4 mm) hook, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. 
(9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–15:** 13sc, 1ch. **Row 16:** 2sc in first st, sc to end, 1ch. (14sc) (The increase side is the front of the hat.) **Row 17:** 13sc, 2sc in last st, 1ch. (15sc) **Row 18:** 2sc in first st, sc to end, 1ch. (16sc) **Row 19:** 15sc, 2sc in last st, 1ch. (17sc) **Row 20:** Skip first st, 16sc, 1ch. (16sc) **Row 21:** 14sc, skip next st, 1sc in last st, 1ch. (15sc) **Row 22:** Skip first st, 14sc, 1ch. (14sc) **Row 23:** 12sc, skip next st, 1sc in last st, 1ch. (13sc) **Rows 24–26:** 13sc, 1ch. **DOMINO HAS A FANTASTIC FOXY MAKEOVER.** **SECOND EAR HOLE** **Row 27:** 1sc, 11ch, 1sc in last st, 1ch. **Row 28:** 13sc, 1ch. **Row 29:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 30:** 11sc, 1ch. **Row 31:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 32:** 9sc, 1ch. **Row 33:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 34:** 7sc, 1ch. **Row 35:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 36:** 5sc, 1ch. **Row 37:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 38:** 3sc, 1ch. **Row 39:** 1sc in last st. To create ties, work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **OUTER EARS** **(Make 2)** Using yarn A and size E4 (3.5 mm) hook, work 3ch. **Row 1:** Skip first ch, 2sc, 1ch. **Row 2:** 2sc, 1ch. **Row 3:** 2sc in each st, 1ch. (4sc) **Row 4:** 4sc, 1ch. **Row 5:** 2sc in first st, 2sc, 2sc in last st, 1ch. (6sc) **Row 6:** 6sc, 1ch. **Row 7:** 2sc in first st, 4sc, 2sc in last st, 1ch. (8sc) **Row 8:** 8sc, 1ch. **Row 9:** 2sc in first st, 6sc, 2sc in last st, 1ch. (10sc) **Rows 10–11:** 10sc, 1ch. Cut a 10 in. (25 cm) tail and pull through loop. 
Weave in other tail at point of ear. **INNER EARS** **(Make 2)** Using yarn B and size E4 (3.5 mm) hook, work 3ch. **Row 1:** Skip first ch, 2sc, 1ch. **Row 2:** 2sc, 1ch. **Row 3:** 2sc in each st, 1ch. (4sc) **Row 4:** 4sc, 1ch. **Row 5:** 2sc in first st, 2sc, 2sc in last st, 1ch. (6sc) **Rows 6–7:** 6sc, 1ch. Cut yarn, leaving a 10 in. (25 cm) tail, and pull through loop. Weave in other tail at point of ear. Using yarn C and size E4 (3.5 mm) hook, work 5 sl sts along upper tip and point of inner ear. Weave in ends to WS of ear, and clip. Attach one inner ear to one outer ear using the inner ear 10 in. (25 cm) tail and a yarn needle. Stitch into place, with the base of the ears (the last rows) matching up. Use the photos for reference if necessary. Be careful not to stitch right through the outer ear, as you do not want your stitches to be visible. You can create an invisible stitch by running your needle through the edge of the inner ear and barely through the surface of the outer ear. **ASSEMBLY** Stitch each ear along the front edge of the hat base, in front of each ear hole. The front edge of the base is the pointed edge. Use the yarn tails from the outer ears to stitch into place, running your needle through the edge of the outer ear. The ears are meant to stand up, so you may have to reinforce your stitches to give the ears support. Stand back and admire your cat's new look! * * * METHOD: CROCHET SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT A SMALL ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 25 YD (23 M) WORSTED WEIGHT YARN IN A (BROWN) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (WHITE) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * TRANSFORM YOUR KITTY INTO A CUDDLY TEDDY WITH THIS ADORABLE BEAR HAT. BABY BEAR **BASE** Using yarn A, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. 
(5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–22:** 13sc, 1ch. **SECOND EAR HOLE** **Row 23:** 1sc, 11ch, 1sc in last st, 1ch. **Row 24:** 13sc, 1ch. **Row 25:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 26:** 11sc, 1ch. **Row 27:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 28:** 9sc, 1ch. **Row 29:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 30:** 7sc, 1ch. **Row 31:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 32:** 5sc, 1ch. **Row 33:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 34:** 3sc, 1ch. **Row 35:** 1sc in last st. To create ties, work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **OUTER EARS** **(Make 2)** Using yarn A, work 2ch. **Row 1:** 2sc in 2nd ch. **Row 2:** 2sc in each st. (4sc) **Row 3:** 2sc in each st. (8sc) **Row 4:** 2sc, 2sc in next 4 sts, 2sc. (12sc) **Row 5:** 4sc, 2sc in next 4 sts, 4sc. (16sc) Cut a 10 in. (25 cm) tail and pull through loop. **INNER EARS** **(Make 2)** Using yarn B, work 2ch. **Row 1:** 2sc in 2nd ch. **Row 2:** 2sc in each st. (4sc) **Row 3:** 1sc, 2sc in next 2 sts, 1sc in last st. (6sc) Cut a 10 in. (25 cm) tail and pull through loop. Attach to outer ear, lining up the bottom edge of the inner ear to the bottom edge of the outer ear. Be careful as you stitch around inner ear not to stitch all the way through the outer ear, as you don't want the yarn B stitches visible on the back of the outer ear. Repeat with other ear. 
**ASSEMBLY** Attach the ears along the front edge of the hat base, centered in front of each ear hole. Use the tail from the outer ear to stitch the ears into place. You may need to reinforce your stitches, as the ears are meant to stand up. * * * METHOD: CROCHET SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 40 YD (37 M) WORSTED WEIGHT YARN IN A (WHITE) • 20 YD (18 M) WORSTED WEIGHT YARN IN B (BROWN) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * THE PERFECT PROJECT FOR DOG- AND CAT-LOVERS! CUSTOMIZE THE SPOTS, EARS, AND COLORS FOR DIFFERENT PUPS. **WHO SAYS CATS AND DOGS CAN'T GET ALONG? NOT DOMINO!** DOG **BASE** Using yarn A, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **Row 12:** 13sc, 1ch. **Row 13:** 2sc in first st, 11sc, 2sc in last st, 1ch. (15sc) **FIRST EAR HOLE** **Row 14:** 1sc, 13ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 15–26:** 15sc, 1ch. **SECOND EAR HOLE** **Row 27:** 1sc, 13ch, 1sc in last st, 1ch. **Row 28:** 15sc, 1ch. **Row 29:** 1sc, skip next st, 11sc, skip next st, 1sc, 1ch. (13sc) **Row 30:** 13sc, 1ch. **Row 31:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 32:** 11sc, 1ch. **Row 33:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 34:** 9sc, 1ch. **Row 35:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 36:** 7sc, 1ch. **Row 37:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 38:** 5sc, 1ch. 
**Row 39:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 40:** 1sc in last st. To create ties, work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **EARS** **(Make 2)** Using yarn B, make a magic ring. **Rnd 1:** 1ch, 10sc in ring. **Rnd 2:** 10sc, 1ch. **Rnd 3:** [1sc, 2sc in next st] 5 times, 1ch. (15sc) **Rnd 4:** 15sc, 1ch. **Rnd 5:** Working in back loops only, work 8sc, 1ch, turn. Continue in rows: **Rows 6–11:** 8sc, 1ch. Cut a 10 in. (25 cm) tail and pull through loop. Weave in beg tail. **SPOTS** **(Make 1 in A, 1 in B)** Work 6ch. **Rnd 1:** 1sc in 2nd ch, 1sc in next 2ch, 2sc in next ch. Continuing along other side of chain, work 1sc in back loop of next 4ch, sl st to beg sc, 1ch. **Rnd 2:** [1sc, 2sc in next st] 3 times, 2sc, 2sc in next st, sl st to beg sc (13sc). Cut a 6 in. (15 cm) tail and pull through loop. Weave in beg tail. **ASSEMBLY** To attach ears, use 10 in. (25 cm) tail to stitch flat side of ear evenly above ear opening on base of hat. Once securely stitched in place, pull tail through to underside of hat and secure. Repeat with other tail. To attach spots, attach spot in yarn A to one ear. Stitch around spot securely, and pull end through to WS of ear. Secure. Attach spot in yarn B to base of hat, underneath the other ear. **ANNA NICOLE GIVES JAWS A RUN FOR HIS MONEY IN HER SHARK ATTACK HAT.** * * * METHOD: CROCHET SKILL LEVEL: INTERMEDIATE Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2 IN. (5 CM) SUPPLIES • 18 YD (16 M) MEDIUM WEIGHT YARN IN A (BLUE) • 3 YD (2.7 M) MEDIUM WEIGHT YARN IN B (RED) • 5 YD (4.5 M) MEDIUM WEIGHT YARN IN C (WHITE) • SIZE H8 (5 MM) CROCHET HOOK • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE * * * LITTLE FISH, BEWARE! SHOWCASE YOUR CAT'S PREDATOR SIDE IN THIS SHARKY HAT! SHARK ATTACK **BASE** Using yarn A and size H8 (5 mm) hook, work 2ch, leaving a 25 in. (64 cm) tail. 
**Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–22:** 13sc, 1ch. **SECOND EAR HOLE** **Row 23:** 1sc, 11ch, 1sc in last st, 1ch. **Row 24:** 13sc, 1ch. **Row 25:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 26:** 11sc, 1ch. **Row 27:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 28:** 9sc, 1ch. **Row 29:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 30:** 7sc, 1ch. **Row 31:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 32:** 5sc, 1ch. **Row 33:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 34:** 3sc, 1ch. **Row 35:** 1sc in last st. To create ties, work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **FIN** Using yarn A and size G6 (4 mm) hook, work 12ch. **Row 1:** Skip first ch, 11sc, 1ch. **Row 2:** Skip first st, 9sc, 1ch. **Row 3:** 7sc, skip next st, 1sc in last st, 1ch. (8sc) **Row 4:** 7sc, 1ch. (7sc) **Row 5:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 6:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 7:** Skip first st, 2sc, 1ch. **Row 8:** 2sc, 1ch. **Row 9:** Skip first st, 1sc, 1ch. **Row 10:** 1sc. Cut yarn, leaving an 8 in. (20 cm) tail. Weave the tail from the first row through to the last row of the fin. Stitch to the center of the middle section of the hat base, running needle through the bottom edge of the fin. The fin will be straighter on one side—that's the side that should face the front. 
Use the pictures for reference if necessary. You may need to reinforce your stitches, as you want the fin to stand upright. **TEETH EDGE** Using yarn B and size G6 (4 mm) hook, work 34 sl sts evenly along front edge of the hat base. Cut yarn, pull through loop securely, and weave in both ends to the underside of the hat. Next, using yarn C and size G6 (4 mm) hook, work 4ch through the first red sl st. Slip st in the same sp, 1sc in next red sl st, [4ch and slip st in same sp, 1sc] rep to end of row. Cut yarn, pull through loop securely, and weave in both ends to the underside of the hat. * * * METHOD: CROCHET SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN IN A (RED) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (WHITE) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL • POM POM MAKER (OPTIONAL) * * * PADDING AROUND ON THE ROOFTOPS, WHO DOES KITTY DISCOVER? GOOD OLD SANTA CLAUS! SANTA PAWS **BASE** Using yarn A, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–22:** 13sc, 1ch. **SECOND EAR HOLE** **Row 23:** 1sc, 11ch, 1sc in last st, 1ch. **Row 24:** 13sc, 1ch. **Row 25:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 26:** 11sc, 1ch. **Row 27:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 28:** 9sc, 1ch. 
**POPPY IS FULL OF FESTIVE CHEER IN HER SANTA HAT.** **Row 29:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 30:** 7sc, 1ch. **Row 31:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 32:** 5sc, 1ch. **Row 33:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 34:** 3sc, 1ch. **Row 35:** 1sc in last st. Work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **HAT** Do not join at end of rounds. If necessary, place a marker to show start of round. Using yarn A, work 2ch. **Rnd 1:** 4sc in 2nd ch. **Rnd 2:** 1sc in each st around. (4sc) **Rnd 3:** 2sc in each st around. (8sc) **Rnds 4–5:** 1sc in each st around. **Rnd 6:** [1sc, 2sc in next st] 4 times. (12sc) **Rnds 7–9:** 1sc in each st around. **Rnd 10:** [1sc, 2sc in next st] 6 times. (18sc) **Rnds 11–13:** 1sc in each st around. **Rnd 14:** [1sc, 2sc in next st] 9 times. (27sc) **Rnds 15–17:** 1sc in each st around. **Rnd 18:** [2sc, 2sc in next st] 9 times. (36sc) **Rnd 19:** [8sc, 2sc in next st] 4 times. (40sc) **Rnds 20–21:** 1sc in each st around. Snip yarn, leaving a 20 in. (51 cm) tail, and pull through loop tightly. Use this tail to attach the hat to the base. **ASSEMBLY** Using yarn B, make a 1 in. (2.5 cm) pom pom and two ½ in. (1 cm) tassels. Attach pom pom securely to top of hat. Attach one tassel on end of each tie. Lightly stuff hat top with polyester fiberfill, being careful to evenly distribute filling. Stitch onto center of the base of hat with 20 in. (51 cm) tail. * * * METHOD: CROCHET SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2½ IN. (6 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. 
(6 CM) SUPPLIES • 30 YD (27 M) WORSTED WEIGHT YARN IN A (YELLOW) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (WHITE) • 10 YD (9 M) WORSTED WEIGHT YARN IN C (ORANGE) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * TRICK OR TREAT, HERE'S A HAT THAT'S EXTRA SWEET! CANDY CORN **BASE** Using yarn A, work 2ch, leaving a 25 in. (64 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–22:** 13sc, 1ch. **SECOND EAR HOLE** **Row 23:** 1sc, 11ch, 1sc in last st, 1ch. **Row 24:** 13sc, 1ch. **Row 25:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 26:** 11sc, 1ch. **Row 27:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 28:** 9sc, 1ch. **Row 29:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 30:** 7sc, 1ch. **Row 31:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 32:** 5sc, 1ch. **Row 33:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 34:** 3sc, 1ch. **Row 35:** 1sc in last st. To create ties, work 25ch, snip yarn, and pull through loop tightly. Trim tail. Work 25ch at beg of base, using 25 in. (64 cm) tail. **CANDY CORN** Using yarn B, make a magic ring. **Rnd 1:** 1ch, 6sc in ring. **Rnd 2:** 6sc. **Rnd 3:** [2sc in next st, 2sc] twice. (8sc) **Rnds 4–5:** Sc around. **Rnd 6:** [2sc in next st, 3sc] twice. (10sc) **Rnds 7–8:** Sc around. **Rnd 9:** [2sc in next st, 4sc] twice. (12sc) Change to yarn C. **Rnds 10–11:** Sc around. **Rnd 12:** [2sc in next st, 5sc] twice. (14sc) **Rnds 13–14:** Sc around. 
**Rnd 15:** [2sc in next st, 6sc] twice. (16sc) **Rnds 16–17:** Sc around. **Rnd 18:** [2sc in next st, 7sc] twice. (18sc) Cut yarn, leaving a 15 in. (38 cm) tail, and pull through loop. Turn hat inside out. Sew yarn B tail to close up top gap. Weave ends to inside and finish. **ASSEMBLY** Lightly and evenly stuff candy corn with polyester fiberfill. Be careful to maintain a soft square point at the top of the candy corn. Stitch hat onto the center of the base with yarn C tail, with the square top facing toward the front (not the ears) of the hat. Use the photos if necessary for reference. Secure ends to underside of base and finish. * * * METHOD: CROCHET SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • EAR OPENING: 2 IN. (5 CM) • WIDTH OF HAT BETWEEN EARS: 2½ IN. (6 CM) SUPPLIES • 21 YD (19 M) WORSTED WEIGHT YARN IN A (PURPLE) • 10 YD (9 M) WORSTED WEIGHT YARN IN B (BLUE) • SIZE G6 (4 MM) CROCHET HOOK • YARN NEEDLE • POLYESTER FIBERFILL * * * ONCE EXTINCT, THE RARE UNICORN CAT GRACES ONLY THE MOST FANTASTICAL OF HOMES. ADD A BRIGHTLY COLORED FRINGE MANE ALONG THE TOP OF THE HAT FOR A LITTLE FLAIR! UNICORN **BASE** Using yarn A, work 2ch, leaving a 3 in. (7.5 cm) tail. **Row 1:** 3sc in 2nd ch, 1ch. **Row 2:** 3sc, 1ch. **Row 3:** 2sc in first st, 1sc, 2sc in last st, 1ch. (5sc) **Row 4:** 5sc, 1ch. **Row 5:** 2sc in first st, 3sc, 2sc in last st, 1ch. (7sc) **Row 6:** 7sc, 1ch. **Row 7:** 2sc in first st, 5sc, 2sc in last st, 1ch. (9sc) **Row 8:** 9sc, 1ch. **Row 9:** 2sc in first st, 7sc, 2sc in last st, 1ch. (11sc) **Row 10:** 11sc, 1ch. **Row 11:** 2sc in first st, 9sc, 2sc in last st, 1ch. (13sc) **FIRST EAR HOLE** **Row 12:** 1sc, 11ch, 1sc in last st, 1ch. **MIDDLE SECTION** **Rows 13–22:** 13sc, 1ch. **SECOND EAR HOLE** **Row 23:** 1sc, 11ch, 1sc in last st, 1ch. **Row 24:** 13sc, 1ch. **Row 25:** 1sc, skip next st, 9sc, skip next st, 1sc, 1ch. (11sc) **Row 26:** 11sc, 1ch. 
**Row 27:** 1sc, skip next st, 7sc, skip next st, 1sc, 1ch. (9sc) **Row 28:** 9sc, 1ch. **Row 29:** 1sc, skip next st, 5sc, skip next st, 1sc, 1ch. (7sc) **Row 30:** 7sc, 1ch. **Row 31:** 1sc, skip next st, 3sc, skip next st, 1sc, 1ch. (5sc) **Row 32:** 5sc, 1ch. **Row 33:** 1sc, skip next st, 1sc, skip next st, 1sc, 1ch. (3sc) **Row 34:** 3sc, 1ch. **Row 35:** 1sc in last st. Cut yarn, leaving a 3 in. (7.5 cm) tail. Weave in both tails to underside of base. **UNICORN HORN** **(Make 1)** Using yarn B, make a magic ring. **Rnd 1:** 1ch, 4sc in ring. **Rnd 2:** Sc in each st around, sl st in first sc. (4sc) **Rnd 3:** [2sc in next st, 1sc] twice. (6sc) **Rnd 4:** Sc around. **Rnd 5:** [2sc in next st, 2sc] twice. (8sc) **Rnd 6:** Sc around. **Rnd 7:** [2sc in next st, 3sc] twice. (10sc) **Rnd 8:** Sc around. **Rnd 9:** [2sc in next st, 4sc] twice. (12sc) **Rnd 10:** Sc around. **Rnd 11:** [2sc in next st, 5sc] twice. (14sc) **Rnd 12:** Sc around. Cut yarn, leaving a 15 in. (38 cm) tail, and pull through loop tightly. **ASSEMBLY** Weave in tail from top of unicorn horn, making sure to close any gaps. Lightly stuff horn with polyester fiberfill, making sure to stuff evenly. Using 15 in. (38 cm) tail, securely stitch horn to the center of the base, close to front of hat. Using yarn B, measure out two pieces of yarn 25 in. (64 cm) each. To create ties at each end of the base, use one piece of yarn and pull a loop through the end of the hat, work 25ch, pull yarn through loop tightly, and snip. Repeat with other piece on the other side of the hat. **GUS LOOKS ENCHANTING IN HIS UNICORN HAT.** * * * METHOD: CROCHET SKILL LEVEL: DIFFICULT Back to Hat Selector SIZE TO FIT AN AVERAGE ADULT CAT • LENGTH: 5 IN. (13 CM) • WIDTH: 3 IN. 
(7.5 CM) SUPPLIES • 40 YD (37 M) WORSTED WEIGHT YARN IN A (BEIGE) • 6 YD (5.5 M) WORSTED WEIGHT YARN IN B (BLUE) • 6 YD (5.5 M) WORSTED WEIGHT YARN IN C (ORANGE) • SIZE E4 (3.5 MM) CROCHET HOOK • YARN NEEDLE * * * TRANSPORT YOUR CAT TO THE WILD WEST IN THIS PERFECTLY BROKEN-IN COWBOY HAT! COWBOY HAT **HAT** Do not turn work in this pattern. Using yarn A, work 6ch. **Rnd 1:** 2sc in 2nd ch, 1sc in next 3ch, 3sc in next ch. Continuing along other side of chain, work 1sc in back loop of next 4ch, sl st to beg ch, 1ch. (12 sts) **Rnd 2:** 1sc in first st, 2sc in next st, 4sc, [2sc in next st] twice, 4sc, sl st to beg ch, 1ch. (15sts) **Rnd 3:** 2sc in next 2sts, 5sc, 2sc in next 2 sts, sl st to next sc; this marks new position for start of rnd. (This will cause the hat to start to curl, which is part of the shaping.) For the following rnds, use a marker at the beg of each rnd to help keep your place. From Rnds 4–13 inclusive, work 1ch at beg of each rnd and end each rnd with sl st to beg ch. **Rnd 4:** [5sc, 2sc in next st] 4 times. **Rnd 5:** [6sc, 2sc in next st] 4 times. **Rnd 6:** [7sc, 2sc in next st] 4 times. **Rnd 7:** [8sc, 2sc in next st] 4 times. **Rnd 8:** [9sc, 2sc in next st] 4 times. **Rnd 9:** [10sc, 2sc in next st] 4 times. **Rnds 10–12:** Sc around. **BRIM** **Rnd 13:** 2sc in each st around, sl st to beg ch. **Rnd 14:** Sl st in each st around. Snip yarn, leaving enough tail to weave in ends. **TO SHAPE** You have created two points at the top of your hat. These points should face the ears, and you should use the starting tail to stitch into place a dip in between these points. You may find it helpful to turn to the WS, or inside, of the hat and, pinching between the points, secure this shape with a few stitches. Turning to the RS, or outside, of the hat, you will need to stitch your brim into place. You want the sides of the hat to roll. Lightly roll each side, and work 3 or 4 stitches to hold it in place. Use the photos for reference. 
Properly broken in hats have an authentic shape, which means you don't have to perfectly stitch the brim rolls. Just remember that the front and back need to be flat. Stitching the brim is what gives the hat its final shape, so adjust it according to your liking. An experienced crocheter could give this hat a stiffer shape by stuffing it. You would need to crochet an oval, or small base, to stitch your hat onto and to hold the stuffing in. You could use the other patterns in this book as a reference for this if you wish. **TIES** **(Make 2)** Using a strand of all three yarn colors, make two braided ties that each measure 1 yd (90 cm). You could also use a ¼ in. (0.5 cm) wide ribbon or trim of your choice for ties. To attach ties to the hat, take one tie and pull it through the front left side (near the rolled brim) of the hat. Use a larger crochet hook if necessary. Now pull the other end of the tie through the front right side of the hat (again near the rolled brim). Repeat with other tie at the back of the hat. Use the photos for reference if necessary. **TO WEAR** You now have 4 ties hanging underneath your hat, two on the right and two on the left. Carefully tie the two left and the two right ties together, in a bow under the chin. Make sure the ties are on either side of the ear on each side. This ensures a more comfortable fit (your cat's ears should be clear of the hat). If you need the hat to be a bit smaller, or larger, on top, adjust the roll of the brim. **YEE-HAW! BLUEBELL IS THE NEW COWBOY IN TOWN!** MATERIALS AND EQUIPMENT THE NEAT THING ABOUT MAKING CAT HATS IS THAT THEY DON'T REQUIRE SPECIALIST TOOLS, NOR DO THEY USE MUCH YARN. YARNS Yarns are available in a range of weights, from very fine to extra bulky. Because yarns may vary from one manufacturer to another and certainly change from one fiber to another, only generic yarn types are indicated for the hats in this book. 
You should be aware of the properties of different yarns, however, from the fullness of cotton to the elasticity of wool, because the construction of a yarn will affect its behavior and characteristics, and so will influence the end result. Try using different gauges and, if in doubt, use a smaller needle/hook size than usual. Separate your yarns into color groups and keep these in transparent plastic containers so that you have a palette of colors to work with. KNITTING NEEDLES Needle sizes are specified for each hat. Pairs of knitting needles are made in a variety of lengths. Most are aluminum, although larger-size needles are made of plastic to reduce their weight. For most of the designs in this book, a conventional pair of needles is used, but double-pointed needles are used in some of the projects. Knitting needles are available in a variety of sizes and materials. CROCHET HOOKS Crochet hooks are available in a wide range of sizes and materials. Most hooks are made from aluminum or plastic. Small sizes of steel hooks are made for working with very fine yarns. Handmade wooden, bamboo, and horn hooks are also available. Hook sizes are quoted differently in the United States and Europe, and some brands of hook are labeled with more than one numbering system. Choosing a hook is largely a matter of personal preference. The design of the hook affects the ease of working considerably. Look for a hook that has a comfortable grip. Assorted crochet hooks. Row counters are useful. STUFFING MATERIALS AND ADDITIONAL EQUIPMENT **POLYESTER FIBERFILL** There are a number of options open to you when it comes to stuffing your hat, including foam, cotton batting, and polyester fiberfill. I recommend polyester fiberfill, a synthetic fiber that is extremely lightweight and also washable. It has a soft feel and it bounces back into shape. It tends to clump less than many of the other stuffing materials. It is also widely available. Always have a sharp pair of scissors handy. 
**TAPE MEASURE** Essential for measuring lengths of yarn, choose one that features both inches and centimeters on the same side. A tape measure lets you check that you have adequate yarn. **MARKERS AND ROW COUNTERS** Ready-made markers can be used to indicate a repeat or to help count stitches in a chain. Similarly, a row counter may help you to keep track of the number of rows you have worked, but in knitting this is usually easy if you remember to include the stitches on the needle as a row. KNITTING TECHNIQUES HERE IS A REMINDER OF THE BASICS, TOGETHER WITH A FEW SUGGESTIONS AND TECHNIQUES THAT MIGHT BE NEW TO A BEGINNING KNITTER. SLIPKNOT **1** Putting a slipknot on the needle makes the first stitch of the cast on. Loop the yarn around two fingers of the left hand, the ball end on top. Dip the needle into the loop, catch the ball end of the yarn, and pull it through the loop. **2** Pull the ends of the yarn to tighten the knot. Tighten the ball end to bring the knot up to the needle. CASTING ON There are several cast on methods, each with its own merits. **Thumb method** Sometimes called long-tail cast on, this uses a single needle and produces an elastic edge. **1** Leaving an end about three times the length of the required cast on, put a slipknot on the needle. Holding the yarn end in the left hand, take the left thumb under the yarn and upward. Insert the needle in the loop made on the thumb. **2** Use the ball end of the yarn to make a knit stitch, slipping the loop off the thumb. Pull the yarn end to close the stitch up to the needle. Continue making stitches in this way. The result looks like a row of garter stitch because the yarn has been knitted off the thumb. **Cable cast on** This two-needle method gives a firm edge with the appearance of a rope. **1** Put a slipknot on one needle. Use the other needle and the ball end of the yarn to knit into the loop on the left-hand needle without slipping it off. Transfer the new stitch to the left-hand needle. 
**2** Insert the right-hand needle between the new stitch and the next stitch, and then make another stitch as before. Continue making stitches in this way. **Knitted cast on** Make a cable cast on as above, but instead of knitting between stitches, insert the right-hand needle in the front of each stitch in the usual way. This gives a softer edge than the cable method. I-CORD A very useful round cord can be made using two double-pointed needles. Cast on four (or the required number of) stitches and knit one row in the usual way. *Without turning, slide the stitches to the opposite end of the needle. Take the yarn firmly across the wrong side from left to right and knit one row. Repeat from * for the required length. BINDING AND FASTENING OFF A simple knit stitch bind off is used in this book. Knit two stitches. *With the left needle, lift the first stitch over the second and off the right needle. Knit the next stitch. Repeat from * until one stitch remains. Break off the yarn, pass the end through this stitch, and tighten. PICK UP AND KNIT The pick up and knit technique involves knitting up new stitches along the edge of a knitted piece, ready to work in another direction. This avoids having to cast on a separate piece and join it with a seam. With RS facing you, insert the right needle under an edge stitch, take the yarn around the needle, and pull a loop through to make a stitch. Repeat for the number of stitches required, spacing the picked up stitches evenly along the edge. The next row will be a WS row. KNITTING IN THE ROUND When knitting in the round using four double-pointed needles (dpns), the stitches are distributed among three of the needles and the spare needle is used to knit with. Bring the first and third needles together to form a circle and use the spare needle to work the stitches off the first (left) needle and onto the spare (right) needle in the usual way. This is done with the RS (outside) of the work facing you, unless stated otherwise. 
Take the yarn firmly from one double-pointed needle to the next or a ladder will appear. CROCHET TECHNIQUES HERE ARE A FEW REMINDERS OF THE BASICS AND SOME SUGGESTIONS FOR BUILDING ON THEM. SLIPKNOT **1** Putting a slipknot on the hook makes the first loop of the chain that will hold the stitches of the first row or round. Loop the yarn around two fingers of the left hand, the ball end to the front. Insert the hook in the loop, catch the ball end of the yarn, and pull it through the loop. **2** Pull the ends of yarn to tighten the knot. Now tighten the ball end to bring the knot up to the hook. HOOKING ACTION Hold the slipknot (and later the chain) between the thumb and forefinger of the left hand. Take the yarn over the second finger of the left hand so it is held taut. Take it around the little finger as well if necessary. The right hand is then free to manipulate the hook. With a turn of the wrist, guide the tip of the hook under the yarn. Catch the yarn and pull it through the loop on the hook to make a chain. Hooking and catching is referred to as yarn over hook (abbreviation: yo). It is the action used in making a chain, a slip stitch, and, in various combinations, all other crochet stitches. **Note** Unless the instructions state otherwise, the hook should be inserted under the two strands of yarn that form the top of the chain, or the top of the stitch. WORKING A SLIP STITCH (SL ST) Slip stitch is the shortest of all the crochet stitches and its main uses are for joining rounds, making seams, and carrying the hook and yarn from one place to another. Insert the hook from front to back into the required stitch. Wrap the yarn over the hook (yarn over) and draw it through both the work and the loop on the hook. One loop remains on the hook and one slip stitch has been worked. CHAIN RING Join a number of chain stitches into a ring with a slip stitch in the first chain. Work the first round of stitches around the chain and into the center of the ring. 
If the yarn end is also worked around, the ring is lightly padded and this end can be pulled to tighten it. MAGIC RING **1** To make a magic ring, first coil the yarn around two fingers and then use the hook to pull through a loop of the ball end of the yarn, as if making a slipknot (see step 1, above left). However, do not then pull the yarn tight. Holding the ring flat between the thumb and forefinger of the left hand, catch the yarn and pull it through the loop on the hook to anchor it. **2** Working under two strands of yarn each time, make the stitches as directed and then pull the free yarn end to close the ring. Join the ring with a slip stitch in the first stitch. JOINING IN A NEW YARN There are several methods you can use to join in a new yarn or color. **Using slip stitch** This method can be used when working any stitch. Make a slipknot in the new yarn and place it on the hook. Insert the hook into the work at the specified position and make a slip stitch with the new yarn through both stitch and slipknot. Continue working the pattern with the new yarn. **Changing colors mid-stitch** To switch neatly from an old color to a new color in the same place, you can leave the last stitch in the old color incomplete and use the new color to finish the stitch. **1** Using the old color, leave the last stage of the final stitch incomplete, so that there are two loops on the hook. Wrap the new color over the hook and pull it through the loops on the hook to complete the stitch. **2** Continue working with the new color. You may find it easier to knot the two loose ends together before you cut the yarn no longer in use, leaving ends of about 4 in. (10 cm). Always undo the knot before weaving in the yarn ends. ADDITIONAL TECHNIQUES ALL THE HATS ARE EASY TO ASSEMBLE USING JUST A FEW STANDARD FINISHING TECHNIQUES. HERE ARE SOME TIPS AND SUGGESTIONS. MARKERS If markers are needed to count rows or repeats, use a length of contrast thread. 
*Lay it between stitches from front to back, make a stitch, and then bring it from back to front of the work. Repeat from * once more. It can be pulled out when it is no longer needed. STUFFING Use polyester fiberfill (batting) rather than cotton wool, as the latter can be rather dense and difficult to stitch through. Push the batting in firmly, one wisp at a time, using it to shape the object without distorting it. Too much batting will pack down, whereas too little will never plump up. Don't push the batting in with a pointed implement, but use something like the eraser end of a pencil. Spare matching yarn may be better than batting inside crochet, as there will be no show-through. Wind off short lengths of yarn around two fingers and push these in, one coil at a time. ENDS Sometimes called a tail, the end of yarn left after making the slipknot should be a reasonable length so that it can be used for sewing up. It can also be very useful for covering up imperfections, such as awkward color changes. The same applies to the end left after binding or fastening off. In these projects, ends that will not be needed for sewing up should be woven in and secured to the WS before the main assembly of the hat. POM POMS A couple of the hats in this book require pom poms. Either use a ready-made plastic pom pom maker or cut out two rings of cardboard. **1** Place the two rings together and use a yarn needle to wrap yarn around them. **2** Starting new lengths of yarn at the outside edge, continue until the rings are tightly covered. Insert the blade of a pair of scissors between the rings and cut the yarn around the edge. **3** Tie a length of yarn around the pom pom between the rings. Knot the yarn tightly, slip the rings off, and trim the pom pom. Use the ends of yarn from the tie for attaching the pom pom to the hat. 
JOINING KNITTED AND CROCHETED PIECES It is always best to leave a lengthy tail when you cast on or bind off, as this tail can serve as your joining yarn when sewing pieces together. The projects in this book specify at which point to leave additional length. Carefully place the piece to be attached in the correct location, using sewing pins if necessary. Using a blunt yarn needle and the tail (or a length of matching yarn), stitch small upright stitches through the edge of the piece being joined to the main piece of the project. When done in the same shade of yarn, these stitches should be invisible. Once attached, pull the yarn to the wrong side of the knitted or crocheted object (usually the underside of the hat) and secure with several knots. Weave in ends. ABBREVIATIONS GENERAL **rep** | repeat ---|--- **rnd(s)** | round(s) **RS** | right side(s) **st(s)** | stitch(es) **WS** | wrong side(s) KNITTING **dpn(s)** | double pointed needle(s) ---|--- **k** | knit **k2tog** | knit 2 together **kfb** | knit in front and back of stitch to make two stitches from one **p** | purl CROCHET **beg** | beginning ---|--- **ch** | chain **sc** | single crochet **sl st** | slip stitch **sp** | space **yo** | yarn forward and over hook to make a stitch READING CHARTS Each crochet design is accompanied by a chart that should be read with the written instructions. The chart represents the right side of the work. CHARTS IN ROWS CHARTS IN ROUNDS YARNS USED Many thanks to Lion Brand who generously supplied the yarn used in this book. 
The following yarns and colors were used for the hats: Dinosaur, pages 10–13 A: Jiffy, Avocado B: Wool Ease, Pumpkin Bobble Hat, pages 14–15 A: Wool Ease, Seaspray B: Wool Ease, Cranberry Strawberry, pages 16–19 A: Cotton Ease, Cherry B: Kitchen Cotton, Snap Pea C: Cotton Ease, Snow Pumpkin, pages 20–21 A: Kitchen Cotton, Pumpkin B: Cotton Ease, Lime Sports Cap, pages 22–23 A: Wool Ease, Ranch Red B: Vanna's Choice, White C: Wool Ease, Sea Spray Spring Chick, pages 24–25 A: Romance, Passion B: Fun Yarn, Black C: Kitchen Cotton, Pumpkin Punk Mohawk, pages 26–29 A: Vanna's Choice, Black B: Roving Wool, Hot Pink Bunny, pages 30–31 Nature's Choice Organic Cotton, Almond Turkey, pages 32–35 A: Jiffy, Caffe B: Fun Yarn, Red C: Fun Yarn, Black D: Kitchen Cotton, Pumpkin E: Wool Ease, Fisherman Flower Cap, pages 36–37 A: Kitchen Cotton, Cayenne B: Wool Ease, Blue Heather I Heart You, pages 38–39 A: Jiffy, Caffe B: Jiffy, Chili Extraterrestrial, pages 40–41 A: Vanna's Choice, Sweet Pea B: Vanna's Choice, Black Antlers, pages 42–43 Jiffy, Caffe Party Hat, pages 44–45 A: Vanna's Choice, Raspberry B: Wool Ease, Fisherman C: Vanna's Choice, Mint Witch, pages 46–47 A: Kitchen Cotton, Grape B: Kitchen Cotton, Snap Pea Cupcake, pages 48–49 A: Vanna's Choice, Pink Poodle B: Vanna's Choice, Berrylicious C: Vanna's Choice, Angel White Banana, pages 50–51 A: Baby's First, Honeybee B: Wool Ease, Cocoa Santa Hat, pages 52–55 A: Jiffy, Chili B: Jiffy, White Elf, pages 56–57 A: Jiffy, Apple B: Jiffy, Chili Top Hat, pages 58–61 A: Fun Yarn, Black B: Vanna's Glamour, Moonstone Pom Pom Hat, pages 62–63 A: Vanna's Choice, Raspberry B: Vanna's Choice, Fern Little Lion, pages 64–67 A: Vanna's Choice, Honey B: Wool Ease, Paprika Feline Fox, pages 68–71 A: Wool Ease, Paprika B: Wool Ease, Fisherman C: Fun Yarn, Black Baby Bear, pages 72–75 A: Heartland Yarn, Big Bend B: Wool Ease, Fisherman Dog, pages 76–79 A: Wool Ease, Fisherman B: Heartland Yarn, Big Bend Shark Attack, pages 80–83 A: Kitchen 
Cotton, Tropic Breeze B: Kitchen Cotton, Red C: Kitchen Cotton, Vanilla Santa Paws, pages 84–87 A: Vanna's Choice, Scarlet B: Vanna's Choice, White Candy Corn, pages 88–91 A: Kitchen Cotton, Citrus B: Kitchen Cotton, Vanilla C: Kitchen Cotton, Pumpkin Unicorn, pages 92–95 A: Vanna's Choice, Dusty Purple B: Vanna's Choice, Aqua Cowboy Hat, pages 96–99 A: Vanna's Choice, Beige B: Kitchen Cotton, Tropic Breeze C: Wool Ease, Paprika INDEX B Baby Bear 72–75 Banana 50–51 binding off Bobble Hat 14–15 Bunny 30–31 C Candy Corn 88–91 casting on cable cast on knitted cast on thumb method Cowboy Hat 96–99 crochet hooks Cupcake 48–49 D Dinosaur 10–13 Dog 76–79 E Elf 56–57 equipment 100–101 Extraterrestrial 40–41 F fastening off Feline Fox 68–71 fiberfill Flower Cap 36–37 I i-cord I Heart You 38–39 K knitting needles knitting techniques 102–103 binding and fastening off casting on i-cord knitting in the round pick up and knit slipknots L Little Lion 64–67 M markers materials 100–101 P Party Hat 44–45 pick up and knit polyester fiberfill Pom Pom Hat 62–63 Pumpkin 20–21 Punk Mohawk 26–29 R Reindeer Antlers 42–43 row counters S Santa Hat 52–55 Santa Paws 84–87 scissors Shark Attack 80–83 slipknots Sports Cap 22–23 Spring Chick 24–25 Strawberry 16–19 T tape measures Top Hat 58–61 Turkey 32–35 U Unicorn 92–95 W Witch 46–47 Y yarns STARRING Many thanks to our feline friends and their families for appearing in this book: **Anna Nicole** (black-and-white domestic shorthair), courtesy of Sophie Bremaud, featured on pages 80–83 **Bluebell** (gray Selkirk Rex), courtesy of Alison Hayward, featured on pages 96–99 **Daisy** (black-and-white domestic shorthair), courtesy of Simon Baker, featured on page 55 **Domino** (black-and-white domestic longhair), courtesy of Wendie Cattell, featured on pages 68–69, 76–79 **Gracie** (Maine Coon), courtesy of Clare Earthy, featured on page 24 **Gus** (gray domestic shorthair), courtesy of Leah Prado, featured on pages 16–19, 92–93 **Holly** (Maine 
Coon), courtesy of Clare Earthy, featured on pages 30, **Huck** (Maine Coon), courtesy of Clare Earthy, featured on pages 31, , **Jasper** (chocolate domestic longhair), courtesy of Katherine Shone, featured on pages 26–29, 42r **Jess** (domestic longhair), courtesy of Wendy Cattell, featured on pages 88–89 **Leeroy** (lilac straight-eared Scottish Fold), courtesy of Joanna Bettles, featured on pages 10–13, 20–21, **Link** (blue-point Birman), courtesy of Rozi Blair, featured on pages 37, 46–47, **Luna** (Snow Bengal cross), courtesy of Alix Taylor, featured on pages 14–15, 22–23 **Lyric** (blue-point Birman), courtesy of Rozi Blair, featured on pages 32–34, 36–37 **Mooch** (British Shorthair with Siamese points), courtesy of Simone Hogan, featured on pages 45, 58–61, , , **Percy** (ginger-and-white domestic shorthair), courtesy of Katherine Shone, featured on pages 38–39, 42l **Poppy** (tabby domestic shorthair), courtesy of Simon Baker, featured on pages 64–67, **Rachel** (black-and-white domestic shorthair), courtesy of Wendie Cattell, featured on page 63 **Vivi** (red-silver Abyssinian), courtesy of Alix Taylor, featured on pages 56, 72–75 And many thanks to the photographers, Liz Coleman and Phil Wilkins, for their patience, ingenuity, and creativity.
The Railway Carriage is a purpose-built Railway Carriage/Brake Van next to the West Somerset Steam Railway, providing a unique and memorable holiday location. Lying close to the railway line itself, the views from the Railway Carriage are stunning: what could be better for children of all ages than the magnificent steam locomotives passing by? You will not tire of seeing the beautiful steam trains pass your holiday home. The stunning Quantock Hills fill the landscape, and buzzards can be seen majestically soaring above. The Railway Carriage is finished in a modern, comfortable country style, with a shaker-style kitchen and 21st-century appliances including a dishwasher, washer dryer, fridge freezer and electric oven with gas hob. The kitchen/dining room has an oak table. The sitting room has fabulously comfy leather sofas and a plasma TV with Freesat and DVD player, and there are doors out to the veranda, garden and hot tub. The Railway Carriage has two bedrooms: one with a five-foot king-size bed and views of the railway; the second has two full-size singles, also with views of the railway. The bathroom has a bath with shower over. The Railway Carriage is in a wonderful location for both short breaks and longer holidays; there is lots to see and do in this wonderful corner of Somerset, from walking and riding to the Steam Railway itself, Dunster and its castle, the caves at Cheddar and Wookey Hole, and endless beaches to visit, from the surfing beaches of North Devon to charming Lyme Regis.
Q: Qt on embedded system loses focus when the mouse moves away

I have a simple Qt application. There is a QPushButton on the QMainWindow; when the button is clicked, a QDialog with a QLineEdit is shown (using exec()). The QLineEdit gets the focus automatically; its cursor blinks. When I run it on my PC/Linux platform it works well. But when I run it on the embedded platform and move the cursor away from the QDialog widget (e.g. move the cursor over the QMainWindow), the dialog loses the focus: the QLineEdit stops blinking. Why does this happen, and how can I fix it?

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent), ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    QWidget* widgetb = new QWidget(this); // this works well
    QWidget* widgeta = new QWidget();     // this will make focus follow the mouse
    QDialog* dialog = new QDialog(this);  // this will make focus follow the mouse
}

A: Not sure why it is behaving this way. However, you may try setting setFocusPolicy(Qt::NoFocus) on the QPushButton if you don't need tab focus for the button.
Q: How to convert a regex into an extglob expression?

I'd like to convert a regular expression into a glob. I was looking at Jakarta ORO, but I can't find a method that suits my needs, i.e. one that compiles a regular expression and returns its glob equivalent. They are both Type-3 grammars, so in theory it should be possible. I am unfortunately limited to JDK 5.

A: extglob can match a number of regex constructs (pattern-list is a list of alternations):

extglob             regex
?                   [^/]
*                   [^/]*
.                   \.
**                  .
?(pattern-list)     (pattern-list)?
*(pattern-list)     (pattern-list)*
+(pattern-list)     (pattern-list)+
@(pattern-list)     (pattern-list)
!(pattern-list)     (?!pattern-list)

There are some things that regex does that cannot be done in extglob, as far as I know, too:

(none)              [^abc]
(none)              \1
(none)              most look-arounds

Assuming all of the constructs in the regex have extglob equivalents, it would be possible to convert it to extglob form. It would be difficult, because regexes are represented by a CFG. And you're using Java, which forces you to use the evil escaped escape \\. Why not just use a different bash utility that supports regexes? Like this.
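The forward direction of this table (extglob to regex) is mechanical for flat patterns, which is one way to see that the two notations line up. Below is a rough Python sketch of that direction (the function name and its scope are my own illustration: it assumes non-nested pattern-lists, and the !() row is only approximated with a lookahead):

```python
import re

# Regex suffix for each extglob prefix operator (see the table above).
SUFFIX = {'?': '?', '*': '*', '+': '+', '@': ''}

def extglob_to_regex(pattern):
    """Translate a flat (non-nested) extglob pattern into a Python regex string."""
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c in '?*+@!' and i + 1 < len(pattern) and pattern[i + 1] == '(':
            j = pattern.index(')', i)        # assumes no nested parentheses
            body = pattern[i + 2:j]
            if c == '!':
                out.append('(?!%s)' % body)  # crude: real !() semantics are richer
            else:
                out.append('(%s)%s' % (body, SUFFIX[c]))
            i = j + 1
        elif c == '?':
            out.append('[^/]')               # ? matches one non-slash character
            i += 1
        elif c == '*':
            out.append('[^/]*')              # * matches a run of non-slash characters
            i += 1
        else:
            out.append(re.escape(c))         # everything else is a literal
            i += 1
    return ''.join(out)

print(extglob_to_regex('*.@(jpg|png)'))  # [^/]*\.(jpg|png)
```

The reverse direction, which the question asks for, would amount to inverting these rewrites, and only works when the regex restricts itself to the constructs in the table.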
Install the Travello App to enter to WIN a trip for TWO friends to either Europe or Asia! WIN a bonus $200 each in spending money from Travello by inviting 5 friends to Travello through the "Invite Friends" function in the app. If all 5 create a profile, you'll also grab the spending money if you win the major prize! Along with the hashtags #busabout & either #takemetoeurope or #takemetoasia – the destination you want to go to. You can post an entry for both, or enter as many times as you like. Don't forget to invite 5 friends: if they create a profile and you win the major prize, you also get $200 AUD spending money! There is no limit on the number of entries, so get creative and enter as many times as you like on the Travello feed! Choose from their 47 destinations (see below) across Europe! Choose as many or as few as you can fit in! Spend 9 days with Busabout across Cambodia, visiting Phnom Penh, Siem Reap, Kampot, Sihanoukville and Battambang. PRIZE VALUE IS $1,918 AUD in total for the 2. This competition commences on 12th Dec 2017 and ends at 5pm AEST on 15th Jan 2018 ("Competition Period"). Entrants may enter as many times as they like on the feed. Late, incomplete, incorrectly submitted, delayed, illegible, corrupted or misdirected responses will be deemed invalid. Step 2. Briefly say why that image inspires you to travel more in 2018. Step 3: Also include the following hashtag requirements: #busabout & either #takemetoeurope or #takemetoasia – the destination you want to go to. Each entry will be reviewed and judged by a panel of the Promoter's marketing and communications personnel ("Panel"). The Panel will judge each response according to how relevant, creative and inspiring it is.

The number of 'Likes' and 'Comments' on the entries will not determine the winners, but may influence the Panel's decision when determining the final 10 entrants that will go into the final voting round on Facebook, where the number of likes, comments and shares will determine our overall winner. All decisions of the Panel are final and no discussions or correspondence will be entered into. There will be a final list of 10 entries chosen by the Travello team that will then go on to be promoted on Facebook. The eventual winner from those 10 will be chosen from a mix of engagement, such as most likes/comments/shares for their entry from the Travello community! The final 10 will be drawn at 5:00pm AEST Monday 15th Jan 2018 at Travello headquarters in Brisbane, QLD. The final 10 will be notified via the Travello app and email by a direct message before 11:59pm AEST Monday 15th Jan 2018, and will also be announced on the Travello feed in the app at that time. From there the final 10 will be posted to our Facebook feed and voting will commence. Voting on the final 10 on Facebook will close at 5pm AEST Monday 22nd Jan 2018. The eventual winner will be contacted by Travello by January 23rd 2018. Subject to the Travello prize terms and conditions, the winner will receive one major prize to be awarded to one person for them and one other. The prize includes the choice of either Two (2) x Hop-on Hop-off Unlimited Passes for Europe – RRP $1999 each – total prize value $3998, with travel to be taken between May 2018 and October 2018, or Two (2) x Cambodia Adventure Passes – RRP value $959 each = $1918 total, with travel to be taken in 2018. The prize does NOT include flights to/from the destinations; flight travel arrangements need to be covered by the eventual winner.
During the series sweep of the Pirates that was completed on Wednesday afternoon, Reds' pitching allowed only one run in all three games combined. Bronson Arroyo began the Pittsburgh affair on Monday pitching seven innings while allowing the only run of the series on five hits. Johnny "Rotten" Cueto had the most handsome outing of his career on Tuesday hurling a one-hit complete-game shutout. On Wednesday, Homer Bailey extended the pitching staff's dominance, leading the Reds to a sweep of the Bucs. Until Wednesday, Bailey had been alienated by the other starting pitchers for the Reds. After an extremely murky start to the season for Reds' pitchers, all except Bailey have been turning the page. Bailey remained the only starter without a win, and apart from one outing (May 1st vs. STL, 6.2IP, 2ER), he had pitched lousy. Riding the wave of determination that has been recently advertised by Cincinnati, Bailey placed the adolescent season behind him and pitched wonderfully against the Pirates on Wednesday. "That's the most well-pitched two days that I've seen in a long time." "He was very strong with the fastball. His problem's been he's had a long pitch count in the fifth. That's normally his problem. One of the ways you get out of that is to call a lot of fastballs and don't move too much around home plate. We got ahead with that and got quick outs." If it weren't for Cueto's dominance, most of Tuesday's headlines would be bearing the name of Chris "Vicious" Heisey. After a dismal 0-5 debut against the New York Mets on May 3rd, Heisey went 3-4 on Tuesday notching his first major league hit and homerun. Heisey led off the game with a single down the third base line and crushed a two run shot off Jeff Karstens in the eighth inning. The Reds beat the Pirates 9-0. Reds' top prospect Juan Francisco was rushed to the hospital prior to Tuesday's game in Louisville. Francisco had emergency appendectomy surgery and is expected to miss four to six weeks. 
After a rough start to the season, Francisco hit .627 with three home runs and fourteen RBI in his last seven games with the Bats. After being swept by Pittsburgh last time they visited, the Reds will attempt redemption on Wednesday. Homer Bailey (0-2, 7.24) will face Zach Duke (2-3, 5.13) at 12:35pm. MIKE "GODZILLA" LEAKE AND JOEY VOTTO PROPEL THE REDS OVER THE CUBS....TAKE THAT CHICAGO! While attending the Reds game on Sunday I couldn't help but notice two things. Beyond the obvious annoyance caused by Cubs fans as they visited what they call "Wrigley South", there was an unusual abundance of "W" t-shirts and Ryne Sandberg jerseys being worn by Ryan Dempster lovers. I understand the concept, but fail to comprehend the delivery of the Cubs' "W" obsession. After experiencing a 14-2 embarrassment at GABP on Saturday, what would convince a Cub fan to don a t-shirt exclaiming victory the very next day? What does it feel like to optimistically sport your team's victory apparel after a humiliating loss, fail to win once again, and lose the series to the Reds? I wouldn't know..... I don't root for the Cubs. GABP is full of throwbacks. Whether it's a classic Joe Morgan jersey or an exquisite Frank Robinson, Reds fans always represent correctly. I may be spoiled by fellow fans' knowledge of their team, but Cubs fans need some education. Out of all the Cubs fans surrounding me in section 428 at GABP on Sunday, the only non-current jersey I saw (besides about fifty Sandbergs) was a Sammy Sosa. Weak. Mike "Godzilla" Leake pitched five perfect innings of baseball against the Cubs on Sunday. Rookie Starlin Castro notched the first Cubs' hit leading off the sixth. Brandon Phillips made a slick play behind second base, but pulled Joey Votto inches off the first base bag with a high throw. Trailing the Reds 2-0 in the seventh inning with two outs, the Cubs managed to collect a double followed by an infield single. Leake then threw a wild pitch and allowed his first run of the game.
Cubs' Tyler Colvin then homered into his own bullpen to put the Cubs ahead 3-2. The lead was short-lived. Joey Votto blasted a three-run homer to the Sun/Moon Deck in the bottom of the seventh and, after two scoreless innings by Reds' relievers, the Reds won 5-3. WARNING: The video below is not safe for work, school, young children, your mother, and probably not you. The opinions expressed are not relevant to this article, instead they oppose mainstream politics, but the title of the second song in the video (Stick that f@#king flag, up your g@%dam a$$, you sonofab#%tch) is extremely fitting. As the ballpark emptied I was expecting mounds of banter. I can only assume that after Friday's 14-7 loss to the Cubs, the visiting Chicagoans were running their mouths profusely. After two straight victories by the Reds to finalize the series, instead of emphasizing the obvious and returning the trash talk, Reds' fans departed quietly. As hundreds of Ryne Sandbergs vanished from GABP to head back north, the defeat and honor could not have been laid on any thicker than it was with the Reds fans' civil silence. Happy Mother's Day. "The last three outings, I've been able to pretty much put the ball where I want it."
Be that as it may, we can only hope that Aaron Harang can keep "putting it where he wants it." The Reds gained a game on the Cardinals on Saturday and are now just 3.5 games behind them in the NL Central. Mike "Godzilla" Leake (2-0, 2.94) will face former Red Ryan Dempster (2-2, 2.95) at 1:10pm on Sunday as the Reds attempt to take the series from the Cubs. Aroldis Chapman will make his sixth start for the Louisville Bats on Sunday against Rochester. Chapman is 2-1 with a 3.12 ERA with the Bats. First pitch will be at 2:05pm. DID BRANDON PHILLIPS COST THE REDS A WIN? Brandon Phillips' lack of hustle on the ball off the wall last night was addressed. Phillips ended up with a double, instead of a triple, likely costing the Reds a run.
"He's been talked to about this," Dusty Baker said. "We've talked to Brandon quite often. Is it hard for Baker to watch? "What's tough as a manager is when you've got an A student who's getting Bs," Baker said. I tried to talk to Phillips. He did not want to talk about it last night. He did give me one quote. However, I can't use it here. In episode four of HBO's "Hard Knocks: Training Camp with the Cincinnati Bengals", Roy Williams and Tank Johnson visited their friend Laynce Nix at Great American Ballpark. The trio became comrades when they spent time together in Texas. Nix was a Ranger from '03-'06, while Johnson and Williams are both former Cowboys. During his visit to GABP, Roy Williams wore the 2009 Civil Rights Game throwback Nix jersey. After Laynce Nix hit a game-winning homerun to beat the New York Mets 3-2 on Monday, Tank Johnson tweeted this. We believe Nix's new pet name is appropriate. Plus, we love emulating an explosion whenever we hear Nix's name being announced or when Tank Johnson flattens an opponent. Feel free to join in. Mike "Godzilla" Leake looked superb on Monday against the Mets throwing exactly 100 pitches. He went six innings, allowed only one earned run on four hits, and struck out four. The Reds' bullpen (Herrera, Lincoln, Rhodes, Cordero, and Masset) combined for five scoreless innings after Leake's departure. The game was tied at two until Laynce Nix crushed a Manny Acosta curveball to right field for a homerun during the bottom of the eleventh inning. The Reds beat the Mets 3-2. Cuban defector Aroldis Chapman had a very rough outing on Monday against AAA Buffalo. After pitching three scoreless innings, and receiving a 7-0 lead, Chapman fell apart during the fourth and allowed five Buffalo runs. Chapman left after five innings after allowing six earned runs on nine hits while striking out eight. Although Chapman struggled, the Bats offense was superior, scoring twenty runs against Buffalo.
Chapman and the Bats both walked away with a win. Reds' outfielder Chris Dickerson had surgery on his right wrist Monday. Dickerson will be expected to miss 4-6 weeks. Bronson Arroyo (1-2, 6.37) will face John Maine (1-1, 7.15) at 7:10pm on Tuesday. Twenty years ago the Cincinnati Reds won their fifth World Series beating Tony La Russa's Oakland Athletics in a four game sweep. WhackReds.com is celebrating the 20th anniversary of the Reds' last World Championship, and the 2010 season, by dissecting each aspect of the respective teams. This is part 3/12 of the Whack Reds 2010 Player Profiles. Chris Sabo and Scott Rolen combined for eight All-Star games, seven gold gloves, two World Series Championships, and both won Rookie of the Year Awards. Scott Rolen began his career in 1996, while Chris Sabo ended his the same year. Since 1996 the Reds have had a multitude of players man the hot corner. Sabo and Rolen may be two of the more established athletes to occupy the position in recent years. Barry Switzer once said, "Some people are born on third base and go through life thinking they hit a triple." As we examine each player respectively, we reveal how most would agree Sabo and Rolen belong on third base. *Update: Aaron Harang pitched well, but the Reds endured yet another loss to the St. Louis Cardinals on Sunday. Harang went six innings allowing three runs on seven hits while fanning six, but Chris Carpenter held the Reds to only two hits over seven innings. Albert Pujols hit a bases-loaded double, off Nick Masset in the seventh, to put the game out of reach. Albert Pujols went 1-4 with three RBI. I.H.T.A.P!
9,168
A SILVER WAITER BIRMINGHAM, 1957, MAKER'S MARK BG GBP 50 - GBP 80 Shaped circular and on three scroll feet, with a Lamerie pattern border, engraved with an inscription, marked on back 6¾ in. (17 cm.) diameter 6 oz. (190 gr.) The inscription reads 'PRESENTED TO H.R.H. THE PRINCESS MARGARET COUNTESS OF SNOWDON BY T.E.M. SALES LTD. MANUFACTURES OF THE COBALT BOMB UNIT ON THE OCCASION OF THE OPENING OF THE UNIT AT NORTH ORMESBY HOSPITAL MIDDLESBROUGH ON 10TH APRIL 1962.' Presented to H.R.H. The Princess Margaret, Countess of Snowdon (1930-2003) by T.E.M. Sales Ltd. on the occasion of the opening of the Cobalt Bomb Unit, North Ormesby Hospital, Middlesbrough on 10th April 1962. The Cobalt Bomb Unit at North Ormesby Hospital, opened by Princess Margaret on 10 April 1962, had been manufactured by T. E. M. Sales Limted. In her speech, Princess Margaret revealed that the unit, which had been designed for the treatment of cancer and other diseases, cost £27,000 and was paid for in part by a legacy of £4,000 left by Mr. J.H. Lincoln, a crane driver, of Green Lane, Middlesex, who had died aged 80 the previous December. Proceeds from this lot will be donated to charity.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,360
The Compassionate Warrior: Abd El-Kader of Algeria Reviewed by Edith Campbell Review Source: Crazy QuiltEdi Book Author: Elsa Marston Marston combines her love of scholarship and of young adult literature as she writes about Emir Abdel Kader. At times, she speaks directly to her audience in a tone that guides them as they learn more, not only about this brilliant and compassionate leader but, also about Algeria. France's relationship with the country was just beginning as Algeria struggled to eventually become a unified nation. Their relationship was complex and interpreted differently through the lens of each of the cultures. Marston provides only what she could document, resulting in a book that is a rare historical document. I think young adults would be more engaged in a story that included more about the Emir's personal and family life, however this books focuses more on his political accomplishments along with the country's development. Readers gain insights not only into a country we here tend to ignore, but also into the complex arena of international relations. Nothing is as simple as it seems! The Compassionate Warrior by Elsa Marston Genres: Africa, Biography and Autobiography, Middle East, Muslim, Religion, War, World History Buy at Powell's Books SYNOPSIS: A brilliant military strategist, superb horseman, statesman, philosopher, Muslim hero . . . Emir Abdel Kader (1808-1883) was an international celebrity in his own time, known for his generosity and kindness even towards enemies. Today he is recognized as one of the noblest leaders of the 19th century and a pioneer in interfaith dialogue. This fascinating biography of the heroic Arab who led the resistance to the French conquest of Algeria, endured betrayal and imprisonment, and in 1860, in Syria, saved thousands of innocent people from mob violence brings a vital message for our times.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,682
A modern take on the maxi skirt in our Luxe Velvet fabric - the March Skirt is a must have this season. Featuring a straight cut silhouette, side splits & a elasticated waistband. Dress up with a silk black cami and your favourite heels to complete your evening look!
{ "redpajama_set_name": "RedPajamaC4" }
2,754
\section{Introduction} In recent years, harmonic generation (HG) has become one of the richest veins of research for atomic, molecular and optical physics. Not only has HG enabled ultrashort light pulse generation \cite{attosecond_pulse_train}, but it has also given rise to a series of very sensitive measurements of molecular \cite{multielectron_molecules}, atomic and even electronic dynamics \cite{multielectron_atoms}. The sensitive nature of HG has made it increasingly important to develop accurate methods of modelling the process. Many studies aimed at describing HG make use of the single active electron (SAE) model \cite{sae_gavrila,sae_schafer}, a significant simplification which allows for the efficient computation of harmonic spectra. SAE methods have been used to probe the relationship between atomic structure and HG. For instance, the Cooper minimum in Argon has been linked with the minimum in the photoionization spectrum caused by a zero dipole moment between the $p$ ground state wavefunction and the $d$ wavefunction of the photoionized electron for a photon energy of around 48 eV \cite{cooper_minimum_hhg_worner,cooper_minimum_hhg_higuet}. The minimum is observed to exist in both the photoionization spectrum and the harmonic spectrum, and is easily described by the SAE method as it does not depend on the interactions between different electrons. However, there have been various studies carried out in molecular systems where multielectron dynamics are found to be of great importance \cite{corkum_review,multielectron_molecules}. Even in atomic systems there are features of photoionization, and hence harmonic spectra, that are the result of electronic interactions which require a multielectron description \cite{brown_prl,multielectron_atoms,many_e-_hhg,multichannel_hhg}. 
Over the last few years, we have developed time-dependent $R$-matrix (TDRM) theory to model the interaction of atoms with short, intense laser pulses, maintaining a full description of the multielectron dynamics involved \cite{tdrm,tdrm_argon,collect_c+1}. TDRM has recently been extended to account for harmonic generation, and this capability was demonstrated in showing how autoionizing resonances can affect the harmonic spectrum of Argon. The appearance of the autoionizing resonances in these spectra is a consequence of multielectron dynamics: the interference between the response of $3p$ and $3s$ electrons to the laser field \cite{brown_prl}. These calculations represent an important shift in thinking on HG: the multielectron nature of the process is reflected in the theoretical approach, and while there are many processes that can be adequately described using SAE methods, there are many for which this, more rigorous, description may be required. The determination of the harmonic spectrum can proceed through the calculation of the time-dependent expectation value of the dipole, dipole velocity or dipole acceleration operator. At present there is discussion about which of these operators offers the best prediction of the harmonic response for a single atom. Recent work \cite{dipole_gauge_madsen,velocity_hhg} has suggested that there is a natural connection with the dipole velocity, while, commonly, the dipole acceleration operator is used \cite{acceleration_hhg_eberly,acceleration_hhg_lappas,acceleration_hhg_burnett}, especially for the description of high order harmonics as better resolution can be obtained for the high energy peaks \cite{dipacc}. Much early work in the field used the dipole length \cite{length_hhg_eberly,length_hhg_bandarage}, and up until this point the description of HG in TDRM has been restricted to using this operator \cite{brown_prl}. 
We note that the use of these various forms has been verified only within the SAE approximation, and hence we seek herein to verify the independence of HG with respect to the use of the dipole, its velocity or acceleration in a multielectron system. We also assess which form offers the most numerically stable method, particularly when used with a limited multielectron basis set. Studies assessing the propagation of the wavefunction have demonstrated that to obtain the most accurate results for a limited basis in the TDRM approach the laser field is best described in the dipole length gauge \cite{tdrm_dipole_gauge}. On the other hand, time-propagation in SAE calculations is commonly performed by describing the laser field in the velocity gauge. This difference indicates that we cannot necessarily rely upon knowledge gained from SAE calculations for the assessment of TDRM calculations. We have extended the TDRM method to calculate the harmonic spectrum from the dipole velocity and acceleration operators simultaneously with the dipole operator spectrum. In this paper we cover the major theoretical aspects of this extension, and apply the TDRM codes to He in a 390nm laser field. Helium is chosen for three reasons. Firstly, the simple structure allows for the systematic varying of the multielectron basis functions, the impact of which has been assessed for TDRM in terms of photoionization \cite{tdrm_dipole_gauge}, but not for HG. Secondly, the absence of a closed core simplifies the calculation of dipole acceleration matrix elements, and hence we can compare spectra in all three forms. Finally, using He allows us to benchmark our approach against a proven alternative method: we compare our results with those obtained using the HELIUM code \cite{HELIUM}. \section{Theory} \subsection{TDRM Theory} The TDRM approach is an {\it ab initio} nonperturbative theory for describing ultrafast atomic processes. 
Details of the method can be found in \cite{tdrm,tdrm_dipole_gauge}, so we only give a short overview here. The time-dependent Schr\"{o}dinger equation for an atom containing ($N+1$) electrons is \begin{equation} \label{tdse} i\frac{\partial}{\partial t} \Psi \left( \mathbf{X}_{N+1},t\right)= H\left(t\right)\Psi\left(\mathbf{X}_{N+1},t\right). \end{equation} The Hamiltonian, $H$, contains both the non-relativistic Hamiltonian of the $N+1$-electron atom or ion in the absence of the laser field and the laser interaction term. The laser field is described using the dipole approximation in the length form, and is assumed to be linearly polarized and spatially homogeneous. This form provides the most reliable ionization yields when only a limited amount of atomic structure is included \cite{tdrm_dipole_gauge}. We propagate a solution of the time-dependent Schr\"{o}dinger equation $\Psi$ on a discrete time scale with time step $\Delta t$ in a Crank-Nicolson scheme. We can write the wavefunction at a time $t_{q+1}$ in terms of the wavefunction at the previous time step $t_q$: \begin{equation} \label{recurs1} (H_m-E)\Psi _{t_{q+1}} = -(H_m +E)\Psi _{t_q}. \end{equation} Here the imaginary energy $E$ is defined as $2i/\Delta t$ and $H_m$ is the Hamiltonian at the midpoint of the time interval, $t_{q+1/2}$. In $R$-matrix theory, configuration space is partitioned into an inner and outer region. In the inner region, all electrons are within a distance $a_{\mathrm{in}}$ of the nucleus, and full account is taken of all interactions between all electrons. In the outer region, an ionized electron moves beyond the boundary $a_{\mathrm{in}}$, and thus exchange interactions between this electron and the electrons remaining close to the nucleus can be neglected. The ionized electron then moves in only the long-range multipole potential of the residual $N$-electron core and the laser field. Following \cite{tdrm_argon} we can evaluate Eq. 
(\ref{recurs1}) at the boundary $a_{\mathrm{in}}$ as a matrix equation \begin{equation} \label{frtmat} \mathbf{F}\left(a_{\mathrm{in}}\right)= \mathbf{R}(a_{\mathrm{in}})\bar{\mathbf{F}}\left(a_{\mathrm{in}}\right)+\mathbf{T}\left(a_{\mathrm{in}}\right), \end{equation} in which the wavefunction $\mathbf{F}$, at the boundary is described in terms of its derivative, $\bar{\mathbf F}$, plus an inhomogeneous vector, ${\mathbf T}$, arising from the right hand side of Eq. (\ref{recurs1}). The $R$-matrix, $\mathbf{R}$, connects the inner and outer region wavefunction at the boundary $a_{\mathrm{in}}$. Given an inner region wavefunction, $\mathbf{R}$ and $\mathbf{T}$ are evaluated at the boundary $a_{\mathrm{in}}$. Subsequently, they are propagated outwards in space up to a boundary, $a_{\mathrm{out}}$ where it can be assumed that the wavefunction $\mathbf{F}$ has vanished. The wavefunction vector $\mathbf{F}$ is set to zero and then propagated inwards to the inner region boundary. Once $\mathbf{F}$ has been determined at each boundary point, the full wavefunction can be extracted from the $R$-matrix equations. We can then iterate the procedure using Eq. (\ref{recurs1}). \subsection{Harmonic generation} The electric field produced by an accelerating charge is given, using the non-relativistic Lienard-Wiechert potentials in the far field limit, by \begin{equation} \label{lienard} E(t)=k\left\langle\psi(t)\left| \frac{[p_z,H]}{i\hbar}\right|\psi(t)\right\rangle + ke E_{\mathrm{laser}}(t), \end{equation} where $e$ is the electronic charge, $z$ is the laser polarization axis, $k$ is a proportionality constant, $p_z$ is the canonical momentum and $E_{\mathrm{laser}}$ is the electric field of the laser pulse. 
We can write \begin{equation} \label{ehren_theorem} \left \langle \psi(t) \left| \frac{[p_z,H]}{i\hbar} \right| \psi(t) \right \rangle = \frac{d}{dt} \left \langle \psi(t) | p_z | \psi(t) \right \rangle, \end{equation} and it follows that \begin{equation} E(t) \propto \mathbf{\ddot{d}}(t) = \frac{d^2}{dt^2}\langle\psi(t)|\mathbf{z}|\psi(t)\rangle. \end{equation} The power spectrum of the emitted radiation is then given, up to a proportionality constant, by $|\mathbf{\ddot{d}}(\omega)|^2$- the Fourier transform of $\mathbf{\ddot{d}}(t)$ squared. Although the radiation produced is proportional to the dipole acceleration, it is common practice in HG calculations to calculate $\mathbf{d}(\omega)$, i.e. to use the expectation value of the dipole length instead. This is because a simple relationship exists between $\mathbf{d}$ and $\mathbf{\ddot{d}}$ which can be extended to include the dipole velocity form: \begin{equation} \label{rescaling} \omega ^4|\mathbf{d}(\omega)|^2 = \omega ^2 |\dot{\mathbf{d}}(\omega)|^2 = |\ddot{\mathbf{d}}(\omega)|^2 . \end{equation} Therefore the harmonic response of a single atom can be expressed in terms of the expectation value of the dipole operator \begin{equation} \mathbf{d}\left(t\right)=\langle \Psi \left(t\right) |-e\mathbf{z}|\Psi\left(t\right)\rangle, \label{inducedip_len} \end{equation} or of its velocity \begin{equation} \mathbf{\dot{d}}\left(t\right)=\frac{d}{dt}\langle \Psi \left(t\right) |-e \mathbf{z}|\Psi\left(t\right)\rangle, \label{inducedip_vel} \end{equation} or acceleration \begin{equation} \mathbf{\ddot{d}}\left(t\right)=\frac{d^2}{dt^2}\langle \Psi \left(t\right) |-e \mathbf{z}|\Psi\left(t\right)\rangle, \label{inducedip_acc} \end{equation} where $\mathbf{z}$ is the total position operator along the laser polarization axis. As discussed in \cite{tdrm_dipole_gauge} the TDRM code can use either the length or velocity gauge for the propagation of the wavefunction. 
While, in keeping with the findings of \cite{tdrm_dipole_gauge}, we use the length gauge, we can still utilize the dipole velocity matrix elements produced by the $R$-matrix suite of codes which `seed' the TDRM code. Thus we can store both $\mathbf{z}$ and $d\mathbf{z}/dt$ and use Eqs. (\ref{inducedip_len}) and (\ref{inducedip_vel}) directly for the determination of the time-varying expectation values of the dipole operator and the dipole velocity. However, in order to calculate the expectation value of the dipole acceleration we cannot use Eq. (\ref{inducedip_acc}) directly. Instead, using Ehrenfest's theorem, it is possible to write the dipole acceleration as \begin{equation} \label{ehren} \mathbf{\ddot{d}}\left ( t \right) = \langle \frac{\partial H} {\partial r} \rangle = \langle \frac{eZ\cos\theta}{\mathbf{r}\cdot\mathbf{r}} \rangle -eN_{elec}\langle \Psi | E(t) | \Psi \rangle, \end{equation} where $Z$ is the nuclear charge, $\mathbf{r}$ the total position operator, $\theta$ the angle between $\hat{\mathbf{r}}$ and $\hat{\mathbf{z}}$ and $N_{elec}$ the number of electrons. The second term in Eq. (\ref{ehren}) is often seen without this factor of $N_{elec}$, as in the SAE approximation it is just 1. We can make a small change to the way the radial integrals are calculated in the $R$-matrix suite which allows the calculation of $\langle 1/\mathbf{r}\cdot\mathbf{r} \rangle$ instead of $\langle \mathbf{r} \rangle$. Then we can use Eq. (\ref{ehren}) to calculate the dipole acceleration. Thus, we can now simultaneously calculate harmonic spectra using the dipole length, velocity and acceleration operators. The propagation of the wavefunction is still carried out in the length gauge. The use of the acceleration form will, however, be restricted to He-like targets. The use of Ehrenfest's theorem (in Eqs. (\ref{ehren_theorem}) and (\ref{ehren})) requires that the wavefunction be exact, or close to it.
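The role Ehrenfest's theorem plays here can be illustrated in a toy matrix-mechanics model: for an exactly propagated state, $d\langle A\rangle/dt = \langle[A,H]\rangle/i\hbar$ holds for any observable $A$. The sketch below (random Hermitian matrices, $\hbar=1$; purely illustrative, no connection to the He Hamiltonian) checks this numerically:

```python
import numpy as np

# Toy check of Ehrenfest's theorem, d<A>/dt = <[A,H]>/(i hbar), in a random
# finite-dimensional model with hbar = 1. Purely illustrative.
rng = np.random.default_rng(0)
n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2                   # Hermitian "Hamiltonian"
A = np.diag(np.arange(n, dtype=float))     # some Hermitian observable

E, V = np.linalg.eigh(H)
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

def psi(t):
    # |psi(t)> = exp(-i H t)|psi0>, built from the eigendecomposition of H
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

def expval(op, p):
    return (p.conj() @ op @ p).real

t, h = 0.7, 1e-5
lhs = (expval(A, psi(t + h)) - expval(A, psi(t - h))) / (2 * h)  # d<A>/dt
comm = A @ H - H @ A
rhs = ((psi(t).conj() @ comm @ psi(t)) / 1j).real                # <[A,H]>/i
print(abs(lhs - rhs))      # tiny: finite-difference error only
```

When the state is not an exact eigensolution of the propagated Hamiltonian (the fixed-core situation described below), the two sides of this identity need not agree, which is precisely the limitation on the acceleration form.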
For general multielectron systems we normally impose a fixed core where (at least) the first two electrons are restricted to a single orbital. Imposing this restriction means that the electronic repulsion is not fully described. More precisely, if the orbital of electron $e_1$ is fixed, and the orbital of electron $e_2$ is not, then the action on $e_2$ will not necessarily equal minus the reaction on $e_1$. Thus, the commutator \begin{equation} \left[(\mathbf{p_1}+\mathbf{p_2}),\frac{1}{|\mathbf{r_1}-\mathbf{r_2}|}\right] \end{equation} may not be guaranteed to vanish. On the other hand \begin{equation} \left[(\mathbf{r_1}+\mathbf{r_2}),\frac{1}{|\mathbf{r_1}-\mathbf{r_2}|}\right] \end{equation} will still vanish. Thus, while the expectation value $\langle[\mathbf{r}\cos\theta,H]\rangle$ can be calculated accurately, $\langle\left[\left[\mathbf{r}\cos\theta,H\right],H\right]\rangle$ cannot, rendering Ehrenfest's theorem untenable. Thus, the comparisons we employ for the simple He test case which follows can be extended to general multielectron systems only for the dipole length and velocity forms. \subsection{Calculation parameters} \label{sec:calcparam} The one-electron basis used for describing the residual He$^+$ in the inner region consists of orbitals expressed in terms of $B$ splines. The residual He$^+$ ion is represented through a series of models of increasing complexity \cite{tdrm_dipole_gauge}. The basic model consists of only the He\textsuperscript{+} $1s$ state, which we call 1T (1 {\bf T}rue state). We also use two models comprising six states. The first is built using true orbitals $1s$, $2s$, $2p$, $3s$, $3p$, $3d$ (6T) and the other using five pseudo-orbitals and the true $1s$ orbital: $1s, \overline{2}s, \overline{2}p, \overline{3}s, \overline{3}p, \overline{3}d$, called 6P (6 states with {\bf P}seudostates).
Pseudostate models have been found to be more accurate in the time propagation of the He wavefunction responding to short light fields, especially in the velocity-gauge description of the light field. Pseudostate models may thus provide a better basis for the description of the ionization and HG processes, provided that these processes are not affected by artificial resonances introduced by the pseudostates. The inner region radius is set at 20 a.u. which is sufficiently large to contain the residual ion for each model we use. The outer region boundary is set at 600 a.u. to prevent any reflections of the wavefunction for the duration of the short laser pulse employed. The set of continuum orbitals contains 80 $B$ splines for each angular momentum, $\ell$, of the continuum electron up to a maximum value, $L_{max}=19$. Convergence testing was carried out retaining angular momenta up to a value of $L_{max}=27$ and, while changes in the harmonic spectra are observed, they occur at energies beyond the cutoff, outside the region of interest here. The outer region is divided into sectors of 2 a.u. containing 35 ninth-order $B$ splines per channel. The time step used in the wavefunction propagation is 0.1 a.u. We use 390 nm laser pulses, consisting of a 3-cycle $\sin^2$ ramp-on, two cycles at peak intensity, and a 3-cycle $\sin^2$ ramp-off (3-2-3). We also calculate spectra for different pulse shapes and find that while the spectra change, the comparisons between them are generally described by the results presented below for the 3-2-3 pulse. There is one important exception to this general observation, which is discussed in Sec. \ref{sub:Comparison of various pulse lengths}. \section{Results} \subsection{Comparison of various target states} \label{sub:Comparison of various target states} The harmonic response, as calculated from the expectation value of the dipole acceleration, of a He target in the 1T, 6T and 6P configurations is shown in Fig.
\ref{fig:all-targ-acc-comparison}. The spectra display the expected form: a pronounced first harmonic peak followed by a plateau of peaks at odd multiples of the fundamental photon energy, which decay exponentially beyond a cutoff. The cutoff of the plateau appears at a photon energy of approximately 45 eV. The standard formula for the cutoff energy, $I_p+3.2U_p$ \cite{cutoff_law}, where $I_p$ is the ionization potential and $U_p$ the ponderomotive energy, is not necessarily appropriate in this wavelength and intensity regime. Nevertheless, for the current parameters, it predicts a cutoff energy of 42 eV. The observed cutoff is therefore not inconsistent with the cutoff formula. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{all-targ-acc-comp.eps} \caption{(Color online) The harmonic spectrum (up to a constant of proportionality) as calculated from the dipole acceleration for He in a 390 nm, $4\times 10^{14}$ Wcm$^{-2}$, 3-2-3 laser field, using as a model residual ion description, the $1s$ state (black, dotted line), the $1s$, $2s$, $2p$, $3s$, $3p$, $3d$ states (red, dashed line), and the $1s,\overline{2}s,\overline{2}p, \overline{3}s,\overline{3}p,\overline{3}d$ pseudostates (blue, solid line). The single state model provides a reasonable approximation to the more detailed descriptions beyond the first harmonic; at the first harmonic itself there is a large discrepancy between the spectra. \label{fig:all-targ-acc-comparison}} \end{figure} We can compare the spectra to assess how the description of atomic structure affects the calculated HG spectra. The 1T calculations are in better agreement with the more detailed calculations at higher energies, especially in the cutoff region between the 13th and 19th harmonics where agreement is within 30\%. In the low energy region, and especially in the first harmonic, the spectra differ significantly: the first harmonic response in the 1T model is 60 times greater than that in the 6P model.
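The cutoff estimate quoted earlier can be reproduced with a back-of-the-envelope calculation of the ponderomotive energy from the stated laser parameters (SI constants; the He ionization potential is taken as 24.59 eV):

```python
import math

# Back-of-the-envelope check of the cutoff estimate I_p + 3.2 U_p for He in
# a 390 nm, 4e14 W/cm^2 field. SI constants; I_p of He taken as 24.59 eV.
e = 1.602176634e-19          # elementary charge, C
m_e = 9.1093837015e-31       # electron mass, kg
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
c = 2.99792458e8             # speed of light, m/s

I = 4e14 * 1e4               # peak intensity, W/m^2
lam = 390e-9                 # wavelength, m
omega = 2 * math.pi * c / lam          # angular frequency, rad/s
E0 = math.sqrt(2 * I / (eps0 * c))     # peak electric field, V/m

U_p = e**2 * E0**2 / (4 * m_e * omega**2) / e   # ponderomotive energy, eV
I_p = 24.59                                     # He ionization potential, eV
cutoff = I_p + 3.2 * U_p
print(round(U_p, 2), round(cutoff, 1))  # U_p near 5.7 eV; cutoff 42-43 eV
```

The result lands in the low 40s of eV, consistent with the $\sim$42 eV quoted in the text and with the observed cutoff near 45 eV.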
The inconsistencies in the lower harmonics between the 1T and the 6T and 6P models imply that the low energy harmonics in the dipole acceleration calculation are highly sensitive to changes in the atomic structure, and that in the higher energy cutoff region the details of the atomic structure are not as important. There is a factor 3 difference in the first harmonic peak between the 6T and 6P models. As pseudostates may better represent the changes to the ground state than true states, this difference implies that the first harmonic is especially sensitive to the description of the ground state. The 6P spectrum shows a double peak structure at the 9th harmonic which the two true state models do not. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{all-targ-len-comp.eps} \caption{(Color online) The length form harmonic spectrum of He in a 390 nm, $4\times 10^{14}$ Wcm$^{-2}$, 3-2-3 laser field, using as a residual ion description, the 1T (black, dotted line), 6T (red, dashed line), and 6P (blue, solid line) models (See Fig. \ref{fig:all-targ-acc-comparison} for details). Results from the 1T model provide a reasonable approximation to results from the more detailed descriptions especially in the cutoff region. There is also good agreement for the first harmonic when compared with the large discrepancy in Fig. \ref{fig:all-targ-acc-comparison}. \label{fig:all-targ-len-comparison}} \end{figure} As the first term in the dipole acceleration is proportional to $1/r^2$, it is most sensitive to changes in the wavefunction at small $r$. If the description of the atomic structure close to the nucleus is not exact, this can lead to significant inaccuracies in the low energy region of the spectra calculated from the dipole acceleration, especially the first harmonic peak. Figure \ref{fig:all-targ-len-comparison} shows the same harmonic spectra as Fig. \ref{fig:all-targ-acc-comparison} but in this case the spectra are calculated from the dipole length operator.
In this form the harmonics are far less sensitive to the details of the atomic structure close to the nucleus, as can be seen by the excellent agreement between the three spectra at the first and third harmonics (within 20\%). In fact, the agreement between the spectra from the different target states is generally better in the length and velocity forms than in the acceleration form: except for the 9th and 11th harmonics, the agreement between the 6T and 6P dipole-length spectra is within 20\%. This further highlights that the dipole acceleration is especially sensitive to the description of atomic structure. The main difference between the three spectra appears again in the 9th harmonic. This difference is very similar to the difference seen in Fig. {\ref{fig:all-targ-acc-comparison}} in which the dipole acceleration was used to determine the harmonic spectrum. This indicates that this difference originates from the different bases used, rather than the choice of operator for the determination of the harmonic spectrum. This topic will be discussed further in Sec. \ref{sub:Comparison with HELIUM}. \subsection{Comparison of dipole length, dipole velocity and dipole acceleration forms} \label{sub:Comparison of various gauges} As has been addressed in the previous section, TDRM theory can calculate harmonic spectra from the dipole length, dipole velocity or dipole acceleration operators. We have already seen how the dipole acceleration is sensitive to the description of the atomic structure, particularly when it comes to the low energy region of the spectrum. Figure \ref{fig:6P_gauge_comparison} shows the harmonic spectrum of 6P He in a 390 nm, $4\times 10^{14}$ Wcm$^{-2}$ laser field as calculated using the dipole length, velocity and acceleration forms of the dipole matrix elements.
The pseudostates model gives a more accurate description of the changes in the ground state due to the laser pulse, and hence should give a more accurate picture of the harmonic spectrum than the true state model. In terms of the agreement between the spectra this holds true, as the 6P model gives a consistent agreement between the three different approaches to calculate the harmonic spectrum where the 6T model breaks down at low harmonics. For the 6P model the three spectra agree within 20\% at every harmonic peak up to the 19th, well into the cutoff region. In the 6T model there is agreement within 20\% between the dipole length and velocity spectra, but the dipole acceleration spectrum differs by 60\%, 30\% and 40\% in the first, third and fifth harmonics respectively. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{6P_4nf-comp.eps} \caption{(Color online) The harmonic spectrum of a pseudostate (6P) He target in a 390 nm, $4\times 10^{14}$ Wcm$^{-2}$ 3-2-3 laser field, as calculated from the dipole length (black, dotted line), velocity (red, dashed line) and acceleration (blue, solid line). Agreement to within 20\% is found between all three spectra up to the 19th harmonic peak (60 eV). The spectra diverge beyond this. \label{fig:6P_gauge_comparison}} \end{figure} Regardless of which model is used, the three spectra diverge beyond the 19th harmonic (Fig. \ref{fig:6T_gauge_comparison}), with the dipole length spectrum becoming noisy and the dipole acceleration spectrum displaying a few more weak harmonics decaying into noise. The dipole velocity spectrum on the other hand displays a second plateau of peaks not seen in the other spectra. These peaks are not predicted classically, and their absence from the other spectra implies that they are spurious. This implies that the length and velocity forms are reliable, but only in an energy range up to and including the cutoff region.
This is especially important as for general multielectron targets the acceleration form will be prohibitively sensitive to the limitations in the description of atomic structure (See Sec. \ref{sec:calcparam}). However, by using both the dipole length and dipole velocity operators it is possible to obtain reliable harmonic spectra for multielectron systems using the TDRM approach. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{6T_4nf-comp.eps} \caption{(Color online) The high energy harmonic spectrum of a true state (6T) He target in a 390 nm, $4\times 10^{14}$ Wcm$^{-2}$ 3-2-3 laser field, as calculated from the dipole length (black, dotted line), velocity (red, dashed line) and acceleration (blue, solid line). The leftmost harmonic shown is the 19th, above which the spectra diverge. \label{fig:6T_gauge_comparison}} \end{figure} \subsection{Comparison with HELIUM} \label{sub:Comparison with HELIUM} Having demonstrated that the TDRM method is self-consistent within a certain energy range, we now seek to benchmark our results against those from a proven alternative method. The HELIUM method \cite{HELIUM} uses direct numerical integration of the full-dimensional TDSE to describe a two-electron system. By solving the TDSE directly, no significant approximations are made, and thus all important multielectron effects are included. This makes HELIUM an excellent code against which to benchmark TDRM. Figure \ref{fig:6TP-hel-comparison} shows the length form harmonic spectra produced by the 6T and 6P models of He alongside that produced by the HELIUM code, for a target in a 390 nm, $4\times 10^{14} $ Wcm$^{-2}$, 3-2-3 laser field. At the harmonic peaks the agreement is very good. The 6P and HELIUM spectra agree to within 20\% up to the 21st harmonic, while the 6T spectrum is within 30\% except at the 9th and 11th harmonics. The inset in Fig. \ref{fig:6TP-hel-comparison} shows detail in the 9th harmonic from the three calculations, and the TDRM 1T model.
The 6P model and HELIUM spectra show a structured peak which the 1T and 6T do not. The ponderomotive energy in the laser field shifts the He ground state down by around 5.7 eV, shifting the $1s3p$ bound state into the vicinity of the 9th harmonic peak. The presence of a bound state has been shown to give rise to such structure in the harmonic peaks \cite{brown_prl}. It is useful to notice that the 6T model may not describe the changes to the He$^+$ ground state in the laser field as accurately. Thus, the shift of the $1s3p$ state peak may differ and consequently we do not observe the double peak structure in the 9th harmonic for the 6T spectrum. The 1T model does not account for any changes to the He$^+$ ground state, and differences between the 1T model and the other models are thus even larger. Expansion of the basis set in the TDRM approach thus leads to a harmonic spectrum which gets closer to the benchmark harmonic spectrum obtained using the HELIUM code. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{6TP-hel-comp.eps} \caption{(Color online) The harmonic spectra as calculated from the dipole length produced from the 6T (black, dotted line) and 6P (red, dashed line) He models for TDRM, and from the HELIUM code (blue, solid line). The 6P and 6T spectra agree with the HELIUM spectrum to within 20\% and 30\% respectively up to the 21st harmonic peak (except at the 9th and 11th harmonics for the 6T spectrum). Inset: Both the 6P and HELIUM models have a structured peak at the 9th harmonic. The 1T (green circles) and 6T spectra do not. \label{fig:6TP-hel-comparison}} \end{figure} The agreement for the TDRM velocity form spectrum is even better: within 15\% when comparing the velocity form, 6P, TDRM spectrum with the length form HELIUM spectrum. The excellent agreement between the spectra serves to give weight to the results obtained from both methods. 
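The placement of the $1s3p$ state near the 9th harmonic can be checked with a rough estimate, taking the quoted $\sim$5.7 eV shift together with a field-free $1s3p$ excitation energy of about 23.09 eV (both numbers are assumptions for this sketch, not outputs of the TDRM calculation):

```python
# Rough consistency check (illustrative): the ponderomotive shift places the
# He 1s3p level near the 9th harmonic of a 390 nm field.
U_p = 5.7                      # shift quoted in the text, eV
E_1s3p = 23.09                 # assumed field-free 1s3p excitation energy, eV
photon = 1239.841984 / 390.0   # 390 nm photon energy, eV
ninth = 9 * photon             # 9th harmonic energy, eV
shifted = E_1s3p + U_p         # 1s3p relative to the shifted ground state, eV
print(round(ninth, 2), round(shifted, 2))  # both fall near 28.6-28.8 eV
```

The two energies agree to within a few tenths of an eV, consistent with the resonant structure appearing in the 9th harmonic peak.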
The sensitivity of the harmonic spectra to the description of atomic structure makes it even more remarkable that the two methods overlap, especially in the low energy region. Figure \ref{fig:td-hel-sae} shows the low energy region of the velocity form spectrum obtained from the 6P model TDRM code, alongside the length form spectra from the HELIUM code and from an SAE simplification derived from the HELIUM code \cite{helsae}. The three spectra agree well in the first harmonic, whereas the acceleration form spectra (not shown) vary widely. This confirms that the dipole velocity and length are significantly less sensitive to the description of atomic structure close to the nucleus, and are probably more reliable in the low energy, especially first harmonic, region. The grid spacing in HELIUM and the limited basis set in TDRM impose constraints on the calculations very close to the nucleus. These constraints make it likely that the acceleration form spectra are less reliable in the first harmonic, which could give rise to the discrepancy between the two methods. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{td-hel-sae.eps} \caption{(Color online) The low energy region of the dipole velocity harmonic spectrum produced from the 6P He model in TDRM (black, dotted line), and the length form harmonic spectra from the HELIUM code (red, dashed line), and its SAE derivative (blue, solid line). The TDRM and HELIUM spectra are indistinguishable, but the SAE spectrum overestimates the harmonic spectrum at low energies. \label{fig:td-hel-sae}} \end{figure} In the third and fifth harmonics the SAE model markedly overestimates the harmonic spectra obtained from both the TDRM and HELIUM models, which are in excellent agreement with each other. This implies that the SAE model is not sufficient to describe low energy harmonic spectra, and that the lowest energy harmonics are significantly more sensitive to atomic structure.
We note that in the plateau and cutoff regions the SAE spectrum is in good agreement with the full HELIUM spectrum, lending justification to the use of the SAE approximation for investigating the generation of higher harmonics in He. \subsection{Comparison of various pulse lengths} \label{sub:Comparison of various pulse lengths} To probe the effect of the laser pulse profile on the harmonic spectra, we ran calculations for various pulses longer than the 3-2-3 profile (3 cycles $\sin^2$ ramp-on, 2 cycles peak intensity, 3 cycles $\sin^2$ ramp-off), namely 5-2-5, 3-4-3 and 5-4-5 pulses. Broadly speaking, while the spectra themselves change (with narrowing peaks for the longer pulses), the comparisons between the 1T, 6T and 6P models, with the HELIUM results or between the dipole length, velocity and acceleration forms do not change significantly. Figure \ref{fig:pulse-comparison} shows the spectra produced by a 3-2-3 and a 5-4-5 laser pulse. The peak values are not changed significantly by the different pulse profile, but the longer 5-4-5 pulse gives rise to narrower peaks and greater contrast. This gives a greater energy resolution between different peaks. Thus, the broad 9th harmonic peak in the 3-2-3 spectrum in Fig. \ref{fig:pulse-comparison} is further broadened by the presence of the nearby $1s3p$ bound state, whereas the narrower peak arising from the 5-4-5 pulse is isolated from any nearby atomic structure. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{pl2v4_comp.eps} \caption{(Color online) The dipole velocity harmonic spectra produced from the 6P He model for a 3-2-3 (blue, solid line) (3-cycle $\sin^2$ ramp-on, 2 cycles peak intensity, 3-cycle $\sin^2$ ramp-off) and a 5-4-5 pulse (red, dashed line). Both pulses have a peak intensity of $4\times 10^{14}$ Wcm$^{-2}$ and a wavelength of 390 nm. The longer 5-4-5 pulse gives rise to narrower harmonic peaks, but the peak values are still within 20\% of the 3-2-3 spectrum.
Inset: There is a significant difference between the two spectra in the 9th harmonic. \label{fig:pulse-comparison} } \end{figure} \section{Conclusions} \label{sec:Conclusions} We have extended the calculation of harmonic spectra in TDRM theory by determining these spectra through the time-varying expectation value of the dipole length, dipole velocity and dipole acceleration operators, and applied the adapted codes to He irradiated by a 390 nm, $4\times10^{14}$ Wcm$^{-2}$ laser field. We have compared the spectra calculated using each form, assessed the effect of changing the multielectron basis set used to describe the residual ion, and benchmarked our results against those obtained from the HELIUM method. We have shown that for harmonic photon energies up to and including the cutoff region the TDRM method provides results which are both self-consistent (between dipole length, velocity and acceleration forms) and consistent with an independent approach. The favorable comparison between the TDRM and HELIUM methods in the velocity and length form spectra implies that the present approach can provide excellent results. Care must be taken in the lower harmonics, especially if using the dipole acceleration operator, where the sensitivity to inaccuracies in the description of the atomic structure can seriously affect the reliability of the spectra obtained. For general multielectron systems we can perform the calculations using both the dipole length and velocity, and compare the two spectra in order to establish bounds on the reliability of the results. Both methods give excellent agreement for He well into the cutoff region. The divergence of the spectra beyond this occurs at energies which are usually outside the region of interest.
We have also probed the advantages of the various residual ion descriptions, which can be used within the TDRM method, finding that smaller basis sets, such as the 1T single target state, provide an efficient way of testing the code, and a reasonable approximation to the harmonic spectrum, but larger basis sets give more detailed spectra, as would be expected from their better description of the atomic structure involved. We also find that the inclusion of pseudostates in the He$^+$ basis seems to lead to more accurate harmonic spectra. This is particularly noticeable when compared with the highly accurate HELIUM method. This is largely due to the more precise way in which the pseudostate model describes the variations in the He$^+$ ground state in response to the laser field. However, the use of pseudostates for general multielectron atoms can be problematic. By introducing non-physical thresholds into the system, pseudostates can give rise to pseudo-resonances in the harmonic spectrum. These inadvertent features do not appear in the He case presented here, as the energies at which they become important are outside the harmonic region of interest. For general multielectron atoms, this is not necessarily the case. This does not mean that accurate calculations are not possible for larger atoms. First, pseudostates can be used as long as care is taken: with knowledge of the position of pseudo-thresholds, unphysical resonances can be identified and disregarded. Second, although the 6T He model is not as close to the HELIUM spectrum as the 6P, it is still within 30\% at every harmonic peak except the 9th and 11th (in the dipole length spectra). Physical orbitals can thus also be used to improve accuracy of harmonic spectra. The number of physical orbitals required may be larger than if pseudo-orbitals are used, but this is not a fundamental problem: it affects only the scale of the calculations.
With careful analysis of, and comparison between, pure physical orbital and pseudostate models we can reliably assess the accuracy of harmonic spectra for general multielectron systems using TDRM theory. Furthermore, even models which use only physical orbitals already offer significant gains over SAE models. A simple example of this is HG in Ar$^+$. Harmonics produced by Ar$^+$ ions have been suggested to be the source of the highest harmonics observed from a neutral Ar target \cite{argon+_gibson,argon+_zepf}. The presence of three low-lying $3s^23p^4$ Ar$^{2+}$ thresholds can have a significant effect on the harmonic spectrum, and hence interactions between channels associated with these thresholds must be accounted for. These interactions are neglected in an SAE calculation, but would be accounted for in a TDRM calculation involving purely physical orbitals. We find that the reliability of the results is not significantly affected by the particular laser pulse profile used. We compared results for four different laser pulse profiles, finding that while the harmonic spectra differed between cases, the changes were consistent between the various target state models, and with the HELIUM code results. In cases where atomic structure gives rise to structure in the harmonic spectrum the laser pulse length may affect the way in which this is observed. The greater energy resolution afforded by longer pulses can isolate the separate effects of atomic structure. The results presented are also consistent with those obtained at various peak intensities. We calculated harmonic spectra for intensities between $1\times 10^{14}$ Wcm$ ^{-2} $ and $4\times 10^{14}$ Wcm$ ^{-2} $, finding that the results are largely consistent. At lower intensities the plateau region is severely truncated and so it is difficult to compare between the various spectra, but the agreement is still evident in the cutoff region.
The TDRM method has been rigorously tested up to intensities of 4 $\times 10 ^{14}$ Wcm$^{-2}$ and at wavelengths up to 390 nm, but requires a significant amount of development to extend beyond these limits. It will be interesting to compare these findings with those obtained using the new RMT ($R$-matrix with time) codes \cite{RMT,RMT2}, which may be better suited to address higher intensities and longer wavelengths. While the TDRM method has been proven to provide interesting insight into the multielectron nature of HG, it has thus far only been implemented for general multielectron atoms using the dipole length operator \cite{brown_prl}. The next stage will be to apply TDRM at high intensities to systems other than He. While the dipole acceleration is too sensitive to the description of atomic structure to accurately describe such atoms, the length and velocity forms are stable enough to provide good results for general targets. \section{Acknowledgements} \label{sec:Acknowledgements} ACB and DJR acknowledge support from the Department of Employment and Learning NI under the Programme for Government. HWH is supported by the EPSRC under grant reference number EP/G055416/1. The authors would like to thank Prof K T Taylor and Dr J S Parker for valuable discussions and assistance with the HELIUM code calculations.
\section{Coupled Cluster and Strong Correlation} \label{sec:SRCC} Single-reference (SR) coupled-cluster (CC) methods offer a reliable description of weakly correlated systems through a well-defined hierarchy of systematically improvable models. \cite{Cizek_1966,Paldus_1972,Crawford_2000,Bartlett_2007,Shavitt_2009} On top of this hierarchy stands full CC (FCC), which is equivalent to full configuration interaction (FCI), and consequently provides, at a very expensive computational cost, the exact wave function and energy of the system in a given basis set. Fortunately, more affordable methods have been designed and the popular CCSD(T) method, which includes singles, doubles and non-iterative triples, is nowadays considered the gold standard of quantum chemistry for ground-state energies and properties. \cite{Purvis_1982,Raghavachari_1989} Despite its success for weakly correlated systems, it is now widely known that CCSD(T) flagrantly breaks down in the presence of strong correlation as one cannot efficiently describe such systems with a single (reference) Slater determinant. This has motivated quantum chemists to design multi-reference CC (MRCC) methods. \cite{Jeziorski_1981,Mahapatra_1998,Mahapatra_1999,Lyakh_2012,Kohn_2013} However, it is fair to say that these methods are computationally demanding and still far from being black-box. Because SRCC works so well for weak correlation, it would be convenient to be able to treat strong correlation within the very same framework. This is further motivated by the fact that one can compensate for the poor quality of the reference wave function by simply increasing the maximum excitation degree of the CC expansion. However, this is inevitably associated with a rapid growth of the computational cost, and hence one cannot always afford this brute-force strategy.
The development of SR-based methods for strong correlation is ongoing, and some of these approaches (usually based on the ``addition-by-subtraction'' principle) have shown promising results. A non-exhaustive list includes pair coupled-cluster doubles, \cite{Henderson_2014a,Henderson_2014b,Stein_2014,Shepherd_2016,Boguslawski_2017a,Boguslawski_2017b,Boguslawski_2019} singlet-paired CCD, \cite{Bulik_2015,Gomez_2016} the distinguishable cluster methods, \cite{Kats_2013,Kats_2014,Kats_2015,Kats_2016,Kats_2018,Kats_2019,Kats_2019a,Rishi_2016,Rishi_2019,Rishi_2019a} CCD-based variants involving a well-defined subset of diagrams, \cite{Scuseria_2008,Peng_2013,Scuseria_2013,Shepherd_2014,Shepherd_2014a} the $n$CC hierarchy, \cite{Bartlett_2006,Musial_2007} and parametrized CCSD. \cite{Huntington_2010} Each of these methods sheds new light on the failures of SRCC to treat static correlation. For the sake of brevity, we omit the single-reference prefix hereafter. The CC wave function $\ket{\Psi_\text{CC}}$ is obtained by applying a wave operator onto a single Slater determinant reference $\ket{\Psi_0}$ as \begin{equation} \ket{\Psi_\text{CC}} = e^{\Hat{T}} \ket{\Psi_0}. \end{equation} In CC theory, the wave operator is defined as the exponential of the cluster operator \begin{equation} \Hat{T} = \sum_{k=1}^N \Hat{T}_k, \end{equation} which is the sum of the $k$th-degree excitation operators up to $k=N$ (where $N$ is the number of electrons). In second quantized form, we have \begin{equation} \label{eq:excitationOp} \Hat{T}_k = \frac{1}{(k!)^2} \sum_{ij\dots}\sum_{ab\dots} \ta{ij\dots}{ab\dots} \cre{a}\cre{b} \dots \ani{j}\ani{i}, \end{equation} where $\ani{i}$ and $\cre{a}$ are the second quantization annihilation and creation operators, which annihilate (create) an electron in the spin-orbital $i$ ($a$). The cluster amplitudes $\ta{ij\dots}{ab\dots}$ are the quantities of interest in order to compute the CC energy (see below).
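As a concrete instance of Eq.~\eqref{eq:excitationOp}, the double excitation operator ($k=2$), which is the only component retained in the CCD-type models discussed below, reads
\begin{equation}
\Hat{T}_2 = \frac{1}{4} \sum_{ij} \sum_{ab} \ta{ij}{ab} \cre{a} \cre{b} \ani{j} \ani{i}.
\end{equation}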
Throughout the paper, $p$, $q$, $r$, and $s$ denote general spin-orbitals, $i$, $j$, $k$, and $l$ refer to occupied spin-orbitals (hole states) and $a$, $b$, $c$, and $d$ to unoccupied spin-orbitals (particle states). In quantum mechanics, one convenient way to determine the parameters of a wave function ansatz is to minimize the energy with respect to its parameters. The Rayleigh-Ritz variational principle ensures that the energy thus obtained is an upper bound to the exact ground-state energy. Following this strategy, the variational CC (VCC) energy \cite{Bartlett_1988,Kutzelnigg_1991,Szalay_1995,Kutzelnigg_1998,Kutzelnigg_2010,Cooper_2010,Knowles_2010,Robinson_2011,Harsha_2018} \begin{equation} \label{eq:EVCC} E_\text{VCC} = \frac{\mel{\Psi_0}{e^{\Hat{T}^{\dag}} \Hat{H} e^{\Hat{T}}}{\Psi_0}}{\mel{\Psi_0}{e^{\Hat{T}^{\dag}} e^{\Hat{T}}}{\Psi_0}}, \end{equation} is thus minimized with respect to the cluster amplitudes, which ensures \begin{equation} \label{eq:varPcp} \min_{\ta{ij\dots}{ab\dots}}E_\text{VCC} \ge E_\text{FCI}. \end{equation} Unfortunately, independently of the excitation rank of $\Hat{T}$, this procedure is not tractable in practice. Indeed, because the series expansion of the exponential in Eq.~\eqref{eq:EVCC} does not truncate before the $N$th-order term, VCC has an inherent exponential scaling with respect to system size. Usually, one sacrifices the attractive upper bound property of the variational principle in exchange for computational tractability. To do so, the similarity-transformed Schr\"odinger equation \begin{equation} \label{eq:schroEq} e^{-\Hat{T}} \Hat{H} e^{\Hat{T}} \ket{\Psi_0} = \Bar{H} \ket{\Psi_0} = E \ket{\Psi_0} \end{equation} is projected onto the reference determinant $\ket{\Psi_0}$, which gives \begin{equation} \label{eq:TCCnrj} E_\text{TCC} = \mel{\Psi_0}{\Bar{H}}{\Psi_0}.
\end{equation} This energy can be seen as the expectation value of a similarity-transformed Hamiltonian $\Bar{H} = e^{-\Hat{T}} \Hat{H} e^{\Hat{T}}$ for the reference determinant $\ket{\Psi_0}$. One can expand $\Bar{H}$ thanks to the Baker-Campbell-Hausdorff formula and show that this series naturally truncates after the fourth-order term. This truncation is due to the two-electron nature of the Hamiltonian and is responsible for the affordable polynomial scaling of this method (contrary to the exponential cost of the variational approach). In such a case, the cluster amplitudes are no longer determined by minimization of the VCC energy functional \eqref{eq:EVCC} but via the amplitude equations \begin{equation} \label{eq:T2_eq} \mel*{\Psi_{ij\dots}^{ab\dots}}{\Bar{H}}{\Psi_0} = 0, \end{equation} which are the projection of the similarity-transformed Schr\"odinger equation \eqref{eq:schroEq} onto excited determinants. In Eq.~\eqref{eq:T2_eq}, the determinant $\ket*{\Psi_{ij\dots}^{ab\dots}}$ is obtained by promoting the electrons occupying the orbitals $i,j,\dots$ in $\ket{\Psi_0}$ to the vacant orbitals $a,b,\dots$. One usually refers to this type of method as traditional CC (TCC). As reported in Refs.~\onlinecite{VanVoorhis_2000,Cooper_2010,Evangelista_2011}, VCC has been shown to give correct results in situations where TCC fails. These benchmark studies evidenced that the breakdowns of TCC cannot be explained solely by its single-reference nature, as part of the problem actually originates from its non-variational character. Unfortunately, because of the exponential scaling of VCC, it is computationally cumbersome and cannot be applied in practice except for small molecules in small basis sets. This drawback has motivated the search for approximate methods that retain the advantages of VCC but at a polynomial cost.
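The finite truncation invoked above can be written down explicitly: expanding $\Bar{H}$ with the Baker-Campbell-Hausdorff formula yields
\begin{equation}
\Bar{H} = \Hat{H} + \comm{\Hat{H}}{\Hat{T}} + \frac{1}{2!} \comm{\comm{\Hat{H}}{\Hat{T}}}{\Hat{T}} + \frac{1}{3!} \comm{\comm{\comm{\Hat{H}}{\Hat{T}}}{\Hat{T}}}{\Hat{T}} + \frac{1}{4!} \comm{\comm{\comm{\comm{\Hat{H}}{\Hat{T}}}{\Hat{T}}}{\Hat{T}}}{\Hat{T}},
\end{equation}
the series terminating at the fourfold nested commutator because each commutator must contract $\Hat{T}$ with at least one of the (at most four) second-quantized operators of the two-electron Hamiltonian.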
Because VCC inherits its exponential scaling from the lack of truncation of its energy functional [see Eq.~\eqref{eq:EVCC}], some authors have designed ingenious truncation schemes. \cite{Bartlett_1988, Kutzelnigg_1991} The quasi-variational CC (QVCC) method from Knowles' group has been designed along these lines. \cite{Robinson_2011,Robinson_2012,Robinson_2012a,Robinson_2012b,Robinson_2012c} This method, which is an improvement of the former linked pair functional, \cite{Knowles_2010} can be seen as an infinite summation of a given subset of diagrams of the VCC energy functional. It has most of the desirable properties of an approximate VCC theory [see Ref.~\onlinecite{Robinson_2012} for an exhaustive discussion of these properties] but is not an upper bound to the exact energy. Yet, QVCC has proven to be much more robust than TCC in cases where the latter exhibits non-variational collapse below the FCI energy like, for example, in the symmetric bond stretching of the nitrogen and acetylene molecules. \cite{Robinson_2012a,Robinson_2012b,Robinson_2012c} Using VCC instead of TCC has its advantages, but its computational complexity remains a serious obstacle. It would be simpler if one could describe strong correlation while retaining the projective way of solving the equations and its associated polynomial cost. Surprisingly, restricting the cluster operator to paired double excitations (pCCD), which is a simplification with respect to CC with doubles (CCD), \cite{Pople_1976} can give qualitatively good results for strongly correlated systems. \cite{Henderson_2014a,Henderson_2014b,Stein_2014,Gomez_2016,Shepherd_2016,Boguslawski_2017a,Boguslawski_2017b,Boguslawski_2019} This can be understood thanks to the concept of seniority number, which is defined as the number of unpaired electrons in a determinant.
\cite{Ring_1980} Indeed, the seniority-zero subspace (\textit{i.e.}, the set of all closed-shell determinants) has proven to give a good description of static correlation. \cite{Bytautas_2011} Unfortunately, doubly-occupied configuration interaction (DOCI), which is a CI calculation in the seniority-zero subspace, inherits the exponential scaling of its FCI parent. \cite{Allen_1962,Smith_1965,Veillard_1967,Weinhold_1967,Couty_1997,Kollmar_2003,Bytautas_2011} However, benchmark results \cite{Henderson_2014a,Henderson_2014b,Henderson_2015,Shepherd_2016} have shown that pCCD provides ground-state energies which are almost indistinguishable from the DOCI ones but at a mean-field computational cost, hence providing a tractable way to qualitatively describe strongly correlated systems. Note that pCCD is equivalent to the antisymmetric product of 1-reference orbital geminals (AP1roG) \cite{Limacher_2013,Limacher_2014,Tecmer_2014,Boguslawski_2014a,Boguslawski_2014b,Boguslawski_2014c,Tecmer_2015,Boguslawski_2015,Boguslawski_2016a,Boguslawski_2016b,Fecteau_2020,Johnson_2020} which has been designed as a computationally tractable approximation to the antisymmetric product of geminals (APG), \cite{Coleman_1963,Coleman_1965} a method that has been recently further explored by the group of Scuseria. \cite{Henderson_2019,Khamoshi_2019,Henderson_2020,Dutta_2020,Khamoshi_2021,Dutta_2021} Because the seniority-zero subspace is not invariant to orbital rotations, one must energetically optimize the orbitals to obtain the optimal pairing scheme (\textit{i.e.}, the orbital set that minimizes the energy in the seniority-zero subspace). \cite{Bytautas_2011} In Ref.~\onlinecite{Limacher_2013}, Limacher \textit{et al.}~determined this pairing scheme by optimizing the orbitals at the DOCI level and then using these orbitals for their geminal wave function methods.
Later, Henderson \textit{et al.}~designed an orbital-optimized pCCD (oo-pCCD) procedure which provides a more straightforward route to obtain this optimal pairing scheme. \cite{Henderson_2014a} \section{Coupled Cluster and Excited States} \label{sec:ES} \subsection{TCC for excited states} \label{sec:TCC4ES} Excited-state energies and properties can be computed within the TCC paradigm through the well-established equation-of-motion (EOM) formalism. \cite{Rowe_1968,Monkhorst_1977,Koch_1990,Stanton_1993,Koch_1994} In EOM-CC, one applies a suitably chosen (linear) excitation operator on a ground-state CC wave function to compute excited states. This procedure can be conveniently recast as a non-Hermitian eigenvalue problem involving the similarity-transformed Hamiltonian $\bar{H}$ in a space of excited determinants. \cite{Shavitt_2009} As in ground-state TCC, one can systematically expand the excitation space to form a well-defined hierarchy of EOM methods. As an example, EOM-CCSD restricts the set of excited determinants to singles and doubles. EOM-CCSD is known to accurately describe single excitations \cite{Loos_2018b,Loos_2020c} but dramatically fails to describe double excitations because of the lack of triples and higher excitations.\cite{Loos_2019c,Loos_2020d} This shortcoming can be corrected by the inclusion of these higher excitations, but this is not without a steep increase of the computational cost. \cite{Kucharski_1991,Christiansen_1995b,Kucharski_2001,Kowalski_2001,Hirata_2000,Hirata_2004} Despite being by far the most popular excited-state formalism, EOM is not the only route to excited states within CC theory. Indeed, the amplitude equations \eqref{eq:T2_eq} constitute a set of non-linear polynomial equations and consequently possess many solutions. These solutions, sometimes labeled as ``non-standard'', can be non-physical or correspond to genuine excited states.
\cite{Piecuch_2000} Therefore, performing a first standard ground-state CC calculation and a second one converging towards a given excited state provides an alternative way to obtain excitation energies. \cite{Adamowicz_1985,Lee_2019} Lee \textit{et al.}~refer to this type of method as $\Delta$CC \cite{Lee_2019} by analogy with the $\Delta$SCF methods, where one basically follows the same procedure but at the self-consistent field (SCF) level. Indeed, the use of Hartree-Fock (HF) or Kohn-Sham higher-energy solutions corresponding to excited states is becoming more and more popular, and new algorithms designed to target such solutions, like the maximum overlap method (MOM) \cite{Gilbert_2008,Barca_2014,Barca_2018a,Barca_2018b} or more involved variants, \cite{Thom_2008,Zhao_2016a,Ye_2017,Shea_2018,Thompson_2018,Ye_2019,Tran_2019,Burton_2019c,Zhao_2020,Hait_2020,Hait_2020b,Levi_2020a,Levi_2020b,Dong_2020,Hait_2021} are being actively developed. Besides providing a qualitatively good description of excited states, \cite{Hait_2021} these solutions can also be very helpful for $\Delta$CC methods, as we shall illustrate below (see also Ref.~\onlinecite{Lee_2019}). The set of orbitals used, particularly the orbitals that constitute $\ket{\Psi_0}$, strongly influences the performance of $\Delta$CC methods. Importantly, the use of state-specific orbitals plays the role of a magnifying glass and facilitates the convergence towards a given CC solution by enlarging the associated basin of attraction. In addition to the orbital set, two other factors significantly influence the solutions that can be reached: the guess amplitudes for the CC equations and the algorithm employed for solving these equations. Even if the chosen orbitals can enlarge or shrink the basin of attraction of a given solution, one still has to pick an appropriate starting point within this basin to be able to converge to the desired solution.
Moreover, the type of iterative algorithm (usually based on the Newton-Raphson method and/or supplemented by Pulay's DIIS method \cite{Pulay_1980,Pulay_1982,Scuseria_1986}) must also be carefully chosen so as to target, for example, saddle points or maxima instead of minima. For example, as shown in Ref.~\onlinecite{Kossoski_2021}, the usual CC iterative algorithm is inappropriate to converge towards excited states. Because of the non-linearity of the CC equations [see Eq.~\eqref{eq:T2_eq}], the number of solutions can be higher than the number of physically meaningful states. However, claiming that a given solution corresponds to a genuine electronic state (or not) is a rather tricky task as the overall picture behind the structure of the CC solutions is still far from being thoroughly understood. Zivkovic and Monkhorst were the first to tackle this outstanding problem with their seminal work on the existence conditions of the higher roots of the CC equations. \cite{Zivkovic_1977,Zivkovic_1978} However, their model was too simplistic and most of the pathological solutions that they found or predicted were due to this unrealistic model, as argued later by Jankowski \textit{et al.}, who investigated the CCD solutions of $^{1}A_1$ symmetry in the \ce{H4} molecule. \cite{Jankowski_1994,Jankowski_1994a,Jankowski_1995} Still, they evidenced that some non-standard solutions may be non-physical. They also showed that the CC solution structure highly depends on the reference. \cite{Jankowski_1995} A few years later, the introduction of the homotopy method (which gives all the solutions of a set of non-linear equations) in the CC paradigm enabled the first systematic study of the structure of the CC energy landscape. \cite{Kowalski_1998,Kowalski_1998a} In particular, these studies showed that, in practice, the number of CC solutions is much lower than the theoretical upper bound known as B\'ezout's number.
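To make the multiplicity of solutions concrete, one can consider a hypothetical two-level model (our own illustration, not taken from the references above) with a reference determinant coupled to a single excited determinant by an off-diagonal element $v$ and separated by a gap $\Delta$. With a single cluster amplitude $t$, the projected amplitude equation reduces to the quadratic $v + \Delta t - v t^2 = 0$, and in this minimal case both of its roots turn out to be genuine states:

```python
import numpy as np

# Hypothetical two-level model: reference |0> coupled to one excited
# determinant |1> (illustrative parameters, not from the paper)
v, delta = 0.3, 1.0
H = np.array([[0.0, v], [v, delta]])

# With T = t |1><0|, projecting e^(-T) H e^(T) |0> onto |1> yields the
# quadratic amplitude equation  v + delta*t - v*t**2 = 0
t_roots = np.roots([-v, delta, v])

# Projective energy E = <0| H e^(T) |0> = v*t for each root
cc_energies = np.sort(v * t_roots)

# Here both roots are physical: they reproduce the two exact eigenvalues
exact = np.linalg.eigvalsh(H)
print(cc_energies, exact)
```

In realistic systems most of the B\'ezout-counted roots are spurious, but this toy case shows why some ``non-standard'' solutions correspond to genuine excited states.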
We refer the interested reader to the series of papers by Jankowski \textit{et al.} \cite{Jankowski_1999,Jankowski_1999a,Jankowski_1999b,Jankowski_1999c} and the book chapter of Piecuch and Kowalski \cite{Piecuch_2000} for an extensive discussion about the homotopy method and the higher-energy solutions of the CC equations. We should also mention that the homotopy method has been employed to locate the CC solutions of the PPP model for some cyclic polyenes, \cite{Podeszwa_2002,Podeszwa_2003} as well as in the context of MRCC and the Bloch equation formalism. \cite{Paldus_1993,Kowalski_2000,Kowalski_2000a} More recent studies have further improved our understanding of the CC energy landscape from which multiple solutions emerge. \cite{Mayhall_2010,Lee_2019} As pointed out by Mayhall in their study of the CCSD solutions in the \ce{NiH} molecule, the problem of the CC solution structure still needs to be addressed for more realistic systems. \cite{Mayhall_2010} Lee \textit{et al.}~showed that $\Delta$CC can provide fairly accurate double excitation and double core-hole energies. \cite{Lee_2019} Recently, we have pursued along these lines by analyzing the non-standard solutions of the pCCD equations. We have shown that the agreement between pCCD and DOCI holds for excited states on the condition that state-specific optimized orbitals are employed. \cite{Kossoski_2021} Moreover, Ref.~\onlinecite{Kossoski_2021} brought some answers to Mayhall's open question as we have shown that $\Delta$oo-pCCD provides double excitation energies that are comparable in terms of accuracy to the more expensive EOM-CCSDT method \cite{Noga_1987,Scuseria_1988b,Kucharski_2001,Kowalski_2001,Kowalski_2001b} for a set of small (yet realistic) molecules. It is worth mentioning again that all the studies mentioned above deal with TCC methods. 
\subsection{VCC for excited states} \label{sec:VCC4ES} For the sake of clarity, from here on, we restrict ourselves to VCCD (\textit{i.e.}, $\Hat{T} = \Hat{T}_2$) but the procedure presented below is general and can be applied to higher-order variants. To the best of our knowledge, the present study is the first one to investigate excited states at the VCC level. Because saddle points and maxima of the HF energy functional represent excited states, one can genuinely wonder whether the same holds for the VCC energy functional \eqref{eq:EVCC}. Thus, we seek its stationary points, \textit{i.e.}, the different sets of cluster amplitudes $\boldsymbol{t}$ with elements $\ta{ij}{ab}$ satisfying \begin{equation} \label{eq:dEVCC} \pdv{E_\text{VCC}}{\ta{ij}{ab}} = r_{ij}^{ab} = 0, \end{equation} where the VCCD residuals $r_{ij}^{ab}$ are the elements of the tensor $\boldsymbol{r}$. The ground-state variational solution obtained via the minimization of Eq.~\eqref{eq:EVCC} is also a solution of the more general equations \eqref{eq:dEVCC} which provide all the stationary solutions of the VCCD equations. In this study, we restrict ourselves to solutions with real cluster amplitudes. The explicit expressions of the residual equations under this assumption are derived in Appendix~\ref{app:appendixA}. Of course, stationary points of the VCC energy functional associated with complex cluster amplitudes may also exist. Indeed, the hermiticity of $e^{\Hat{T}^{\dagger}}\Hat{H} e^{\Hat{T}}$ ensures that $E_\text{VCC}$ is real for any set of amplitudes. \cite{Kutzelnigg_1991} Because VCC has an inherent exponential scaling, one can take advantage of the more convenient FCI representation to implement VCC algorithms.
\cite{VanVoorhis_2000,Cooper_2010} Following Van Voorhis and Head-Gordon, we represent the (unnormalized) CC wave function as a CI vector (\textit{i.e.}, in the Slater determinant basis) by $N$ successive applications of the cluster operator $\Hat{T}$ on the reference wave function: \begin{equation} \label{eq:CCtoCI} \begin{split} &\ket{\Psi_\text{CC}} = e^{\Hat{T}}\ket{\Psi_0} \\ &= \ket{\Psi_0} + \Hat{T} \qty(\ket{\Psi_0} + \frac{\Hat{T}}{2}\qty( \dots \qty(\ket{\Psi_0} + \frac{\Hat{T}}{N-1} \qty(\qty(1 + \frac{\Hat{T}}{N})\ket{\Psi_0}))\dots)). \end{split} \end{equation} Using this CI representation, the action of second quantized operators on the CC wave function is quite straightforward, and one can evaluate the energy \eqref{eq:EVCC} and the residuals \eqref{eq:derivationVCCampEq} by simple matrix products. Note that the coefficients of the resulting CC wave function [see Eq.~\ref{eq:CCtoCI}] are equal to the cluster analysis of the CI coefficients. \cite{Cizek_1969,Monkhorst_1977,Lehtola_2017,Magoulas_2021} In their VCCD benchmark study, Van Voorhis and Head-Gordon \cite{VanVoorhis_2000} relied on the standard TCCD iterative procedure (where one computes an approximate diagonal Jacobian matrix based on the difference of the Fock matrix elements $f_p^q$) to solve Eq.~\eqref{eq:dEVCC}: \begin{equation} \label{eq:updateAmpVCC} \ta{ij}{ab} \leftarrow \ta{ij}{ab} - \frac{r_{ij}^{ab}}{\f{a}{a} + \f{b}{b} - \f{i}{i} - \f{j}{j}}. \end{equation} However, this approximate form of the Jacobian matrix cannot be employed to target excited states as it systematically converges towards the ground state or eventually diverges (see Ref.~\onlinecite{Kossoski_2021} for an exhaustive discussion on this point). If one aims at excited states, one should consider the exact diagonal of the Jacobian matrix (or, at least, an approximation which preserves the sign of the exact diagonal). 
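The nested product in Eq.~\eqref{eq:CCtoCI} is simply a Horner-like evaluation of the truncated exponential: once $\Hat{T}$ is stored as a (nilpotent) matrix in the Slater determinant basis, applying $e^{\Hat{T}}$ to the reference vector only requires $N$ matrix-vector products. A minimal sketch of this scheme (our own illustrative code, independent of the accompanying notebook):

```python
import numpy as np

def apply_exp_T(T, psi0, N):
    """Evaluate e^T |psi0> via the Horner-like nesting of Eq. (CCtoCI):
    v <- psi0 + (T v)/k for k = N, N-1, ..., 1. This is exact whenever
    T^(N+1) = 0, as for an excitation operator acting on N electrons."""
    v = psi0.copy()
    for k in range(N, 0, -1):
        v = psi0 + (T @ v) / k
    return v

# Toy check: a strictly upper-triangular matrix is nilpotent, so the
# truncated Taylor series reproduces the exact exponential
rng = np.random.default_rng(0)
n = 6
T = np.triu(rng.normal(size=(n, n)), k=1)
psi0 = np.zeros(n)
psi0[0] = 1.0
psi_cc = apply_exp_T(T, psi0, N=n)
```

Unrolling the recursion shows that the coefficient accumulated in front of $T^m$ is $1/m!$, so the scheme reproduces the Taylor series term by term.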
Even better, yet more expensive, one can employ the whole exact Jacobian matrix $\boldsymbol{J}$ with elements \begin{equation} \label{eq:jacobian} J_{ij,kl}^{ab,cd} = \pdv{r_{ij}^{ab}}{t_{kl}^{cd}}, \end{equation} which is then used to update the amplitudes according to the usual Newton-Raphson algorithm, \textit{i.e.}, \begin{equation} \label{eq:updateAmpVCCDiagHess} \boldsymbol{t} \leftarrow \boldsymbol{t} - \boldsymbol{J}^{-1} \cdot \boldsymbol{r}. \end{equation} The general expression of the Jacobian matrix elements is given in Appendix \ref{app:appendixA}. Note that updating the amplitudes via Eq.~\eqref{eq:updateAmpVCCDiagHess} is more computationally expensive than via Eq.~\eqref{eq:updateAmpVCC} as one must compute the entire Jacobian matrix and invert it. However, the information about the curvature of the VCC energy contained in the exact Jacobian matrix is essential to converge towards saddle points. In difficult cases, it can be useful to damp the Newton-Raphson steps. However, one has to ensure that the structure of the Jacobian matrix is preserved during this process. This can be done by diagonalizing the Jacobian and adding a positive/negative constant to the positive/negative eigenvalues, similarly to what we have recently done for orbital optimization at the pCCD level. \cite{Kossoski_2021} To fully specify our algorithm, we still need to choose our reference $\ket{\Psi_0}$ as well as the starting values of the cluster amplitudes. In this study, we rely on both ground- and excited-state HF wave functions as references in order to study the influence of state-specific references. The orbitals employed to construct these excited-state HF wave functions have been obtained using initial MOM (IMOM). \cite{Gilbert_2008,Barca_2014,Barca_2018a,Barca_2018b} State-specific orbitals optimized at the correlated level are also considered, as discussed below. 
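The damped Newton-Raphson amplitude update outlined above can be sketched as follows. Since the residual is the gradient of the VCC functional, $\boldsymbol{J}$ is symmetric, and its eigendecomposition gives direct access to the inertia-preserving level shift (a schematic illustration with generic arrays, not production CC machinery):

```python
import numpy as np

def shifted_newton_step(t, r, J, shift=0.0):
    """One update t <- t - J^{-1} r [Eq. (updateAmpVCCDiagHess)], optionally
    damped by raising positive eigenvalues of J by `shift` and lowering
    negative ones by `shift`, which preserves the inertia of J: the key
    requirement when converging towards saddle points."""
    w, V = np.linalg.eigh(J)  # J is symmetric (Hessian of the functional)
    w_shifted = w + np.sign(w) * shift
    return t - V @ ((V.T @ r) / w_shifted)

# Toy quadratic functional E(t) = t.J.t/2 - b.t with an indefinite J:
# the unshifted Newton step reaches the saddle point J^{-1} b in one shot
J = np.diag([2.0, -1.0, 0.5])
b = np.array([1.0, 1.0, 1.0])
t0 = np.zeros(3)
r0 = J @ t0 - b  # gradient (residual) at t0
t1 = shifted_newton_step(t0, r0, J)
```

For a quadratic model the unshifted step is exact; the shifted variant trades step length for robustness while still heading towards a stationary point of the same character.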
Regarding the starting values of the cluster amplitudes $\boldsymbol{t}$, once again we have taken advantage of the FCI representation by obtaining these via a cluster analysis of the corresponding CI eigenvectors. \cite{Monkhorst_1977,Lehtola_2017} \subsection{Orbital optimization for excited states} \label{sec:OO4ES} The solutions obtained via this iterative process [see Eq.~\eqref{eq:updateAmpVCCDiagHess}] are stationary points of the VCCD energy functional with respect to the cluster amplitudes but not with respect to the orbital coefficients. Indeed, the orbitals have usually been obtained at the HF level and no longer represent a stationary point when electron correlation is introduced. The next step is thus to optimize the orbitals at the corresponding correlated level to find solutions that are stationary with respect to both the cluster amplitudes and the orbital coefficients. As usually done, \cite{Scuseria_1987,Bozkaya_2011} we introduce a unitary operator $e^{\hat{\kappa}}$ into the VCCD energy functional, \begin{equation} \label{eq:orbVCC} E_\text{VCC}(\hat{\kappa}) = \frac{\mel{\Psi_0}{e^{\Hat{T}^{\dag}} e^{-\hat{\kappa}} \Hat{H} e^{\hat{\kappa}} e^{\Hat{T}}}{\Psi_0}}{\mel{\Psi_0}{e^{\Hat{T}^{\dag}} e^{\Hat{T}}}{\Psi_0}}, \end{equation} to account for orbital rotations. Now, Eq.~\eqref{eq:orbVCC} can be minimized with respect to the cluster amplitudes $t_{ij}^{ab}$ and to the orbital rotation parameters $\kappa_{pq}$ of the one-electron anti-Hermitian operator $\hat{\kappa}$. For a given set of cluster amplitudes, we search for the stationary points with respect to the orbital rotation parameters using the second-order Newton-Raphson method. 
We then expand the VCC energy around $\boldsymbol{\kappa} = \boldsymbol{0}$, \begin{equation} \label{eq:energy_expansion} E_\text{VCC}(\boldsymbol{\kappa}) \approx E_\text{VCC}(\boldsymbol{0}) + \boldsymbol{g} \cdot \boldsymbol{\kappa} + \frac{1}{2} \boldsymbol{\kappa^{\dag}} \cdot \boldsymbol{H} \cdot \boldsymbol{\kappa}, \end{equation} where $\boldsymbol{g}$ is the orbital gradient and $\boldsymbol{H}$ is the orbital Hessian, both evaluated at $\boldsymbol{\kappa}=\boldsymbol{0}$, \textit{i.e.}, \begin{align} \label{eq:orbGradHess} g_{pq} & = \left. \pdv{E_\text{VCC}(\boldsymbol{\kappa})}{ \kappa_{pq}} \right|_{\boldsymbol{\kappa} = \boldsymbol{0}}, & H_{pq,rs} & = \left. \pdv{E_\text{VCC}(\boldsymbol{\kappa})}{ \kappa_{pq}}{ \kappa_{rs}} \right|_{\boldsymbol{\kappa} = \boldsymbol{0}}. \end{align} The orbitals are then updated following the usual Newton-Raphson step \begin{equation} \label{eq:updateCoeff} \boldsymbol{C} \leftarrow \boldsymbol{C} \cdot e^{-\boldsymbol{H}^{-1} \cdot \boldsymbol{g}}, \end{equation} where $\boldsymbol{C}$ is the orbital coefficient matrix. Then, one finds the solution of Eq.~\eqref{eq:dEVCC} for this new set of orbitals and the procedure is repeated until convergence. To compute the gradient and the Hessian, one must compute the one- and two-body density matrices, \cite{Henderson_2014a} with respective elements \begin{subequations} \begin{align} \label{eq:onebody} \gamma_{pq} &= \sum_{\sigma} \frac{\mel{\Psi_\text{CC}}{\cre{q_{\sigma}}\ani{p_{\sigma}}}{\Psi_\text{CC}}}{\braket{\Psi_\text{CC}}{\Psi_\text{CC}}}, \\ \label{eq:twobody} \Gamma_{pq,rs} &= \sum_{\sigma \sigma'} \frac{\mel{\Psi_\text{CC}}{\cre{s_{\sigma}}\cre{r_{\sigma'}}\ani{q_{\sigma'}}\ani{p_{\sigma}}}{\Psi_\text{CC}}}{\braket{\Psi_\text{CC}}{\Psi_\text{CC}}}, \end{align} \end{subequations} where the orbital index refers to spatial orbitals, and $\sigma$ and $\sigma'$ to spin indexes. 
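The update of Eq.~\eqref{eq:updateCoeff} involves the exponential of the real antisymmetric matrix built from $-\boldsymbol{H}^{-1} \cdot \boldsymbol{g}$, which can be evaluated exactly through the eigendecomposition of the Hermitian matrix $i\boldsymbol{\kappa}$. A minimal real-orbital sketch (the packing of the $p<q$ rotation parameters is our illustrative convention):

```python
import numpy as np

def expm_antisym(kappa):
    """Exact exponential of a real antisymmetric matrix: i*kappa is
    Hermitian, so exp(kappa) = V exp(-i w) V^dagger with (w, V) the
    eigendecomposition of i*kappa, and the result is real orthogonal."""
    w, V = np.linalg.eigh(1j * kappa)
    return (V @ np.diag(np.exp(-1j * w)) @ V.conj().T).real

def newton_orbital_update(C, grad, hess):
    """One step of Eq. (updateCoeff), C <- C exp(-kappa), where the
    independent rotation parameters kappa_pq (p < q) solve H kappa = g."""
    step = np.linalg.solve(hess, grad)   # H^{-1} g over the p < q pairs
    n = C.shape[0]
    kappa = np.zeros((n, n))
    kappa[np.triu_indices(n, k=1)] = step
    kappa -= kappa.T                     # antisymmetrize
    return C @ expm_antisym(-kappa)
```

Because $e^{-\boldsymbol{\kappa}}$ is exactly orthogonal, the updated coefficient matrix stays orthonormal at every Newton-Raphson iteration, with no need for re-orthogonalization.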
Once again, we take advantage of the CI representation of the VCCD wave function to compute these quantities. We express the string of second quantized operators in Eqs.~\eqref{eq:onebody} and \eqref{eq:twobody} as a matrix in the Slater determinant basis, and then evaluate the elements of the one- and two-body density matrices by simple matrix products. In the present study, we restrict the cluster operator to a pair double excitation operator \begin{equation} \hat{T} = \sum_{ia} t_{ii}^{aa} P_a^{\dag} P_i, \end{equation} (with $P_q^{\dag} = a_{q\uparrow}^{\dag} a_{q\downarrow}^{\dag}$) and investigate the properties of ground and excited states at the traditional pCCD (TpCCD) and variational pCCD (VpCCD) levels. This choice is motivated by the two following arguments. Firstly, our aim is to compare the VpCCD solution structure with its TpCCD counterpart (which has received our attention recently \cite{Kossoski_2021}) in order to provide new insights into the multiple solutions of the VCC equations. Secondly, this restriction of the cluster operator significantly lowers both the computational cost and the complexity of the energy landscape, hence simplifying the present analysis. The VpCCD equations are easily obtained from their VCCD analogs (see Appendix~\ref{app:appendixB} for their explicit expressions). We refer the interested reader to Ref.~\onlinecite{Henderson_2014a} for a complete list of equations and an exhaustive discussion of the orbital optimization algorithm in the case of ground-state TpCCD and to Ref.~\onlinecite{Kossoski_2021} for the case of excited-state TpCCD. In the following, taking the symmetric dissociation of the linear \ce{H4} molecule as a first case study, ground- and excited-state energies obtained at the TpCCD and VpCCD levels are compared to DOCI for three different sets of orbitals: ground-state HF orbitals, state-specific HF orbitals and state-specific orbitals optimized at the VpCCD level. 
In a second stage, we look at the various TpCCD, VpCCD and DOCI electronic states in the presence of strong correlation (\textit{i.e.}, near degeneracies) by examining the continuous deformation of \ce{H4} from a square to a rectangular arrangement. \section{Computational details} \label{sec:compdet} The computational methods investigated here (HF, MOM, TpCCD, VpCCD, DOCI, and FCI) have been implemented as standalone \textsc{mathematica} modules, \cite{Mathematica} which makes them easily interconnectable and modifiable depending on the actual purpose. These are provided in an accompanying notebook available for download from Zenodo at \href{http://doi.org/10.5281/zenodo.4971905}{http://doi.org/10.5281/zenodo.4971905}. All the calculations have been performed in the restricted formalism. The only required input is the one- and two-electron integrals which are usually computed with a third-party software like {\textsc{quantum package}}.\cite{Garniron_2017b,Garniron_2018,Garniron_2019} The convergence threshold (based on the DIIS commutator) was set to $10^{-10}$ a.u.~for the restricted HF (RHF) calculations, while the convergence thresholds (based on the maximum absolute value of the gradient) for the cluster amplitude and orbital optimization procedures were both set to $10^{-6}$ a.u. \section{Results and discussion} \label{sec:res} \subsection{Influence of the orbital set: the linear \ce{H4} molecule} \label{subsec:linearH4} \begin{figure*} \includegraphics[width=0.58\textwidth]{Fig1a} \includegraphics[width=0.31\textwidth]{Fig1b} \caption{ Left: Energies (in hartree) of the linear \ce{H4} molecule in the STO-6G basis set as functions of the bond length $R$ (in bohr) for various methods using the ground-state RHF determinant as reference wave function: DOCI (markers), VpCCD (solid), and TpCCD (dashed). The real part of the complex TpCCD solutions are represented as dashed black lines. 
Right: Energy differences between DOCI and TpCCD (VpCCD) as functions of $R$ in the top (bottom) panel. } \label{fig:H4RHF} \end{figure*} \begin{figure} \includegraphics[width=\linewidth]{Fig2} \caption{ Energies (in hartree) of various RHF solutions as functions of the bond length $R$ (in bohr) for the linear \ce{H4} molecule in the STO-6G basis set. \label{fig:H4MOM}} \end{figure} \begin{figure*} \includegraphics[width=0.58\textwidth]{Fig3a} \includegraphics[width=0.31\textwidth]{Fig3b} \caption{ Left: Energies (in hartree) of the linear \ce{H4} molecule in the STO-6G basis set for various methods using state-specific RHF determinants as functions of the bond length $R$ (in bohr): DOCI (dots), VpCCD (solid), and TpCCD (dashed). Right: Energy differences between DOCI and TpCCD (VpCCD) as functions of $R$ in the top (bottom) panel.} \label{fig:H4Correlated} \end{figure*} \begin{figure*} \includegraphics[width=0.58\textwidth]{Fig4a} \includegraphics[width=0.31\textwidth]{Fig4b} \caption{ Left: Energies (in hartree) of the linear \ce{H4} model in a STO-6G basis set for the orbital-optimized VpCCD method (solid) and DOCI using the same orbitals (dots) as functions of the bond length $R$ (in bohr). Right: Energy differences between oo-DOCI and oo-TpCCD (oo-VpCCD) as functions of $R$ in the top (bottom) panel.} \label{fig:H4ooCorrelated} \end{figure*} As a first example, we consider the symmetric stretching of the linear \ce{H4} molecule in a minimal basis (STO-6G \cite{Hehre_1969}). This corresponds to a system with 4 electrons in 4 spatial orbitals with respective symmetries $\sigma_g$, $\sigma_u$, $\sigma_g^*$, and $\sigma_u^*$ (in ascending energies). Linear chains of hydrogens are prototypical examples of left-right correlation and, therefore, have been widely studied in order to probe electronic structure methods in the presence of such correlation.
\cite{Hachmann_2006,Al-Saidi_2007,Sinitskiy_2010,Bytautas_2011,Stella_2011,Robinson_2012c,Limacher_2013,Kats_2013,Henderson_2014a,Motta_2017,Motta_2020,Vollhard_2020,Giner_2020} Hereafter, the distance between two consecutive hydrogens is denoted by $R$. The first stage of this study consists in investigating the quality of the TpCCD and VpCCD ground- and excited-state energies in the case where the reference wave function is chosen as the ground-state RHF determinant, a choice that obviously induces a bias towards the ground state. The VpCCD energies (solid lines) are plotted alongside the DOCI ones (markers) in the left-hand side of Fig.~\ref{fig:H4RHF}. Thanks to the simplicity of this model, one can access, via \textsc{mathematica}'s implementation of the Jenkins-Traub algorithm, \cite{Jenkins_1970a,Jenkins_1970b} the entire set of solutions (with real cluster amplitudes) associated with the system of polynomial equations \eqref{eq:dEVCC}. These VpCCD solutions are represented as thin solid lines in Fig.~\ref{fig:H4RHF}, while the thick parts of the curves correspond to the energies that we have been able to obtain using the Newton-Raphson algorithm described earlier [see Sec.~\ref{sec:OO4ES}]. Figure~\ref{fig:H4RHF} also shows the TpCCD energies (dashed lines) which are also determined with the Jenkins-Traub algorithm applied to Eq.~\eqref{eq:T2_eq}. In addition, the differences between the TpCCD (VpCCD) and DOCI energies are also plotted in the top (bottom) right panel of Fig.~\ref{fig:H4RHF}. Considering the ground-state RHF determinant as reference wave function, the convergence towards the VpCCD ground state, $(\sigma_g)^2 (\sigma_u)^2$, is numerically straightforward all along the potential energy curve (PEC). On the other hand, converging excited states with the Newton-Raphson algorithm has been found to be much more challenging.
We have not been able to converge the two lowest VpCCD excited states, $(\sigma_g)^2 (\sigma_g^*)^2$ and $(\sigma_u)^2 (\sigma_g^*)^2$, further than $R=\SI{2.0}{\bohr}$ for this set of orbitals. Even worse, the two other doubly-excited states, $(\sigma_g)^2 (\sigma_u^*)^2$ and $(\sigma_u)^2 (\sigma_u^*)^2$, have been reached only for $R=\SI{1.0}{\bohr}$. This is not the case for the $(\sigma_g^*)^2 (\sigma_u^*)^2$ quadruply-excited state for which one can converge VpCCD calculations fairly easily all along the PEC with the Newton-Raphson algorithm. This might be due to the fact that the corresponding stationary points are maxima for the quadruply-excited state whereas doubly-excited states correspond to saddle points (see below). Despite such numerical difficulties, the complete set of solutions could be obtained thanks to the Jenkins-Traub algorithm. Overall, the VpCCD method provides a fairly good approximation to the DOCI energies. At small $R$ (\textit{i.e.}, in the weak correlation regime), VpCCD is in much closer agreement with DOCI than TpCCD, most noticeably for the $(\sigma_g^*)^2 (\sigma_u^*)^2$ quadruply-excited state. At large $R$ (\textit{i.e.}, in the strong correlation regime), this comparison is trickier. Yet, the difference between VpCCD and DOCI seems more regular (see the bottom-right panel of Fig.~\ref{fig:H4RHF}) whereas the behavior of TpCCD is more erratic (top-right panel). Thus, one can state that, if one considers the ground-state RHF determinant as reference wave function, the main difficulties associated with VpCCD calculations concern its convergence, as the energies compare very favorably with the DOCI reference (at least in the weak correlation regime). At large $R$, the agreement between VpCCD and DOCI is less obvious, as we shall see below. Thanks to previous investigations, we know that some of the TpCCD excited-state solutions can be labeled as non-physical. 
\cite{Jankowski_1994,Jankowski_1994a,Jankowski_1995,Jankowski_1999,Jankowski_1999a,Jankowski_1999b,Jankowski_1999c,Piecuch_2000,Kossoski_2021} For example, in the case of the linear \ce{H4} molecule using the ground-state RHF determinant as reference wave function, the lowest-lying DOCI excited state (blue markers in Fig.~\ref{fig:H4RHF}) can be represented by two TpCCD solutions (dashed blue curves). \cite{Kossoski_2021} These solutions eventually merge for $R>\SI{1.7}{\bohr}$ and become a complex conjugate pair of solutions with real components represented as black dashed lines in Fig.~\ref{fig:H4RHF}. The same phenomenon occurs for the fourth doubly-excited state, but the complex conjugate pair of solutions exists up to $R = \SI{3.4}{\bohr}$ before splitting into two real solutions (dashed green curves). In the case of VpCCD, there are only six real-amplitude solutions in the weak correlation regime. However, for $R\gtrsim \SI{3.5}{\bohr}$, two additional real solutions appear, as one can see in the inset of Fig.~\ref{fig:H4RHF}. The fact that these spurious solutions appear as a pair indicates that they may exist for smaller $R$ as a pair of solutions with complex conjugate amplitudes. Because all the solutions are energetically close in this region of the PECs, it is hard to tell whether a solution is unphysical or not, and which one better models the corresponding DOCI solution. This is why the curve corresponding to the difference between VpCCD and DOCI for the fourth doubly-excited state stops at $R=\SI{3.2}{\bohr}$ in the bottom right panel of Fig.~\ref{fig:H4RHF}. The same unpredictability occurs between the green and purple TpCCD curves in the strong correlation regime. Therefore, in the weak correlation regime, the problems caused by unphysical solutions seem to be less severe in VpCCD than in TpCCD. Yet, when the correlation becomes strong, VpCCD is also plagued by unphysical solutions. 
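The merging of two real solutions into a complex-conjugate pair as a parameter varies can be illustrated on a one-variable toy polynomial (our own example, unrelated to the actual amplitude equations):

```python
import numpy as np

# Roots of the quadratic t^2 - 2*b*t + 1 = 0 as the parameter b varies.
# This toy polynomial mimics how two real solutions of a polynomial system
# merge and turn into a complex-conjugate pair once a parameter (here b,
# in the text the bond length R) crosses a critical value.
roots_for = {b: np.roots([1.0, -2.0 * b, 1.0]) for b in (1.5, 1.0, 0.5)}

# b = 1.5: two distinct real roots (two distinct real solutions)
# b = 1.0: a doubly degenerate real root (the merging point)
# b = 0.5: a complex-conjugate pair sharing the same real part
```

The shared real part of the conjugate pair is the analog of the single black dashed curve that represents the real components of the merged TpCCD solutions.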
Note that unphysical solutions at the VpCCD level are due to the approximate nature of the method which originates from the truncation of the cluster operator $\Hat{T}$. On the other hand, unphysical TpCCD solutions can originate from this same truncation and/or from the projection step of Eq.~\eqref{eq:TCCnrj}. The stability analysis of the various stationary points via the computation of the eigenvalues of the Jacobian matrix \eqref{eq:jacobian} provides useful information on the presence of additional solutions. \cite{Szakacs_2008,Surjan_2010} For example, a change in the number of negative eigenvalues (the saddle point index) indicates the appearance of additional solutions. \cite{Burton_2021} For $R<\SI{3.4}{\bohr}$, the index of the VpCCD solutions, in ascending energies, increases smoothly (0, 1, 2, 2, 3, and 4) up to the cyan curve which is an index-4 stationary point (\textit{i.e.}, a maximum). At $R=\SI{3.4}{\bohr}$, the index associated with the green VpCCD solution decreases by one unit, this solution becoming an index-1 saddle point. We see in the inset of Fig.~\ref{fig:H4RHF} that two additional solutions appear right after this index variation, these two spurious solutions being index-3 saddle points. Because the agreement between DOCI and both TpCCD and VpCCD ground-state energies is very satisfying when one employs as reference the ground-state RHF wave function, one can reasonably wonder if the same similarity holds in the case of excited states by considering excited-state RHF wave functions as state-specific references. We have recently shown that this holds for TpCCD when one uses state-specific orbitals optimized at this correlated level, \cite{Kossoski_2021} but it remains to be seen whether this still applies with state-specific mean-field orbitals. 
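The saddle point index used in this stability analysis is simply the number of negative eigenvalues of the symmetric Jacobian (Hessian) at a stationary point; a minimal sketch with hypothetical $3\times3$ matrices of our own:

```python
import numpy as np

def saddle_point_index(hessian):
    """Number of negative eigenvalues of a symmetric Hessian/Jacobian.

    Index 0 is a minimum, the maximal index a maximum, anything in between
    a saddle point; a change of index along a parameter such as R signals
    that additional stationary points have appeared nearby.
    """
    eigvals = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    return int(np.sum(eigvals < 0.0))

# Toy 3x3 examples (illustrative values only, not actual VpCCD Jacobians)
minimum = np.diag([0.5, 1.0, 2.0])     # index 0: minimum
saddle = np.diag([-0.5, 1.0, 2.0])     # index 1: saddle point
maximum = np.diag([-0.5, -1.0, -2.0])  # index 3: maximum
```

For a 4-amplitude problem such as linear \ce{H4}, the index thus runs from 0 (the ground-state minimum) to 4 (the maximum associated with the highest-lying state), matching the sequence quoted above.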
Using IMOM, \cite{Barca_2018} we have obtained five additional restricted solutions of the RHF equations, corresponding to the five possible non-Aufbau closed-shell determinants (see Fig.~\ref{fig:H4MOM}). Note that each excited-state solution corresponds to a different set of orthogonal orbitals, but these sets are, a priori, not orthogonal to each other because they originate from distinct Fock operators. Of course, spatially symmetry-broken RHF solutions do exist but we have not considered them here. For $R > \SI{2.4}{\bohr}$, an additional solution, plotted as a black line in Fig.~\ref{fig:H4MOM}, has been found by systematically occupying the two highest-energy molecular orbitals at each SCF cycle. The molecular orbitals associated with this \ce{H+}~\ce{H-}~\ce{H-}~\ce{H+} ionic configuration have a more localized character than the orbitals constituting the $(\sigma_g^*)^2 (\sigma_u^*)^2$ determinant (the orbitals associated with these two solutions are available in the \hyperlink{SI}{supplementary material}). A stability analysis of these RHF solutions \cite{Seeger_1977,Fukutome_1981,Stuber_2003} shows that the cyan curve is a maximum at small $R$ but, for $R>\SI{2.4}{\bohr}$, it becomes a saddle point whereas the ionic configuration (\textit{i.e.}, the black curve) corresponds to a maximum. The MOM excited states represented in Fig.~\ref{fig:H4MOM} can be used as reference wave functions at both the TCC (see Ref.~\onlinecite{Lee_2019}) and VCC levels. The energies at the three different correlated levels (namely, DOCI, TpCCD, and VpCCD) using these state-specific excited-state RHF reference wave functions are plotted in Fig.~\ref{fig:H4Correlated} and are labeled as MOM-DOCI, MOM-TpCCD, and MOM-VpCCD in the following. As one can see, these energies are visually indistinguishable, except at large $R$ in the case of the $(\sigma_g^*)^2(\sigma_u^*)^2$ state. 
Still, the right panel of Fig.~\ref{fig:H4Correlated} shows that MOM-VpCCD is closer to MOM-DOCI than MOM-TpCCD by roughly one order of magnitude all along the PEC. Also, we can see in the top-right panel that the difference between MOM-TpCCD and MOM-DOCI is less erratic than its analog using ground-state RHF orbitals (see Fig.~\ref{fig:H4RHF}). As expected, using state-specific RHF determinants as reference rather than the ground-state one significantly improves the description of excited states at the TpCCD level. Therefore, if one wants to target excited states at the TpCCD level, it is worth investing in the design of proper state-specific references in order to make the key projection step in Eq.~\eqref{eq:TCCnrj} more effective. Even if it is less pronounced at the VpCCD level, state-specific RHF reference determinants also improve the accuracy of the excited-state energies (with respect to DOCI). The most noticeable positive side effect of these state-specific references on VpCCD is the greater ease of convergence. Indeed, as shown in Fig.~\ref{fig:H4Correlated}, we have been able to converge almost all the states up to $R \simeq \SI{3.5}{\bohr}$. Therefore, we argue that using state-specific references enlarges the basin of attraction of the associated solution and consequently facilitates the convergence towards it. The logical next step is to compare DOCI, TpCCD, and VpCCD at the orbital-optimized (oo) level (as described in Sec.~\ref{sec:OO4ES}). For ground states, DOCI and pCCD have already been shown to perform best when one relaxes the spatial symmetry constraint as it allows the orbitals to be fully localized. \cite{Limacher_2014,Boguslawski_2014b,Boguslawski_2014c} On the other hand, relaxing this constraint also considerably increases, in principle, the number of attainable solutions. For example, multiple solutions corresponding to the ground state have already been observed. 
\cite{Limacher_2014,Boguslawski_2014b,Boguslawski_2014c} However, in the case of excited states, we have only obtained symmetry-adapted solutions even if the orbital optimization algorithm could, in principle, target symmetry-broken solutions. \cite{Henderson_2014a} This may be due to the lack of flexibility associated with the minimal basis. Indeed, we have shown, at the TpCCD level, that for larger molecules in larger basis sets one could also break the spatial symmetry to improve the description of excited states. \cite{Kossoski_2021} We expect analogous symmetry-broken excited-state solutions for larger molecules in the case of VpCCD. As shown by Henderson \textit{et al.}, \cite{Henderson_2014a} DOCI and TpCCD optimized orbitals are virtually indistinguishable in molecular systems. Here, we have observed that the VpCCD optimized orbitals are also virtually indistinguishable from the two other sets. Therefore, as expected, oo-TpCCD and oo-VpCCD energies are also highly similar, so that we only report oo-VpCCD energies in Fig.~\ref{fig:H4ooCorrelated}. The right panel of Fig.~\ref{fig:H4ooCorrelated} evidences that the accuracy of oo-TpCCD and oo-VpCCD is similar to that of their MOM-TpCCD and MOM-VpCCD counterparts (see Fig.~\ref{fig:H4Correlated}), at least in the weak correlation regime (with DOCI always taken as the reference). However, in the strong correlation regime, the scenario is rather different. The orbital optimization at the correlated level allows the orbitals to strongly localize when the bond is stretched, hence the PECs exhibit the correct dissociation limits. As a direct consequence, the agreement between VpCCD (and TpCCD) and DOCI is improved at large $R$, as compared to MOM-VpCCD (and MOM-TpCCD). We can then conclude that, in the absence of strong correlation effects, state-specific RHF determinants provide robust and cheaper alternatives to determinants made of optimized orbitals at the correlated level. 
To further illustrate this, we provide the VpCCD optimized orbitals as well as the MOM orbitals in the \hyperlink{SI}{supplementary material}. \subsection{A strong correlation model: the ring \ce{H4} molecule} \begin{figure} \includegraphics[width=\linewidth]{Fig5} \caption{The ring \ce{H4} model. Here the diameter of the circle $d$ is set to $\SI{6.569}{\bohr}$.} \label{fig:dhmodel} \end{figure} \begin{figure*} \includegraphics[width=0.295\textwidth]{Fig6a} \includegraphics[width=0.39\textwidth]{Fig6b} \includegraphics[width=0.295\textwidth]{Fig6c} \caption{ Center: Energies (in hartree) of the ring \ce{H4} model in the STO-6G basis set for the VpCCD method as functions of $\theta$ (in degree) using the ground-state RHF determinant of configuration ($a_{1g}^2b_{2u}^2$) as reference (symmetry-adapted orbitals are considered). Bottom-left and bottom-right: VpCCD (thick solid), TpCCD (thin solid), and DOCI (dashed) energies of the ground and first two excited states. Top-right and left panels provide the energies of the quadruply-excited state and of the highest-lying doubly-excited states, respectively.} \label{fig:D2hH4} \end{figure*} We now turn our attention to another widely known model for strong correlation, namely the ring \ce{H4} molecule where the four hydrogen atoms lie on a circle of diameter $d = \SI{6.569}{\bohr}$. As represented in Fig.~\ref{fig:dhmodel}, varying the angle $\theta$ with respect to $\theta = \SI{90}{\degree}$ connects two equivalent $D_{2h}$ rectangular geometries with non-degenerate molecular orbitals. At $\theta = \SI{90}{\degree}$ though, the $D_{4h}$ square-planar geometry has strictly degenerate orbitals and strong multi-reference effects are at play. Therefore, giving an accurate description of this system as a function of $\theta$ has been shown to be a real challenge for CC methods. 
\cite{VanVoorhis_2000,Robinson_2012a,Robinson_2012b,Robinson_2012c,Kats_2013,Limacher_2014,Burton_2016,Qiu_2017} In the following, we restrict ourselves to the minimal STO-6G basis set in which the $D_{2h}$ symmetry-adapted molecular orbitals are determined solely by symmetry. The resulting four molecular orbitals, ordered by ascending energy, have the symmetry $a_{1g}$, $b_{2u}$, $b_{3u}$, and $b_{1g}$. At $\theta=\SI{90}{\degree}$, the $b_{2u}$ and $b_{3u}$ orbitals are degenerate and form a pair of orbitals of $e_{g}$ symmetry in the $D_{4h}$ point group. Although one can choose to break spatial symmetry to gain flexibility, in a first stage, we restrict ourselves to the symmetry-adapted RHF molecular orbitals. In such a situation, excited-state RHF wave functions correspond to non-Aufbau determinants built with this set of symmetry-pure orbitals, hence freeing ourselves from the orbital optimization issue to focus solely on the optimization of the cluster amplitudes. Because we deal with the seniority-zero subspace, the RHF determinants are made, by definition, of two doubly-occupied orbitals. For example, for the ground-state determinant at $\theta=\SI{90}{\degree}$, the lowest-energy $a_{1g}$ orbital and one of the doubly-degenerate $b_{2u}$ and $b_{3u}$ orbitals are doubly-occupied. Of course, in this case, the seniority-zero determinant is a poor approximation of the exact wave function as it tries to model an inherently multi-reference wave function with a single Slater determinant. We start by looking at the description of the ground state using the symmetry-adapted orbitals. The DOCI (dashed lines), VpCCD (thick solid lines), and TpCCD (thin solid lines) energies are plotted in the bottom-left panel of Fig.~\ref{fig:D2hH4}. The configuration of the reference determinant is $a_{1g}^2b_{2u}^2$ for $\theta < \SI{90}{\degree}$ and $a_{1g}^2b_{3u}^2$ for $\theta > \SI{90}{\degree}$. 
Hereafter, the electronic configuration of the RHF wave function is given for $\theta < \SI{90}{\degree}$; the corresponding configuration for $\theta > \SI{90}{\degree}$ is obtained by simply swapping $b_{2u}$ and $b_{3u}$. The first interesting fact is that TpCCD does not closely match DOCI for this system. On the other hand, VpCCD provides energies in fairly good agreement with DOCI. Therefore, the hermiticity property of VpCCD leads to a notable improvement over TpCCD. Yet, VpCCD exhibits a cusp at $\theta = \SI{90}{\degree}$ which is not present in DOCI. The derivative of the TpCCD PEC is also discontinuous at $\theta =\SI{90}{\degree}$ with an upside-down cusp compared to VpCCD. The comparison of the two pCCD variants and DOCI for the ground-state PEC provides similar insights to those reported in Ref.~\onlinecite{VanVoorhis_2000} in the case of VCCD, TCCD, and FCI. At the RHF level, the ground state and the lowest-lying excited state form a conical intersection. This is a drawback of the HF approximation as FCI produces an avoided crossing (HF and FCI energies are given in the \hyperlink{SI}{supplementary material}). In the seniority-zero subspace, the avoided crossing between these two states is not present. Indeed, as shown in the bottom-left panel of Fig.~\ref{fig:D2hH4}, the DOCI dashed curves are smooth. Yet, they do not form an avoided crossing. Then, we turn to the description of the excited states. The simplicity of the ring \ce{H4} model in a minimal basis allows us to access the entire set of solutions using the Jenkins-Traub algorithm (see Sec.~\ref{subsec:linearH4}). All the VpCCD solutions (with real cluster amplitudes) obtained with a ground-state RHF reference made of symmetry-adapted orbitals are represented in the center panel of Fig.~\ref{fig:D2hH4}, except for the quadruply-excited state which is plotted in the top-left panel. 
The convergence towards the ground state and the quadruply-excited state is fairly straightforward using the Newton-Raphson algorithm presented in Sec.~\ref{sec:ES}. However, the VpCCD solutions represented by the blue and cyan curves are the only two other solutions that we have been able to get for all $\theta$ values using the Newton-Raphson algorithm. In addition, we have obtained some parts of the three highest VpCCD PECs of Fig.~\ref{fig:D2hH4}, but the iterative algorithm was highly oscillatory and we have not been able to get any of these solutions for all values of $\theta$. The agreement between the VpCCD and DOCI excited states is less evident than for the linear \ce{H4} model studied in Sec.~\ref{subsec:linearH4}. The first important point to mention here is that there are more VpCCD than DOCI solutions, for all values of $\theta$. More importantly, as we shall discuss below, it is challenging to tell which of these solutions are unphysical. We believe that three VpCCD solutions can be assigned to DOCI states with certainty: the ground state as well as the pink (quadruply-) and cyan (doubly-)excited states. However, even if the VpCCD solution corresponding to the quadruply-excited state (top-left panel) is attainable for all $\theta$, it is a poor approximation to its DOCI counterpart, exhibiting a cusp and a local maximum at $\theta=\SI{90}{\degree}$. Meanwhile, the two highest-lying DOCI doubly-excited states could correspond to some of the three VpCCD solutions (see top-right panel of Fig.~\ref{fig:D2hH4}). One could argue that the brown curve should be associated with the dashed brown curve, but for the two other VpCCD solutions it is hard to tell which one is unphysical. Finally, one can see in the bottom-left panel of Fig.~\ref{fig:D2hH4} that the lowest-lying doubly-excited state could be associated with two VpCCD solutions. However, these solutions eventually disappear around $\theta=\SI{80}{\degree}$ and $\theta=\SI{100}{\degree}$. 
Similarly to the spurious solutions in the linear \ce{H4} model, it is possible that beyond this region the two solutions acquire complex-valued cluster amplitudes. Even for the part of the PECs where the solutions are real, the agreement with DOCI is quite poor (except around $\theta=\SI{90}{\degree}$ for the lower VpCCD solution), the PECs having the wrong topology. Alternatively, one could argue that the green VpCCD solution is associated with the lowest-energy DOCI excited state because it has the same topology as the first RHF excited state. Moreover, this solution exists for all geometries, in contrast with the blue and yellow ones. We think that, at this stage, it would be arbitrary to assign a particular VpCCD solution to this DOCI state. \begin{figure} \includegraphics[width=\linewidth]{Fig7} \caption{Energies (in hartree) of the ring \ce{H4} model in the STO-6G basis set as functions of $\theta$ (in degree) for the four lowest VpCCD solutions obtained with two symmetry-pure RHF references: the ground-state determinant of configuration $a_{1g}^2 b_{2u}^2$ and the lowest excited-state determinant of configuration $a_{1g}^2 b_{3u}^2$. Note that the dashed lines are used only for readability.} \label{fig:MOMring} \end{figure} We now compare the previous set of VpCCD solutions with the ones obtained using non-Aufbau reference determinants made of the same set of symmetry-adapted orbitals. More specifically, we consider the lowest-lying RHF excited state of configuration $a_{1g}^2b_{3u}^2$, \textit{i.e.}, the other adiabatic state involved in the conical intersection with the $a_{1g}^2b_{2u}^2$ RHF ground state (see the \hyperlink{SI}{supplementary material}). At $\theta=\SI{90}{\degree}$, these two configurations become degenerate, and the choice of the $e_{g}$ orbital to occupy is arbitrary. Therefore, it seems logical to compare these two closely related references as a function of $\theta$. 
This may shed light on the meaning of the VpCCD solutions observed in Fig.~\ref{fig:D2hH4}. The four lowest VpCCD solutions of Fig.~\ref{fig:D2hH4}, \textit{i.e.}, the solutions obtained using the ground-state RHF determinant of configuration $a_{1g}^2b_{2u}^2$ as reference, are also reported in Fig.~\ref{fig:MOMring} alongside the four lowest solutions obtained using the lowest-lying RHF excited state ($a_{1g}^2b_{3u}^2$) as reference. One can see that three $a_{1g}^2b_{3u}^2$-VpCCD solutions (dashed) are connected with three $a_{1g}^2b_{2u}^2$-VpCCD solutions (solid) at $\theta=\SI{90}{\degree}$, while the remaining pair of solutions coincide between $\theta=\SI{80}{\degree}$ and $\theta=\SI{100}{\degree}$. For other $\theta$ values, the $a_{1g}^2b_{2u}^2$-VpCCD solution disappears. As already mentioned earlier, the two lowest diabatic RHF states, $a_{1g}^2b_{2u}^2$ and $a_{1g}^2b_{3u}^2$, intersect at $\theta = \SI{90}{\degree}$. Hence, the lowest adiabatic RHF state has a cusp at $\theta = \SI{90}{\degree}$. Cusps observed in ground-state CC PECs are often claimed to be unphysical. \cite{VanVoorhis_2000,Bulik_2015} However, it has been pointed out by Burton and Thom that these cusps are not unphysical but are consequences of the RHF reference used to construct the corresponding CC wave functions. \cite{Burton_2016} They further argued that these cusps indicate crossing of solutions at the CC level. This is indeed what we observe in Fig.~\ref{fig:MOMring} where the cusps on the VpCCD PECs are actually formed by two VpCCD solutions obtained with distinct reference RHF wave functions. In short, the inherent single-reference character of pCCD prevents it from correctly describing the FCI avoided crossing. On the other hand, a non-orthogonal CI (which is inherently multi-reference) between the two RHF states reproduces the correct shape of the PEC. 
\cite{Burton_2016} We should also mention that the projected CC method introduced by Qiu \textit{et al.}, in which one constructs a CCSD wave function on top of a projected HF reference, does not exhibit such a cusp. \cite{Qiu_2017} \begin{figure*} \includegraphics[width=0.48\textwidth]{Fig8a} \includegraphics[width=0.5\textwidth]{Fig8b} \caption{Left: Energies (in hartree) of the ring \ce{H4} model in the STO-6G basis set at various correlated levels (VpCCD, TpCCD, and DOCI) and for various orbital sets (see right panel) as functions of $\theta$ (in degree). Right: Orbital representation of the set of symmetry-adapted (sa) orbitals and the two sets of symmetry-broken (sb) orbitals. For each set of orbitals, the reference determinant is built from the two leftmost orbitals. The irreducible representation of each orbital in the corresponding point group of the electron density is given in parentheses.} \label{fig:sbring} \end{figure*} As stated earlier, the $D_{2h}$ molecular orbitals are fully determined by the spatial symmetry of the system. The corresponding set of symmetry-adapted (sa) orbitals is represented in Fig.~\ref{fig:sbring} and ordered by ascending energies. However, one may wonder if there exist solutions associated with (spatial) symmetry-broken orbital sets. A stability analysis in the space of real RHF solutions reveals that the symmetry-pure ground-state RHF solution is a minimum with respect to occupied-virtual rotations. \cite{Seeger_1977,Fukutome_1981,Stuber_2003} Thus, there is, a priori, no spatially symmetry-broken RHF solution lower in energy. Next, we study the influence of orbital rotations on the VpCCD energy for the ground state of the ring \ce{H4} model. The diagonalization of the orbital Hessian [see Eq.~\eqref{eq:orbGradHess}] associated with this solution shows that this stationary point is an index-2 saddle point. Therefore, there is at least one additional state below the sa-VpCCD ground-state PEC. 
Indeed, by following the direction provided by the eigenvectors associated with the two distinct negative eigenvalues, we have been able to locate two additional VpCCD solutions. The first one, labeled as ``sb$_{1}$'' (symmetry-broken) in Fig.~\ref{fig:sbring}, results from occupied-occupied and virtual-virtual rotations. Let us recall that, although the HF energy is invariant under occupied-occupied and virtual-virtual rotations, this is not the case for pCCD as the seniority-zero subspace depends on the orbital basis used to define it. \cite{Bytautas_2011} The direction associated with the second negative eigenvalue involves occupied-virtual rotations. Going downhill following the eigenvector associated with this eigenvalue leads to a different spatially symmetry-broken solution (see the ``sb$_{2}$'' orbital set in Fig.~\ref{fig:sbring}). The point group of the electron density associated with each solution and the irreducible representation of each orbital are also given in the right panel of Fig.~\ref{fig:sbring}. A stability analysis of these two additional solutions shows that they are minima with respect to orbital rotations. Note that, for each set of symmetry-broken orbitals, four of the six possible reference determinants yield the same correlated energies. The two other references, $1a_{1}^21b_{2}^2$ and $2a_{1}^22b_{2}^2$ for the sb$_{1}$ set and $1a_{g}^22b_{u}^2$ and $1b_{u}^22a_{g}^2$ for the sb$_{2}$ set, yield energies close to the quadruply-excited state obtained with symmetry-adapted orbitals. The left panel of Fig.~\ref{fig:sbring} shows that the agreement between VpCCD and DOCI is much better when one considers the two sets of symmetry-broken orbitals. In addition, we emphasize that the sb$_{1}$-DOCI/VpCCD PECs (which exhibit a cusp at $\theta = \SI{90}{\degree}$) are fairly good approximations of the ground-state FCI PEC while containing only seniority-zero determinants. 
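The downhill eigenvector following used above to locate the sb$_1$ and sb$_2$ solutions can be sketched as a single displacement step. The quadratic toy energy below is our own illustration of an index-2 saddle point, not the actual orbital-rotation landscape:

```python
import numpy as np

def downhill_step(params, hess, which=0, step=0.1):
    """Displace a stationary point along one downhill direction.

    `which` selects among the negative eigenvalues of the (orbital) Hessian;
    following each such eigenvector separately from an index-2 saddle point
    is how two distinct lower-lying solutions can be located.
    """
    w, v = np.linalg.eigh(hess)  # eigenvalues in ascending order
    neg = np.where(w < 0.0)[0]   # indices of the descent directions
    direction = v[:, neg[which]]
    # At a stationary point the gradient vanishes, so either sense of the
    # step goes downhill along a negative-curvature direction.
    return params + step * direction

# Toy index-2 saddle: E(x, y, z) = -x^2 - y^2 + z^2 at the origin.
hessian = np.diag([-2.0, -2.0, 2.0])
x0 = np.zeros(3)
x1 = downhill_step(x0, hessian, which=0)  # the energy strictly decreases
```

In practice one would then restart the (orbital) optimization from the displaced point `x1` and converge to the lower-lying minimum, which is the role played here by the subsequent VpCCD orbital optimization.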
Likewise, the cuspless sb$_{2}$-DOCI/VpCCD PECs are close in energy to the lowest-lying FCI excited-state one. The downside is that the corresponding wave functions do not possess the correct spatial symmetry. This is the famous L\"owdin symmetry dilemma. \cite{Lowdin_1963,Lykos_1963,Lowdin_1969} Moreover, for these two orbital sets, the TpCCD energies are in good agreement with DOCI (see the \hyperlink{SI}{supplementary material}). Yet, VpCCD is closer to DOCI than TpCCD, as already seen in the linear \ce{H4} case [see Sec.~\ref{subsec:linearH4}]. The cusps exhibited by the sb$_1$-DOCI/VpCCD PECs are also due to a crossing between two diabatic states. More specifically, they originate from a change of the axis along which the $D_{2h}$ symmetry breaks, leading to two different $C_{2v}$ subgroups related by a rotation of $\pi/2$. Unfortunately, we have not been able to converge to the higher-lying symmetry-broken $C_{2v}$ state. One may have noticed that we have not plotted the excited-state TpCCD energies in Fig.~\ref{fig:D2hH4}. In fact, TpCCD suffers from the same issues related to additional solutions and their physical meaning. Similarly to VpCCD, projection on non-Aufbau references leads to moderate improvements. Yet, the TpCCD energy landscapes remain plagued by unphysical solutions. Consequently, it is hardly possible to assign a TpCCD solution to a given DOCI excited state, as discussed in the case of VpCCD. Finally, we would like to mention that the improvement of VpCCD/TpCCD brought by state-specific reference wave functions is more limited than in the case of the linear \ce{H4} molecule. Therefore, it seems that state-specific (MOM or oo-pCCD) references provide a very significant improvement for weak correlation, but do not help much in the presence of strong correlation. In such a case, if one is willing to sacrifice the spatial symmetry of the wave function, the description of the ground state (at least) can be improved. 
Symmetry-broken excited-state wave functions also exist at the VpCCD level but we have struggled to systematically converge towards these solutions. Hence, their performance still needs to be properly assessed. This is left for future work. \section{Conclusion} \label{sec:ccl} Recently, there has been a renewed interest in single-reference methods for excited states in the context of Hartree-Fock, density-functional, and coupled-cluster theories.\cite{Gilbert_2008,Thom_2008,Barca_2014,Barca_2018a,Barca_2018b,Mayhall_2010,Lee_2019,Zhao_2016a,Ye_2017,Shea_2018,Thompson_2018,Ye_2019,Tran_2019,Burton_2019c,Zhao_2020,Hait_2020,Hait_2020b,Levi_2020a,Levi_2020b,Dong_2020,Hait_2021,Burton_2021,Kossoski_2021} This has been made possible thanks to the development of new algorithms specifically designed to target higher-energy solutions of these non-linear equations. These so-called non-standard solutions provide genuine alternatives to the usual linear response and equation-of-motion formalisms (which are naturally biased towards the reference ground state) for the determination of accurate excited-state energies in molecular systems. This is especially true for double excitations, which are known to be difficult to model with the latter two formalisms. \cite{Hirata_2000,Sundstrom_2014,Watson_2012,Loos_2018b,Loos_2019c,Loos_2020c,Loos_2020d,Veril_2021} There is, therefore, a real need for a better understanding of the structure of the energy landscape associated with these methods. In this study, we have focused on the case of CC. Due to the non-linearity of the CC equations, the topology of its energy landscape from which multiple solutions emerge is still far from being thoroughly understood. During the last decades though, several groups have been tackling this formidable problem. 
\cite{Piecuch_2000,Mayhall_2010,Lee_2019,Kossoski_2021,Csirik_2021} In a recent study, \cite{Kossoski_2021} we have continued along these lines by investigating the structure of the CC energy surface and the comparison between DOCI and TpCCD for excited states. More specifically, we have shown that the agreement which has been observed for ground-state energies \cite{Henderson_2014a,Henderson_2014b,Henderson_2015,Shepherd_2016} remains in the case of excited states only if one minimizes the TpCCD energy with respect to the orbital coefficients. In the present study, we have investigated the solution structure of the VCC method, a version of CC where the cluster amplitudes and the energy are determined variationally rather than in the usual projective way. To the best of our knowledge, VCC excited states have never been investigated before. Restricting ourselves to the case of pCCD (in which the cluster operator includes only pair excitations), we have looked at the VpCCD solution structure of two model systems, namely the linear and ring \ce{H4} molecules, both in the minimal STO-6G basis. The former system has been used to investigate the influence of the orbital set on the VpCCD and TpCCD energy landscapes. In contrast to TpCCD, VpCCD provides a much better approximation to the DOCI solution structure when one builds the reference determinant with ground-state RHF orbitals, at least in the weak correlation regime. When the correlation becomes strong, \textit{i.e.}, when the hydrogen chain is stretched, additional spurious solutions appear due to the truncation of the excitation operator $\Hat{T}$, and VpCCD does not seem to improve with respect to TpCCD. In either regime, however, these excited-state solutions are hardly attainable using an iterative Newton-Raphson algorithm. Therefore, we replaced the ground-state RHF reference wave function by state-specific excited-state RHF references computed with MOM and targeted the corresponding VpCCD solution for each of them. 
We have observed that these state-specific references enlarge the basin of attraction of their associated solution, hence easing the convergence of the Newton-Raphson algorithm towards the targeted VpCCD solution. In addition, considering state-specific RHF orbitals greatly improves the TpCCD results for excited states. However, the difference between TpCCD and DOCI energies remains roughly one order of magnitude larger than the one between VpCCD and DOCI. Then, we have turned our attention to the situation where the reference orbitals are optimized at the correlated level. In the weak correlation regime, the agreement between DOCI and the two variants of pCCD (TpCCD and VpCCD) is only slightly better than with MOM orbitals. However, in the strong correlation regime, the orbital optimization procedure allows the orbitals to localize further (while keeping their spatial symmetry), hence improving the accuracy of VpCCD and TpCCD (with respect to DOCI) at large internuclear separation. The take-home message of this first part is that TpCCD energies computed with state-specific RHF orbitals provide a good balance between robustness and computational cost to describe excited states, at least in the weak correlation regime. Of course, further studies on real molecules are required to assess the accuracy of these methods. In a second stage, we have studied the ring \ce{H4} molecule to investigate the influence of strong correlation on the energy landscape. We have seen that spurious VpCCD solutions, due to the truncation of the cluster operator, seem unavoidable in the presence of strong static correlation. Therefore, the description of excited states is much less accurate than in the weak correlation regime. Even worse, these spurious solutions prevent an unambiguous assignment of (some of) the excited states. This problem remains if one considers state-specific references at the VpCCD level. TpCCD suffers from the same issues, but in a more severe way. 
In addition, inspired by Burton and Thom, \cite{Burton_2016} we have investigated the physical origin of the cusps of the PEC at the VpCCD level. In agreement with Burton and Thom, \cite{Burton_2016} we have shown that, at the VpCCD level, these cusps are due to crossing of diabatic states obtained with distinct reference determinants. Finally, we have investigated spatially symmetry-broken VpCCD solutions of the ring \ce{H4} molecule. In a minimal basis set, the symmetry-adapted molecular orbitals are completely determined by the $D_{2h}$ symmetry of the system. Yet, one can deliberately break this symmetry to relax the constraints imposed on the molecular orbitals. Doing so, we have shown that it is possible to locate two symmetry-broken VpCCD ground-state wave functions with energies in much better agreement with FCI, improving in the process the agreement between DOCI and VpCCD. \begin{acknowledgements} The authors thank Hugh G.~A.~Burton for insightful discussions on the energy landscape of coupled-cluster methods. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481). \end{acknowledgements} \section*{Data availability statement} The \hypertarget{SI}{data} that support the findings of this study are openly available in Zenodo at \href{http://doi.org/10.5281/zenodo.4971905}{http://doi.org/10.5281/zenodo.4971905}.
Q: Minimize Google Chrome instead of closing window when X button is clicked I want to let Chrome always stay in memory and not be closed. Is there an extension / trick / hack to change the X (close) button function in Google Chrome from closing the window to minimizing it to the taskbar?
Q: Indispensable flags for compiling and running Java code I've started learning Java today, using the current JDK for compiling/running and Notepad++ as editor. I don't want to use IDEs until I've understood how things work by doing them myself rather than clicking some button, so I'm using shell commands for compiling and running my first simple programs. Unfortunately, most tutorials I've found cover examples using NetBeans or Eclipse, so at the end some button gets pressed and the magic begins. I understand what happens when I compile code with javac and run it in the JVM, but I need serious clues to know whether it happens in the right way. I know that flags are very suitable for that purpose and allow me to control nearly all steps of the compilation process done by javac and during runtime by the JVM, but the official list of flags provided by Oracle is absolutely overwhelming. So, as suggested by the title, I'm searching for some reference mentioning and explaining the most important flags which are necessary to compile and run Java code securely and stably. Anything I've found till now either doesn't care about this, uses the default configuration of an IDE, or deals with flags on a level far above mine, so I decided to ask here for further advice. A: If you simply type javac or java and hit Enter, you will get all the options they use. Your output will be something like this:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\StackOverflow>java
Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)
where options include:
    -d32          use a 32-bit data model if available
    -d64          use a 64-bit data model if available
    -client       to select the "client" VM
    -server       to select the "server" VM
    -hotspot      is a synonym for the "client" VM  [deprecated]
                  The default VM is client.
    -cp <class search path of directories and zip/jar files>
    -classpath <class search path of directories and zip/jar files>
                  A ; separated list of directories, JAR archives,
                  and ZIP archives to search for class files.
    -D<name>=<value>
                  set a system property
    -verbose:[class|gc|jni]
                  enable verbose output
    -version      print product version and exit
    -version:<value>
                  require the specified version to run
    -showversion  print product version and continue
    -jre-restrict-search | -no-jre-restrict-search
                  include/exclude user private JREs in the version search
    -? -help      print this help message
    -X            print help on non-standard options
    -ea[:<packagename>...|:<classname>]
    -enableassertions[:<packagename>...|:<classname>]
                  enable assertions with specified granularity
    -da[:<packagename>...|:<classname>]
    -disableassertions[:<packagename>...|:<classname>]
                  disable assertions with specified granularity
    -esa | -enablesystemassertions
                  enable system assertions
    -dsa | -disablesystemassertions
                  disable system assertions
    -agentlib:<libname>[=<options>]
                  load native agent library <libname>, e.g. -agentlib:hprof
                  see also, -agentlib:jdwp=help and -agentlib:hprof=help
    -agentpath:<pathname>[=<options>]
                  load native agent library by full pathname
    -javaagent:<jarpath>[=<options>]
                  load Java programming language agent, see java.lang.instrument
    -splash:<imagepath>
                  show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.

Similarly, you'll get the output for javac.
Anti-mondes is a science-fiction collection from the publisher éditions OPTA, edited by Michel Demuth, which published 34 volumes from 1972 to 1977. It alternated original American works with reissues of titles previously published in the same publisher's Galaxie-bis collection. The covers were produced by well-known illustrators such as Moebius and Philippe Caza.

Titles
The first eight titles are given in order of publication, but the volumes themselves carry no numbers.

L'Île des morts by Roger Zelazny - cover by Moebius
La Tour de verre by Robert Silverberg, 1972 - cover by Wojtek Siudmak
Mechasme by John T. Sladek
Message de Frolix 8 by Philip K. Dick, 1972 - cover by Philippe Caza
L'Envol de la locomotive sacrée by Richard A. Lupoff - cover by Wojtek Siudmak
Les Quatrièmes Demeures by Raphael A. Lafferty - cover by Jean Solé
Le Faiseur d'univers by Philip Jose Farmer
Après la déglingue by Ron Goulart, 1973 - cover by Philippe Caza
Rêve de fer by Norman Spinrad
Génies en boîtes by Fritz Leiber - cover by Jean Solé
Le Temps des changements by Robert Silverberg, 1974 - cover by Hampton
Le Dieu venu du Centaure by Philip K. Dick - cover by Philippe Caza
L'Image au miroir by Michael Coney, 1974 - cover by Wojtek Siudmak
La Semence du démon by Dean R. Koontz, 1974 - cover by Bernard Moro
Les Triffides by John Wyndham - cover by Moebius
L'Effet Muller-Fokker by John T. Sladek
Frankenstein délivré by Brian W. Aldiss
Les Portes de la création by Philip Jose Farmer - cover by Bernard Moro
Maître des arts by William Rotsler - cover by Georges Raimondo
La Quête de la Sainte Grille by Robert F. Young - cover by Loro
L'Homme infini by Daniel Galouye - cover by Jean-Claude Hadi
Zodiacal by Piers Anthony - cover by Romain Slocombe
Le Seigneur des airs by Michael Moorcock - cover by Cathy Millet
La Chair dans la fournaise by Dean R. Koontz
La Fin du rêve by Philip Wylie
King Kong blues by Sam J. Lundwall - cover by Bernard Moro
Orbitville by Bob Shaw - cover by Didier Gaillard
La Guerre éternelle by Joe Haldeman
Le Léviathan des terres by Michael Moorcock
Un spectre hante le Texas by Fritz Leiber - cover by Thierry Leroux
Le Disparu by Katherine MacLean - cover by François Allot
L'Intersection Einstein by Samuel R. Delany - cover by Didier Gaillard
Le Grand Chalabala by Jean-Baptiste Baronian
Morituri by Michael Kurland - cover by Philippe Caza
Q: Python3 not accessing requests module I've installed the requests module for Python3 on my system, and it appears to have installed completely fine. When I run a script that uses said package in PyCharm with the Python3 interpreter, it runs without a problem. However, when executed outside this environment, this error pops up: ImportError: no module named requests This happens despite PATH containing Python34, which invokes correctly when called via cmd, and despite me double-checking the installation via pip. Is there any possible area you could point me to that could resolve this problem? Thanks in advance. A: Maybe you have two Pythons installed. One is used by PyCharm (and it has requests), and the second is used in cmd. Besides, pip can belong to the Python used by PyCharm, not the Python used in cmd.
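One way to confirm this diagnosis is to ask each interpreter where it lives and what it can import. A minimal sketch (the helper name is illustrative, not a standard API):

```python
import importlib.util
import sys

def module_available(name):
    """Return True if `name` can be imported by the interpreter
    that is running this script (not by some other installed Python)."""
    return importlib.util.find_spec(name) is not None

# Compare this path with the interpreter PyCharm shows under
# Settings > Project Interpreter; a mismatch explains the ImportError.
print(sys.executable)
print(module_available("json"))      # stdlib module: importable in any Python
print(module_available("requests"))  # False when installed into another Python
```

Running this once from PyCharm and once from cmd makes the two interpreter paths visible side by side; installing with `python -m pip install requests` against the interpreter that printed `False` ties the package to the right Python.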
SCHOOL OFFICIALS DEFEND SCORES ON STATEWIDE TEST
LORETTA WALDMAN; Courant Staff Writer, The Hartford Courant

The percentage of town 10th-graders reaching goal on the Connecticut Academic Performance Test fell slightly this year in math and science. And while the percentage of students achieving goal in reading and writing increased over last year, the results in writing still lagged behind percentages statewide and in districts with similar socio-economic features.

While concerned about losing ground, school administrators say that direct comparisons are difficult between two distinct groups of students. Despite the declines, they say there is much to be encouraged about in this year's results. The percentage of students needing intervention, for one, declined on all four sections of the test, they said. And the percentage of students considered proficient -- either those who achieved goal or scored in the range just short of the goal -- is substantial and continues to rise, they said.

"For us, that was the planned focus, looking at students at the borders," said RoseMarie Cipriano, principal of Plainville High School. The ultimate expectation, of course, is to reach goal, Cipriano said, "but we need to get [students] out of the intervention level in order to move them up."

Administrators suspect changes to the language arts curriculum six or seven years ago may at last be bearing fruit. Results on the reading section of the test showed dramatic improvement, with students achieving goal this year over last increasing nearly 12 percentage points. On the writing section, students achieving goal climbed nearly 11 percentage points over last year. Math has long been recognized as an area of weakness, Cipriano said, and changes to that curriculum were made following a comprehensive review last year. 
They include new elementary level math textbooks, improvements in professional development and increased emphasis on basic skills such as fractions, estimating and problem solving. Linda VanWagenen, director of curriculum, instruction and assessment, is working to better match middle school and high school math instruction.

"We strive to continuously improve our curriculum," said Superintendent Kathleen Binkowski, noting that instructional leaders are in the process of creating action plans and are expected to submit recommendations within the next week. Binkowski said she hopes to include them in a report on the CAPT to the school board Monday. "We try to get very specific when there is a common problem and ask, 'What was it [students] didn't learn that we need to be stressing?'" added Cipriano, who will make the presentation to the school board.

CAPT Scores
Below are the percentages of high school sophomores who met the state goal on the Connecticut Academic Performance Test in 2001 and 2002, locally and in the state as a whole.

Math        2001   2002
PLAINVILLE  39.7   38
State       44.6   44.1

Science     2001   2002
PLAINVILLE  43.2   39.2

Reading     2001   2002

Writing     2001   2002
Join the HEMA band for their free spring concert! Bring the family and see how music education can be easy and affordable, and how it can fit into your homeschool plan. When: May 6, 2019 Time: 7:00 p.m. Location: Dearborn FMC, 2801 Telegraph Rd.
Q: Why is reading /proc faster than other file systems? As we know, some file systems are faster than others. But I have always wondered: why is the proc file system faster than the others? A: The /proc/ file system (read proc(5)) is indeed a pseudo-file system, without real files on any hard disk, so reading it is quick (and can be faster than reading a file on a spinning hard disk). For example, you could write some C code fopen-ing /proc/self/maps, looping over every line using fgets, showing each line on your stdout, and finally fclose-ing it. See this. On Linux, /proc/ is a convenient way to query the kernel about the operating system's state. You generally read (not write) pseudo-files from it. Try also cat /proc/$$/status and cat /proc/self/maps in some terminal, and think a bit to understand the output. BTW, if you want to do some IO quickly on files of reasonable size, put them on some tmpfs filesystem (which would be lost at shutdown time, and has some limitations).
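The same read-a-pseudo-file idea sketched in C above can be mirrored in Python. This is an illustrative sketch (the helper name and dict layout are mine, not from the answer); it parses /proc/self/status and degrades gracefully on systems without procfs:

```python
import os

def read_proc_status(path="/proc/self/status"):
    """Parse the 'Key:<TAB>value' lines of a /proc status pseudo-file
    into a dict; returns {} on systems without procfs (e.g. macOS)."""
    fields = {}
    if not os.path.exists(path):
        return fields
    with open(path) as fh:  # procfs reads are served from kernel memory, no disk I/O
        for line in fh:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return fields

status = read_proc_status()
print(status.get("Name", "procfs not available here"))
```

Because the kernel generates these files on demand, the open/read/close cycle never touches a disk, which is exactly why such reads feel fast.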
Data loss is no joke. Keep it safe with T2. A solid backup and disaster recovery plan is essential for long-term business success. In the event of a disaster, your company can't afford to lose valuable data and applications. Ensuring your data is protected, T2 Data Protection is a fully automated and managed backup service, designed to protect files, applications, and entire servers. We don't just give you the tools - we do the work. "We were running out of space and we knew it. By looking to Tabush to help us solve this issue, we knew we were in good hands. Our days, quite simply, just run much smoother. The fear of losing our data is gone and now we can focus on our business." Built on enterprise-level technology in use by over 35,000 businesses, utilizing Tabush Group's rock solid infrastructure, and backed by our team of experts, our T2 Data Protection solution is the best way to protect business-critical systems in today's environment, with security to keep your important data safe. Our clients' data gets the treatment it deserves. Three geographically dispersed datacenters meet ISO 9001:2000 certification and SSAE-16 compliance, and four levels of security (including military-grade encryption) keep your data safe from beginning to end, with built-in redundancies for disaster protection. We ease your burdens by managing the backups for you. Daily monitoring by our team of experts ensures that we always have current and reliable backups. Failproof recovery means you can always get your data back. We have proven it dozens of times, reliably and quickly restoring single files, critical databases, and entire systems. Today, a business will suffer from data loss. Don't let it be yours. Learn more about our T2 Data Protection solution.
Seoul to operate 'walk-thru' COVID-19 test centers for all entrants living in Seoul Updated: 2020-04-02 17:16:34 KST Novel coronavirus cases continue to be found among new arrivals coming into the nation. South Korea's health authorities say 8-percent of those cases have caused secondary infections, stressing the need for stricter self-quarantining. As of Thursday, over 9,9-hundred people in South Korea have been infected with the virus, roughly 6-hundred of them being imported cases. "For the past two weeks, there have been 508 imported cases of COVID-19 and 41 of them have led to secondary infections. That's about 8-percent." The official added that the virus can spread to another person before AND after the infected experiences symptoms and therefore, stricter self-quarantine measures are all the more necessary. To curb secondary infections from imported cases, the capital city of Seoul will be running walk-through coronavirus test centers at Jamsil Sports Complex. Starting Friday, all incoming Seoul residents will be tested on their first day of arrival. Up to 1-thousand people can be tested per day and shuttle buses from the airport will carry entrants to the test centers. As concerns linger surrounding the number of daily new infections, many are questioning whether it is right to end the social distancing campaign on April 5th. "We can't infinitely delay our daily lives and I am aware that people are becoming exhausted. However, the virus is spreading at an unprecedented speed on a global level and we continue to see imported cases and cluster infections. So alleviating social distancing could restart rapid spread of the virus." The government says it will announce this week whether social distancing will continue on or not. Oh Jung-hee, Arirang News. Reporter : jhlucyoh@arirang.com
<ion-view id="mercadolibre" hide-tabs>
  <ion-nav-bar class="bar-royal">
    <ion-nav-back-button>
    </ion-nav-back-button>
    <ion-nav-title>
      {{'Buy'|translate}}
    </ion-nav-title>
  </ion-nav-bar>
  <ion-content class="add-bottom-for-cta">
    <!-- BUY -->
    <div class="list">
      <div class="item head">
        <div class="sending-label">
          <i class="icon big-icon-svg">
            <div class="bg icon-amazon"></div>
          </i>
          <span>Mercado Livre Brazil Gift Cards</span>
        </div>
        <div class="amount-label">
          <div class="amount">{{amountUnitStr}}</div>
        </div>
      </div>
      <div class="info">
        <div class="item item-icon-right" ng-click="showWalletSelector()">
          <div class="label" translate>From</div>
          <div class="wallet">
            <i class="icon big-icon-svg">
              <img src="img/icon-wallet.svg" ng-class="{'wallet-background-color-default': !wallet.color}" ng-style="{'background-color': wallet.color}" class="bg">
            </i>
            {{wallet ? wallet.name : '...'}}
          </div>
          <i class="icon bp-arrow-right"></i>
        </div>
        <div ng-show="totalAmountStr">
          <div class="item item-divider" translate>
            Details
          </div>
          <div class="item">
            <span translate>Gift card</span>
            <span class="item-note">
              {{amount | currency:'$ ':2}}<span ng-if="amount"> {{currencyIsoCode}}</span>
            </span>
          </div>
          <div class="item">
            <span translate>Network Cost</span>
            <span class="item-note">
              {{invoiceFee | currency:'$ ':2}}<span ng-if="invoiceFee"> {{currencyIsoCode}}</span>
            </span>
          </div>
          <div class="item">
            <span translate>Miner Fee</span>
            <span class="item-note">
              {{networkFee | currency:'$ ':2}}<span ng-if="networkFee"> {{currencyIsoCode}}</span>
            </span>
          </div>
          <div class="item">
            <span translate>Total</span>
            <span class="item-note">
              <span ng-if="totalAmount">{{totalAmount | currency:'$ ':2}} {{currencyIsoCode}}</span>
              <span ng-if="totalAmountStr">({{totalAmountStr}})</span>
            </span>
          </div>
        </div>
      </div>
    </div>
  </ion-content>
  <click-to-accept is-disabled="!wallet || !totalAmountStr" ng-click="buyConfirm()" ng-if="!isCordova" click-send-status="sendStatus">
    {{'Confirm purchase'|translate}}
  </click-to-accept>
  <slide-to-accept ng-if="isCordova && wallet && totalAmountStr" slide-on-confirm="buyConfirm()" slide-send-status="sendStatus">
    {{'Slide to buy'|translate}}
  </slide-to-accept>
  <slide-to-accept-success slide-success-show="sendStatus === 'success'" slide-success-on-confirm="goBackHome()" slide-success-hide-on-confirm="true">
    <span ng-show="mlGiftCard.status == 'FAILURE'">
      Your purchase could not be completed
    </span>
    <span ng-show="mlGiftCard.status == 'PENDING'">
      Your purchase was added to the list of pending
    </span>
    <span ng-show="mlGiftCard.status == 'SUCCESS' || mlGiftCard.status == 'active'">
      Bought {{mlGiftCard.amount}} {{mlGiftCard.currency}}
    </span>
    <div class="m10 size-14" ng-show="mlGiftCard.status == 'SUCCESS' || mlGiftCard.cardStatus == 'active'">
      Gift card generated and ready to use.
    </div>
  </slide-to-accept-success>
  <wallet-selector wallet-selector-title="walletSelectorTitle" wallet-selector-wallets="wallets" wallet-selector-selected-wallet="wallet" wallet-selector-show="showWallets" wallet-selector-on-select="onWalletSelect">
  </wallet-selector>
</ion-view>
\section{Introduction} The classical Liouville theorem, named after Joseph Liouville, states that any bounded holomorphic function on the complex plane $\mathbb C$ must be constant; the analogous statement holds for harmonic functions on $\mathbb R^n.$ It has had a huge impact across many fields, such as complex analysis, partial differential equations, and geometry. In 1975, S. T. Yau {\cite{Yau1}} proved that any bounded harmonic function on an $n$-dimensional manifold with nonnegative Ricci curvature must be constant, and later, he and S. Y. Cheng {\cite{ChengYau}} proved a gradient estimate on manifolds with Ricci curvature bounded below to give an effective version of the Liouville theorem; the gradient estimate method plays a crucial role in the study of harmonic functions on complete manifolds. Since Yau's seminal paper, the Liouville type property and the gradient estimate method have been generalized to many partial differential equations on manifolds (see e.g. \cite{Li, CM3} and the references therein for more information). { Let $(M^n,g)$ be a complete Riemannian manifold. Denote by $\mathcal H_d(M)$ the space of harmonic functions with polynomial growth of order at most $d$: \begin{align*} \mathcal H_d(M) = \{u:u(x) \text{ is harmonic on } M^n; \sup_{B_p(r)}|u(x)|\leq C(1+r)^d \text{ for some }C>0 \}, \end{align*} where $B_p(r)$ is the geodesic ball with radius $r$ centered at $p.$ In {\cite[1987]{Yau2}}, Yau proposed the following conjecture: \begin{conjecture}{\label{Yau}} Let $M$ be a complete manifold with nonnegative Ricci curvature, then $\text{dim}(\mathcal H_d(M))<\infty$ for any $d>0.$ \end{conjecture} This conjecture can be regarded as a more general Liouville property. The conjecture was first proved by P. Li and L.-F. Tam for the case $d=1$ in \cite{Li-Tam89}, and for the $2$-dimensional case ($n=2$) in \cite{Li-Tam91}. T. H. Colding and W. P. Minicozzi ({\cite{CM1,CM2}}) completely proved the conjecture in 1997. 
In {\cite{CM1}}, they proved that $\mathcal H_d(M)$ is finite-dimensional under weaker assumptions: $M$ is a complete manifold satisfying a volume doubling bound and a scale-invariant Poincar\'e inequality. One can see more in {\cite{CM3}}. In fact, the study of $\mathcal H_d(M)$ has been further developed, see e.g. \cite{Xu,Ding,Huang} for the existence of non-constant functions in $\mathcal H_d(M)$, and see \cite{Liu, Ni} for the study of Yau's conjecture in complete K\"ahler manifolds. } {In this paper, we consider a Liouville theorem and the space of harmonic functions with polynomial growth on complete gradient Ricci solitons with constant scalar curvature. In fact, the study of Liouville type theorems and {\bf Conjecture \ref{Yau}} on gradient shrinking Ricci solitons has also been considered by several researchers recently. To state the results, we introduce some notations.} A complete Riemannian manifold $(M^n, g)$ is a gradient shrinking Ricci soliton if there exists a smooth function $f$ on $M$ {satisfying} the equation \begin{eqnarray*} \mathrm{Ric}+\nabla\nabla f={\frac{1}{2}}g, \end{eqnarray*} where $\mathrm{Ric}$ is the Ricci tensor and $\nabla\nabla f$ is the Hessian of the function $f$. The function $f$ is called a potential function of the gradient shrinking Ricci soliton. Ricci solitons can be seen as natural extensions of Einstein metrics. Obviously, if $f$ is a constant function, the gradient Ricci soliton is simply an Einstein manifold. On the other hand, gradient Ricci solitons are also self-similar solutions to Hamilton's Ricci flow and play an important role in the study of the formation of singularities in the Ricci flow. We {refer to} {\cite{Cao}} for a nice survey on the subject. {For shrinking Ricci solitons, many people studied $f$-harmonic functions, that is, functions $u$ such that $\triangle u-\langle\nabla u, \nabla f\rangle=0$. H. Ge and S. Zhang {\cite{GZ}} showed that any positive $f$-harmonic function on shrinkers is constant. 
However, for standard harmonic functions on gradient shrinking Ricci solitons, it is not known whether every bounded harmonic function is constant. In {\cite{MS}}, O. Munteanu and N. Sesum proved that any harmonic function $u$ with $\int_{M}|\nabla u|^2dv<\infty$ on a gradient shrinking K\"ahler-Ricci soliton is constant. } {As an analogue of Yau's conjecture ({\bf Conjecture \ref{Yau}}), the following question is natural: \begin{question}\label{Yau_s} Let $M$ be a complete shrinking Ricci soliton. Is $\text{dim}(\mathcal H_d(M))<\infty$ for any $d>0$? \end{question} } {For this question, the following results are known. O. Munteanu and J. Wang {\cite{MW}} showed that the space of holomorphic functions with polynomial growth on K\"ahler-Ricci shrinkers has finite dimension. Recently, J.-Y. Wu and P. Wu {\cite{JWPW}} proved that the space of harmonic functions with polynomial growth on complete non-compact shrinkers with $R(x)d^2(x,o)\leq c_0$ has finite dimension, where $d(x,o)$ is the distance function from the point $x\in M$ to a fixed point $o\in M$.} { In this paper we will prove a Liouville theorem and affirmatively answer {\bf Question \ref{Yau_s}} on a gradient shrinking Ricci soliton with constant scalar curvature. Our main results are stated as follows. \begin{theorem}{\label{Liouville}} Let $(M, g, f)$ be a complete non-compact gradient shrinking Ricci soliton with constant scalar curvature. Any bounded harmonic function on $M$ is constant. \end{theorem} Since K\"ahler-Ricci solitons are also Ricci solitons, we have the following corollary. \begin{corollary} All bounded harmonic (holomorphic) functions on complete gradient shrinking K\"ahler-Ricci solitons with constant scalar curvature are constant. \end{corollary} Consequently, using Theorem {\ref{Liouville}}, we obtain a uniform doubling property of harmonic functions with polynomial growth. By the {argument} in {\cite{MW}}, we {prove} the following theorem. 
\begin{theorem}{\label{Dimension}} Let $(M, g, f)$ be a complete non-compact gradient shrinking Ricci soliton with constant scalar curvature, then $\text{dim}(\mathcal H_d(M))<\infty$ for any $d>0.$ \end{theorem} } Recently, Colding and Minicozzi {\cite{CM4}} considered solutions of the heat equation, which are often called caloric functions. We say $u\in \mathcal{P}_d(M)$ if $u$ is an ancient caloric function and, for some $p\in M$ and a constant $C_u$, $$\sup_{B_p(r)\times[-r^2,0]}|u|\leq C_u(1+r)^d,\ \forall r>0.$$ They proved that if a complete manifold $M$ has polynomial volume growth and $k$ is a nonnegative integer, then $$\dim \mathcal{P}_{2k}(M)\leq\sum^k_{i=0}\dim \mathcal{H}_{2i}(M).$$ By Theorem {\ref{Dimension}}, we obtain the following corollary. \begin{corollary} Let $(M, g, f)$ be a complete non-compact gradient shrinking Ricci soliton with constant scalar curvature, then $\text{dim}(\mathcal P_d(M))<\infty$ for any $d>0.$ \end{corollary} \begin{remark} { In many studies of Yau's conjecture ({\bf Conjecture \ref{Yau}}), the method of frequency functions plays a crucial role (see \cite{CM1,CM2,Xu}). In this paper our argument, including the proofs of Theorem \ref{Liouville} and Theorem \ref{Dimension}, is also based on the method of frequency functions. However, instead of using the frequency function given in the above references, our frequency function is from \cite{Zhu} (see also \cite{Ou}), which has its own advantages in computations. It should also be noted that J. Ou in {\cite{Ou}} proved a monotonicity result for the used frequency function on gradient shrinking Ricci solitons with constant scalar curvature, which plays a role in our proof of Theorem \ref{Dimension}.} \end{remark} \begin{remark} In fact, a lot of attention has been paid to the study of gradient Ricci solitons with constant scalar curvature in recent years. 
Petersen and Wylie {\cite{PW1}} defined a gradient Ricci soliton $(M,g,f)$ to be rigid if it is a flat bundle $N\times_{\Gamma}\mathbb{R}^k$, where $N$ is an Einstein manifold, $\Gamma$ acts freely on $N$ and by orthogonal transformations on $\mathbb{R}^k$, and $f=\frac{\lambda}{2}|x|^2$ on $\mathbb{R}^k$. They \cite{PW1} proved that a gradient Ricci soliton is rigid if and only if it has constant scalar curvature and is radially flat, i.e. $Rm(\nabla f, \cdot, \nabla f, \cdot)=0$. H.-D. Cao proposed the following conjecture: \begin{conjecture}[H.-D. Cao] \label{Caoconjecture} Gradient shrinking Ricci solitons with constant scalar curvature are rigid. \end{conjecture} Fern\'andez-L\'opez and Garc\'ia-R\'io \cite{FG} investigated gradient shrinking Ricci solitons with constant scalar curvature using isoparametric theory; they proved that the constant scalar curvature has to be $0,\ 2\lambda,\ ...,\ (n-1)\lambda$, or $n\lambda$. They \cite{FG} classified four-dimensional and six-dimensional gradient K\"ahler-Ricci solitons of constant scalar curvature. J.-Y. Wu, P. Wu and Wylie \cite{WWW} classified four-dimensional gradient K\"ahler-Ricci solitons with constant scalar curvature, independently. Finally, X. Cheng and D. Zhou {\cite{ChengZhou}} finished the classification of four-dimensional gradient shrinking Ricci solitons with constant scalar curvature. { Whether {\bf Conjecture {\ref{Caoconjecture}}} holds in higher dimensions is still unknown. In some sense Theorem \ref{Liouville} and Theorem \ref{Dimension} provide some positive information on this conjecture, since our results are expected if the conjecture has an affirmative answer.} \end{remark} {The paper is organized as follows.} We recall some properties of gradient shrinking Ricci solitons in Section 2. We will prove Theorem {\ref{Liouville}} in Section 3. Theorem {\ref{Dimension}} will be proved in Section 4. 
{\bf Acknowledgement} The second author thanks Professor Huai-Dong Cao and Professor Fei He for helpful conversations. The first author was supported in part by NSFC Grant No. 11901594. The second author was supported in part by the Fundamental Research Funds for the Central Universities (No. 20720220042). \section{Preliminaries} We recall some basic results for gradient shrinking Ricci solitons with constant scalar curvature. \begin{lemma}{(Hamilton \cite{Ham1})\label{Ham1}} Let $(M^n,g_{ij},f)$ be a complete gradient shrinking Ricci soliton. Then, we have \begin{eqnarray*} \nabla_i R=2R_{ij}\nabla_j f \end{eqnarray*} and \begin{eqnarray*} R+|\nabla f|^2-f=C_0 \end{eqnarray*} for some constant $C_0$. \end{lemma} \begin{lemma} [B.-L. Chen \cite{Chen}, Pigola-Rimoldi-Setti \cite{PRS}] Let $(M^n, g, f)$ be a complete gradient shrinking Ricci soliton. Then the scalar curvature is nonnegative. Moreover, the scalar curvature is positive unless $(M^n, g, f)$ is the Gaussian soliton $(\mathbb{R}^n, g_0, \frac{\lambda}{2}|x|^2)$. \end{lemma} In the constant scalar curvature case, Fern\'andez-L\'opez and Garc\'ia-R\'io {\cite{FG}} gave the following proposition: \begin{proposition}[Fern\'andez-L\'opez\& Garc\'ia-R\'io \cite{FG} \label{R1}] Let $(M, g, f)$ be a gradient shrinking Ricci soliton with constant scalar curvature. Then the scalar curvature $R\in \{0, {1\over2}, 1, ..., {n\over 2}\}$. Moreover, no complete gradient shrinking Ricci soliton may exist with $R={1\over 2}$. \end{proposition} \begin{remark} If $R={n\over2}$, then $M$ is an $n$-dimensional Einstein manifold. In fact, if $(M, g, f)$ is a complete non-compact gradient shrinking Ricci soliton with constant scalar curvature $R$, then $R\in\{0, 1, {3\over2}, 2,..., {{n-1}\over 2}\}$. \end{remark} H.-D. 
Cao and Zhou \cite{CaoZhou} proved that the potential function of a gradient shrinking Ricci soliton grows quadratically: \begin{lemma}[Cao-Zhou \cite{CaoZhou}]\label{cao-zhou} Let $(M, g, f)$ be a complete non-compact gradient shrinking Ricci soliton. Then there exist constants $c_1,c_2$ such that \begin{equation*} \begin{split} \frac{\lambda}{2}(r(x)-c_1)^2\leq f(x)\leq\frac{\lambda}{2}(r(x)+c_2)^2, \end{split} \end{equation*} where $r(x)=d(x_0,x)$ is the distance function from some fixed point $x_0\in M$. \end{lemma} { Note that if we normalize $f$ to $f_0$ by adding the constant $C_0$, then we have \begin{eqnarray}{\label{prop1}} R+|\nabla f_0|^2=f_0. \end{eqnarray} Moreover, if the scalar curvature $R$ is a constant and we set the new $f=f_0-R$, we obtain \begin{eqnarray}{\label{Rij}} 2R_{ij}\nabla_j f=\nabla_i R=0 \end{eqnarray} and \begin{eqnarray}{\label{newf}} |\nabla f|^2=f. \end{eqnarray} Here $f$ is also a potential function of $M$. Let $\rho=2\sqrt{f}$; then $|\nabla \rho|\equiv 1$. {In the rest of the paper, $f$ is the function satisfying (\ref{newf}) and (\ref{Rij}).} } Let $(M, g,f)$ be a shrinking Ricci soliton and set $D_t:=\{x\in M\,|\, f(x)\leq t\}$. In {\cite{ChengZhou}}, Cheng and Zhou already obtained the volume growth of $D_t$; here we give a proof for completeness: \begin{theorem}[Cheng-Zhou {\cite{ChengZhou}\label{Volumegrowth}}] Let $(M^n, g, f)$ be a complete shrinking Ricci soliton with constant scalar curvature $R$ as above, and set $\Omega(r):=\{x\in M\,|\,\rho(x)\leq r\}.$ Then $Vol(\Omega(r))=cr^{n-2R}$ for some constant $c$. \end{theorem} {\bf{Proof:}} We follow the method in {\cite{CaoZhou}}. Let $V(r)=\int_{\Omega(r)}dV$. By the co-area formula, $V(r)=\int^r_0\int_{\partial \Omega(s)}{1\over |\nabla \rho|}dAds$. 
That means $$V'(r)=\int_{\partial \Omega(r)}{1\over |\nabla \rho|}dA=\int_{\partial \Omega(r)}dA.$$ First we have \begin{eqnarray*} (n-2R)V(r)&=&nV(r)-2\int_{\Omega(r)}RdV\\ &=&2\int_{\Omega(r)}\triangle fdV\\ &=&2\int_{\partial \Omega(r)}\nabla f\cdot{\nabla \rho\over |\nabla \rho|}dA\\ &=&2\int_{\partial \Omega(r)}|\nabla f|dA\\ &=&rV'(r). \end{eqnarray*} That means ${V'(r)\over V(r)}={n-2R\over r}$, i.e. $\log V(r)=(n-2R)\log r+\log(V(1)).$ So $V(r)=cr^{n-2R}$, where $c=V(1)$ is a constant. {\hfill $\square$}\medskip \begin{remark} By Theorem {\ref{Volumegrowth}}, we obtain $Vol(D_t)=ct^{{n\over2}-R}$ for some positive constant $c$. The area of $\partial D_t$ is $Area(\partial D_t)=c({n-2R}) t^{{n\over2}-R-{1\over2}}$. \end{remark} Now let us introduce some notation. For a harmonic function $u$, we define $$H(t)=\int_{D_t}u^2(t-f)^{\alpha}dv$$ and $$J(t)=\int_{D_t}|\nabla u|^2(t-f)^{\alpha+1}dv$$ with $\alpha\geq 0.$ In particular, we write $$h(t)=\int_{D_t}u^2dv$$ for $H(t)$ with $\alpha=0.$ Then we define the frequency function $$N(t)=\frac{J(t)}{H(t)}.$$ When $\alpha\geq 2,$ a direct computation yields \begin{align}\label{HJ} H^\prime(t) =\frac{\alpha+\frac{n}{2}}{t}H(t) +\frac{\alpha}{t}\int_{D_t} R(t-f)^{\alpha-1} u^2dv + \frac{2}{t(\alpha+1)} J(t) -\frac{1}{t}\int_{D_t} R(t-f)^\alpha u^2dv, \end{align} and one can show that $$H(t)\leq t^{\alpha}h(t)$$ and, for any $0<t<s$, $$h(t)\leq {H(s)\over{(s-t)^{\alpha}}}.$$ Moreover, in \cite{Ou} it is also shown that $t^{\sqrt n-1}N(t)$ with $\alpha\geq2$ is nondecreasing for gradient shrinking Ricci solitons with constant scalar curvature, which plays a key role in the present paper. We refer to \cite{Ou} for general discussions on $N(t).$ {In this paper we focus on the case of} constant scalar curvature $R$. {As mentioned previously,} we choose the potential function $f$ with $|\nabla f|^2=f$. 
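As a quick consistency check of Theorem \ref{Volumegrowth} (an illustration we add, not part of the original argument), this normalization is realized by the Gaussian soliton:

```latex
% Illustration on the Gaussian soliton $(\mathbb{R}^n, g_0, f)$ with
% $f(x)=\tfrac{|x|^2}{4}$: here $|\nabla f|^2=\tfrac{|x|^2}{4}=f$ and $R=0$, so
\[
  \rho = 2\sqrt{f} = |x|, \qquad
  \Omega(r) = \{\,|x|\le r\,\}, \qquad
  Vol(\Omega(r)) = \omega_n r^{n} = c\,r^{\,n-2R},
\]
% in agreement with the theorem.
```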
By (\ref{HJ}), we have \begin{eqnarray} H'(t)={\alpha+{n\over2}-R\over t}H(t)+{2\over{(\alpha+1)t}}J(t), \end{eqnarray} and \begin{eqnarray}{\label{diedai}} {d\over{dt}}(\ln H(t))={{\alpha+{n\over2}-R}\over t}+{2\over(\alpha+1)t}N(t). \end{eqnarray} \begin{proposition}{\label{P1}} If $N(t)\geq m>0$, then ${d\over dt}(t^{-{2m\over{\alpha+1}}}H(t)){>}0$. \end{proposition} {\bf{Proof:}} By (\ref{diedai}), we see that ${d\over{dt}}(\ln H(t))> {2\over(\alpha+1)t}N(t)\geq{2m\over(\alpha+1)t}$, which finishes the proof. {\hfill $\square$}\medskip A harmonic function $u$ is said to have polynomial growth of order $2d>0$ if \begin{equation} \sup_{x\in D_t}|u(x)| \leq C(t+1)^d. \end{equation} Note that this definition coincides with the standard one (cf. $H_{2d}(M)$) by Cao-Zhou's estimate {(see Lemma \ref{cao-zhou}).} \section{Liouville theorem on Ricci shrinkers with constant scalar curvature} By the definition of $N(t)$ with $\alpha=0,$ one has \begin{align}\label{N0} N(t) =\frac{J(t)}{h(t)}= \frac{\int_{D_t}|\nabla u|^2(t-f)dv}{\int_{D_t}u^2dv}. \end{align} \begin{lemma}\label{N_lim} Let $\alpha=0$ and let $u$ be a non-constant harmonic function satisfying $3\delta >u>\delta,$ where $\delta$ is a positive constant. {Then} \begin{align*} \lim_{t\to 0} N(t)=0 \end{align*} and \begin{align*} \lim_{t\to\infty} N(t)=0. 
\end{align*} \end{lemma} \noindent{\bf{Proof:}} For $\alpha=0,$ by a direct computation we have \begin{align}\label{N_lim1} \begin{split} N(t) &= \frac{\int_{D_t}|\nabla u|^2(t-f)dv}{\int_{D_t}u^2dv}\\ & = \frac{1}{2}\frac{\int_{D_t}\langle \nabla u^2,\nabla f \rangle dv}{\int_{D_t}u^2dv}\\ & = \frac{1}{2}\frac{-\int_{D_t} u^2 \triangle f dv +\int_{\partial D_t}u^2\langle \nabla f,\frac{\nabla f}{|\nabla f|}\rangle d\sigma }{\int_{D_t}u^2dv}\\ & = -\frac{1}{2}(\frac{n}{2}-R) + \frac{1}{2}\frac{t\int_{\partial D_t}u^2t^{-\frac{1}{2}}d\sigma}{\int_{D_t}u^2dv}\\ & = -\frac{1}{2}(\frac{n}{2}-R) + \frac{1}{2}\frac{th^\prime(t)}{h(t)}\\ & = -\frac{1}{2}(\frac{n}{2}-R) + \frac{1}{2}\frac{(\ln h(t))^\prime}{(\ln t)^\prime}. \end{split} \end{align} We first consider $\lim_{t\to 0}N(t)=0$; it suffices to prove \begin{align*} \lim_{t\to 0} \frac{(\ln h(t))^\prime}{(\ln t)^\prime} = \frac{n}{2}-R. \end{align*} By L'H\^opital's rule, we have \begin{align}\label{lhopital} \lim_{t\to 0}\frac{\ln h(t)}{\ln t^{\frac{n}{2}-R}} = \lim_{t\to 0} \frac{(\ln h(t))^\prime}{(\ln t^{\frac{n}{2}-R})^\prime}. \end{align} Note that we also have \begin{align*} \delta^2 ct^{\frac{n}{2}-R}\leq \int_{D_t}u^2dv\leq 9\delta^2 ct^{\frac{n}{2}-R}, \end{align*} which gives \begin{align*} \ln (\delta^2c) + \ln t^{\frac{n}{2}-R} \leq \ln\int_{D_t}u^2dv\leq \ln ( 9\delta^2c) + \ln t^{\frac{n}{2}-R}. \end{align*} For small $t$ we may assume $\ln \int_{D_t}u^2dv<0.$ Therefore, \begin{align*} -\ln (9\delta^2c) + (- \ln t^{\frac{n}{2}-R}) \leq -\ln\int_{D_t}u^2dv\leq -\ln (\delta^2c) + (-\ln t^{\frac{n}{2}-R}), \end{align*} which gives \begin{align*} \frac{-\ln (9\delta^2c)}{(- \ln t^{\frac{n}{2}-R})}+1 \leq \frac{\ln \int_{D_t}u^2dv}{\ln t^{\frac{n}{2}-R}}\leq \frac{-\ln (\delta^2c)}{(-\ln t^{\frac{n}{2}-R})} +1. 
\end{align*} This leads to $\lim_{t\to 0}\frac{\ln h(t)}{\ln t^{\frac{n}{2}-R}}=1.$ Combining this with (\ref{lhopital}), we have $\lim_{t\to 0}N(t)=0.$ Similarly, for large $t,$ we have \begin{align*} \frac{\ln (\delta^2c)}{ \ln t^{\frac{n}{2}-R}}+1 \leq \frac{\ln \int_{D_t}u^2dv}{\ln t^{\frac{n}{2}-R}}\leq \frac{\ln (9\delta^2c)}{\ln t^{\frac{n}{2}-R}} +1, \end{align*} which implies $\lim_{t\to\infty}\frac{\ln h(t)}{\ln t^{\frac{n}{2}-R}}=1,$ and consequently, $\lim_{t\to\infty}N(t)=0.$ {\hfill $\square$}\medskip Now we can prove Theorem {\ref{Liouville}}. \begin{proposition}\label{liouville} Let $U$ be a bounded harmonic function, i.e., \begin{align*} |U(x)|<{B}<\infty, \end{align*} where $B$ is a positive constant. Let $u(x)=\frac{\delta U(x)}{B}+2\delta,$ so that $3\delta >u>\delta,$ where $\delta$ is a (small) positive constant. Then $u$ is constant, and consequently, $U$ is constant. \end{proposition} \noindent {\bf Proof:} By Proposition {\ref{R1}}, we know that {in the complete noncompact case} the scalar curvature must belong to $\{0, 1, {3\over2}, ..., {n-1\over 2}\}$. We consider the following two cases: \noindent{\textbf{Case I:} $\frac{n-2}{2}\leq R{\leq \frac{n-1}{2}}$\\ We give two different proofs for \textbf{Case I}. The first is similar to that given in \cite[Theorem 11, page 8]{pigola}. Suppose that $u$ is not a constant. 
Let $K(t) = \int_{D_t} |\nabla u|^2dv,$ and then $K^\prime(t) = \int_{\partial D_t}|\nabla u|^2|\nabla f|^{-1} d\sigma.$ Using integration by parts and the Cauchy-Schwarz inequality, we have \begin{align*} K(t) = & -\int_{D_t} u\triangle u dv +\int_{\partial D_t} u\langle \nabla u,\frac{\nabla f}{|\nabla f|}\rangle d\sigma\\ = & \int_{\partial D_t} u\langle \nabla u, \frac{\nabla f}{|\nabla f|}\rangle d\sigma\\ \leq & t^\frac{1}{4}\left(\int_{\partial D_t}u^2d\sigma\right)^\frac{1}{2}\left(\int_{\partial D_t}|\nabla u|^2 t^{-\frac{1}{2}}d\sigma \right)^\frac{1}{2}\\ = &t^\frac{1}{4}\left(\int_{\partial D_t}u^2d\sigma\right)^\frac{1}{2}\left(\int_{\partial D_t}|\nabla u|^2 |\nabla f|^{-1}d\sigma \right)^\frac{1}{2}, \end{align*} which gives \begin{align*} -\left(\frac{1}{K(t)}\right)^\prime =\frac{K^\prime(t)}{K^2(t)} \geq & \frac{1}{t^\frac{1}{2}\int_{\partial D_t}u^2d\sigma}\\ \geq & \frac{1}{9\delta^2 c({n-2R})t^{\frac{n}{2}-R}}, \end{align*} where we have used $\int_{\partial D_t}d\sigma =c(n-2R)t^{\frac{n}{2}-R-\frac{1}{2}}$ and $3\delta >u>\delta.$ Integrating from $t_1$ to $t_2$, we have \begin{align*} \frac{1}{K(t_1)}> \frac{1}{K(t_1)}-\frac{1}{K(t_2)}\geq \frac{1}{9c\delta^2(n-2R)} \int_{t_1}^{t_2}\frac{1}{t^{\frac{n}{2}-R}}dt. \end{align*} Since $\frac{n-2}{2}\leq R\leq \frac{n-1}{2}$, we have $\frac{1}{2}\leq\frac{n}{2}-R \leq 1,$ and consequently, $\frac{1}{t}\leq \frac{1}{t^{\frac{n}{2}-R}}\leq \frac{1}{t^\frac{1}{2}}.$ Therefore, by letting $t_2\to\infty,$ the right-hand side of the above inequality tends to infinity, which implies that $K(t_1)=0.$ We conclude that $u$ is a constant, which gives a contradiction. Alternatively, we can also prove \textbf{Case I} by Lemma \ref{N_lim} immediately. 
By the definition of $N$ with $\alpha=0$ (see (\ref{N0})), we have \begin{align*} \frac{t}{2}\int_{D_\frac{t}{2}}|\nabla u|^2dv\leq \int_{D_t}|\nabla u|^2(t-f)dv=J(t) = N(t)h(t) \leq 9\delta^2 Vol(D_1)N(t) t^{\frac{n}{2}-R}, \end{align*} which implies \begin{align*} \int_{D_\frac{t}{2}}|\nabla u|^2dv \leq C N(t)t^{\frac{n}{2}-R-1}. \end{align*} Since $-\frac{1}{2}\leq \frac{n}{2}-R-1\leq 0$, and $\lim_{t\to\infty}N(t)=0$ by Lemma \ref{N_lim}, we have \begin{align*} \int_M|\nabla u|^2dv = 0 \end{align*} by letting $t\to \infty.$ Hence, $u$ is a constant. \bigskip \noindent{\textbf{Case II:} $0\leq R\leq {{n-3}\over2}$\\ Assume that $u$ is not a constant. By the computation in (\ref{N_lim1}), for such $u$ we also have \begin{align*} h^\prime(t) = \frac{\frac{n}{2}-R}{t}h(t) +\frac{2}{t}J(t), \end{align*} which gives \begin{align*} J(t)=\frac{1}{2}(th^\prime(t)-(\frac{n}{2}-R)h(t)). \end{align*} Then \begin{align*} 2t^{-(\frac{n}{2}-R+1)}J(t)=t^{-(\frac{n}{2}-R)}h^\prime(t) - (\frac{n}{2}-R)t^{-(\frac{n}{2}-R+1)}h(t) = (t^{-(\frac{n}{2}-R)}h(t))^\prime. \end{align*} Since $t^{-(\frac{n}{2}-R)}h(t)$ is monotone increasing and $3\delta>u>\delta,$ we have \begin{align*} c\delta^2\leq t^{-(\frac{n}{2}-R)}\int_{D_t}u^2dv \leq 9c\delta^2, \end{align*} which implies \begin{align*} c\delta^2\leq \lim_{t\to 0}t^{-(\frac{n}{2}-R)}h(t)\leq 9c\delta^2 \end{align*} and \begin{align*} c\delta^2\leq \lim_{t\to \infty}t^{-(\frac{n}{2}-R)}h(t)\leq 9c\delta^2. \end{align*} Note that \begin{align*} t^{-(\frac{n}{2}-R)}h(t)-\epsilon^{-(\frac{n}{2}-R)}h(\epsilon) =\int_\epsilon^t (s^{-(\frac{n}{2}-R)}h(s))^\prime ds = 2\int_{\epsilon}^t s^{-(\frac{n}{2}-R+1)}J(s) ds. \end{align*} By letting $\epsilon\to 0$ and $t\to \infty$ we have \begin{align*} 0<C_1 \leq \int_0^\infty s^{-(\frac{n}{2}-R+1)}J(s)ds\leq C_2<\infty. 
\end{align*} Note that \begin{align*} \int_0^\infty s^{-(\frac{n}{2}-R+1)}J(s) ds & = \int_0^\infty \int_{D_s}s^{-(\frac{n}{2}-R+1)}|\nabla u(x)|^2 (s-f(x))dv ds\\ & = \int_0^\infty \int_{M} s^{-(\frac{n}{2}-R+1)}|\nabla u(x)|^2\chi_{\{x:f(x)<s\}}(x) (s-f(x))dvds\\ & = \int_{M}|\nabla u(x)|^2 \int_0^\infty s^{-(\frac{n}{2}-R+1)}\chi_{\{s:s>f(x)\}}(s) (s-f(x))dsdv\\ & = \int_{M}|\nabla u(x)|^2 \int_{f(x)}^\infty s^{-(\frac{n}{2}-R+1)} (s-f(x))dsdv\\ & = \int_M |\nabla u(x)|^2 \left(\int_{f(x)}^\infty s^{-(\frac{n}{2}-R)}ds-f(x)\int_{f(x)}^\infty s^{-(\frac{n}{2}-R+1)}ds \right)dv\\ & = \int_M |\nabla u(x)|^2 \left(\frac{s^{-(\frac{n}{2}-R)+1}}{-(\frac{n}{2}-R)+1}\big|_{f(x)}^\infty-f(x) \frac{s^{-(\frac{n}{2}-R)}}{-(\frac{n}{2}-R)}\big|_{f(x)}^\infty \right)dv\\ & = \int_{M}|\nabla u(x)|^2 \left(\frac{f^{-(\frac{n}{2}-R-1)}}{\frac{n}{2}-R-1}-\frac{f^{-(\frac{n}{2}-R-1)}}{\frac{n}{2}-R}\right)dv\\ & = \frac{1}{(\frac{n}{2}-R)(\frac{n}{2}-R-1)}\int_M |\nabla u|^2 f^{-(\frac{n}{2}-R-1)}dv, \end{align*} which is bounded from above and below. Note that \begin{align*} \int_M |\nabla u|^2 f^{-(\frac{n}{2}-R)+1}dv & = \lim_{t\to \infty} \int_{ D_t} |\nabla u|^2 f^{-(\frac{n}{2}-R)+1}dv. \end{align*} By integration by parts, we have \begin{align*} \int_{ D_t}|\nabla u|^2 f^{-(\frac{n}{2}-R)+1}dv = & -\int_{D_t} u\triangle u f^{-(\frac{n}{2}-R)+1}dv + (\frac{n}{2}-R-1)\int_{ D_t}u\langle \nabla u,\nabla f\rangle f^{-(\frac{n}{2}-R)}dv\\ & + \int_{\partial D_t} t^{-(\frac{n}{2}-R)+1}u\langle \nabla u,\frac{\nabla f}{|\nabla f|} \rangle d\sigma. 
\end{align*} By integration by parts again, we have \begin{align*} & \int_{ D_t}\langle \nabla u^2,\nabla f\rangle f^{-(\frac{n}{2}-R)}dv \\ = & -\int_{ D_t} u^2 (\triangle f) f^{-(\frac{n}{2}-R)}dv + (\frac{n}{2}-R)\int_{ D_t} u^2|\nabla f|^2 f^{-(\frac{n}{2}-R)-1}dv\\ & +\int_{\partial D_t}t^{-(\frac{n}{2}-R)}u^2\langle \nabla f,\frac{\nabla f}{|\nabla f|} \rangle d\sigma\\ = & \int_{\partial D_t}t^{-(\frac{n}{2}-R)}u^2\langle \nabla f,\frac{\nabla f}{|\nabla f|} \rangle d\sigma, \end{align*} where we have used $\triangle f=\frac{n}{2}-R$ and $f=|\nabla f|^2.$ Combining the above equalities, we have \begin{align*} &\int_{ D_t}|\nabla u|^2 f^{-(\frac{n}{2}-R)+1}dv\\ = &\frac{\frac{n}{2}-R-1}{2}\, t^{-(\frac{n}{2}-R)}\int_{\partial D_t}u^2\langle \nabla f,\frac{\nabla f}{|\nabla f|} \rangle d\sigma + \int_{\partial D_t} t^{-(\frac{n}{2}-R)+1}u\langle \nabla u,\frac{\nabla f}{|\nabla f|} \rangle d\sigma\\ = & \frac{\frac{n}{2}-R-1}{2}\int_{\partial D_t}t^{-(\frac{n}{2}-R)+\frac{1}{2}} u^2d\sigma +t^{-(\frac{n}{2}-R)+1} \int_{D_t} |\nabla u|^2 dv. \end{align*} Note that \begin{align*} J(t) = N(t)h(t), \end{align*} which implies that \begin{align*} t^{-(\frac{n}{2}-R)+1}\int_{D_{\frac{t}{2}}}|\nabla u|^2 dv \leq C_3N(t), \end{align*} where the right-hand side tends to $0$ as $t\to \infty$ or $t\to 0$ by Lemma \ref{N_lim}. Hence, \begin{align*} \int_M |\nabla u|^2 f^{-(\frac{n}{2}-R)+1}dv =\frac{\frac{n}{2}-R-1}{2} \lim_{t\to \infty} \int_{\partial D_t}u^2 t^{-(\frac{n}{2}-R)+\frac{1}{2}}d\sigma, \end{align*} and \begin{align*} 0= \lim_{t\to 0} \int_{D_t}|\nabla u|^2f^{-(\frac{n}{2}-R)+1}dv =\frac{\frac{n}{2}-R-1}{2} \lim_{t\to 0} \int_{\partial D_t} u^2 t^{-(\frac{n}{2}-R)+\frac{1}{2}} d\sigma \geq C_4>0, \end{align*} which gives a contradiction. 
Hence, $u$ is a constant. {\hfill $\square$}\medskip \begin{remark} In fact, for {\textbf{Case I:} $\frac{n-2}{2}\leq R{\leq \frac{n-1}{2}},$} by the result given in \cite[Theorem 11, page 8]{pigola}, one can show that any positive harmonic function is constant, which is stronger than the conclusion given in Proposition \ref{liouville}. \end{remark} Similarly, by Corollary 3.2 in {\cite{MS}}, we can also obtain the following corollary about the ends of shrinkers with constant scalar curvature. \begin{corollary} Let $(M, g, f)$ be a complete gradient shrinking Ricci soliton with constant scalar curvature. Then $M$ has at most one nonparabolic end. \end{corollary} {\bf{Proof:}} Let $l$ be the number of nonparabolic ends. By the theory of Li and Tam ({\cite{LT}} Theorem 2.1), $l\leq \dim \mathcal{H}_0(M)$, which finishes the proof. {\hfill $\square$}\medskip Then, by the result in ({\cite{MS}} Proposition 2.3), we know all the ends of $M$ are nonparabolic if $R\leq a <{n-2\over 2}$. We obtain the following result. \begin{corollary} If $(M, g, f)$ is a complete gradient shrinking Ricci soliton with constant scalar curvature $R\leq {n-3\over2}$, then it is connected at infinity. \end{corollary} \begin{remark} In fact, by the splitting theorem in {\cite{MW2}} (Theorem 1.7), if a complete gradient shrinking Ricci soliton $(M, g, f)$ with constant scalar curvature has two ends, then $M$ is isometric to $N^{n-1}\times \mathbb{R}$, where $N$ is an Einstein manifold with positive scalar curvature. \end{remark} \section{Application: dimension estimate} To prove Theorem {\ref{Dimension}}, we need the following doubling property for harmonic functions with polynomial growth. 
\begin{proposition}\label{poly} For any $d>0,$ if $u$ is harmonic with polynomial growth of order $2d,$ i.e., $$ \sup_{x\in D_t}|u(x)|\leq C(t+1)^d, $$ then for any $1<t<T<\infty,$ there exists a large $\lambda$ such that $$N(t)\leq {(\alpha+1)\lambda^{\sqrt{n}-1} T^{\sqrt n-1}}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}),$$ with $\alpha\geq2$, $\epsilon>0.$ Moreover, $$ \frac{H(t_2)}{H(t_1)} \leq \frac{t_2^L}{t_1^L}, $$ or equivalently, $H(2t)\leq 2^L H(t)$ for $1<t<T,$ where $L=\alpha+\frac{n}{2}-R+2\lambda^{\sqrt n-1}T^{\sqrt n-1}(d+\epsilon +\frac{\frac{n}{2}-R+\alpha}{2}).$ \end{proposition} \noindent {\bf Proof:} We denote $N_u(t)=N(t)$. If the conclusion does not hold, then for any large $\lambda_i$, there exist $u_i$ and $T>t_i>1$ such that $N_{u_i}(t_i)=N_i(t_i)>(\alpha+1)\lambda_i^{\sqrt n-1}T^{\sqrt n-1}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2})$. Note that when $\alpha\geq 2,$ $t^{\sqrt{n}-1}N(t)$ is nondecreasing (see \cite{Ou}). Then we have \begin{eqnarray} t^{\sqrt n-1}N_i(t)\geq t_i^{\sqrt n-1}N_i(t_i)>(\alpha+1)\lambda_i^{\sqrt n-1} T^{\sqrt n-1}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}), \end{eqnarray} which holds for all $t>t_i$. In particular, we have \begin{eqnarray*} N_i(t)> (\alpha +1)(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}), \end{eqnarray*} for $\lambda_i T>t>T$. Then applying Proposition \ref{P1} with $m=(\alpha +1)(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}),$ we see that $t^{-2(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2})}H_i(t)$ is nondecreasing for $\lambda_i T>t>T$. 
Consequently, we have $$ t^{-2(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2})}H_i(t)\geq T^{-2(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2})}H_i(T) $$ for $T<t<\lambda_i T.$ Then, by the volume of $D_t$ in the constant scalar curvature case (i.e., $Vol(D_t)=ct^{\frac{n}{2}-R}$), there holds \begin{eqnarray*} ct^{\alpha+\frac{n}{2}-R}\sup_{D_t}u_i^2 &=& t^\alpha Vol(D_t)\sup_{D_t}u_i^2\\ &\geq& \sup_{D_t}u_i^2\int_{D_t}(t-f)^\alpha dv \\ &\geq& H_i(t)\\ &\geq & T^{-2(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2})}H_i(T)t^{2d+2\epsilon+{\frac{n}{2}-R+\alpha}}, \end{eqnarray*} which implies \begin{eqnarray*} c\sup_{D_t}u_i^2 \geq T^{-2d-2\epsilon-\frac{n}{2}+R-\alpha}H_i(T) t^{2d+2\epsilon}, \end{eqnarray*} for $\lambda_i T>t>T$. Then we have \begin{align*} cC^2t^{2d} &>T^{-2d-2\epsilon-\frac{n}{2}+R-\alpha}t^{2d+2\epsilon}H_i(T) \end{align*} for $\lambda_i T>t>T.$ Therefore \begin{align*} \widetilde C ((\lambda_i -1)T)^{-2\epsilon}>\widetilde C t^{-2\epsilon}>H_i(T), \end{align*} for $\lambda_i T>t>(\lambda_i-1)T.$ Since $\lim_{i\to \infty}\lambda_i=\infty,$ we have $\lim_{i\to\infty} H_i(T)=0,$ which implies $\lim_{i\to\infty}u_i=0.$ Thus, for sufficiently large $i,$ the functions $u_i$ are uniformly bounded. 
By Theorem \ref{Liouville}, $u_i$ is a constant, and then $N_i(t)\equiv 0,$ which contradicts $N_i(t_i)>(\alpha+1)\lambda_i^{\sqrt n-1}T^{\sqrt n-1}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}).$ Therefore, there exists a $\lambda$ such that \begin{eqnarray} N(t)\leq {(\alpha+1)\lambda^{\sqrt n-1}T^{\sqrt n-1}}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}) \end{eqnarray} for $1<t<T.$ Consequently, we have \begin{eqnarray*} (\ln H(t))^\prime \leq \frac{L}{t},\quad 1<t<T, \end{eqnarray*} with $L=\alpha+\frac{n}{2}-R+2\lambda^{\sqrt n-1}T^{\sqrt n-1}(d+\epsilon+\frac{\frac{n}{2}-R+\alpha}{2}).$ Then $$ \ln\frac{H(t_2)}{H(t_1)} =\int_{t_1}^{t_2} (\ln H(t))^\prime dt\leq \ln\frac{t_2^L}{t_1^L}, $$ which means $\frac{H(t_2)}{H(t_1)} \leq \frac{t_2^L}{t_1^L}$. {\hfill $\square$}\medskip \bigskip The proof of Theorem {\ref{Dimension}} is given as follows. \noindent{\bf{Proof of Theorem {\ref{Dimension}}:}} {By Proposition \ref{poly},} any harmonic function $u$ with polynomial growth has the doubling property \begin{eqnarray} \int_{D(5t)}|u|^2dv\leq C(L)\int_{D(t)}|u|^2dv, \end{eqnarray} for {$1<t<5t<T$}. Using the same argument as in {\cite{MW}}, we obtain the conclusion. For completeness, we include a proof. By {\cite{CaoZhou}} (see also Lemma \ref{cao-zhou}), $t$ can be chosen such that $$D(t)\subset B_p(3r)$$ and $$B_p(4r)\subset D(5t),$$ where $p$ is a fixed point in $M$ and $r=\sqrt{t}$. Thus, we have \begin{eqnarray}{\label{double1}} \int_{B_p(4r_0)}|u|^2dv\leq C_0(L) \int_{B_p(3r_0)}|u|^2dv, \end{eqnarray} for some $r_0$ depending only on $L$. Note that {(\ref{double1})} holds for all harmonic functions with polynomial growth on $M$ with some $L$. 
Then, by a result of Li {\cite{Li}}, there exists a nontrivial $u_0\in \mathcal H_{d}(M)$ such that \begin{eqnarray}{\label{double2}} \int_{B_p(3r_0)}|u_0|^2dv\leq {nV_p(3r_0)\over{\dim{\mathcal H_{d}(M)}}}\sup_{B_p(3r_0)}|u_0|^2, \end{eqnarray} where $V_p(r)$ denotes the volume of $B_p(r).$ On the other hand, the Sobolev constant of $B_p(4r_0)$ can be controlled by a constant depending only on $n$ and $r_0$ ({\cite{MW0}}). Then, using the Moser iteration for the subharmonic function $|u_0|$, there holds \begin{eqnarray}{\label{double3}} \sup_{B_p(3r_0)}|u_0|^2\leq {C_1(n,d)\over{V_p(4r_0)}}\int_{B_p(4r_0)}|u_0|^2dv. \end{eqnarray} By (\ref{double1}), (\ref{double2}) and (\ref{double3}), we obtain \begin{eqnarray*} \int_{B_p(4r_0)}|u_0|^2dv&\leq& C_0(L) \int_{B_p(3r_0)}|u_0|^2dv\\ &\leq& C_0(L) {nV_p(3r_0)\over{\dim{\mathcal H_{d}(M)}}}\sup_{B_p(3r_0)}|u_0|^2\\ &\leq& C_0(L) {nV_p(3r_0)\over{\dim{\mathcal H_{d}(M)}}}{C_1(n,d)\over{V_p(4r_0)}}\int_{B_p(4r_0)}|u_0|^2dv, \end{eqnarray*} which implies $\dim \mathcal H_{d}(M)\leq \widetilde C_0(L)<\infty$. {\hfill $\square$}
#ifndef cmExportFileGenerator_h #define cmExportFileGenerator_h #include "cmCommand.h" #include "cmGeneratorExpression.h" #include "cmVersionMacros.h" #include "cmVersion.h" #define STRINGIFY_HELPER(X) #X #define STRINGIFY(X) STRINGIFY_HELPER(X) #define DEVEL_CMAKE_VERSION(major, minor) ( \ CMake_VERSION_ENCODE(major, minor, 0) > \ CMake_VERSION_ENCODE(CMake_VERSION_MAJOR, CMake_VERSION_MINOR, 0) ? \ STRINGIFY(CMake_VERSION_MAJOR) "." STRINGIFY(CMake_VERSION_MINOR) "." \ STRINGIFY(CMake_VERSION_PATCH) \ : #major "." #minor ".0" \ ) class cmTargetExport; /** \class cmExportFileGenerator * \brief Generate a file exporting targets from a build or install tree. * * cmExportFileGenerator is the superclass for * cmExportBuildFileGenerator and cmExportInstallFileGenerator. It * contains common code generation routines for the two kinds of * export implementations. */ class cmExportFileGenerator { public: cmExportFileGenerator(); virtual ~cmExportFileGenerator() {} /** Set the full path to the export file to generate. */ void SetExportFile(const char* mainFile); const char *GetMainExportFileName() const; /** Set the namespace in which to place exported target names. */ void SetNamespace(const std::string& ns) { this->Namespace = ns; } std::string GetNamespace() const { return this->Namespace; } void SetExportOld(bool exportOld) { this->ExportOld = exportOld; } /** Add a configuration to be exported. */ void AddConfiguration(const std::string& config); /** Actually generate the export file. Returns whether there was an error. */ bool GenerateImportFile(); protected: typedef std::map<std::string, std::string> ImportPropertyMap; // Generate per-configuration target information to the given output // stream. void GenerateImportConfig(std::ostream& os, const std::string& config, std::vector<std::string> &missingTargets); // Methods to implement export file code generation. 
void GenerateImportHeaderCode(std::ostream& os, const std::string& config = ""); void GenerateImportFooterCode(std::ostream& os); void GenerateImportVersionCode(std::ostream& os); void GenerateImportTargetCode(std::ostream& os, cmGeneratorTarget const* target); void GenerateImportPropertyCode(std::ostream& os, const std::string& config, cmGeneratorTarget const* target, ImportPropertyMap const& properties); void GenerateImportedFileChecksCode(std::ostream& os, cmGeneratorTarget* target, ImportPropertyMap const& properties, const std::set<std::string>& importedLocations); void GenerateImportedFileCheckLoop(std::ostream& os); void GenerateMissingTargetsCheckCode(std::ostream& os, const std::vector<std::string>& missingTargets); void GenerateExpectedTargetsCode(std::ostream& os, const std::string &expectedTargets); // Collect properties with detailed information about targets beyond // their location on disk. void SetImportDetailProperties(const std::string& config, std::string const& suffix, cmGeneratorTarget* target, ImportPropertyMap& properties, std::vector<std::string>& missingTargets); template <typename T> void SetImportLinkProperty(std::string const& suffix, cmGeneratorTarget* target, const std::string& propName, std::vector<T> const& entries, ImportPropertyMap& properties, std::vector<std::string>& missingTargets); /** Each subclass knows how to generate its kind of export file. */ virtual bool GenerateMainFile(std::ostream& os) = 0; /** Each subclass knows where the target files are located. */ virtual void GenerateImportTargetsConfig(std::ostream& os, const std::string& config, std::string const& suffix, std::vector<std::string> &missingTargets) = 0; /** Each subclass knows how to deal with a target that is missing from an * export set. 
*/ virtual void HandleMissingTarget(std::string& link_libs, std::vector<std::string>& missingTargets, cmGeneratorTarget* depender, cmGeneratorTarget* dependee) = 0; void PopulateInterfaceProperty(const std::string&, cmGeneratorTarget *target, cmGeneratorExpression::PreprocessContext, ImportPropertyMap &properties, std::vector<std::string> &missingTargets); bool PopulateInterfaceLinkLibrariesProperty(cmGeneratorTarget* target, cmGeneratorExpression::PreprocessContext, ImportPropertyMap &properties, std::vector<std::string> &missingTargets); void PopulateInterfaceProperty(const std::string& propName, cmGeneratorTarget* target, ImportPropertyMap &properties); void PopulateCompatibleInterfaceProperties(cmGeneratorTarget *target, ImportPropertyMap &properties); void GenerateInterfaceProperties(cmGeneratorTarget const* target, std::ostream& os, const ImportPropertyMap &properties); void PopulateIncludeDirectoriesInterface( cmTargetExport *target, cmGeneratorExpression::PreprocessContext preprocessRule, ImportPropertyMap &properties, std::vector<std::string> &missingTargets); void PopulateSourcesInterface( cmTargetExport *target, cmGeneratorExpression::PreprocessContext preprocessRule, ImportPropertyMap &properties, std::vector<std::string> &missingTargets); void SetImportLinkInterface(const std::string& config, std::string const& suffix, cmGeneratorExpression::PreprocessContext preprocessRule, cmGeneratorTarget* target, ImportPropertyMap& properties, std::vector<std::string>& missingTargets); enum FreeTargetsReplace { ReplaceFreeTargets, NoReplaceFreeTargets }; void ResolveTargetsInGeneratorExpressions(std::string &input, cmGeneratorTarget* target, std::vector<std::string> &missingTargets, FreeTargetsReplace replace = NoReplaceFreeTargets); void GenerateRequiredCMakeVersion(std::ostream& os, const char *versionString); // The namespace in which the exports are placed in the generated file. std::string Namespace; bool ExportOld; // The set of configurations to export. 
std::vector<std::string> Configurations; // The file to generate. std::string MainImportFile; std::string FileDir; std::string FileBase; std::string FileExt; bool AppendMode; // The set of targets included in the export. std::set<cmGeneratorTarget*> ExportedTargets; private: void PopulateInterfaceProperty(const std::string&, const std::string&, cmGeneratorTarget* target, cmGeneratorExpression::PreprocessContext, ImportPropertyMap &properties, std::vector<std::string> &missingTargets); bool AddTargetNamespace(std::string &input, cmGeneratorTarget* target, std::vector<std::string> &missingTargets); void ResolveTargetsInGeneratorExpression(std::string &input, cmGeneratorTarget* target, std::vector<std::string> &missingTargets); virtual void ReplaceInstallPrefix(std::string &input); virtual std::string InstallNameDir(cmGeneratorTarget* target, const std::string& config) = 0; }; #endif
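To illustrate the pattern the Doxygen comment above describes (a hypothetical, stripped-down sketch only, NOT CMake's actual code): the base class owns the shared generation skeleton and defers the file body to the pure virtual GenerateMainFile(), which each subclass supplies, mirroring how cmExportBuildFileGenerator and cmExportInstallFileGenerator specialize cmExportFileGenerator.

```cpp
// Illustrative sketch only -- not CMake's real implementation.
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

class MiniExportFileGenerator {
public:
  virtual ~MiniExportFileGenerator() {}

  void SetNamespace(const std::string& ns) { this->Namespace = ns; }
  void AddConfiguration(const std::string& config)
  { this->Configurations.push_back(config); }

  // Template method: the skeleton is fixed here, the body is a subclass hook.
  std::string GenerateImportFile()
  {
    std::ostringstream os;
    os << "# Generated import file\n";      // common header code
    this->GenerateMainFile(os);             // subclass-specific body
    for (std::vector<std::string>::const_iterator it =
           this->Configurations.begin();
         it != this->Configurations.end(); ++it) {
      os << "# config: " << *it << "\n";    // per-configuration section
    }
    return os.str();
  }

protected:
  virtual void GenerateMainFile(std::ostream& os) = 0;
  std::string Namespace;
  std::vector<std::string> Configurations;
};

// Hypothetical concrete generator, standing in for the build-tree variant.
class MiniBuildExportGenerator : public MiniExportFileGenerator {
protected:
  virtual void GenerateMainFile(std::ostream& os)
  { os << "add_library(" << this->Namespace << "foo IMPORTED)\n"; }
};
```

A caller would construct the concrete generator, set the namespace and configurations, and call GenerateImportFile(); the real class additionally threads error reporting and missing-target tracking through these steps.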
Q: Passing function of multiple arguments that contains class to other function

I'm trying to pass a function of multiple arguments to another function. I know how to pass a single-argument function to another function, as described in the C++ Primer Plus book. However, I get an error when I try to pass poly_3d, which takes multiple arguments including a class, to the NR_method function.

    #include <iostream>
    #define log(x) std::cout<<x<<std::endl;

    class constants {
    public:
        double A;
        double B;
        double C;
    };

    double poly_3d(double x, constants cst);
    double NR_method(double a, double(*poly_3d)(double));

    int main() {
        constants cst;
        cst.A = 2;
        cst.B = -8;
        cst.C = 10;
        NR_method(3.2, poly_3d);
        system("PAUSE");
        return 0;
    }

    double poly_3d(double x, constants cst) {
        double y = 3 * cst.A*x*x + 2 * cst.B*x + cst.C;
        return y;
    }

    double NR_method(double a, double (*poly_3d)(double)) {
        double c = (*poly_3d)(a);
        return c;
    }

So the error I'm getting is from NR_method(3.2, poly_3d) in the main function. I know that if poly_3d took a single argument, this would work. If this is a horrible way to write code, then any directions towards learning C++ more effectively for newbies would be much appreciated! Thanks

A: Take a look at the following code. We're using a template to make things look nicer.

    #include <iostream>
    #define log(x) std::cout<<x<<std::endl;

    class constants {
    public:
        double A;
        double B;
        double C;
    };

    /// Note that we take a ref now, no need to copy cst.
    double poly_3d(double x, constants & cst) {
        double y = 3 * cst.A*x*x + 2 * cst.B*x + cst.C;
        return y;
    }

    /// Note that we take a ref now, no need to copy cst.
    template <class F>
    double NR_method(double a, constants & cst, F func) {
        return func(a, cst);
    }

    int main() {
        constants cst;
        cst.A = 2;
        cst.B = -8;
        cst.C = 10;
        NR_method(3.2, cst, &poly_3d);
        system("PAUSE");
        return 0;
    }

A: You are declaring the function poly_3d with 2 arguments but passing only one. 
I made a few changes to the code for you:

#include <iostream>
#define log(x) std::cout<<x<<std::endl;

class constants {
public:
    double A;
    double B;
    double C;
};

double poly_3d(double x, constants cst);
double NR_method(double a, constants cst, double(*poly_3d)(double, constants));

int main() {
    constants cst;
    cst.A = 2;
    cst.B = -8;
    cst.C = 10;
    printf("%f", NR_method(3.2, cst, poly_3d));
    system("PAUSE");
    return 0;
}

double poly_3d(double x, constants cst) {
    double y = 3 * cst.A*x*x + 2 * cst.B*x + cst.C;
    return y;
}

double NR_method(double a, constants cst, double (*poly)(double, constants)) {
    return (*poly)(a, cst);
}

A: Let's start by simplifying your code. (A minimal example removes distractions, allowing you to better focus on the actual issue.) It looks like you started to do this, but it can be taken further. After removing some stuff that is not needed to reproduce the compile error:

class constants {};

double poly_3d(double x, constants cst);
double NR_method(double a, double(*poly_3d)(double));

int main() {
    NR_method(3.2, poly_3d);
}

double poly_3d(double x, constants /*cst*/) {
    return 3 * x;
}

double NR_method(double a, double (*poly_3d)(double)) {
    return (*poly_3d)(a);
}

Now let's look at the error message:

error: invalid conversion from 'double (*)(double, constants)' to 'double (*)(double)'

This comes with an indication that the conversion is from poly_3d to the second argument of NR_method. If you look at those things, yes, that is the conversion you requested. The argument list for poly_3d is (double, constants), while the declared argument list for the second argument is just (double). There is a mismatch, which makes the conversion invalid. It's not all that different from the single-parameter case: the signatures must match. You can solve this by changing the argument's signature to match that of poly_3d. Now, if you just make the signatures match, there is another problem in that NR_method does not have a constants value available.
That is probably a logical error for you to work out. For a quick workaround to show the elimination of the compiler error, I'll add a local variable.

class constants {};

double poly_3d(double x, constants cst);
double NR_method(double a, double(*poly_3d)(double, constants)); // <-- Desired signature

int main() {
    NR_method(3.2, poly_3d);
}

double poly_3d(double x, constants /*cst*/) {
    return 3.0 * x;
}

double NR_method(double a, double (*poly_3d)(double, constants)) {
    constants cst;             // <-- Allows this to compile, but probably not what you want.
    return (*poly_3d)(a, cst); // <-- Needed a second argument here.
}

There are ways to make this work more nicely (for example, a std::function may be more convenient than a function pointer), but explaining those would fall outside the scope of this question, especially since some decisions would depend on the bigger picture.
Police Investigate Homicide In Charlotte's University City
WFAE | By Marshall Terry

A man was fatally shot early Tuesday in Charlotte's University City area. Charlotte-Mecklenburg police say officers found 53-year-old Kenny Ollemi with a gunshot wound in a crashed vehicle outside Classic Graphics off IBM Drive just after midnight. He was taken to a hospital, where he was pronounced dead a short time later. CMPD says it appears Ollemi was driving the vehicle at the time he was shot and then crashed. It's the 73rd homicide in Charlotte this year, according to CMPD. There were 58 in all of last year.

Marshall came to WFAE after graduating from Appalachian State University, where he worked at the campus radio station and earned a degree in communication. Outside of radio, he loves listening to music and going to see bands - preferably in small, dingy clubs.
A hunter-gatherer is a human who sustains themselves by hunting and/or gathering edible things such as berries and animals. As a subsistence method, hunting and gathering has been replaced by agriculture almost everywhere since the Neolithic Revolution.

Early humans

In the history book of humanity, the chapters covering hunter-gatherers take up by far the most pages. At least, that is how it should be, judging by the length of time humanity populated the earth mainly as nomads. The first humans, Homo habilis and Homo rudolfensis, lived as hunter-gatherers 2.4 million years ago. Their successors - Homo erectus, Homo heidelbergensis and the Neanderthal (Homo neanderthalensis) - also lived from hunting and gathering. Hunters and gatherers used tools of stone, bone and wood. Possibly between 1.5 and 1 million years ago, the first hominins learned to use fire. The earliest use of fire will remain a subject of research, but from about 400,000 years ago the use of fire is unambiguous and widespread.

Modern humans

Of the 150,000 to 200,000 years that modern humans have lived on this planet, they have spent most of that time living as hunter-gatherers. Homo sapiens spread from Africa across the rest of the world between 120,000 and 60,000 years ago, and made important advances in technological development. This modern human species made jewelry, performed skull trepanations, invented the canoe, the spear-thrower and the bow and arrow, and domesticated the gray wolf as a companion for the hunt. Life as a hunter-gatherer revolved around survival. By setting out to gather edible wild plants and to hunt animals, they provided enough food for their family group. Hunter-gatherers usually had no fixed dwelling place. They roamed in small groups of typically 20-50 people, always heading for areas where enough food could be found.

Society

For a long time, paleoanthropologists assumed that hunter-gatherers led an arduous existence and that the transition to agriculture was an improvement. Marshall Sahlins, however, argued in 1968 that hunter-gatherers formed the original affluent society. His claim has since been qualified considerably, but on average they probably had a more varied diet than primitive farmers and enjoyed better health. Still, relatively small groups of hunters and gatherers need a large area to live in. Their lifestyle was also not conducive to having many children: small children limit mobility, and in a nomadic way of life everyone had to help find enough food. It is therefore unlikely that the earth ever held more than 10 million inhabitants before the development of agriculture. Hunter-gatherers lived in groups in which people were fairly equal. This was partly a consequence of the limited group size and partly of the limited degree of specialization. Everyone was occupied full-time with moving or obtaining food through hunting, fishing or gathering; there was little time for anything else. In the Ancient Near East, sedentary villages already existed in the Natufian culture. Improvements in hunting and gathering techniques allowed the hunter-gatherers there to stay in the same place for longer. A society of hunters and gatherers who lived as salmon fishers on the Northwest Coast of America was relatively wealthy, had wooden houses and totem poles, and lived (semi-)sedentarily. They kept slaves, who were often captured in violent raids on the surrounding area.

Transition to agriculture

In the Near East, in the Fertile Crescent, hunter-gatherers began cultivating food crops more than 10,000 years ago, mainly grains and vegetables. Not very long afterwards, groups of people took up keeping livestock and domestic animals. This transition from the hunter-gatherer era to settled agricultural communities is known as the Neolithic Revolution. It presumably happened first in the basin of the Euphrates and the Tigris, which became known as Mesopotamia. Here, about 6,000 years ago, the first city-states such as Ur, Uruk and Lagash arose out of farming societies.
Pedro Antonio Herrera Moreno (born 7 October 1986) is a Colombian cyclist, who currently rides for the amateur team EBSA–Indeportes Boyacá.

Major results

2009
 3rd Overall Vuelta a Chiriquí
2013
 1st Prologue & Stage 3 Vuelta a Cundinamarca
 1st Stage 4 Clásica Ciudad de Girardot
2014
 1st Time trial, Pan American Road Championships
 1st Time trial, National Road Championships
 1st Stage 10 Vuelta al Táchira
 8th Overall Vuelta Independencia Nacional
Q: I can't find the error - parse error

I have the following PHP code:

<?php
include ("conex.php");
if ($_SERVER["REQUEST_METHOD"] == "POST"){
    $centro = mysqli_real_escape_string($con,$_POST['centro']);
    $descripcion = mysqli_real_escape_string($con,$_POST['descripcion']);
    if($centro <> "" and $descripcion <> "") {
        session_start();
        $congregacion = $_SESSION['congregacion'];
        $sql = "INSERT INTO centros_salidas (id_centro, id_congregacion, nombre, descripcion) VALUES (NULL, '$congregacion', '$centro', '$descripcion')";
        mysqli_query($con, $sql);
        echo mysqli_error($con);
        if(mysqli_affected_rows($con)){
            mysqli_close($con);
            $html = 1;
        }else{
            mysqli_close($con);
            $html = 0;
        }
    }else{
        $html .= 2;
    }
    echo $html;
}
?>

I get the following error message:

Parse error: syntax error, unexpected end of file in C:\AppServ\www\Control_territorios\php\reg_territorio.php on line 25.

The truth is I have been checking whether I'm missing a { or a ;, but I don't see that anything is missing. What's more, the error says it is on line 25, which is the end of the script, so I don't know what it could be. Thank you very much for your help.

A: As the comments tell you, the problem is that the characters aAg slipped into line 19. Remove them and it should work. I also recommend removing the final PHP closing tag ?> - it isn't necessary and can occasionally cause failures.
# Energy of the Fourier Series

Jag1972:

I have been trying to understand the Fourier series and the relationship between the energy in the original function and its Fourier representation. The example function y = 3t has a period of 2π. The Fourier coefficients are:

The Fourier representation has a dc average of 3π; it has no cosine terms but does have sine terms with amplitude equal to -6/n.

Using Parseval's theorem I can determine the energy in the Fourier series:

$\frac{1}{\pi}\int^{2\pi}_{0} (3t)^{2}\, dt = \frac{a_0^{2}}{2} + \sum_n b_n^{2}$

After using about 13 harmonics I got it to 99% of the energy of the original function. I do not know how to get to a limiting value, which I think is called convergence: a stable value is reached, and if there is no stable value then the series diverges. I know there are tests for convergence and divergence, but these will not give actual limiting values. My question is: how does one know what the actual limiting value is, or is this just something we have to reach ourselves? Also, would the energy difference be 0 at this limiting value? I hope it will be.

anorlunda (Mentor):

That is not an electrical or Fourier question. You are asking about the convergence of any infinite series.
Area of Kite Formed By Tangents to Circle and Radii

What is the area of the kite formed by tangents to a circle and radii to those tangents?

The diagram shows such a kite. The area is
$A= \frac{1}{2}( AO \times BC)$.

$AO= \sqrt{(3-1)^2+(4-1)^2}= \sqrt{13}$.

To find the other diagonal we use that triangles $ABO$ and $BQO$ are similar triangles, so
$\frac{BQ}{BO}=\frac{AB}{AO}$.

$AB=\sqrt{AO^2-BO^2}= \sqrt{13-2^2}=3$, then
$\frac{BQ}{BO}=\frac{AB}{AO} \rightarrow BQ= \frac{AB}{AO} BO=\frac{3}{\sqrt{13}} \times 2 = \frac{6}{\sqrt{13}}$.

Then
$BC=2BQ=\frac{12}{\sqrt{13}}$.

The area of the kite is
$\frac{1}{2} \times \sqrt{13} \times \frac{12}{\sqrt{13}}=6$.
\section{Introduction} Supergravity theories are supersymmetric extensions of gravity which, apart from diffeomorphism, are also invariant under local supersymmetry and $\textit{possibly}$ some other gauge symmetries. Although they may not be ultraviolet complete on their own, they do describe the low energy sector of string theory which is one of the candidates for a complete quantum theory of gravity. String theory compactified on different compact manifolds gives rise to effective supergravity theories in the low energy limit. Construction of supergravity invariants, therefore, is crucial to understand leading order and higher derivative corrections to black hole entropy arising in the context of string theory. For a class of extremal black holes, such higher derivative corrections have been well studied, see e.g. \cite{LopesCardoso:1998tkj,Sen:2007qy,Mohaupt:2000mj}, facilitated by the construction of various higher derivative invariants in $\mathcal{N}=2$ supergravity in four dimensions\cite{Bergshoeff:1980is,Butter:2013lta,Kuzenko:2015jxa,deWit:2010za}. While supergravity invariants second order in derivatives can be constructed with on-shell supersymmetry techniques, to construct higher derivative or more general matter couplings in supergravity, off-shell methods are found to be useful. This is due to the fact that modifying the on-shell supersymmetry transformations to construct higher derivative actions or general matter couplings, involves the arduous task of iteratively modifying the supersymmetry rules and the equations of motion such that the supersymmetry algebra is satisfied on-shell with respect to the modified equations of motion. In off-shell supersymmetry, the supersymmetry algebra is satisfied off-shell and the transformations are independent of equations of motion which makes them suitable for construction of general matter couplings and higher derivative invariants in supergravity. 
Conformal supergravity, in particular superconformal ``multiplet calculus'', provides a powerful method to construct supergravity invariants using off-shell supersymmetry. Here, the large number of symmetries in the superconformal algebra allows for shorter representations than for Poincar\'e supergravity. Physical Poincar\'e supergravity is then obtained by use of compensator fields to gauge fix the additional symmetries in the superconformal theory \cite{Kaku:1978ea}. Upon gauge fixing, actions for matter multiplets in conformal supergravity lead to kinetic actions for the gravity multiplet in Poincar\'e supergravity. The multiplet that contains all the gauge fields of the superconformal algebra is known as the Weyl multiplet. In $\mathcal{N}=2$ conformal supergravity in four dimensions, the transformation of the Weyl multiplet and the complete action was constructed in \cite{Ferrara:1977ij, deWit:1979dzm, Bergshoeff:1980is}. The Weyl multiplet contains all the gauge fields of the superconformal algebra along with a few auxiliary fields required for the closure of the multiplet. It has $24+24$ (bosonic+fermionic) off-shell degrees of freedom. Different choices of the auxiliary fields lead to different types of Weyl multiplet. The Weyl multiplet discussed in \cite{Bergshoeff:1980is} is known as the standard Weyl multiplet. A different set of auxiliary fields leads to the dilaton Weyl multiplet in four dimensions \cite{Butter:2017pbp}. We will use the standard Weyl multiplet in this work. All the details of the $\mathcal{N}=2$ standard Weyl multiplet are discussed in appendix-\ref{N2Weyl}. Apart from the Weyl multiplet, there can be several matter multiplets in $\mathcal{N}=2$ conformal supergravity. Transformation rules for the matter multiplets such as the 8+8 components vector multiplet and 8+8 components tensor multiplet were presented in \cite{deWit:1979dzm}.
A larger matter multiplet containing 24+24 off-shell degrees of freedom called the real scalar multiplet was discussed in \cite{Hegde:2017sgl} which was the generalization of the flat space multiplet discussed in \cite{Howe:1982tm} to conformal supergravity. We will need the tensor multiplet and the real scalar multiplet for our work. The components of the tensor multiplet and its supersymmetry transformation rules will be discussed in appendix-\ref{N2Weyl} and the details about real scalar multiplet will be discussed in section-\ref{Real}. A crucial ingredient to construct actions for these matter multiplets, and thereby Poincar\'e supergravity theories is the $\mathcal{N}=2$ chiral density formula \cite{deRoo:1980mm}. The term density formula refers to a set of objects that appear in the action accompanied by fields such as the vielbein and gravitino. For the action to be invariant under supersymmetry, they transform among each other in a particular way, and satisfy certain constraints such as chirality or reality. The chiral density formula of \cite{deRoo:1980mm} was based on a $16+16$ components chiral multiplet. It was shown that this $16+16$ multiplet can be reduced to an $8+8$ restricted chiral multiplet upon introducing a consistent set of constraints. Further, an embedding was found for the gauge invariant objects of the $8+8$ vector multiplet inside this restricted chiral multiplet. This allowed for the construction of a superconformal action for the vector multiplet in conformal supergravity background and thereby resulted in the construction of a minimal Poincar\'e supergravity theory upon gauge fixing \cite{deWit:1980lyi}. 
While the free tensor gauge field action is not conformal in four dimensions, an improved tensor multiplet action was constructed, based on a density formula made out of the components of a linear multiplet\footnote{The components of the linear multiplet are the gauge invariant objects of a tensor multiplet} and a vector multiplet, in \cite{deWit:1982na}, which allowed for a new minimal formulation of Poincar\'e supergravity. Construction of density formulae such as the chiral density formula and the linear-vector density formula played a key role in the construction of supergravity theories. The process of embedding products of different multiplets into density formulae is known as superconformal multiplet calculus. The three minimal Poincar\'e supergravity theories constructed by this method are based on the actions of two compensating multiplets in the background of the Weyl multiplet. A vector multiplet compensator is necessary to obtain the graviphoton in Poincar\'e supergravity. However this alone is insufficient, as the auxiliary field $D$ of the Weyl multiplet occurs linearly in the action and its equation of motion places a non-trivial condition on the fields. To resolve this, another compensating multiplet is needed. The tensor multiplet or the nonlinear multiplet, when used for this purpose, leads to off-shell Poincar\'e supergravity, and when the on-shell hypermultiplet is used, one obtains an on-shell Poincar\'e supergravity. The procedure of superconformal multiplet calculus has been used in the past to construct several higher derivative invariants, in various dimensions and, of relevance to us here, in four dimensions \cite{Bergshoeff:1980is,Butter:2013lta,Kuzenko:2015jxa,deWit:2010za}. Recently, higher derivative couplings have also been discussed in the literature in the projective superspace formulation of conformal supergravity \cite{Kuzenko:2009zu, Butter:2010jm, Butter:2012sb}.
These results have played an important role in understanding higher derivative corrections to black hole entropy. Thus the construction of new density formulae, and thereby new Poincar\'e supergravity invariants, is of interest, as it allows us to study and understand these results further. In this paper, we construct a new density formula in four dimensional $\mathcal{N}=2$ conformal supergravity, using the covariant superform method presented in \cite{Butter:2019edc}, which is based on the `ectoplasm' principle presented in \cite{Gates:1997ag,Gates:1997kr}. We will elaborate on this method later. The crucial difference of the new density formula from the earlier ones is that the lowest component (i.e.\ the component with the lowest Weyl weight) of the multiplet from which it is constructed is a spinor $\Sigma_{ijk}$. It is a superconformal primary field\footnote{A field that is invariant under special conformal transformations and S-supersymmetry is known as a superconformal primary field. S-supersymmetry is a different kind of supersymmetry that arises in conformal supergravity. It is different from the ordinary supersymmetry, which is referred to as Q-supersymmetry. For details we refer the reader to appendix-\ref{N2Weyl}}, transforms in the $\textbf{4}$ of the SU(2) R-symmetry, has chirality $-1$, chiral weight $-1/2$ and Weyl weight $+5/2$.
Along with its supersymmetry descendants, it appears in the density formula as: \begin{align} S=\int \mathcal{L} d^4 x \;, \end{align} where the Lagrangian density $\mathcal{L}$ is given as: \begin{align}\label{d4.8} e^{-1}\mathcal{L}&=-i\mathcal{L}+2i\bar{\psi}_{ai}\Upsilon^{a}_{j}\varepsilon^{ij}+2i\bar{\psi}_{ai}\gamma^{a}\Psi_{j}\varepsilon^{ij}+\frac{3i}{16}\bar{\psi}_{a}^{j}\gamma^{ab}\gamma^{c}\psi_{bk}\mathcal{G}_{cjl}\varepsilon^{kl}+\frac{i}{16}\bar{\psi}_{a}^{j}\gamma^{c}\gamma^{ab}\psi_{bk}\mathcal{G}_{cjl}\varepsilon^{kl}\nonumber \\ & \;\;\; -\frac{i}{4}\bar{\psi}_{a}^{i}\gamma^{ab}\psi_{b}^{j}\mathcal{D}^{kl}\varepsilon_{ik}\varepsilon_{jl}-\frac{i}{32}\bar{\psi}_{a}^{i}\gamma^{ab}\gamma^{cd}\psi_{b}^{j}\mathcal{B}_{cd}^{kl}\varepsilon_{ik}\varepsilon_{jl}-i\varepsilon^{abcd}\bar{\psi}_{ai}\gamma_{b}\psi_{c}^{j}\bar{\psi}_{dk}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}+\text{h.c.}\;. \end{align} All the components appearing in the above density formula are related to $\Sigma_{ijk}$ by subsequent Q-supersymmetry transformations. As one can also see, the maximum number of gravitini appearing in the above density formula is three, as opposed to four in the chiral density formula or two in the linear-vector density formula. Another crucial difference with the other $\mathcal{N}=2$ density formulae is that not all components of the multiplet appear in the density formula. For example, there are components $\mathcal{C}_{ijkl}$ and $\mathcal{H}^{a}_{ijkl}$ that appear in the supersymmetry transformation of $\Sigma_{ijk}$ (\ref{d2.3}, \ref{d3.2}) but do not appear in the above density formula. Further, we will show that this density formula embeds the $24+24$ real scalar multiplet \cite{Hegde:2017sgl}, which further admits a tensor multiplet embedding. We will use the embedding of the tensor multiplet to construct a new higher derivative invariant for the tensor multiplet in $\mathcal{N}=2$ conformal supergravity. The paper is organized as follows.
In section-\ref{density} we will construct the new density formula using the covariant superform approach of \cite{Butter:2019edc}. In section-\ref{Real} we will review the real scalar multiplet of \cite{Hegde:2017sgl} along with the tensor multiplet embedding. In section-\ref{Real_action} we will find the invariant action for the real scalar multiplet by embedding it into the density formula constructed in section-\ref{density}. In section-\ref{tensor_action}, we will use the tensor multiplet embedding in the real scalar multiplet to obtain a new higher derivative coupling of the tensor multiplet in conformal supergravity. We will end with conclusions and future directions. \section{A new density formula for $\mathcal{N}=2$ conformal supergravity}\label{density} In this section, we will build a new density formula using the covariant superform approach of \cite{Butter:2019edc}. We will briefly outline the method and use it to construct a new density formula for $\mathcal{N}=2$ conformal supergravity. In order to understand this method, consider the following action integral of a d-form ($J$) on a d-dimensional submanifold ($M_d$) embedded in a larger D-dimensional manifold ($\mathcal{M}_D$): \begin{align}\label{density-1} S=\int_{M_d}J \end{align} Under a general diffeomorphism ($\xi$), the d-form transforms as: \begin{align}\label{density-2} \delta_{\xi}J=i_{\xi}dJ+d(i_{\xi} J) \end{align} If the submanifold ($M_d$) is closed or if there are appropriate fall-off boundary conditions, the second term in the above variation (\ref{density-2}) will not contribute to the variation of the action integral (\ref{density-1}).
Hence, the action integral will be invariant under a general diffeomorphism in the larger manifold ($\mathcal{M}_D$) if the d-form ($J$) satisfies the following condition: \begin{align}\label{density-3} dJ=0 \end{align} Now consider the case where $M_d$ is the space-time manifold and $\mathcal{M}_D$ is a larger manifold where some of the gauge symmetries (e.g.\ supersymmetry) have been geometrized. In that case, the corresponding gauge transformations will be a part of the generalized diffeomorphism of the larger manifold, and hence the action integral defined on the space-time manifold will be invariant under the corresponding gauge transformation if it satisfies the closure condition (\ref{density-3}). For further discussions, let us restrict ourselves to conformal supergravity in four spacetime dimensions. Apart from diffeomorphism, the other gauge symmetries present in conformal supergravity are local Lorentz transformation, R-symmetry, dilatation, ordinary supersymmetry (also known as Q-supersymmetry) as well as special supersymmetry (also known as S-supersymmetry). For our purpose, we will consider the case where the larger manifold ($\mathcal{M}$) is a superspace where the Q-supersymmetry has been geometrized. The corresponding fermionic directions are labelled as $\theta^{m}$. The generalized vielbein (or supervielbein) will have legs along the spacetime manifold as well as the fermionic directions as shown below: \begin{align}\label{density-4} E^A=dx^{\mu}E_{\mu}^{A}+d\theta^{m}E_{m}^{A} \end{align} The supervielbein 1-form, when restricted to the spacetime manifold (i.e.\ $\theta=d\theta=0$), is nothing but $(E^A)_{|\theta=d\theta=0}=(e^a,\frac{1}{2}\psi^{i},\frac{1}{2}\psi_{i})$, where $e^a$ is the vielbein 1-form, and $\psi^{i}$ and $\psi_{i}$ are the left-chiral and right-chiral gravitino 1-forms respectively\footnote{We will be dealing with fermions in the Dirac 4-component notation}.
The four-form $J$ that we will need for the action integral can be further decomposed as: \begin{align}\label{density-5} J=J_{DCBA}E^A E^B E^C E^D\;, \end{align} where the wedge product between the 1-forms is assumed and the block $J_{DCBA}$ is fully supercovariant. The four-form is further assumed to be invariant under all the other gauge transformations of conformal supergravity except Q-supersymmetry. Invariance under Q-supersymmetry is guaranteed if the four-form ($J$) satisfies the closure condition (\ref{density-3}). However, since we have also assumed that $J$ is invariant under other gauge transformations, we can replace the closure condition with the covariant closure condition: \begin{align}\label{density-6} \nabla J=0\;, \end{align} where $\nabla$ is an exterior derivative that is covariant w.r.t.\ all the other gauge transformations of conformal supergravity except Q-supersymmetry. The invariance of the four-form $J$ under local Lorentz transformations, dilatation, special conformal transformation and R-symmetry is manifest by taking the separate blocks $J_{DCBA}$ to be invariant under these symmetries. Invariance under S-supersymmetry would mean that the different blocks will be related to each other by S-transformation, since the gravitino transforms to a vielbein under S-transformation as given below: \begin{align}\label{s-grav} \delta_{S}\psi_{\mu}{}^{i}=-e_{\mu}^{a}\gamma_a \eta^{i}\;. \end{align} While imposing the covariant closure condition, the supervielbein appearing in the four-form $J$ (\ref{density-5}) will be taken to be the full supervielbein $E^A$ that has legs along the full superspace $\mathcal{M}$ instead of the restricted supervielbein $E^A_{|\theta=d\theta=0}$ that has legs only along the spacetime manifold $M$. However, we will somewhat abuse the notation and write $E^A=(e^a, \frac{1}{2}\psi^{i},\frac{1}{2}\psi_{i})$ while using the covariant closure condition (\ref{density-6}).
Once we solve the covariant closure condition and get an appropriate 4-form $J$, the supervielbein in the action-integral will only involve the usual vielbein and the gravitino fields since the action integral is defined on the spacetime manifold $M$ which is a $\theta=d\theta=0$ slice of the full manifold $\mathcal{M}$. In order to use the covariant closure condition, we must know how the covariant exterior derivative acts on the supervielbein and supercovariant objects. The covariant exterior derivative acts on the supervielbeins to give the corresponding superspace torsion tensors as shown below\footnote{For a comprehensive treatment of $\mathcal{N}=2$ conformal supergravity in superspace, we refer the reader to \cite{Butter:2011sr}}: \begin{align}\label{density-7} \nabla e^{a}&=-\frac{1}{2}\bar{\psi}^{i}\gamma^{a}\psi_{i} \equiv t_{0} e^{a}\;, \nonumber \\ \nabla \psi^{i} &=\frac{1}{2}e^{a}e^{b}R(Q)_{ba}{}^{i}-\frac{1}{16}\gamma\cdot T^{ij}e^{a}\gamma_{a}\psi_{j}\equiv t_{3/2}\psi^{i}+t_{1}\psi^{i}\;, \nonumber \\ \nabla \psi_{i}&=\frac{1}{2}e^{a}e^{b}R(Q)_{ba}{}_{i}-\frac{1}{16}\gamma\cdot T_{ij}e^{a}\gamma_{a}\psi^{j}\equiv \bar{t}_{3/2}\psi_{i}+\bar{t}_{1}\psi_{i}\;, \end{align} where, following \cite{Butter:2019edc}, we have introduced shorthands $t_{n}$ and $\bar{t}_{n}$ for the torsion 2-forms. The subscripts denote the Weyl weight of the covariant fields appearing in the expressions. 
The covariant exterior derivative acts on the super-covariant objects as: \begin{align}\label{density-8} \nabla \Phi \equiv (\nabla_{1}+\nabla_{1/2}+\bar{\nabla}_{1/2})\Phi \end{align} where \begin{align}\label{density-9} \nabla_{1}\Phi&=e^{a}D_{a}\Phi\;,\nonumber \\ \nabla_{1/2}\Phi&=\frac{1}{2}\bar{\psi}^{k}\nabla_{k}\Phi=\delta^{L}_{Q}\left(\frac{1}{2}\psi\right)\Phi\;, \nonumber \\ \bar{\nabla}_{1/2}\Phi&=\frac{1}{2}\bar{\psi}_{k}\nabla^{k}\Phi =\delta^{R}_{Q}\left(\frac{1}{2}\psi\right)\Phi\;, \end{align} where $D_{a}\Phi$ is the fully super-covariant derivative of $\Phi$, and $\delta_{Q}^{L}\left(\frac{1}{2}\psi\right)\Phi$ or $\delta_{Q}^{R}\left(\frac{1}{2}\psi\right)\Phi$ is the left or the right Q-supersymmetry transformation of $\Phi$, respectively, with the parameter replaced by $\frac{1}{2}\psi$. The subscripts on the $\nabla$ in the shorthand notation of (\ref{density-8}) denote the Weyl weight of the corresponding operators acting on $\Phi$. With the above definitions at hand, the covariant closure condition on $J$ can be written as: \begin{align}\label{density-10} \nabla J=(t_0+t_{3/2}+t_{1}+\bar{t}_{3/2}+\bar{t}_{1}+\nabla_{1}+\nabla_{1/2}+\bar{\nabla}_{1/2})J=0 \end{align} In this formalism, it is convenient to decompose the covariant closure condition on $J$ according to the number of gravitino factors contained in each term and to set each piece to zero individually. To apply this formalism, one has to start with an ansatz, which is the structure of the term in $J$ that contains the maximum number of gravitinos. For example, one may start with the following ansatz for the structure of the highest gravitino term in $J$: \begin{align}\label{chiral} J_{\bar{\psi}^4}=\bar{\psi}_{i}\psi_{j}\bar{\psi}_{k}\psi_{l}\varepsilon^{ij}\varepsilon^{kl}A\;, \end{align} where $A$ is a Lorentz as well as SU(2) scalar and has the required Weyl weight of +2 and chiral weight of -2.
Upon imposing the covariant closure condition, one can recover the well-known chiral density formula along with the full supersymmetry transformations of the chiral multiplet with $A$ as its lowest component. In the above equation, the subscript denotes the number of right-handed gravitinos the term carries. In general, we will decompose $J$ as: \begin{align}\label{density-11} J=\sum_{m+n+p=4}J_{e^m\psi^n\bar{\psi}^p}\;, \end{align} where $m$ is the number of vielbeins, $n$ is the number of left-handed gravitinos and $p$ is the number of right-handed gravitinos. Now, we will take a different route and propose the following structure for the highest gravitino term in $J$: \begin{align}\label{density-12} J_{e\bar{\psi}^{2}\psi}&= e^{a}\bar{\psi}_{i}\gamma_{a}\psi^{j}\bar{\psi}_{k}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}\;,\nonumber \\ J_{{e\psi}^{2}\bar{\psi}}&=e^{a}\bar{\psi}_{i}\gamma_{a}\psi^{j}\bar{\psi}^{k}\Sigma^{imn}\varepsilon_{km}\varepsilon_{jn}\;. \end{align} The component $\Sigma_{ijk}$ appearing above is a spinor field with chirality $-1$, chiral weight $-1/2$ and Weyl weight $+5/2$. The other object $\Sigma^{ijk}$ is related to $\Sigma_{ijk}$ by charge conjugation and has the opposite chirality and chiral weight but the same Weyl weight\footnote{We follow the \textit{chiral notation}, whereby raising and lowering of SU(2) indices flips the chiral weight and chirality but leaves the Weyl weight unchanged.}. Both $\Sigma_{ijk}$ and $\Sigma^{ijk}$ are superconformal primary fields and transform in the $\bf{4}$ representation of $SU(2)$-R symmetry. Let us now apply the covariant closure condition $\nabla J=0$ to the above highest gravitino term.
As discussed previously, the covariant closure condition, which is a five-form, can be decomposed according to the number of gravitino 1-forms it carries, as shown below: \begin{align}\label{density-13} \nabla J &=\sum_{m+n+p=5}(\nabla J)_{e^m \psi^n \bar{\psi}^{p}}=0\nonumber \\ & \implies (\nabla J)_{e^m\psi^n\bar{\psi}^{p}}=0 \end{align} The individual $(\nabla J)_{e^m \psi^n \bar{\psi}^p}$ are typically referred to as the $e^m \psi^n \bar{\psi}^p$ Bianchi identities. \subsection{Solving the $\bar{\psi}^3\psi^2$ Bianchi} This arises only from $t_{0} J_{e\bar{\psi}^2 \psi}$ and must therefore cancel on its own. As we show below, it indeed does. \begin{align}\label{d1.1} t_{0} J_{e\bar{\psi}^{2}\psi}&=\frac{1}{2}\bar{\psi}^{\ell}\gamma^{a}\psi_{\ell}\bar{\psi}_{i}\gamma_{a}\psi^{j}\bar{\psi}_{k}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}\nonumber \\ &=\bar{\psi}_{i}\psi_{\ell}\bar{\psi}^{\ell}\psi^{j}\bar{\psi}_{k}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}=0 \end{align} We performed a Fierz rearrangement in going from the first line to the second. One easy way to see that the last line vanishes is to note that $\bar{\psi}_{i}\psi_{\ell}$ is anti-symmetric in the SU(2) indices $i$ and $\ell$ and hence proportional to $\varepsilon_{i\ell}$; explicitly, $\bar{\psi}_{i}\psi_{\ell}=\frac{1}{2}\varepsilon_{i\ell}\varepsilon^{np}\bar{\psi}_{n}\psi_{p}$. Similarly, $\bar{\psi}^{\ell}\psi^{j}$ is proportional to $\varepsilon^{\ell j}$; explicitly, $\bar{\psi}^{\ell}\psi^{j}=\frac{1}{2}\varepsilon^{\ell j}\varepsilon_{np}\bar{\psi}^{n}\psi^{p}$. Hence $\bar{\psi}_{i}\psi_{\ell}\bar{\psi}^{\ell}\psi^{j}$ is proportional to $\delta_{i}^{j}$. Since $\Sigma_{ijk}$ transforms in the $\bf{4}$ irrep of SU(2)-R symmetry, it is totally symmetric in its indices, which implies $\delta_{i}^{j}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}=\Sigma_{imn}\varepsilon^{km}\varepsilon^{in}=0$, and hence the expression in (\ref{d1.1}) vanishes. Similarly, one can show that the conjugate $\psi^3\bar{\psi}^2$ Bianchi is also automatically satisfied.
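The group-theory step used above can be checked numerically. The following is an illustrative sketch only (not part of the derivation): a random totally symmetric array stands in for $\Sigma_{ijk}$ in the $\bf{4}$ of SU(2), with indices taking values $1,2$, and one verifies that the contraction $\Sigma_{imn}\varepsilon^{km}\varepsilon^{in}$ vanishes identically because $\varepsilon^{in}$ antisymmetrizes two symmetric indices.

```python
import itertools
import random

random.seed(0)

# epsilon^{ij} with SU(2) indices in {1, 2}
eps = {(1, 2): 1.0, (2, 1): -1.0, (1, 1): 0.0, (2, 2): 0.0}

# Random totally symmetric rank-3 tensor, a stand-in for Sigma_{ijk}
vals = {}
Sigma = {}
for idx in itertools.product((1, 2), repeat=3):
    key = tuple(sorted(idx))
    vals.setdefault(key, random.random())
    Sigma[idx] = vals[key]

# Sigma_{imn} eps^{km} eps^{in}: eps^{in} antisymmetrizes the symmetric pair (i, n) -> 0
for k in (1, 2):
    total = sum(Sigma[(i, m, n)] * eps[(k, m)] * eps[(i, n)]
                for i in (1, 2) for m in (1, 2) for n in (1, 2))
    assert abs(total) < 1e-12
print("contraction vanishes for every free index k")
```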
\subsection{Solving the $e\bar{\psi}^{3}\psi$ Bianchi} This comes from $\bar{\nabla}_{1/2}J_{e\bar{\psi}^2\psi}$ and $t_{0}J_{e^2\bar{\psi}^{2}}$. We already know the structure of $J_{e\bar{\psi}^2\psi}$ from (\ref{density-12}). By solving this Bianchi, our aim is to relate the fields appearing in $J_{e^2\bar{\psi}^{2}}$ to the supersymmetry transformation of $\Sigma_{ijk}$. We will also see that some fields appearing in the supersymmetry transformation of $\Sigma_{ijk}$ are constrained. In order to solve this Bianchi, let us define: \begin{align}\label{d2.1} & (\bar{\nabla}^{\ell}\gamma_{bc}\Sigma_{jmn})_{\bf{5}}=\varepsilon^{\ell p}\mathcal{A}_{bc jmnp}\;, \;\;\; \bar{\nabla}^{p}\gamma_{bc}\Sigma_{mnp}=\mathcal{B}_{bcmn}\;, \nonumber \\ & (\bar{\nabla}^{\ell}\Sigma_{jmn})_{\bf{5}}=\varepsilon^{\ell p}\mathcal{C}_{jmnp}\;, \;\;\; \bar{\nabla}^{p}\Sigma_{mnp}=\mathcal{D}_{mn}\;. \end{align} In terms of the above-defined irreps of SU(2)-R symmetry, the full decomposition of the operator $\bar{\nabla}^{\ell}$ acting on $\Sigma_{ijk}$ takes the following form: \begin{align}\label{d2.2} \bar{\nabla}^{\ell}\gamma_{bc}\Sigma_{jmn}&=\varepsilon^{\ell p}\mathcal{A}_{bc jmnp}+\frac{3}{4}\delta^{\ell}_{(j}\mathcal{B}_{bcmn)}\;,\nonumber \\ \bar{\nabla}^{\ell}\Sigma_{jmn}&=\varepsilon^{\ell p}\mathcal{C}_{jmnp}+\frac{3}{4}\delta^{\ell}_{(j}\mathcal{D}_{mn)}\;.
\end{align} Hence, in terms of the fields defined above, the right supersymmetry transformation of $\Sigma_{ijk}$ takes the following form: \begin{align}\label{d2.3} \delta_{Q}^{R}\Sigma_{ijk}&=\bar{\epsilon}_{\ell}\nabla^{\ell}\Sigma_{ijk}\nonumber \\ &=-\frac{1}{2}\bar{\nabla}^{\ell}\Sigma_{ijk}\epsilon_{\ell}+\frac{1}{8}\bar{\nabla}^{\ell}\gamma_{ab}\Sigma_{ijk}\gamma^{ab}\epsilon_{\ell}\nonumber \\ &=-\frac{1}{2}\varepsilon^{\ell m}\mathcal{C}_{ijkm}\epsilon_{\ell}-\frac{3}{8}\mathcal{D}_{(ij}\epsilon_{k)}+\frac{1}{8}\varepsilon^{\ell m}\mathcal{A}_{abijkm}\gamma^{ab}\epsilon_{\ell}+\frac{3}{32}\mathcal{B}_{ab(ij}\gamma^{ab}\epsilon_{k)}\;. \end{align} In terms of the above fields, $\bar{\nabla}_{1/2}J_{e\bar{\psi}^{2}\psi}$ takes the following form: \begin{align}\label{d2.4} \bar{\nabla}_{1/2}J_{e\bar{\psi}^{2}\psi}&=-\frac{1}{8}t_{0}\left(e^{a}e^{b}\right)\bar{\psi}_{i}\gamma_{ab}\psi_{j}\mathcal{D}_{k\ell}\varepsilon^{ik}\varepsilon^{j\ell}\nonumber \\ &\;\;\;+\frac{1}{16}e^{a}\bar{\psi}_{i}\gamma_{a}\psi^{j}\bar{\psi}_{k}\gamma_{bc}\psi_{\ell}\mathcal{A}^{bc}_{jmnp}\varepsilon^{km}\varepsilon^{in}\varepsilon^{\ell p}\nonumber \\ &\;\;\;-\frac{1}{64}t_{0}\left(e^{a}e^{b}\right)\bar{\psi}_{i}\gamma_{ab}\gamma_{cd}\psi_{j}\mathcal{B}^{cd}_{k\ell}\varepsilon^{ik}\varepsilon^{j\ell}\;. \end{align} The first and third terms above are $t_0$ exact and will hence cancel against the $t_{0}$ operation on the following $J_{e^2\bar{\psi}^{2}}$: \begin{align}\label{d2.5} J_{e^2\bar{\psi}^{2}}&=\frac{1}{8}e^{a}e^{b}\bar{\psi}_{i}\gamma_{ab}\psi_{j}\mathcal{D}_{k\ell}\varepsilon^{ik}\varepsilon^{j\ell}+\frac{1}{64}e^{a}e^{b}\bar{\psi}_{i}\gamma_{ab}\gamma_{cd}\psi_{j}\mathcal{B}^{cd}_{k\ell}\varepsilon^{ik}\varepsilon^{j\ell}\;. \end{align} The second term is not $t_0$ exact and will only vanish if we impose the following constraint: \begin{align}\label{d2.6} \mathcal{A}^{bc}_{jmnp}=0\;.
\end{align} The conjugate $e{\psi}^{3}\bar{\psi}$ Bianchi will give the constraint $\mathcal{A}_{bc}^{jmnp}=0$, where $\mathcal{A}_{bc}^{jmnp}$ appears in the left supersymmetry transformation of $\Sigma^{ijk}$ as shown below: \begin{align}\label{d2.7} (\bar{\nabla}_{\ell}\gamma_{bc}\Sigma^{jmn})_{\bf{5}}=\varepsilon_{\ell p}\mathcal{A}_{bc}^{jmnp}\;. \end{align} However, this constraint is automatically satisfied once (\ref{d2.6}) holds, because $\mathcal{A}_{bc}^{jmnp}= (\mathcal{A}_{bcjmnp})^{*}$. The conjugate Bianchi will also give the Hermitian conjugate of $J_{e^2\bar{\psi}^{2}}$, which is: \begin{align}\label{d2.8} J_{e^2{\psi}^{2}}&=\frac{1}{8}e^{a}e^{b}\bar{\psi}^{i}\gamma_{ab}\psi^{j}\mathcal{D}^{kl}\varepsilon_{ik}\varepsilon_{jl}+\frac{1}{64}e^{a}e^{b}\bar{\psi}^{i}\gamma_{ab}\gamma_{cd}\psi^{j}\mathcal{B}^{cdkl}\varepsilon_{ik}\varepsilon_{jl}\;. \end{align} \subsection{Solving the $e\bar{\psi}^{2}\psi^{2}$ Bianchi} This arises from $\nabla_{1/2}J_{e\bar{\psi}^{2}\psi}$, its Hermitian conjugate $\bar{\nabla}_{1/2}J_{e{\psi}^{2}\bar{\psi}}$ and $t_{0}J_{e^2\psi\bar{\psi}}$. By analyzing this Bianchi, we hope to obtain $J_{e^2\psi\bar{\psi}}$, which will contain fields appearing in the left supersymmetry transformation of $\Sigma_{ijk}$, along with constraints on some of those fields. For this purpose, let us define: \begin{align}\label{d3.1} \bar{\nabla}_{(i}\gamma^{b}\Sigma_{jk\ell)}=\mathcal{H}^{b}_{ijk\ell}\;, \;\;\; \varepsilon^{pq}\bar{\nabla}_{p}\gamma^{b}\Sigma_{qmn}=\mathcal{G}^{b}_{mn}\;.
\end{align} In terms of the fields defined above, the left supersymmetry transformation of $\Sigma_{ijk}$ becomes: \begin{align}\label{d3.2} \delta_{Q}^{L}\Sigma_{ijk}&=\bar{\epsilon}^{\ell}\nabla_{\ell}\Sigma_{ijk}\nonumber \\ &=-\frac{1}{2}\bar{\nabla}_{\ell}\gamma^{b}\Sigma_{ijk}\gamma_{b}\epsilon^{\ell}\nonumber \\ &=-\frac{1}{2}\mathcal{H}^{b}_{ijk\ell}\gamma_{b}\epsilon^{\ell}-\frac{3}{8}\varepsilon_{\ell(i}\mathcal{G}^{b}_{jk)}\gamma_{b}\epsilon^{\ell}\;. \end{align} We then find: \begin{align}\label{d3.3} \nabla_{1/2}J_{e\bar{\psi}^2\psi}&=-\frac{1}{4}e^{a}\bar{\psi}_{i}\gamma_{a}\psi^{j}\bar{\psi}_{k}\gamma_{b}\psi^{\ell}\mathcal{H}^{b}_{\ell jmn}\varepsilon^{km}\varepsilon^{in}\nonumber \\ &\;\;\; +\frac{3}{32}t_{0}(e^{a}e^{b})\bar{\psi}^{j}\gamma_{ab}\gamma_{c}\psi_{k}\mathcal{G}^{c}_{j\ell}\varepsilon^{k\ell}\nonumber \\ &\;\;\; -\frac{1}{32}t_{0}(e^{a}e^{b})\bar{\psi}^{j}\gamma_{c}\gamma_{ab}\psi_{k}\mathcal{G}^{c}_{j\ell}\varepsilon^{k\ell}\;. \end{align} The Hermitian conjugate of the above expression is: \begin{align}\label{d3.4} \bar{\nabla}_{1/2}J_{e{\psi}^2\bar{\psi}}&=-\frac{1}{4}e^{a}\bar{\psi}_{j}\gamma_{a}\psi^{i}\bar{\psi}_{\ell}\gamma_{b}\psi^{k}\mathcal{H}^{b}{}^{\ell jmn}\varepsilon_{km}\varepsilon_{in}\nonumber \\ &\;\;\; -\frac{3}{32}t_{0}(e^{a}e^{b})\bar{\psi}^{k}\gamma_{c}\gamma_{ab}\psi_{j}\mathcal{G}^{c}{}^{j\ell}\varepsilon_{k\ell}\nonumber \\ &\;\;\; +\frac{1}{32}t_{0}(e^{a}e^{b})\bar{\psi}^{k}\gamma_{ab}\gamma_{c}\psi_{j}\mathcal{G}^{c}{}^{j\ell}\varepsilon_{k\ell}\;. \end{align} The first terms in the above two equations have the same structure and are not $t_0$ exact. Hence, they have to cancel among themselves, which can only happen if the following constraint is satisfied: \begin{align}\label{d3.5} \mathcal{H}^{b}{}^{\ell jmn}=-\varepsilon^{\ell p}\varepsilon^{jq}\varepsilon^{mr}\varepsilon^{ns}\mathcal{H}^{b}_{pqrs}\;.
\end{align} The second and third terms are $t_0$ exact and will cancel against the $t_0$ operation on the following $J_{e^2\psi\bar{\psi}}$: \begin{align}\label{d3.6} J_{e^2\psi\bar{\psi}}&=-\frac{3}{32}e^a e^b \bar{\psi}^{j}\gamma_{ab}\gamma_{c}\psi_{k}\mathcal{G}^{c}_{j\ell}\varepsilon^{k\ell}\nonumber \\ &\;\;\; +\frac{1}{32}e^{a}e^{b}\bar{\psi}^{j}\gamma_{c}\gamma_{ab}\psi_{k}\mathcal{G}^{c}_{j\ell}\varepsilon^{k\ell} +\text{h.c}\;. \end{align} \subsection{Solving the remaining Bianchi identities and the final invariant density formula} The remaining Bianchi identities to be solved are $e^2 \psi^3$, $e^2\psi^2\bar{\psi}$, $e^3\psi^2$, $e^3\bar{\psi}{\psi}$, $e^4 {\psi}$ and their conjugates. However, it should be clear from the previous analyses that identities such as $e^2 \psi^3$, $e^3\psi^2$, $e^4 \psi$ and their conjugates will only give rise to constraints and will not yield new terms in the four-form $J$, whereas identities such as $e^2\psi^2\bar{\psi}$, its conjugate and $e^3\bar{\psi}{\psi}$ will give rise to constraints as well as new contributions to the four-form $J$, of the form $J_{e^3\psi}$, its conjugate, and $J_{e^4}$. However, the constraints obtained from these identities are not independent: they are satisfied as a consequence of the earlier constraints (\ref{d2.6}, \ref{d3.5}) and the closure of the supersymmetry algebra. This can be argued using a standard superspace argument based on the Bianchi identity of the Bianchi identity (see appendix B.2 of \cite{Butter:2019edc}). We will revisit this argument in appendix~\ref{Bianchi}.
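As an aside, the relative coefficient $3/4$ in decompositions such as (\ref{d2.2}) can be checked by a trace: contracting $\ell$ with $j$ in $\bar{\nabla}^{\ell}\Sigma_{jmn}=\varepsilon^{\ell p}\mathcal{C}_{jmnp}+\frac{3}{4}\delta^{\ell}_{(j}\mathcal{D}_{mn)}$ must return exactly $\mathcal{D}_{mn}$ by the definition (\ref{d2.1}), since the totally symmetric $\bf{5}$ part drops out of the trace. The sketch below verifies this numerically; the random arrays are mere placeholders for $\mathcal{C}$ and $\mathcal{D}$ (not actual field values), and the $(jmn)$ symmetrization is assumed to be normalized (averaged over permutations), which is the convention under which the stated coefficient works out.

```python
import itertools
import random

random.seed(1)

# epsilon^{lp} with SU(2) indices in {1, 2}
eps = {(1, 2): 1.0, (2, 1): -1.0, (1, 1): 0.0, (2, 2): 0.0}

def random_symmetric(rank):
    """Random totally symmetric SU(2) tensor of the given rank."""
    vals, t = {}, {}
    for idx in itertools.product((1, 2), repeat=rank):
        key = tuple(sorted(idx))
        vals.setdefault(key, random.random())
        t[idx] = vals[key]
    return t

C = random_symmetric(4)  # placeholder for the 5 irrep C_{jmnp}
D = random_symmetric(2)  # placeholder for the 3 irrep D_{mn}

def decomposition(l, j, m, n):
    # eps^{lp} C_{jmnp} + (3/4) * delta^l_{(j} D_{mn)}, normalized symmetrization
    five = sum(eps[(l, p)] * C[(j, m, n, p)] for p in (1, 2))
    three = 0.75 * ((l == j) * D[(m, n)] + (l == m) * D[(n, j)] + (l == n) * D[(j, m)]) / 3.0
    return five + three

# Tracing l with j must reproduce D_{mn}; the eps*C piece drops out of the trace
for m in (1, 2):
    for n in (1, 2):
        trace = sum(decomposition(l, l, m, n) for l in (1, 2))
        assert abs(trace - D[(m, n)]) < 1e-12
print("trace reproduces D exactly; the coefficient 3/4 is consistent")
```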
Upon solving the $e^2\bar{\psi}^2 \psi$ Bianchi, we obtain the $J_{e^3\bar{\psi}}$ contribution to the four-form $J$, which is as follows: \begin{align}\label{d4.1} J_{e^3\bar{\psi}}&=\frac{1}{3}e^{a}e^{b}e^{c}\bar{\psi}_{i}\Upsilon^{d}_{j}\varepsilon^{ij}\varepsilon_{abcd}+\frac{1}{3}e^{a}e^{b}e^{c}\bar{\psi}_{i}\gamma^{d}\Psi_{j}\varepsilon^{ij}\varepsilon_{abcd}\;, \end{align} where \begin{align}\label{d4.2} \Upsilon_{b}{}_{i}&=-\frac{1}{32}\gamma^{a}\sigma_{abi}\;,\nonumber \\ \Psi_{i}&=- \frac{3}{64}\gamma^{a}\zeta_{ai}+\frac{1}{64}\gamma^{b}\beta_{bi}\;. \end{align} The new components appearing above arise in the supersymmetry transformations of the components introduced earlier, as follows: \begin{align}\label{d4.3} \delta_{Q}^{L}\mathcal{B}_{abij}&=\bar{\epsilon}^{k}\nabla_{k}\mathcal{B}_{abij}=\bar{\epsilon}^{k}\lambda_{abijk}+\frac{2}{3}\varepsilon_{k(i}\bar{\epsilon}^{k}\sigma_{abj)}\;,\nonumber \\ \delta_{Q}^{R}\mathcal{G}_{cij}&=\bar{\epsilon}_{k}\nabla^{k}\mathcal{G}_{cij}=\varepsilon^{kl}\bar{\epsilon}_{k}\xi_{cijl}-\frac{2}{3}\bar{\epsilon}_{(i}\zeta_{cj)}\;, \nonumber \\ \delta_{Q}^{R}\bar{\mathcal{G}}_{cij}&=\bar{\epsilon}_{k}\nabla^{k}\bar{\mathcal{G}}_{cij}=\varepsilon^{kl}\bar{\epsilon}_{k}\alpha_{cijl}-\frac{2}{3}\bar{\epsilon}_{(i}\beta_{cj)}\;. \end{align} We have defined $\bar{\mathcal{G}}_{cij}$ as: \begin{align}\label{d4.4} \bar{\mathcal{G}}_{cij}&=\varepsilon_{ik}\varepsilon_{jl}{\mathcal{G}}_{c}^{kl}\;. \end{align} The conjugate $e^2\psi^2\bar{\psi}$ Bianchi will give rise to the conjugate $J_{e^3\psi}$.
Finally, the $e^3\psi\bar{\psi}$ Bianchi gives rise to the following: \begin{align}\label{d4.5} J_{e^4}&=-\frac{1}{24}e^{a}e^{b}e^{c}e^{d}\mathcal{F}\varepsilon_{abcd}+\text{h.c}\;, \end{align} where $\mathcal{F}$ appears as the Lorentz- and SU(2)-invariant component in the supersymmetry transformation of $\Psi_{i}$: \begin{align}\label{d4.6} \delta_{Q}^{L}\Psi_{j}&=-\frac{1}{2}\mathcal{Q}_{kj}\epsilon^{k}-\frac{1}{2}\varepsilon_{kj}\mathcal{F}\epsilon^{k}+\frac{1}{8}\gamma_{ab}\mathcal{P}^{ab}_{kj}\epsilon^{k}+\frac{1}{8}\gamma_{ab}\mathcal{R}^{ab}\epsilon^{k}\varepsilon_{kj}\;. \end{align} All the supercovariant terms are contained in $J_{e^4}$. Using the above results, our final invariant action takes the following form: \begin{align}\label{d4.7} S=\int \mathcal{L} d^4 x \;, \end{align} where the Lagrangian density $\mathcal{L}$ is given by the density formula: \begin{align}\label{d4.8} e^{-1}\mathcal{L}&=-i\mathcal{F}+2i\bar{\psi}_{ai}\Upsilon^{a}_{j}\varepsilon^{ij}+2i\bar{\psi}_{ai}\gamma^{a}\Psi_{j}\varepsilon^{ij}+\frac{3i}{16}\bar{\psi}_{a}^{j}\gamma^{ab}\gamma^{c}\psi_{bk}\mathcal{G}_{cjl}\varepsilon^{kl}+\frac{i}{16}\bar{\psi}_{a}^{j}\gamma^{c}\gamma^{ab}\psi_{bk}\mathcal{G}_{cjl}\varepsilon^{kl}\nonumber \\ & \;\;\; -\frac{i}{4}\bar{\psi}_{a}^{i}\gamma^{ab}\psi_{b}^{j}\mathcal{D}^{kl}\varepsilon_{ik}\varepsilon_{jl}-\frac{i}{32}\bar{\psi}_{a}^{i}\gamma^{ab}\gamma^{cd}\psi_{b}^{j}\mathcal{B}_{cd}^{kl}\varepsilon_{ik}\varepsilon_{jl}-i\varepsilon^{abcd}\bar{\psi}_{ai}\gamma_{b}\psi_{c}^{j}\bar{\psi}_{dk}\Sigma_{jmn}\varepsilon^{km}\varepsilon^{in}+\text{h.c}\;. \end{align} Our basic building block in the above density formula is $\Sigma_{ijk}$, which satisfies the properties explained below (\ref{density-12}) as well as the constraints (\ref{d2.6}, \ref{d3.5}). The other components of the above density formula are obtained by taking successive supersymmetry transformations of $\Sigma_{ijk}$.
The components transform among themselves under S-supersymmetry as shown below: \begin{align}\label{d4.9} \delta_S \Sigma_{ijk}&=0\;,\nonumber \\ \delta_S \mathcal{D}_{ij}&=4\bar{\eta}^{k}\Sigma_{ijk}\;, \nonumber\\ \delta_S \mathcal{B}_{ab}{}_{ij}&=8\bar{\eta}^{k}\gamma_{ab}\Sigma_{ijk}\;, \nonumber\\ \delta_S \mathcal{G}_{a}{}_{ij}&=8\bar{\eta}_{k}\gamma_{a}\Sigma_{lij}\varepsilon^{kl}\;, \nonumber\\ \delta_S \Upsilon_{d}{}_{j}&=\frac{1}{16}\gamma^{ab} \mathcal{B}_{ab}{}_{jk}\gamma_{d}\eta_{l}\varepsilon^{kl}-\frac{3}{8}\mathcal{G}_{d}{}_{jk}\eta^{k}+\frac{1}{8}\mathcal{G}^{g}{}_{jk}\gamma_{dg}\eta^{k}\;, \nonumber\\ \delta_S \Psi_{j}&=-\frac{3}{4}\mathcal{D}_{jk}\eta_{l}\varepsilon^{kl}-\frac{3}{16}\gamma^{a}\mathcal{G}_{a}{}^{lm}\varepsilon_{jl}\varepsilon_{km}\eta^{k}+\frac{3}{16}\gamma^{a}\mathcal{G}_{a}{}_{jk}\eta^{k}\;,\nonumber \\ \delta_S \mathcal{F}&=8\bar{\eta}_{i}\Psi_{j}\varepsilon^{ij}\;. \end{align} We are interested in using the above density formula to obtain invariant actions for some known multiplets in $\mathcal{N}=2$ conformal supergravity. It seems that we can embed the 24+24 component real scalar multiplet of \cite{Hegde:2017sgl} in the above density formula to obtain an invariant action for the real scalar multiplet. It is also known from \cite{Hegde:2017sgl} that, upon imposing suitable constraints on the real scalar multiplet, one can embed the 8+8 component tensor multiplet in the 24+24 component real scalar multiplet. We will further use this embedding of the tensor multiplet in the real scalar multiplet to obtain a new invariant action for the tensor multiplet coupled to $\mathcal{N}=2$ conformal supergravity. In the next section, we will discuss the real scalar multiplet, its components and their supersymmetry transformation laws. We will also discuss the embedding of the tensor multiplet in the real scalar multiplet.
In the subsequent section, we will embed the real scalar multiplet into the density formula obtained in this section to get an invariant action for the real scalar multiplet. We will further use the embedding of the 8+8 component tensor multiplet to obtain a new higher-derivative coupling of the tensor multiplet to $\mathcal{N}=2$ conformal supergravity. \section{Real scalar multiplet}\label{Real} The real scalar multiplet is a 24+24 component $\mathcal{N}=2$ matter multiplet that was originally found in flat space \cite{Howe:1982tm} in an attempt to understand the off-shell formulation of the $\mathcal{N}=2$ hypermultiplet. This multiplet was extended to $\mathcal{N}=2$ conformal supergravity in \cite{Hegde:2017sgl}. Further, in \cite{Hegde:2017sgl}, a consistent set of constraints was found that restricts the real scalar multiplet to an $8+8$ restricted real scalar multiplet. This restricted real scalar multiplet was shown to admit an embedding of the $8+8$ tensor multiplet. We briefly review the results from \cite{Hegde:2017sgl} in this section. The field content of the multiplet is given in Table~\ref{Table-Real-Scalar}. All the field components are invariant under special conformal transformations (K-transformations). Their $Q$ and $S$ transformations are given below\footnote{We have corrected some minor typos/omissions in the original paper \cite{Hegde:2017sgl}. The coefficient of $\bar{\Lambda}^{l}\Lambda^{m}\slashed{D}\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}$ in $\Gamma_{ijk}$ has been corrected, and the term $-\frac{1}{16}\bar{\Lambda}^{p}\gamma^{ab}\Lambda^{q}\gamma_{ab}\Xi^{lmn}\varepsilon_{pq}\varepsilon_{il}\varepsilon_{jm}\varepsilon_{kn}$, which was missing in $\Gamma_{ijk}$, has been added.}.
\begin{table}[t] \caption{Field content of the $\mathcal{N}=2$ Real Scalar multiplet}\label{Table-Real-Scalar} \begin{center} \begin{tabular}{ | C{2cm}|C{2cm}|C{3cm}|C{2cm}|C{2cm}| } \hline Field & SU(2) Irreps & Restrictions &Weyl weight (w) & Chiral weight (c) \\ \hline $\phi$ & $\bf{1}$ & Real & 1 & 0 \\ \hline $E_{ij}$ & $\bf{3}$ & Complex &1 & -1 \\ \hline $S^a{}^i{}_j$ & $\bf{3}$ & $(S_a{}^{i}{}_{j})^{*}\equiv S_a{}_{i}{}^{j}=-S_a{}^{j}{}_{i}$ &1 & 0 \\ \hline $C_{ijkl}$ & $\bf{5}$ & $C^{ijkl}\equiv (C_{ijkl})^*=\varepsilon^{ip}\varepsilon^{jq}\varepsilon^{kr}\varepsilon^{ls}C_{pqrs}$ &2 & 0 \\ \hline $\Lambda_i$ & $\bf{2}$ & $\gamma_{5}\Lambda_i=\Lambda_i$&1/2 & 1/2 \\ \hline $\Xi_{ijk}$ & $\bf{4}$ & $\gamma_{5}\Xi_{ijk}=\Xi_{ijk}$ &3/2 &-1/2 \\ \hline \end{tabular} \end{center} \end{table} \begin{align}\label{Susy-transf} \delta \phi &= -\frac{\phi}{2}\bar{\epsilon}^{i}\Lambda_{i}+\text{h.c.}\;, \nonumber \\ \delta \Lambda^{i}&=-2\slashed{P}\epsilon^{i}-\left(\slashed{S}^{i}{}_{j}\epsilon^{j}+2\varepsilon^{ik}\varepsilon^{jl}\epsilon_{j}E_{lk}\right)-\frac{1}{2}\bar{\Lambda}^{i}\Lambda^{j}\epsilon_{j}-\frac{1}{4}\bar{\Lambda}^{j}\gamma_{a}\Lambda_{j}\gamma^{a}\epsilon^{i}+\frac{1}{8}\bar{\Lambda}^{i}\gamma_{ab}\Lambda^{j}\gamma^{ab}\epsilon_{j}\nonumber \\ & \;\;\;-2\eta^{i}\;,\nonumber\\ \delta S_{a}{}^{i}{}_{j}&=\bar{\epsilon}_{j}\gamma_{a}\chi^{i}+\frac{2}{3}\bar{\epsilon}_{j}\gamma_{a}\slashed{D}\Lambda^{i}-2\bar{\epsilon}_{j}D_{a}\Lambda^{i}-\frac{1}{3}\varepsilon^{li}\varepsilon^{nk}\bar{\epsilon}_{n}\gamma_{a}\Xi_{ljk}+\frac{1}{24}\bar{\epsilon}_{j}\gamma_{a}\gamma.T^{-}\Lambda_{k}\varepsilon^{ik} \nonumber\\ &\;\;\; -\frac{1}{3}\bar{\epsilon}_{j}\gamma_{a}\Lambda_{k}E_{lm}\varepsilon^{il}\varepsilon^{km}-\frac{2}{3}\bar{\epsilon}^{i}\gamma_{a}\slashed{S}^{k}{}_{j}\Lambda_{k}-\frac{1}{2}\bar{\epsilon}^{i}\slashed{S}^{k}{}_{j}\gamma_{a}\Lambda_{k}+\frac{1}{2}\bar{\epsilon}^{k}\gamma_{a}\slashed{S}^{i}{}_{j}\Lambda_{k} 
-\frac{2}{3}\bar{\epsilon}^{i}\gamma_{a}\slashed{P}\Lambda_{j}\nonumber \\ & \;\;\;-\bar{\epsilon}^{i}\slashed{P}\gamma_{a}\Lambda_{j}-\frac{1}{24}\bar{\Lambda}^{i}\Lambda^{k}\bar{\epsilon}_{j}\gamma_{a}\Lambda_{k}-\frac{1}{32}\bar{\Lambda}^{i}\gamma_{bc}\Lambda^{k}\bar{\epsilon}_{j}\gamma^{bc}\gamma_{a}\Lambda_{k}-\text{(h.c.;traceless)}\;,\nonumber\\ \delta E_{ij}&=2\bar{\epsilon}^{(l}\chi^{k)}\varepsilon_{ik}\varepsilon_{jl}-\frac{2}{3}\bar{\epsilon}^{(l}\slashed{D}\Lambda^{k)}\varepsilon_{ik}\varepsilon_{jl}+\frac{1}{3}\bar{\epsilon}^{k}\Xi_{ijk}-\frac{1}{12}\bar{\epsilon}^{k}\gamma.T^{-}\Lambda_{(i}\varepsilon_{j)k}+\frac{2}{3}\bar{\epsilon}^{k}\Lambda_{(i}E_{j)k} \nonumber\\ & \;\;\;-2\bar{\epsilon}_{(i}\Lambda^{k}E_{j)k}-\frac{2}{3}\bar{\epsilon}^{k}\Lambda_{k}E_{ij}+\bar{\epsilon}_{k}\Lambda^{k}E_{ij}-\frac{1}{3}\bar{\epsilon}^{(l}\slashed{S}^{m)}{}_{k}\Lambda^{k}\varepsilon_{il}\varepsilon_{jm}-\frac{2}{3}\bar{\epsilon}^{(k}\slashed{P}\Lambda^{l)}\varepsilon_{ik}\varepsilon_{jl}\nonumber \\ & \;\;\;-\frac{1}{12}\bar{\epsilon}^{(l}\gamma_{a}\Lambda^{k)}\bar{\Lambda}^{m}\gamma^{a}\Lambda_{m}\varepsilon_{il}\varepsilon_{jk}\;, \nonumber \\ \delta \Xi_{ijk}&=\frac{3}{2}\epsilon_{mn}\epsilon_{lp}\left[D_{a}S^a{}^l{}_{(i}\delta^n_j\delta^p_{k)}-2\gamma^{ab}D_{a}S_{b}{}^l{}_{(i}\delta^n_j\delta^p_{k)}-\gamma.R(V)^l{}_{(i}\delta^n_j\delta^p_{k)} \right]\epsilon^m+{6}\slashed{D}E_{(ij}\epsilon_{k)} \nonumber \\ &\;\;\;-C_{ijkl}\epsilon^l-6E^{ln}E_{m(i}\varepsilon_{j|l|}\varepsilon_{k)n}\epsilon^{m} -6\slashed{P}E_{(ij}\epsilon_{k)}+3\slashed{S}^{m}{}_{(i}E_{jk)}\epsilon_{m}-6\slashed{S}^{m}{}_{(i}E_{j|m|}\epsilon_{k)} \nonumber\\ &\;\;\; +3S^{a}{}^{m}{}_{(i}S^{b}{}^{n}{}_{j}\varepsilon_{k)m}\varepsilon_{ln}\gamma_{ab}\epsilon^{l} + 3 P^{a}S_{a}{}^{l}{}_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\epsilon^{m}-\frac{3}{4}\gamma.T^{-}E^{lm}\epsilon^{n}\varepsilon_{(i|l|}\varepsilon_{j|m|}\varepsilon_{k)n} \nonumber\\ &\;\;\; 
+\bar{\Lambda}^{l}\gamma_{a}\slashed{D}\Lambda^{m}\gamma^{a}\epsilon_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}+\frac{1}{4}\bar{\Lambda}^{l}\slashed{D}\Lambda_{(i}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m}-{\frac{3}{8}}\bar{\Lambda}^{l}\slashed{D}\gamma_{ab}\Lambda_{(i}\gamma^{ab}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m} \nonumber \\ & \;\;\;-{\frac{1}{4}}\bar{\Lambda}_{(i}\slashed{D}\Lambda^{l}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m} +{\frac{1}{8}}\bar{\Lambda}_{(i}\gamma_{ab}\slashed{D}\Lambda^{l}\gamma^{ab}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m}-\frac{3}{2}\bar{\Lambda}_{(i}R(Q)_{ab}{}^{l}\gamma^{ab}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m} \nonumber \\ &\;\;\;+\frac{1}{2}\bar{\Lambda}^{p}\Xi^{lmn}\epsilon^{q}\varepsilon_{il}\varepsilon_{jm}\varepsilon_{kn}\varepsilon_{pq}-\frac{1}{2}\bar{\Lambda}^{(m}\Xi^{np)l}\epsilon^{q}\varepsilon_{im}\varepsilon_{jn}\varepsilon_{kp}\varepsilon_{lq}-\frac{1}{8}\bar{\Lambda}_{l}\gamma_{ab}\Xi_{ijk}\gamma^{ab}\epsilon^{l} \nonumber \\ &\;\;\;-\bar{\Lambda}_{(i}\Xi_{jk)l}\epsilon^{l}+\frac{1}{8}\bar{\Lambda}_{(i}\gamma_{ab}\Xi_{jk)l}\gamma^{ab}\epsilon^{l}-\frac{1}{2}\bar{\Lambda}^{l}\gamma_{a}\Xi_{ijk}\gamma^{a}\epsilon_{l}+\bar{\Lambda}^{l}\gamma_{a}\Xi_{l(ij}\gamma^{a}\epsilon_{k)} \nonumber \\ &\;\;\; -\frac{3}{2}\bar{\Lambda}_{(i}\chi^{l}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m}-\frac{3}{2}\bar{\Lambda}_{(i}\gamma_{ab}\chi^{l}\gamma^{ab}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m}+\frac{3}{2}\bar{\Lambda}^{l}\chi_{(i}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m} \nonumber \\ & \;\;\; -3\bar{\Lambda}^{l}\gamma_{a}\chi^{m}\gamma^{a}\epsilon_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}+\frac{1}{4}\varepsilon_{l(k}\bar{\Lambda}_{i}\Lambda_{j)}\gamma.T^{-}\epsilon^{l}-\frac{1}{8}\varepsilon_{l(k}\bar{\Lambda}^{l}\gamma_{a}\Lambda_{i}\gamma.T^{-}\gamma^{a}\epsilon_{j)} \nonumber \\ &\;\;\; 
+\frac{1}{2}\bar{\Lambda}^{(l}\Lambda^{m}E^{n)p}\epsilon^{q}\varepsilon_{il}\varepsilon_{jm}\varepsilon_{kn}\varepsilon_{pq}-\frac{1}{2}\bar{\Lambda}^{p}\Lambda^{(l}E^{mn)}\epsilon^{q}\varepsilon_{il}\varepsilon_{jm}\varepsilon_{kn}\varepsilon_{pq}+\frac{1}{2}\bar{\Lambda}_{l}\Lambda_{(i}E_{jk)}\epsilon^{l} \nonumber \\ &\;\;\; -\frac{1}{2}\bar{\Lambda}_{(i}\Lambda_{j}E_{k)l}\epsilon^{l}+\frac{1}{4}\bar{\Lambda}_{l}\gamma_{ab}\Lambda_{(i}E_{jk)}\gamma^{ab}\epsilon^{l} +\frac{1}{4}\bar{\Lambda}^{l}\gamma_{a}\Lambda_{l}\gamma^{a}\epsilon_{(i}E_{jk)}-\bar{\Lambda}^{l}\gamma_{a}\Lambda_{(i}\gamma^{a}\epsilon_{j}E_{k)l} \nonumber \\ &\;\;\; -\frac{1}{2}\bar{\Lambda}^{l}\slashed{S}^{n}{}_{l}\Lambda_{(i}\epsilon^{m}\varepsilon_{j|n|}\varepsilon_{k)m}+\frac{1}{4}\varepsilon_{(i|n|}\varepsilon_{j|m|}\bar{\Lambda}^{l}\slashed{S}^{n}{}_{k)}\Lambda_{l}\epsilon^{m}-\frac{1}{8}\bar{\Lambda}^{l}\slashed{S}^{n}{}_{l}\gamma_{ab}\Lambda_{(i}\gamma^{ab}\epsilon^{m}\varepsilon_{j|n|}\varepsilon_{k)m} \nonumber \\ &\;\;\; +\frac{3}{16}\varepsilon_{(i|n|}\varepsilon_{j|m|}\bar{\Lambda}^{l}\slashed{S}^{n}{}_{k)}\gamma^{ab}\Lambda_{l}\gamma_{ab}\epsilon^{m}+\frac{1}{2}\bar{\Lambda}^{n}\Lambda^{m}\slashed{S}^{l}{}_{(i}\epsilon_{j}\varepsilon_{k)m}\varepsilon_{nl} +\frac{1}{2}\bar{\Lambda}^{l}\slashed{P}\Lambda_{(i}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m} \nonumber \\ &\;\;\; +\frac{1}{16}\bar{\Lambda}^{l}\gamma_{ab}\Lambda^{m}\slashed{S}^{n}{}_{(i}\gamma^{ab}\epsilon_{j}\varepsilon_{k)n}\varepsilon_{lm}+\frac{1}{2}\bar{\Lambda}^{l}\slashed{P}\gamma_{ab}\Lambda_{(i}\gamma^{ab}\epsilon^{m}\varepsilon_{j|l|}\varepsilon_{k)m}+\bar{\Lambda}^{l}\Lambda^{m}\slashed{P}\epsilon_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\nonumber \\ &\;\;\; +\frac{1}{8}\bar{\Lambda}^{m}\Lambda^{n}\bar{\Lambda}^{l}\gamma_{a}\Lambda_{l}\gamma^{a}\epsilon_{(i}\varepsilon_{j|m|}\varepsilon_{k)n} 
-\frac{1}{16}\bar{\Lambda}^{l}\Lambda^{m}\bar{\Lambda}_{n}\gamma_{ab}\Lambda_{(i}\gamma^{ab}\epsilon^{n}\varepsilon_{j|l|}\varepsilon_{k)m}\;, \nonumber \\ \delta C_{ijkl}&=\bar{\epsilon}_{(i}\Gamma_{jkl)}+\varepsilon_{ip}\varepsilon_{jq}\varepsilon_{kr}\varepsilon_{ls}\bar{\epsilon}^{(p}\Gamma^{qrs)}+\left(\bar{\epsilon}_{m}\Lambda^{m}+\bar{\epsilon}^{m}\Lambda_{m}\right)C_{ijkl}\;. \end{align} where we have defined $P_a=\phi^{-1}D_a\phi$ and $\Gamma_{ijk}$ is given as follows. \begin{align}\label{Gammadef} \Gamma_{ijk}&=-2\slashed{D}\Xi_{ijk}-3D_{a}S^{a}{}^{n}{}_{(i}\Lambda^{m}\varepsilon_{j|n|}\varepsilon_{k)m}+6\slashed{D}E_{(ij}\Lambda_{k)}-2C_{ijkl}\Lambda^{l}+2\slashed{D}\Lambda_{(i}E_{jk)} \nonumber \\ &\;\;\; +2D_a\Lambda^l S^a{}^n{}_{(i}\varepsilon_{j|n|}\varepsilon_{k)l}-2\gamma^{ab}D_a\Lambda^l S_b{}^n{}_{(i}\varepsilon_{j|n|}\varepsilon_{k)l}-4\Xi^{lmn}E_{l(i}\varepsilon_{j|m|}\varepsilon_{k)n}+2\slashed{S}^{l}{}_{(i}\Xi_{jk)l} \nonumber \\ &\;\;\; +12 \chi_{(i}E_{jk)}-6\slashed{S}^{l}{}_{(i}\chi^{m}\varepsilon_{j|l|}\varepsilon_{k)m} -4\slashed{P}E_{(ij}\Lambda_{k)}-4P_aS^a{}^n{}_{(i}\varepsilon_{j|n|}\varepsilon_{k)l}\Lambda^l \nonumber \\ &\;\;\; -2P_a\gamma^{ab}S_b{}^n{}_{(i}\varepsilon_{j|n|}\varepsilon_{k)l}\Lambda^l-2\slashed{S}^l{}_{(i}E_{jk)}\Lambda_l-2\slashed{S}^l{}_{(i}E_{j|l|}\Lambda_{k)}-4E_{(ij}\varepsilon_{k)m}\varepsilon_{ln}E^{mn}\Lambda^l \nonumber \\ &\;\;\; +12\varepsilon_{(i|m|}\varepsilon_{j|n|}E_{k)l}E^{mn}\Lambda^l+S_a{}^m{}_{(i}S^a{}^n{}_j\varepsilon_{k)m}\varepsilon_{ln}\Lambda^l+\gamma_{ab}S^{a m}{}_{(i}S^b{}^n{}_j\varepsilon_{k)m}\varepsilon_{ln}\Lambda^l \nonumber \\ &\;\;\; +\frac{1}{2}\gamma\cdot T^+ E_{(ij}\varepsilon_{k)l}\Lambda^l+\frac{1}{4}\slashed{S}^l{}_{(i}\gamma\cdot T^-\Lambda_{j}\varepsilon_{k)l}+\frac{1}{4}\bar{\Lambda}^l\Lambda^m\slashed{D}\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m} \nonumber \\ &\;\;\; 
+\frac{5}{4}\bar{\Lambda}^l\gamma^a\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}D_a\Lambda^m+\frac{5}{16}\bar{\Lambda}^l\gamma^{bc}\gamma^a\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\gamma_{bc}D_a\Lambda^m \nonumber \\ &\;\;\; +\frac{3}{2}\bar{\Lambda}^l\Lambda^m\chi_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}-\frac{3}{2}\bar{\Lambda}^l\gamma^a\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\gamma_a\chi^m +\frac{1}{2}\bar{\Lambda}^{q}\Lambda^{(p}\Xi^{ml)n}\varepsilon_{ip}\varepsilon_{jm}\varepsilon_{kl}\varepsilon_{qn}\nonumber \\ &\;\;\;-\frac{1}{16}\bar{\Lambda}^p\gamma^{ab}\Lambda^q\gamma_{ab}\Xi^{lmn}\varepsilon_{pq}\varepsilon_{il}\varepsilon_{jm}\varepsilon_{kn}+\frac{1}{2}\bar{\Lambda}^l\gamma_a\Lambda_{(i}\gamma^a\Xi_{jk)l}+\frac{3}{2}\bar{\Lambda}^l\Lambda^m\slashed{P}\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\nonumber\\ &\;\;\; -\frac{1}{4}\bar{\Lambda}^l\Lambda^m \slashed{S}^n{}_{(i}\varepsilon_{j|n|}\varepsilon_{k)m}\Lambda_l-\frac{1}{32}\varepsilon_{mn}\bar{\Lambda}^m\gamma^{ab}\Lambda^n\gamma_{ab}\slashed{S}^l{}_{(i}\Lambda_j\varepsilon_{k)l}-\frac{1}{2}\bar{\Lambda}^l\Lambda^{(m}E^{pq)}\Lambda^n\varepsilon_{ip}\varepsilon_{jq}\varepsilon_{km}\varepsilon_{ln} \nonumber \\ &\;\;\; -\bar{\Lambda}_{(i}\Lambda_j E_{k)l}\Lambda^l-\bar{\Lambda}_l\Lambda_{(i}E_{jk)}\Lambda^l+\frac{1}{8}\bar{\Lambda}^l\Lambda^m\bar{\Lambda}^n\gamma_a\Lambda_n\gamma^a\Lambda_{(i}\varepsilon_{j|l|}\varepsilon_{k)m}\;. \end{align} It is possible to reduce this multiplet to a restricted real scalar multiplet by imposing a consistent set of constraints. In general, the field $E_{ij}$ is a complex scalar field with chiral weight $-1$. Due to its non-zero chiral weight, one cannot demand that it satisfy a reality condition.
However, we can demand: \begin{align} E_{ij}=e^{-i\sigma/2}\mathcal{L}_{ij}\;, \end{align} where $\sigma$ is a real scalar field with Weyl weight $0$ and $\mathcal{L}_{ij}$ is a triplet of scalars with Weyl weight $+1$ which satisfies the pseudo-reality condition $\mathcal{L}^{ij}=\varepsilon^{ik}\varepsilon^{jl}\mathcal{L}_{kl}$. Let us define $\bar{E}_{ij}=\varepsilon_{ik}\varepsilon_{jl}E^{kl}$. We can then put the above demand in the form of a constraint: \begin{align} \bar{E}_{ij}-e^{i\sigma}E_{ij}=0\;. \end{align} This constraint reduces the six off-shell degrees of freedom present in $E_{ij}$ to four. Further constraints are obtained by supersymmetry transformations of the above constraint, and there exists a consistent set of $16+16$ constraints which reduces the $24+24$ real scalar multiplet to an $8+8$ restricted real scalar multiplet. The precise form of the constraints will not be reviewed here. However, the gauge-invariant objects of the $8+8$ tensor multiplet can be embedded in this $8+8$ restricted real scalar multiplet. We present the precise embedding below. 
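As a quick consistency check (ours, using the standard convention that complex conjugation raises the SU(2) indices, $E^{ij}=(E_{ij})^{*}$ and $\mathcal{L}^{ij}=(\mathcal{L}_{ij})^{*}$), the parametrization above implies the constraint directly:
\begin{align}
\bar{E}_{ij}=\varepsilon_{ik}\varepsilon_{jl}E^{kl}=e^{i\sigma/2}\,\varepsilon_{ik}\varepsilon_{jl}\mathcal{L}^{kl}=e^{i\sigma/2}\,\mathcal{L}_{ij}=e^{i\sigma}E_{ij}\;,
\end{align}
where the third equality uses the pseudo-reality condition on $\mathcal{L}_{ij}$. The counting also works out: $\sigma$ carries one real degree of freedom and the pseudo-real triplet $\mathcal{L}_{ij}$ carries three, reproducing the four degrees of freedom quoted above.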
\begin{align}\label{iden} \phi^{4}&=L^2\;, \nonumber\\ \Lambda^{i}&=-2L^{-2}L^{ij}\varphi_{j}\;, \nonumber\\ E_{ij}&=L^{-4}L_{ij}L^{kl}\bar{\varphi}_{k}\varphi_{l}-L^{-2}\bar{G}L_{ij}\;, \nonumber\\ S_{a}{}^{i}{}_{j}&=2L^{-2}H_{a}L^{ik}\varepsilon_{kj}+4L^{-4}L^{ik}L_{jm}\bar{\varphi}^{m}\gamma_{a}\varphi_{k}-L^{-2}\bar{\varphi}^{i}\gamma_{a}\varphi_{j}-\frac{1}{2}L^{-2}\delta^{i}_{j}\bar{\varphi}^{m}\gamma_{a}\varphi_{m}\;, \nonumber\\ &\;\;\; +L^{-2}\left(L^{ik}D_{a}L_{jk}-L_{jk}D_{a}L^{ik}\right)\;, \nonumber\\ \Xi_{ijk}&= -24L^{-6} L^{mn}\bar{\varphi}_m\varphi_nL_{(ij}L_{k)l}\varphi^l+6L^{-4}\bar{\varphi}_l\varphi_{(i}L_{jk)}\varphi^l-6L^{-4}L^{lm}\slashed{D}L_{l(i}L_{jk)}\varphi_m\nonumber\\ &\;\;\; +6L^{-4}L^{mn}\slashed{H}\varphi_nL_{(ij}\varepsilon_{k)m}+12L^{-4}\bar{G}L_{(ij}L_{k)l}\varphi^l+6L^{-2}\slashed{D}\varphi_{(i}L_{jk)}\nonumber\\ &\;\;\; +18L^{-2}L_{(ij}L_{k)l}\chi^l-\frac{3}{4}L^{-2}\gamma\cdot T^{-}\varphi^lL_{(ij}\varepsilon_{k)l}\;,\nonumber \\ C_{ijkl}&=-18L^{-2}DL_{(ij}L_{kl)}+6L^{-4}G\bar{G}L_{(ij}L_{kl)}+6L^{-4}H^aH_aL_{(ij}L_{kl)}\nonumber\\ &\;\;\;-12L^{-4}H^aD_aL^{mn}\varepsilon_{m(i}L_{jk}L_{l)n}-6L^{-2}D_aD^aL_{(ij}L_{kl)}+6L^{-4}L^{mn}D_aL_{mn}D^aL_{(ij}L_{kl)}\nonumber\\ &\;\;\;-3L^{-2}D^aL^{mn}D_aL_{mn}L_{(ij}L_{kl)}-9L^{-2}\bar{\chi}^m\varphi^n L_{(ij}\varepsilon_{k|m|}\varepsilon_{l)n}-36L^{-4}\bar{\chi}^m\varphi^nL_{(ij}L_{k|m|}L_{l)n}\nonumber\\ &\;\;\;-36L^{-4}L^{mn}\bar{\chi}_m\varphi_nL_{(ij}L_{kl)}+9L^{-2}\bar{\chi}_{(i}\varphi_jL_{kl)}+6L^{-4}G\bar{\varphi}_{(i}\varphi_jL_{kl)}\nonumber\\ &\;\;\;-18L^{-6}GL^{mn}\bar{\varphi}_m\varphi_nL_{(ij}L_{kl)} +6L^{-6}\bar{G}L_{mn}\bar{\varphi}^m\varphi^nL_{(ij}L_{kl)}-6L^{-4}\bar{G}\bar{\varphi}^m\varphi^nL_{(ij}\varepsilon_{k|m|}\varepsilon_{l)n}\nonumber\\ &\;\;\;-24L^{-6}\bar{G}\bar{\varphi}^m\varphi^nL_{(ij}L_{k|m|}L_{l)n}-6L^{-4}\bar{\varphi}^m\gamma^a\varphi_{(i}L_{jk}D_aL_{l)m}\nonumber\\ &\;\;\; 
+36L^{-6}\bar{\varphi}^m\gamma^a\varphi_nL^{ns}D_aL_{s(i}L_{jk}L_{l)m}-6L^{-4}\bar{\varphi}^m\gamma^a\varphi_mD_aL_{(ij}L_{kl)}\nonumber\\ &\;\;\;+6L^{-4}\bar{\varphi}^m\slashed{H}\varphi_{(i}L_{jk}\varepsilon_{l)m}-36L^{-6}\bar{\varphi}^m\slashed{H}\varphi_nL^{ns}L_{(ij}L_{k|m|}\varepsilon_{l)s}\nonumber\\ &\;\;\;-12L^{-4}L^{st}\bar{\varphi}_s\slashed{D}\varphi^mL_{(ij}\varepsilon_{k|t|}\varepsilon_{l)m}-12L^{-4}\bar{\varphi}^m\slashed{D}\varphi_{(i}L_{jk}L_{l)m}\nonumber\\ &\;\;\;+\frac{3}{4}L^{-4}\varepsilon_{mn}\bar{\varphi}^m\gamma\cdot T^-\varphi^nL_{(ij}L_{kl)}+\frac{3}{4}L^{-4}\varepsilon^{mn}\bar{\varphi}_m\gamma\cdot T^+\varphi_nL_{(ij}L_{kl)}\nonumber\\ &\;\;\;-36L^{-6}\bar{\varphi}^m\varphi^n\bar{\varphi}_m\varphi_nL_{(ij}L_{kl)}-36L^{-6}L_{mn}\bar{\varphi}^m\varphi^n\bar{\varphi}_{(i}\varphi_jL_{kl)}+36L^{-6}\bar{\varphi}^m\varphi^n\bar{\varphi}_m\varphi_{(i}L_{jk}L_{l)n}\nonumber\\ &\;\;\;+90L^{-8}L^{mn}\bar{\varphi}_m\varphi_nL_{st}\bar{\varphi}^s\varphi^tL_{(ij}L_{kl)}\;. \end{align} When the real scalar multiplet fields are expressed in terms of the $8+8$ tensor multiplet fields as given above, one can verify that they obey the $16+16$ set of constraints given in \cite{Hegde:2017sgl}. We will show in the next section that the $24+24$ real scalar multiplet can be embedded into the density formula presented in the previous section, thereby allowing us to construct a new action for the tensor multiplet in $\mathcal{N}=2$ conformal supergravity. \section{Invariant action for the real scalar multiplet}\label{Real_action} In order to apply the density formula obtained in section~\ref{density} to obtain an invariant action for the real scalar multiplet, we need to find a suitable combination of the fields belonging to the real scalar multiplet that has all the properties satisfied by $\Sigma_{ijk}$, the block that appears in the density formula with the maximum number of gravitini (\ref{density-12}). 
It turns out that the following choice can be made: \begin{align}\label{R1} \Sigma_{ijk}=i\Gamma_{ijk}+4iC_{ijk\ell}\Lambda^\ell\;, \end{align} where $\Gamma_{ijk}$ is defined in (\ref{Gammadef}). Taking the supersymmetry transformations of $\Sigma_{ijk}$ up to terms that are quadratic in fermions, we obtain the following expressions for $\mathcal{A}^{ab}_{ijkl}$ and $\mathcal{H}^{a}_{ijkl}$: \begin{align}\label{R2} \mathcal{H}^a_{ijkl}&=-4iD^aC_{ijkl}+8iP^aC_{ijkl}+8iS^{am}{}_{(i}C_{jkl)m}-2i\bar{\Lambda}_{(i}\gamma^a\slashed{D}\Xi_{jkl)}\nonumber\\ &\;\;\; +2i\bar{\Lambda}^p\gamma^a\slashed{D}\Xi^{stu}\varepsilon_{s(i}\varepsilon_{j|t|}\varepsilon_{k|u|}\varepsilon_{l)p}\;,\nonumber\\ \mathcal{A}^{ab}_{ijkl}&=0\;. \end{align} Without explicitly evaluating the remaining terms, we can argue that they indeed obey the constraints (\ref{d2.6}, \ref{d3.5}). Firstly, the expressions above manifestly obey the constraints; they are also S-invariant, which follows from the closure of the supersymmetry algebra, in particular of $[\delta_Q,\delta_S]$, on $\Sigma_{ijk}$. Secondly, any term that is cubic in fermions in $\mathcal{H}^a_{ijkl}$ or $\mathcal{A}^{ab}_{ijkl}$ must necessarily contain a bare $\Lambda^{i}$ or $\Lambda_{i}$, which transforms under S-supersymmetry into a bare S-supersymmetry parameter (\ref{Susy-transf}). Hence, such a term would be related by S-supersymmetry to terms quadratic in fermions, and it automatically satisfies the constraints because the quadratic terms already do. Similarly, terms quartic in fermions are related by S-supersymmetry to the cubic terms and hence also satisfy the constraints. As a result, we can argue that the constraints are satisfied in full without actually evaluating all the terms. 
Alternatively, one can see that the constraints are satisfied as a consequence of the closure of the supersymmetry algebra on the $C_{ijkl}$ component of the real scalar multiplet. The closure also gives some additional constraints beyond those required by the density formula. For example, we find that the embedding of the real scalar multiplet into the abstract multiplet constituting the density formula also satisfies the following constraint: \begin{align}\label{R3} \bar{{G}}^{a}_{ij}\equiv \varepsilon_{ik}\varepsilon_{jl}{G}^{a}{}^{kl}=-{G}^{a}_{ij}\;, \end{align} where $G^{a}_{ij}$ is the S-invariant combination: \begin{align}\label{R3.1} {G}^{a}_{ij}\equiv \mathcal{G}^{a}_{ij}+4\bar{\Lambda}_{k}\gamma^{a}\Sigma_{lij}\varepsilon^{kl}\;. \end{align} This is expected, since the real scalar multiplet is smaller than the abstract multiplet constituting the density formula; it nevertheless serves as a consistency check on our calculations. We are interested in giving a final action that contains only bosonic terms. Hence, we adopt the following minimalistic route. From $\Sigma_{ijk}$ given in (\ref{R1}), we obtain $\mathcal{G}^{a}_{ij}$ up to terms quadratic in fermions using (\ref{d3.2}). Following this, we obtain $\Psi_{i}$ up to terms linear in fermions using (\ref{d4.2}) and (\ref{d4.3}), and subsequently we obtain the bosonic $\mathcal{F}$ using (\ref{d4.6}), which gives us the Lagrangian density $\mathcal{L}$ (\ref{d4.8}). To avoid clutter, we give only the final result for the bosonic $\mathcal{L}$. We perform some integrations by parts, thereby generating terms with a bare K-gauge field. We present all the intermediate results in appendix~\ref{inter}. 
\begin{align}\label{R4} e^{-1}\mathcal{L}&=-\frac{160}{9}D_{a}E^{ij}D^{a}E_{ij}+\frac{4}{3}D^a T^{-}_{ac}D^b T^{+}_{bc}-\frac{7}{9}D_a S^{a}{}^{i}{}_{j}D_{b}S^{b}{}^{j}{}_{i}+3D_a S_{b}{}^{i}{}_{j}D^{b}S^{a}{}^{j}{}_{i}\nonumber\\ &\;\;\; +24 D^2+\frac{4}{9}C_{ijkl}C^{ijkl}-4R(V)_{ab}{}^{i}{}_{j}R(V)^{ab}{}^{j}{}_{i}+{\frac{80}{3}}E_{ij}D_{a}E^{jk}S^{a}{}^{i}{}_{k}\nonumber\\ &\;\;\; +{\frac{67}{24}}S^{a}{}^{i}{}_{j}S^{b}{}^{j}{}_{k}D_{a}S_{b}{}^{k}{}_{i}+{\frac{41}{12}}D_{a}E^{ij}S_{b}{}_{ij}T^{-}{}^{ab}-\frac{5}{4}E^{ij}D_{a}S_{b}{}_{ij}T^{-}{}^{ab}\nonumber\\ & \;\;\; +{\frac{56}{9}}P^b S_{b}{}^{i}{}_{j}D^a S_{a}{}^{j}{}_{i}-{\frac{8}{3}} P_b S_{a}{}^{i}{}_{j}D^a S^{b}{}^{j}{}_{i}+\frac{320}{9}P^a E_{ij}D_{a}E^{ij}\nonumber\\ & \;\;\; +{\frac{8}{3}}P^aT^{+}_{ac}D_{b}T^{-}{}^{bc}+{5}S_{a}{}^{i}{}_{j}S_{b}{}^{j}{}_{k}R(V)^{ab}{}^{k}{}_{i}+\frac{5}{3} R(V)_{ij}\cdot T^{-}E^{ij}\nonumber\\ & \;\;\; -{\frac{1}{3}}S_{a}{}^{ij}S^{a}{}^{kl}C_{ijkl}+\frac{16}{3}C^{ijkl}E_{ij}\bar{E}_{kl}-{\frac{13}{2}}D S_{a}{}^{i}{}_{j}S^{a}{}^{j}{}_{i}-{3}f_{a}{}^{a}S_{b}{}^{i}{}_{j}S^{b}{}^{j}{}_{i}\nonumber\\ &\;\;\; -{\frac{10}{3}}f^{ab}S_{a}{}^{i}{}_{j}S_{b}{}^{j}{}_{i} +{\frac{56}{9}}P^a P^b S_{a}{}^{i}{}_{j}S_{b}{}^{j}{}_{i}+{\frac{4}{3}}P^a P_{a} S_{b}{}^{i}{}_{j}S^{b}{}^{j}{}_{i}-\frac{160}{9}P^a P_{a}E_{ij}E^{ij}\nonumber\\ &\;\;\; +{\frac{4}{3}}P_{a}P_{b}T^{-}{}^{ac}T^{+}{}^{b}{}_{c} +\frac{20}{9}P^a S_{a}{}^{i}{}_{j}E_{ik}E^{jk}-\frac{41}{12}P_a S_{b}{}_{ij}E^{ij}T^{-}{}^{ab}\nonumber \\ &\;\;\; +\frac{188}{9}E^{mn}E_{mn}E^{kl}E_{kl}-\frac{184}{9}E^{mn}E_{mk}E^{kl}E_{ln}+\frac{2}{3}T^+\cdot T^+ \bar{E}^{mn}E_{mn}\nonumber\\ &\;\;\; +\frac{17}{6}S_{amn}S^{akl}E_{kl}E^{mn}+\frac{13}{6}E^{mn}E_{mn}S^{akl}S_{akl}+\frac{2}{3}S^{akl}S_{amn}S^{bmn}S_{bkl}\nonumber\\ &\;\;\; -\frac{5}{6}S^{amn}S_{amn}S^{bkl}S_{bkl}+\frac{55}{6}T^{-ab}S_{alm}S^{bl}{}_nE^{mn}-\frac{47}{36}S^{amn}S^b_{mn}T^-_{ac}T^+{}_b{}^c+\text{h.c.} \end{align} In the above equation, we have used the following 
definition: \begin{align}\label{R5} &\bar{E}_{ij}\equiv \varepsilon_{ik}\varepsilon_{jl}E^{kl}\;, \quad \bar{E}^{ij}\equiv \varepsilon^{ik}\varepsilon^{jl}E_{kl}\nonumber \\ &S_{a}{}_{ij}\equiv S_{a}{}^{k}{}_{i}\varepsilon_{kj}\;, \quad S_{a}^{ij}\equiv \varepsilon^{ik}\varepsilon^{jl}S_{a}{}_{kl}\nonumber \\ &R(V)_{ab}{}_{ij}=R(V)_{ab}{}^{k}{}_{i}\varepsilon_{kj} \end{align} \section{New higher derivative coupling of tensor multiplet in $\mathcal{N}=2$ conformal supergravity}\label{tensor_action} In this section we will use the embedding of the tensor multiplet in real scalar multiplet (\ref{iden}) and the invariant action for the real scalar multiplet (\ref{R4}) to obtain new higher derivative action for the tensor multiplet in $\mathcal{N}=2$ conformal supergravity. The bosonic terms in the action are: \begin{align}\label{tensor} e^{-1}\mathcal{L}&=\frac{1}{4}L^{-2}T^+\cdot T^+ \bar{G}^2-\frac{4}{3}L^{-2}T^-_{ac}T^+{}_b{}^cH^aH^b-16 L^{-2}DG\bar{G}-\frac{16}{3}L^{-2}G\bar{G}R\nonumber\\ &\;\;\; -2L^{-2}(H^aH_a)D-\frac{5}{3}L^{-2}R(H^aH_a)^2+\frac{5}{2}L^{-2}H_aH_bR^{ab}-\frac{5}{8}L^{-2}\bar{G}L^{ij}R(V)_{ij}\cdot T^-\nonumber\\ &\;\;\;-\frac{41}{16}L^{-2}D_a\bar{G}H_bT^{-ab}+\frac{15}{16}L^{-2}\bar{G}D_aH_bT^{-ab}+45D^2+\frac{3}{2}R(V)_{ab}{}^{ij}R(V)^{ab}{}_{ij} \nonumber\\ &\;\;\;+\frac{1}{2}D^a T^{-}_{ac}D^b T^{+}_{bc}-\frac{20}{3}L^{-2}D_aGD^a\bar{G}-\frac{9}{2}L^{-2}D_aH_bD^bH^a-\frac{41}{16}L^{-2}D_aGH_bT^{-ab}\nonumber\\ &\;\;\;+\frac{15}{16}L^{-2}GD_aH_bT^{-ab}+16 L^{-4}(G\bar{G})^2+L^{-4}(H^aH_a)^2+\frac{43}{2}L^{-4}(H^aH_a)(G\bar{G})\nonumber\\ &\;\;\;+\frac{1}{2}L^{-1}D^aLT^+_{ac}D_bT^{-ac}+11 L^{-3}D_aH_bH^aD^bL+32L^{-3}H^aD_bH_aD^bL\nonumber\\ &\;\;\;+52 L^{-3}D^aL(D_aG)\bar{G}+\frac{93}{32}L^{-3}GH_bD_aLT^{-ab}+9L^{-4}\varepsilon_{ik}D^bH^aD_aL^{ij}D_bL^{kl}L_{jl}\nonumber\\ &\;\;\; -\frac{317}{32}L^{-2}H_aD_bL^{ij}R(V)^{ab}{}_{ij}+\frac{317}{32}L^{-3}H_aD_bLL^{ij}R(V)^{ab}{}_{ij}+16L^{-2}D^aL^{ij}D_aL_{ij}D\nonumber\\ &\;\;\; - 
26L^{-2}D_aLD^aLD+24L^{-2}D^aL^{ij}L_{ij}D-\frac{15}{4}L^{-2}D_aL^{im}D_bL_{km}R(V)_{ab}{}^k{}_i\nonumber\\ &\;\;\;+\frac{5}{4}L^{-2}R^{ab}D_aL_{ij}D_bL^{ij}-\frac{5}{4}L^{-2}R^{ab}D_aLD_bL+\frac{1}{6}L^{-2}RD_aL_{ij}D^aL^{ij}\nonumber \\ &\;\;\; -\frac{1}{6}L^{-2}RD^aLD_aL-\frac{83}{8}L^{-4}G\varepsilon_{ik}D_aL^{ij}D_bL^{kl}L_{jl}T^{-ab}-\frac{47}{48}L^{-4}D_aL_{ij}D_bL^{ij}T^{-ac}T^{+b}{}_c\nonumber\\ &\;\;\;+\frac{53}{48}L^{-4}H^aH^bD_aLD_bLT^{-ac}T^{+b}{}_c-\frac{67}{16}L^{-4}\varepsilon_{ik}D^aL^{ij}D_bL^{kl}L_{jl}D_aH_b\nonumber\\ &\;\;\;-\frac{37}{96}L^{-4}H^aH^bD_aLD_bL-23L^{-4}H^aH_aD_bLD^bL-\frac{667}{192}L^{-4}H^aH^bD_aL_{ij}D_bL^{ij}\nonumber\\ &\;\;\;-\frac{13}{2}L^{-4}H^aH_aD_bL_{ij}D^bL^{ij}-6L^{-4}H^aH_aD^2L_{ij}L^{ij}-\frac{1643}{24}L^{-4}G\bar{G}D_aLD^aL\nonumber\\ &\;\;\;+\frac{323}{24}L^{-4}G\bar{G}D_aL_{mn}D^aL^{mn}-16L^{-4}G\bar{G}L_{ij}D^2L^{ij}\nonumber\\ &\;\;\; +\frac{313}{48}L^{-5}H^a (D^{b}L)\varepsilon_{il}L^{ij}D_{a}L_{jk}D_{b}L^{kl}-\frac{7}{3}L^{-4}H^{a}\varepsilon_{il}L^{ij}D_{a}L_{jk}D^2 L^{kl}\nonumber\\ &\;\;\;+\frac{43}{12}L^{-2}D^2L^{ij}D^2L_{ij}+\frac{9}{4}L^{-4}D_aD_bL_{ij}D^bD^aL^{ij}-\frac{9}{4}L^{-4}D_aD_bL^{ij}L_{ij}D^bD^aL^{kl}L_{kl}\nonumber\\ &\;\;\;+\frac{29}{12}L^{-4}D^2L^{ij}L_{ij}D^2L^{kl}L_{kl}-\frac{32}{3}L^{-3}D^aLD_aL^{ij}D^2L_{ij}+\frac{13}{6}L^{-5}D^aLD_aLL^{ij}D^2L_{ij}\nonumber\\ &\;\;\;+\frac{37}{8}D^2L_{ij}L^{ij}D_aL_{kl}D^aL^{kl}+\frac{26}{3}L^{-4}D_aLD_bLD^aD^bL_{ij}L^{ij}\nonumber\\ &\;\;\; -\frac{211}{32}L^{-4}D_aD_bL^{ij}L_{ij}D^aL^{kl}D^bL_{kl}+9L^{-4}D_aD_bL^{ij}D^aL_{ik}L_{jl}D^bL_{kl}\nonumber\\ &\;\;\; -\frac{67}{16}L^{-4}L^{ij}D^aL_{ik}D^bL_{jl}D_aD_bL^{kl}+\frac{233}{96}L^{-3}D_aL_{ij}D_bLD^aD^bL^{ij}\nonumber \\ &\;\;\; +\frac{67}{32}L^{-3}D_bL_{ij}D_aLD^aD^bL^{ij}+\frac{43}{24}L^{-4}D_aLD^aLD_bL_{ij}D^bL^{ij}\nonumber\\ &\;\;\;+\frac{793}{96}L^{-4}D_aLD_bLD^aL^{ij}D^bL_{ij}-\frac{37}{3}L^{-4}D^aLD_aLD^bLD_bL\nonumber\\ 
&\;\;\;-\frac{5}{32}L^{-4}D_aL_{ij}D^aL^{ij}D_bL_{kl}D^bL^{kl}-\frac{35}{32}L^{-4}D^aL^{ij}D^bL_{ij}D_aL^{kl}D_bL_{kl}+\text{h.c.}\;. \end{align} To bring the action to this form, we replaced the dependent K-gauge field $f_{a}{}^{b}$ as shown below: \begin{align}\label{t1} f_{a}{}^{b}&=\frac{1}{2}R_{a}{}^{b}-\frac{1}{4}(D+\frac{1}{3})\delta_{a}{}^{b}-\frac{i}{2}\tilde{R}(A)_{a}{}^{b}+\frac{1}{8}T^{-}_{ac}T^{+}{}^{bc}\;, \end{align} where \begin{align}\label{t2} R_{a}{}^{b}&\equiv (R(M)|_{f=0})_{ac}{}^{bc}\;, \nonumber \\ R&\equiv R_{a}{}^{a}\;. \end{align} The first of these equations means that we first set $f=0$ in the expression for the supercovariant curvature $R(M)_{ab}{}^{cd}$ and then take the contraction between the Lorentz indices. If we confine ourselves to a bosonic background and use the gauge-fixing condition for special conformal transformations ($b_\mu=0$), then $R_{a}{}^{b}$ and $R$ become the standard Ricci tensor and Ricci scalar, respectively. \section{Conclusions and Future Directions} Matter multiplets play an important role in the superconformal approach to constructing off-shell supergravity theories. They are commonly used as compensators in gauge fixing the additional symmetries present in conformal supergravity to obtain the physical Poincar{\'e} supergravity. They can also be added as extra matter multiplets in supergravity theories. Hence, the study of various matter multiplets and their couplings to conformal supergravity plays an important role in the construction of higher derivative invariants in supergravity theories using the superconformal approach. The matter multiplets that play an important role in the construction of $\mathcal{N}=2$ supergravity theories are the vector multiplet, the tensor multiplet, the non-linear multiplet, and the hypermultiplet. For the construction of off-shell $\mathcal{N}=2$ supergravity theories, typically a vector multiplet together with either a non-linear or a tensor multiplet is used as compensator. 
The coupling of the tensor multiplet to conformal supergravity has been discussed earlier in the literature using either the linear-vector density formula or the chiral density formula \cite{deWit:1982na, deWit:2006gn}. In this paper, we have discussed an alternative density formula based on a fermionic multiplet whose lowest component is a dimension-5/2 spinor that transforms in the $\bf{4}$ irrep of the underlying SU(2) R-symmetry and is a superconformal primary field. We arrived at this density formula using the covariant superform approach, which was discussed in detail in \cite{Butter:2019edc} for constructing an invariant action for $\mathcal{N}=4$ conformal supergravity. Using the new density formula, we followed a series of steps to construct a new higher derivative coupling of the tensor multiplet to conformal supergravity. Tensor multiplets can be used as compensating multiplets to go from conformal supergravity to Poincar{\'e} supergravity, and hence the results of this paper will induce new higher derivative corrections to Poincar{\'e} supergravity that were not present in the earlier literature. Higher derivative corrections to supergravity play an important role in the discussion of black hole entropy and its matching with the microscopic results originating from string theory. It would be interesting to see what the implications of the results of this paper are for the study of black hole entropy. The study of tensor multiplets as matter multiplets instead of compensating multiplets is also important, as has been discussed in \cite{Cribiori:2018xdy}. It would be interesting to see the effect of the new higher derivative couplings on the results of \cite{Cribiori:2018xdy}. We would also like to see whether the applicability of the new density formula can be extended beyond the real scalar and tensor multiplets to find new higher derivative actions for other multiplets, e.g.\ the vector multiplet or the non-linear multiplet. 
\acknowledgments This work is supported by SERB grant CRG/2018/002373, Government of India. SH thanks Amitabh Virmani and Kedar Kolekar for hospitality at the Chennai Mathematical Institute, and Nemani Suryanarayana for hospitality at the Institute of Mathematical Sciences, Chennai during the course of this work.
const blessed = require('blessed');

// Shared blessed `screen` instance, exported for use across the application.
module.exports = blessed.screen({
  autoPadding: true,                      // automatically pad child elements
  fullUnicode: true,                      // render full (surrogate/combining) unicode
  smartCSR: true,                         // use change-scroll-region for faster redraws
  tput: true,                             // use terminfo capabilities via tput
  warnings: true,                         // print warnings about terminal support
  debug: false,                           // keep the built-in debug log disabled
  ignoreLocked: ['C-q', 'C-c', 'enter'],  // keys that bypass a locked screen
});
I was searching for a Property and found this listing (MLS® #13995545). Please send me more information regarding 3874 Highgrove Drive, Dallas, TX, 75220-3752. Thank you! I'd like to request a showing of 3874 Highgrove Drive, Dallas, TX, 75220-3752 (MLS® #13995545). Thank you!
Q: How to express custom Data in angular html file

Following example:

<tr *ngFor="let participant of allParticipants">
  <td class="participant-properties">{{participant.orgId}}</td>
  <td class="participant-properties">{{participant.orgName}}</td>
  <td class="participant-properties">{{participant.address}}</td>
  <td class="participant-properties">{{participant.contactAdress}}</td>
  <td class="participant-properties">{{participant.homepage}}</td>
  <td class="participant-properties">{{participant.discription}}</td>
  <td class="participant-properties">{{participant.requestResumeList}}</td>
</tr>

Here requestResumeList is declared as an array, RequestResume[] requestResumeList, and I want to display the member field named 'resumeDetails' of a single object inside participant.requestResumeList. The RequestResume type is:

export class RequestResume {
  userId: string;
  requestDetails: string;
  user: User;
  requestResumeAssetId: string;
}

I attempted

<td class="participant-properties">{{ participant.requestResumeList[1].resumeDetails }}</td>

but it failed. How can I display the resumeDetails field in an Angular HTML page?
\section{Introduction} The formation of realistic late-type spirals has been a long-standing problem of galaxy formation in a $\Lambda$CDM universe. Within this framework, baryons condense at the center of dark matter halos and acquire angular momentum through tidal torques from nearby structures \citep{fall80}. A centrifugally-supported baryonic disk forms, with a size that depends on the fraction of the original angular momentum that is retained during the contraction. In numerical simulations of this process, however, a fundamental ``angular momentum problem'' arises, as galaxies are produced with a baryonic component that is quite deficient in angular momentum compared to real spirals \citep{navarro91,navarro00}. Aside from artificial losses of angular momentum caused by insufficient resolution and other numerical effects \citep{okamoto03,governato04,kaufmann07}, this failure has traditionally been traced back to the very nature of the hierarchical buildup of structures: dynamical friction transfers the orbital angular momentum of merging substructures to the outer halo, and causes the associated cold baryons to sink to the center of the proto-galaxy and form a spheroid rather than a disk \citep[e.g.][]{maller02}. 
\begin{deluxetable*}{lccccccccccccccr} \tablecaption{Properties of the simulated galaxy \label{simsum}} \tablewidth{0pt} \tablehead{\colhead{Galaxy} & \colhead{M$_{\rm vir}$} & \colhead{$V_{\rm peak}$} & \colhead {M$_{*}$} & \colhead{$f_b$} & \colhead{$f_{\rm cold}$} & \colhead{$m_{\rm DM}$} & \colhead{$m_{\rm SPH}$} & \colhead{$\epsilon_G$} & \colhead{$\epsilon_{\rm SF}$} & \colhead{$N_{\rm DM}$} & \colhead{$N_{\rm gas}$} & \colhead{$N_*$} & \colhead{$B/D$} & \colhead{$R_d$}\\ } \tabletypesize{\footnotesize} \startdata Eris ($z=0$) & 7.9 & 238 & 3.9 & 0.121 & 0.12 & 9.8 & 2 & 120 & 0.1 & $7.0$ & $3.0$ & $8.6$ & 0.35 & 2.5\\ Eris ($z=1$) & 5.4 & 237 & 2.9 & 0.126 & 0.40 & 9.8 & 2 & 120 & 0.1 & 4.8 & 2.0 & 6.2 & 0.30 & 1.8\\ ErisLT ($z=1$) & 5.5 & 308 & 3.4 & 0.158 & 0.18 & 9.8 & 2 & 120 & 0.05 & 4.9 & 2.9 & 8.3 & 0.42 & 1.4 \enddata \tablecomments{Columns 2, 3, 4, 5, and 6 list the virial mass (in units of $10^{11}\,\,{\rm M_\odot}$), peak circular velocity (in $\,{\rm km\,s^{-1}}$), total stellar mass of the halo (in units of $10^{10}\,\,{\rm M_\odot}$), baryonic mass fraction, and cold ($T<3\times 10^4$ K) gas fraction. Columns 7 and 8 list the mass resolution of individual dark matter and SPH particles (in units of $10^4\,\,{\rm M_\odot}$), and columns 9 and 10 the spline gravitational force softening (in pc) and the star formation efficiency. Columns 11, 12, and 13 list the total number (in units of $10^6$) of dark matter, gas, and star particles within the virial radius of the halo. Columns 14 and 15 list the bulge-to-disk ratio and disk scale length (in kpc) estimated from the $i$-band photometric decomposition.} \\ \end{deluxetable*} A popular solution to the angular momentum problem envisions energy injection from supernovae (SNe) and evolving stars as a mechanism to prevent efficient gas cooling and condensation and to remove low angular momentum material from the central part of galaxies. 
Modern simulations with improved resolution and more effective recipes for SN feedback \citep{robertson04,governato07,scannapieco09,stinson10,piontek11,brooks11} have yielded rotationally-supported disks with realistic exponential scale lengths, not only in galaxies formed in relative isolation but also in those that are accreted by massive groups with a dominant central elliptical \citep{feldmann10a,feldmann10b}. They have also modified the standard picture of gas accretion and cooling onto galaxy disks: for galaxies up to Milky Way masses, gas acquired through filamentary ``cold flows" that was never shock-heated to the halo virial temperature is largely responsible for star formation in the disk at all times \citep{brooks09,keres09,ceverino10}. Yet, these simulations typically continue to produce centrally-concentrated systems, with rotation curves that rise steeply towards the center: simulated disk galaxies fall exclusively in the S0 or Sa category, leaving late-type spirals with negligible bulges and large disks with flat rotation curves -- such as our own Milky Way -- as an unsolved puzzle. Two recent exceptions are the simulations of \citet{agertz11} and \citet{governato10}. In the first, replicas of Sb/Sc galaxies with moderate bulges were obtained with a low efficiency of star formation that may implicitly mimic the bottleneck of the conversion of atomic gas into molecular, at the expense of producing stellar disks that are much more massive than expected at a given halo mass \citep[e.g.][]{guo10}. In the second, a realistic bulgeless dwarf galaxy with a shallow central dark-matter profile was generated by resolving the inhomogenous interstellar medium (ISM) and the process of energy injection from multiple SNe in clustered star forming regions. 
In this paper we extend the latter approach to massive galaxy scales, and present initial results from a new SPH cosmological simulation of high dynamic range that includes radiative cooling, heating from a cosmic UV field, SN feedback, and a star formation recipe based on a high gas density threshold as in \citet{governato10}. It is this last feature, we argue, that is key to the formation of a more realistic massive late-type spiral in $\Lambda$CDM. \section{Simulation setup} Dubbed ``Eris'', the simulation described in this paper is part of a campaign of extreme resolution simulations of the formation of Milky Way-sized galaxies \citep{diemand07,diemand08}. It was performed in a {\it Wilkinson Microwave Anisotropy Probe} 3-year cosmology, $\Omega_M=0.24$, $\Omega_\Lambda=1-\Omega_M$, $\Omega_b=0.042$, $H_0=73\,\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $n=0.96$, $\sigma_8=0.76$, running the parallel, spatially and temporally adaptive, treeSPH-code {\it GASOLINE} \citep{wadsley04} for 1.5 million CPU hours. The target halo was identified at $z=0$ in a low-resolution, dark matter-only, periodic box of 90 Mpc on a side. It was chosen to have a mass similar to that of the Milky Way and a rather quiet late merging history, i.e. to have had no major mergers (defined as mass ratio $\ge 1/10$) after $z=3$. New initial conditions were then generated with improved mass resolution, centered around a Lagrangian sub-region of 1 Mpc on a side, using the standard ``zoom-in'' technique to add small-scale perturbations. High-resolution particles were further split into 13 million dark matter particles and an equal number of gas particles, for a final dark and gas particle mass of $m_{\rm DM}=9.8\times 10^4\,\,{\rm M_\odot}$ and $m_{\rm SPH}=2\times 10^4\,\,{\rm M_\odot}$, respectively. The gravitational softening length, $\epsilon_G$, was fixed to 120 physical pc for all particle species from $z=9$ to the present, and evolved as $1/(1+z)$ from $z=9$ to the starting redshift of $z=90$. 
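As a rough plausibility check on the quoted particle masses (this check is ours, not from the paper), the SPH-to-dark-matter particle mass ratio should track the cosmic baryon-to-dark-matter ratio $\Omega_b/(\Omega_M-\Omega_b)$ of the adopted cosmology:

```python
# Back-of-the-envelope check (ours): the particle masses should reflect the
# cosmic baryon fraction of the adopted WMAP-3 cosmology.
Omega_M, Omega_b = 0.24, 0.042        # total matter and baryon density parameters
m_DM, m_SPH = 9.8e4, 2.0e4            # particle masses in M_sun, as quoted

expected = Omega_b / (Omega_M - Omega_b)   # cosmic baryon-to-CDM mass ratio, ~0.21
actual = m_SPH / m_DM                      # ratio realized in the simulation, ~0.20
# The two agree to within a few percent.
```

The small residual difference is expected, since simulators round particle masses when splitting the low-resolution particles.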
The version of the code used in this study includes Compton cooling, atomic cooling, and metallicity-dependent radiative cooling at low temperatures \citep{mashchenko06}. A uniform UV background modifies the ionization and excitation state of the gas and is implemented using a modified version of the \citet{haardt96} spectrum. Three parameters characterize the star formation and feedback recipes: (a) the star formation threshold $n_{\rm SF}$, (b) the star formation efficiency $\epsilon_{\rm SF}$, and (c) the fraction of SN energy that couples to the ISM $\epsilon_{\rm SN}$. Star formation occurs when cold ($T<3\times 10^4$ K), virialized gas reaches a threshold density $n_{\rm SF}=5$ atoms cm$^{-3}$ and is part of a converging flow. It proceeds at a rate \begin{equation} d\rho_*/dt=\epsilon_{\rm SF} \rho_{\rm gas}/t_{\rm dyn} \propto \rho_{\rm gas}^{1.5} \label{eq:KS} \end{equation} (i.e. locally enforcing a Schmidt law), where $\rho_*$ and $\rho_{\rm gas}$ are the stellar and gas densities, and $t_{\rm dyn}$ is the local dynamical time. We choose $\epsilon_{\rm SF}=0.1$. An additional identical run with $\epsilon_{\rm SF}=0.05$, the value adopted in \citet{governato07} and \citet{brook10}, yielded a galaxy with nearly identical structural properties, and will be discussed in a forthcoming paper. Each star particle is created stochastically with an initial mass $m_*=6\times 10^3\,\,{\rm M_\odot}$, and the gas particle that spawns the new star has its own mass reduced accordingly. A star particle represents a simple stellar population with its own age, metallicity, and a \citet{kroupa01} initial stellar mass function (IMF). Each SN deposits metals and a net energy of $\epsilon_{\rm SN} \times 10^{51}\,$ergs into the nearest neighbor gas particles, with $\epsilon_{\rm SN}=0.8$ (the same value adopted in previous simulations). 
The heated gas has its cooling shut off until the end of the snowplow phase of the SN blastwave, which is set by the local gas density and temperature and by the total amount of energy injected $E$ \citep{stinson06}. For the typical ISM conditions at threshold found in this study, this translates into regions of size $R_E\sim 30 E_{51}^{0.32}$ pc heated by individual SNe and having their cooling shut off for a timescale $t_E\sim 5\times 10^5 E_{51}^{0.31}$ yr, where $E_{51}\equiv E/10^{51}\,{\rm ergs}$. The energy injected by many SNe adds up to create larger hot bubbles and longer shutoff times. No feedback from an active galactic nucleus was included. \begin{figure*}[th] \centering \includegraphics[width=0.48\textwidth]{fig1a.pdf} \includegraphics[width=0.48\textwidth]{fig1b.pdf} \caption{\footnotesize {\it Left panel}: The rotation curve of the simulated Milky Way-sized galaxy (``Eris") at $z=0$. The figure shows the contributions to the circular velocity $V_c=\sqrt{GM(<r)/r}$ of the various mass components: dark matter ({\it long-dashed curve}), stars ({\it short-dashed curve}), gas ({\it dot-short dashed curve}), and total ({\it solid curve}). The data points show two realizations of the rotation curve of the Milky Way from observations of blue horizontal-branch halo stars in the {\it Sloan Digital Sky Survey} \citep{xue08}, and have been offset slightly from each other in radius for clarity. {\it Right panel}: The total inner rotation curve at $z=1$ for the fiducial high-threshold simulation (Eris, {\it solid line}) and for the low-threshold (ErisLT, {\it dot-short dashed line}) twin run. The short-dashed line shows Eris' inner rotation curve at $z=0$ for comparison. The star formation threshold has a significant effect on the mass distribution: a more prominent stellar bulge forms at early times in ErisLT and is responsible for the peaked rotation curve. 
} \label{fig1} \vspace{+0.3cm} \end{figure*} The adoption of a density threshold for star formation that is 50 times higher than in many previous lower-resolution studies is possible owing to the high mass and spatial resolution of this run, which resolves the giant cloud complexes where star formation actually occurs in the ISM and the true scale height of the neutral atomic ISM. In particular, the local Jeans length corresponding to our density threshold (for $T=10^3$ K, a lower bound on the typical temperature of the cold gas in the simulations) is resolved with more than 5 SPH smoothing lengths, thus preventing artificial fragmentation \citep{bate97}. While not as high as the value of $n_{\rm SF}=100$ atoms cm$^{-3}$ used in the \citet{governato10} dwarf galaxy simulation, whose particle mass was substantially lower and allowed the Jeans length of star forming gas to be resolved at much higher densities, our star formation threshold is still large enough to allow the development of a clumpy, inhomogeneous ISM with more localized energy injection by multiple overlapping SN explosions. This allows galactic pressure-driven outflows to develop and remove low-angular momentum material. To demonstrate the important role of the star formation threshold in setting the structural properties of massive galaxies, we have run a low-threshold twin simulation (termed ``ErisLT'') with $n_{\rm SF}=0.1$ atoms cm$^{-3}$. We have kept all the other simulation parameters fixed (same mass and spatial resolution and identical feedback scheme) except for the star formation efficiency parameter, $\epsilon_{\rm SF}$, which was lowered from 0.1 (Eris) to 0.05 (ErisLT) to match the observed normalization of the star formation density in local galaxies \citep[see][]{governato10}. ErisLT was stopped at redshift 0.7 in order to limit the computational burden. 
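The star formation and feedback recipe described above can be sketched numerically. The snippet below is our illustration, not the {\it GASOLINE} source; in particular, the $t_{\rm dyn}=1/\sqrt{4\pi G\rho}$ form of the local dynamical time is a common SPH convention that the text does not spell out.

```python
import math

G_CGS = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_PROTON = 1.673e-24    # proton mass [g]

def sfr_density(n_H, eps_SF=0.1, n_SF=5.0):
    """Local star formation rate density [g cm^-3 s^-1] for gas density n_H [cm^-3]."""
    if n_H < n_SF:
        return 0.0                                           # below threshold: no stars
    rho = n_H * M_PROTON
    t_dyn = 1.0 / math.sqrt(4.0 * math.pi * G_CGS * rho)     # local dynamical time
    return eps_SF * rho / t_dyn                              # d(rho_*)/dt ∝ rho^1.5

def blastwave(E_51):
    """Quoted single-SN cooling-shutoff scalings: radius [pc] and time [yr],
    with E_51 the injected energy in units of 10^51 erg."""
    return 30.0 * E_51**0.32, 5.0e5 * E_51**0.31
```

Quadrupling the gas density above threshold raises the local rate by a factor $4^{1.5}=8$, the Schmidt-law scaling of equation (\ref{eq:KS}).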
\begin{figure*}[th] \centering \includegraphics[width=.86\textwidth]{fig2.pdf} \vspace{+0.3cm} \caption{{\it Left panel:} The optical/UV stellar properties of Eris at $z=0$. The images, created with the radiative transfer code {\sc Sunrise} \citep{jonsson06}, show an $i$, $V$, and $FUV$ stellar composite of the simulated galaxy seen face-on and edge-on. A Kroupa IMF was assumed. {\it Right panel:} Projected face-on and edge-on surface density maps of Eris's neutral gas at $z=0$. The color bar shows the neutral gas fraction. } \label{fig2} \vspace{+0.4cm} \end{figure*} \section{Results} In this first paper we focus on the final $z=0$ state and properties of the galaxy and on the comparison with observational constraints. The main characteristics of the simulated galaxy are listed in Table 1. \subsection{Structural Parameters} At the present epoch, Eris is a massive, barred, late-type spiral with structural properties consistent with those of Sb/Sbc galaxies, of which our own Milky Way is an example. It has a virial radius of $R_{\rm vir}=239$ kpc (defined as the radius enclosing a mean density of $93\rho_{\rm crit}$, \citealt{bryan98}), total mass $M_{\rm vir}=7.9\times 10^{11}\,\,{\rm M_\odot}$, spin parameter $\lambda=0.019$ (\`{a} la \citealt{bullock01}), and 7.0, 3.0, and 8.6 million dark matter, gas, and star particles within $R_{\rm vir}$, respectively. The minimum smoothing length for gas particles is 5 times smaller than the force softening. The total mass enclosed within 60 kpc is $M_{<60}=3.3\times 10^{11}\,\,{\rm M_\odot}$. The rotation curve, shown in the left panel of Figure \ref{fig1}, has a peak circular velocity of $V_{\rm peak}=238\,\,{\rm km\,s^{-1}}$ (reached at 1.34 kpc) and a value at 8 kpc (the solar circle) of $V_{c,\odot}=206\,\,{\rm km\,s^{-1}}$. Its overall shape out to 20 kpc, including the peak in the central bulge-dominated kpc, is reminiscent of the recent reconstruction of the Milky Way rotation curve by \citet{sofue09}. 
The circular velocity decreases gently to distances of 60 kpc from its value at the solar radius, in agreement with observations of blue horizontal-branch halo stars in the {\it Sloan Digital Sky Survey} \citep{xue08}. The measured $V_{c,\odot}, M_{<60}$, and $M_{\rm vir}$ agree within the errors with the values of $V_{c,\odot}=221\pm 18\,\,{\rm km\,s^{-1}}$, $M_{<60}=4.0\pm 0.7\times 10^{11}\,\,{\rm M_\odot}$, and $M_{\rm vir}=1.0^{+0.3}_{-0.2} \times 10^{12}\,\,{\rm M_\odot}$ derived recently for the Milky Way using the narrow GD-1 stream of stars \citep[for $V_{c,\odot}$,][]{koposov10} and halo stars as kinematic tracers \citep[for $M_{<60}$ and $M_{\rm vir}$,][]{xue08}. \subsection{Brightness Profile} To correctly compare simulations with observations we created artificial images of our simulations and from them measured photometric bulge-to-disk ratios and disk scale lengths. The mock images were created using the radiation transfer code {\sc Sunrise} \citep{jonsson06}, which produces spectral energy distributions using the age and metallicities of each simulated star particle, and takes into account the three-dimensional effect of dust reprocessing. The results for a Kroupa IMF are shown in Figure \ref{fig2}. A 2D photometric decomposition was performed on the dust-reddened $i$-band light distribution with the {\sc Galfit} program \citep{peng02}. At the present epoch, the total $i$-band magnitude is $M_i=-21.7$, and a stellar disk with a scale length $R_d=2.5$ kpc dominates the light distribution (Fig. \ref{fig3}). The disk scale length is comparable to the value $R_d=2.3\pm 0.6$ kpc, adopted for the Milky Way in the compilation by \citet{hammer07}, and with the scale length of the Milky Way thin disk, $2.6$ kpc, as traced by M dwarfs in the solar neighborhood \citep{juric08}. Its value also agrees with the scaling relations of spiral galaxies \citep{courteau07}. 
The SDSS $u-g=1.03$ mag and $g-r=0.49$ mag integrated colors, obtained directly from the {\sc Sunrise} images, fall within 1$\sigma$ of the mean optical colors of late-type galaxies as luminous as Eris \citep{blanton03}. The ``downbending" observed in Eris' brightness exponential profile at about 4 disk scale lengths appears to be characteristic of late-type spirals \citep{pohlen06}. As in the sample of truncated late-type spirals of \citet{bakos08}, there is no break in the stellar surface mass density profile of Eris: rather, Eris' stellar age profile shows a ``U shape" with a minimum of 6 Gyr at the break radius, explaining the origin of the break as a radial change in stellar population likely caused by the stochastic radial migration of young stars from the inner parts of the disk to the outskirts \citep{roskar08}. Eris' bulge-to-disk ratio (as determined by a two-component fit to the $i$-band surface brightness profile), $B/D=0.35$, is also typical of Sb spirals, which are characterized by a median ($\pm 68/2$ per cent) value $\log B/D=-0.53^{+0.27}_{-0.30}$, and of many Sbc galaxies, which have $\log B/D=-0.86^{+0.34}_{-0.40}$ \citep{graham08}. A three-component decomposition (disk$+$bar$+$bulge) will lower the $B/D$ ratio further. The bulge S\'{e}rsic index, $n_s=1.4$, is indicative of a ``pseudobulge" rather than a classical one: according to \citet{weinzirl09}, $\sim 3/4$ of all bright spirals have low $n_s\le 2$ bulges. 
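As a quick consistency check on the morphological classification above (a sketch using only the numbers quoted in the text), Eris' photometric $B/D=0.35$ can be compared against the Sb interval $\log B/D=-0.53^{+0.27}_{-0.30}$:

```python
import math

# Sketch: check that Eris' photometric bulge-to-disk ratio B/D = 0.35
# falls inside the Sb interval log(B/D) = -0.53 (+0.27 / -0.30)
# quoted in the text.
bd_eris = 0.35
log_bd = math.log10(bd_eris)         # ~ -0.46
lo, hi = -0.53 - 0.30, -0.53 + 0.27  # 68% interval for Sb spirals
print(f"log(B/D) = {log_bd:.2f}, Sb interval = [{lo:.2f}, {hi:.2f}]")
print("within Sb range:", lo <= log_bd <= hi)
```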
Eris' large final disk (disk-to-total ratio $D/T=0.74$) is not typically found in lower-resolution simulations of Milky Way-sized galaxies that impose no restrictions on merger history: e.g., only one of the eight galaxies simulated by \citet{scannapieco10} has a photometric $D/T$ as large as 0.68 (and six have $D/T<0.5$), and only one out of the six galaxies above $M_{\rm vir}=10^{11}\,\,{\rm M_\odot}$ simulated by \citet{brooks11} has a disk-to-total ratio comparable to Eris' (``h239", which is offset, however, from the stellar mass-halo mass relation). \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{fig3.pdf} \caption{The 1D $i$-band radial surface brightness profile of Eris at $z=0$. This is well fitted by a S\'{e}rsic bulge with index $n_s=1.4$, an exponential disk with scale length $R_d=2.5$ kpc, and a bulge-to-disk ratio $B/D=0.35$. The dust-reddened, face-on 2D light distribution created by {\sc Sunrise} was analyzed with {\sc Galfit} \citep{peng02} following a procedure similar to that detailed in \citet{weinzirl09}. The ``downbending" in the brightness exponential profile at about 5 disk scale lengths and the surface brightness where the break occurs, 23.5 $i$-mag arcsec$^{-2}$, are characteristic of late-type spiral galaxies \citep{pohlen06}. } \label{fig3} \vspace{+0.3cm} \end{figure} \subsection{Stellar Content} Eris' total mass in baryons is $M_b=9.5\times 10^{10}\,\,{\rm M_\odot}$, corresponding to a mass fraction $f_b=0.12$ that is 30\% lower than the universal value (for the adopted cosmology) of 0.175. Stars (and their remnants) comprise 41\% of all baryons within $R_{\rm vir}$: the total stellar mass, $M_*=3.9\times 10^{10}\,\,{\rm M_\odot}$, is comparable to the value estimated for the Milky Way, $4.9-5.5\times 10^{10}\,\,{\rm M_\odot}$, by \citet{flynn06}. To make a bias-free comparison with the stellar mass-halo mass relation derived from the abundance matching technique by \citet{behroozi10} we adopt the following procedure.
We fit the SDSS $u,g,r,i,z$ broadband colors from the mock {\sc Sunrise} images with the flexible stellar population synthesis code of \citet{conroy09}: the fit assumes a Kroupa IMF and provides a {\it photometric} stellar mass estimate of ${\cal M}_*=3.2\times 10^{10}\,\,{\rm M_\odot}$ (Conroy, private communication), 18\% lower than the value directly measured in the simulation. The photometric stellar mass of Eris can now be weighted self-consistently against the \citet{behroozi10} average stellar mass-halo relation (which uses a \citealt{chabrier03} IMF), free of IMF systematics, after offsetting all \citet{behroozi10} stellar masses by 0.06 dex (to correct from Chabrier to Kroupa IMF). The comparison, depicted in the right panel of Figure \ref{fig4}, demonstrates that Eris' implied ``baryon conversion efficiency", $\eta\equiv ({\cal M}_*/M_{\rm vir}) \times (\Omega_M/\Omega_b)=23\%$, is in excellent agreement with that predicted by the abundance matching technique. This contrasts with the recent analysis of many hydrodynamic simulations of galaxy formation by \citet{guo10}, who show that the great majority of them lock too many baryons into stars to be viable models for the bulk of the observed galaxy population. Note that the intrinsic scatter in the stellar mass at a given halo mass is estimated to be 0.17 dex, independent of halo mass \citep{yang09}. With a circular velocity at the radius, $R_{80}=6.8$ kpc, containing 80\% of the $i$-band flux of $V_{80}=210\,\,{\rm km\,s^{-1}}$, our galaxy lies close to the Tully-Fisher relation of the \citet{pizagno07} galaxy sample (see the left panel of Fig. \ref{fig4}). As discussed in \citet{pizagno07}, the Tully-Fisher uses $V_{80}$ as the primary velocity measure rather than $V_{2.2}$, the circular velocity at 2.2 disk scale lengths, since the former is less sensitive to the degeneracies of bulge-disk decomposition. 
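The quoted ``baryon conversion efficiency" follows directly from the numbers above; a one-line check (values taken from the text, with $\Omega_b/\Omega_M=0.175$ for the adopted cosmology):

```python
# Sketch: the "baryon conversion efficiency" quoted in the text,
# eta = (M_*/M_vir) x (Omega_M/Omega_b), using the paper's numbers.
M_star = 3.2e10     # photometric stellar mass, Msun
M_vir  = 7.9e11     # virial mass, Msun
fb_cosmic = 0.175   # Omega_b / Omega_M for the adopted cosmology

eta = (M_star / M_vir) / fb_cosmic
print(f"eta = {eta:.0%}")   # ~23%, as quoted
```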
The ratio $V_{2.2}/V_{200}$=$214\,\,{\rm km\,s^{-1}}/129\,\,{\rm km\,s^{-1}}=1.66$ in Eris, where $V_{200}$ is the circular velocity at the radius enclosing a mean overdensity of $200\,\rho_{\rm crit}$ ($R_{200}=177$ kpc), is equal to the value suggested by the dynamical model for the Milky Way of \citet{klypin02}. It is also consistent with the recent measurements of the virial mass of the Milky Way by \citet{smith07} and \citet{xue08}, implying $V_{2.2}/V_{200} =1.48^{+0.25}_{-0.26}$ and $V_{2.2}/V_{200} =1.67^{+0.31}_{-0.24}$, respectively.\footnote{The $V_{2.2}/V_{200}$ ratios from \citet{smith07} and \citet{xue08}, were computed by \citet{dutton10} from these data sets after converting different virial mass definitions and for an assumed Milky Way's $V_{2.2}=220\,\,{\rm km\,s^{-1}}$.}\, Note that while there is an unsettled disagreement among estimates of the Galaxy's virial mass\footnote{Recent estimates of the virial mass of the Milky Way range from $M_{\rm vir}=1.0^{+0.3}_{-0.2} \times 10^{12}\,M_{\odot}$ from blue horizontal branch kinematics \citep{xue08} to $1.2^{+0.7}_{-0.4} \times 10^{12}\,M_{\odot}$ from studies based on the properties of the Large and Small Magellanic Clouds \citep{busha11}.}, Eris is likely 20\% less massive than the Milky Way and therefore the agreement between simulated and observed kinematic data should be confirmed in future simulations of slightly more massive halos and different accretion histories. Like the Milky Way, however, Eris is offset relative to determinations using various dark halo mass tracers for late-type disk galaxies by \citet{dutton10}, who predict for typical dark matter halos with $V_{2.2}=220\,\,{\rm km\,s^{-1}}$ the ratio $V_{2.2}/V_{200}=1.11^{+0.22}_{-0.20}~(2\sigma)$. 
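The velocity ratio in the paragraph above is simple arithmetic on the two quoted circular velocities:

```python
# Sketch: the velocity ratio quoted in the text for Eris.
V22  = 214.0   # circular velocity at 2.2 disk scale lengths, km/s
V200 = 129.0   # circular velocity at R_200, km/s
ratio = V22 / V200
print(f"V_2.2 / V_200 = {ratio:.2f}")  # 1.66, the Klypin et al. (2002) value
```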
\subsection{Gas Content} Eris' \hbox{H~$\scriptstyle\rm I$}\ mass is $M_{\rm HI}=1.9\times 10^9\,\,{\rm M_\odot}$, comparable to the \hbox{H~$\scriptstyle\rm I$}\ mass estimated for the Milky Way disk by \citet{nakanishi03}, but smaller than the value of $\sim 5\times 10^9\,\,{\rm M_\odot}$ given by \citet{wolfire03}. The \hbox{H~$\scriptstyle\rm I$}-to-stellar mass ratio, $1.9\times 10^9\,\,{\rm M_\odot}/3.9\times 10^{10}\,\,{\rm M_\odot}=0.049$, is equal to the median value observed in the GASS survey \citep{catinella10} for galaxies of comparable stellar mass. Eris' \hbox{H~$\scriptstyle\rm I$}\ disk extends out to about 15 kpc (6 stellar disk scale lengths), similar to the size of the \hbox{H~$\scriptstyle\rm I$}\ disk of the Milky Way \citep{nakanishi03}. Clustered SN explosions create a large number of holes in the face-on \hbox{H~$\scriptstyle\rm I$}\ distribution of Eris (Fig. \ref{fig2}) due to bubbles of hot gas expanding perpendicular to the disk. These holes are mostly located within the bright optical disk and preferentially in regions of high star formation: they are kpc-sized, as observed, e.g., in the nearby low-inclination spiral galaxy NGC 6946 \citep{boomsma08}. \begin{figure*}[th] \centering \includegraphics[width=0.46\textwidth]{fig4a.pdf} \includegraphics[width=0.46\textwidth]{fig4b.pdf} \caption{\footnotesize {\it Left panel:} The $i$-band Tully-Fisher relation for the \citet{pizagno07} galaxy sample ({\it empty squares with error bars}). {\it Filled circle:} The Eris simulation. Here $V_{80}$ denotes the circular velocity at the radius containing 80\% of the $i$-band flux, as defined by \citet{pizagno07}. {\it Right panel:} The stellar mass-halo mass relation at $z=0.1$ from \citet{behroozi10}, modified for a Kroupa IMF ({\it empty squares with error bars}). Error bars include only statistical uncertainties.
{\it Filled circle:} The Eris simulation with a photometric stellar mass of ${\cal M}_*=3.2\times 10^{10}\,\,{\rm M_\odot}$ and a virial mass of $M_{\rm vir}=7.9\times 10^{11}\,\,{\rm M_\odot}$ (see text for details). } \label{fig4} \vspace{+0.3cm} \end{figure*} About $6.7\times 10^9\,\,{\rm M_\odot}$ of gas are found in a cold phase below $T=3\times 10^4$ K. This is comparable to the total mass of the atomic and warm ionized medium inferred for the Milky Way \citep[e.g.][]{ferriere01}. The gas mass that is hot ($T>3\times 10^5\,$K) and thus potentially X-ray luminous is $M_X=3.6\times 10^{10}\,\,{\rm M_\odot}$, 63\% of the total gas content. For comparison, 12\% of the gas is in the cold phase and 25\% is in a warm phase with $3\times 10^4~{\rm K} <T< 3\times 10^5$ K. The fractions of cold, warm, and hot gas within the inner 20 kpc are 83.5\%, 1.5\%, and 15\%, respectively. Hot gas within 20 kpc contains a significant amount of angular momentum and is co-rotating with the cold disk. The hot gas baryon fraction, $f_X=M_X/M_{\rm vir}=0.046$, is 3.8 times smaller than the cosmological baryon fraction. This implies an average density for the hot gas that is 3.8 times smaller than assumed in the standard ``cooling flow" halo model \`{a} la \citet{white91}, and yields a factor of 14.5 smaller X-ray emission measure. Contrary to the standard assumption that hot gas follows the radial distribution of the dark matter, the density distribution of hot gas in Eris follows a ``flattened" $\rho_X(r)\propto r^{-1.13}$ power-law profile out to 100 kpc (see Fig. \ref{fig5}). This gives rise to an X-ray surface brightness profile that is not as sharply peaked as expected for hot halos with NFW profiles and that satisfies the observational constraints \citep[e.g.][]{anderson10}.
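The hot-halo arithmetic above can be verified from the quoted masses, using the fact that the X-ray emission measure scales as $n_e^2$ (an illustrative check, not the analysis code):

```python
# Sketch: hot-halo arithmetic from the text. A mean hot-gas density lower
# by a factor f than the "cooling flow" expectation implies an emission
# measure (EM ~ n_e^2) lower by a factor f^2.
M_X   = 3.6e10   # hot (T > 3e5 K) gas mass, Msun
M_vir = 7.9e11   # virial mass, Msun
fb_cosmic = 0.175

f_X = M_X / M_vir                   # hot gas baryon fraction, ~0.046
density_deficit = fb_cosmic / f_X   # ~3.8
em_deficit = density_deficit**2     # ~14-15, cf. the factor 14.5 quoted
print(f"f_X = {f_X:.3f}, density deficit = {density_deficit:.1f}, "
      f"EM deficit = {em_deficit:.1f}")
```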
The flattened density profile we find is consistent with the results of much lower-resolution simulations by \citet{crain10}, who identified the reason for the more extended gas distribution and weaker X-ray coronae in the entropy injection by SNe at $z\sim 1-3$. Only 10\% of the gas in Eris is in a very hot phase above $T=10^6$ K: the mean density of million-degree gas at $R\ge 70$ kpc is $n\le 6\times 10^{-5}$ atoms cm$^{-3}$, which is well within the observational constraints from \hbox{O~$\scriptstyle\rm VI$}\ absorption measurements \citep{sembach03} in the halo of the Milky Way, and high enough to produce significant ram pressure stripping of dwarf spheroidal satellites \citep{mayer07}. The observed dispersion measure (DM) of pulsars in the Large Magellanic Cloud (LMC) provides another constraint on the hot halo of the Milky Way \citep{anderson10}. Of the 11 pulsars discovered by \citet{manchester06} in the direction of the LMC, 3 have dispersion measures below $45\,\,{\rm cm^{-3}\,pc}$ and may be located within the Galaxy at a random position along the line of sight to the Cloud. The other 8 have dispersion measures in the range between 65 and 130 $\,{\rm cm^{-3}\,pc}$ and are thought to be associated with the LMC, with the lowest dispersion values belonging to pulsars located on the near side of the Cloud, some 50 kpc away. This leads to an estimate for the dispersion measure introduced by Galactic free electrons towards the LMC of ${\rm DM} \approx 70\,\,{\rm cm^{-3}\,pc}$ \citep{anderson10}. To compare these values with predictions from the Eris simulation, we have calculated the mean integrated column density of free electrons between randomly positioned ``observers" on a circle in the stellar disk at a galactocentric distance of 8 kpc, and a ``pulsar" 50 kpc away. The mock pulsar was located at galactic coordinates $l=280^{\circ}$ and $b=-30^{\circ}$, like the LMC.
The predicted dispersion measure, \begin{equation} {\rm DM} = \int_0^{50\,{\rm kpc}} n_e(l)dl=(62 \pm 3) ~{\rm cm^{-3}~pc} \end{equation} where $n_e$ is the electron density along the line of sight, is perfectly consistent with the data. Collectively, the above arguments indicate that the reservoir of hot gas around the Eris simulated galaxy appears to satisfy pulsar dispersion-measure as well as X-ray surface brightness and \hbox{O~$\scriptstyle\rm VI$}\ absorption constraints. \begin{figure}[th] \centering \includegraphics[width=.47\textwidth]{fig5.pdf} \caption{The average dark matter ({\it blue empty dots}) and hot ($T>3\times 10^5\,$K) gas ({\it red empty dots}) density profiles of Eris at $z=0$. The solid lines show the best-fit NFW profile for the dark matter ({\it upper curve}) and the best-fit power-law profile (with slope $-1.13$) for the hot gas ({\it lower curve}). The best-fit NFW profile is characterized by a large halo concentration parameter $c\equiv R_{\rm vir}/R_s=22$ as the dark matter halo contracts in response to the condensation of baryons in its center. } \label{fig5} \vspace{+0.cm} \end{figure} \begin{figure*} \centering \includegraphics[width=.47\textwidth]{fig6a.pdf} \includegraphics[width=.47\textwidth]{fig6b.pdf} \includegraphics[width=.47\textwidth]{fig6c.pdf} \includegraphics[width=.47\textwidth]{fig6d.pdf} \caption{{\it Top left:} Star formation history of all star particles identified within Eris's virial radius today. {\it Black filled dots}: total star formation rate ({\it top panel}) and stellar mass ({\it bottom panel}) as a function of redshift. {\it Blue filled dots}: same for disk star particles identified at $z=0$. {\it Red filled dots}: same for spheroid star particles identified at $z=0$. See the text for details of the disk-spheroid kinematic decomposition. 
{\it Top right:} Stellar mass fraction as a function of the ``orbital circularity parameter" $j_z/j_c$, describing the degree of rotational support of a given stellar particle, for Eris at $z=0$ ({\it solid line}), Eris at $z=1$ ({\it short-dashed blue line}), and ErisLT at $z=1$ ({\it long-dashed red line}). The prevalence of stars in a centrifugally-supported thin disk manifests itself in a sharply peaked distribution about unity. {\it Bottom left:} $\Sigma_{\rm SFR}$ versus $\Sigma_{\rm HI}$ for Eris' disk at $z=0$ ({\it black filled dots}). The simulation data were averaged over square patches 750 pc on the side: some discreteness effects associated with the limited resolution of the star formation timescale can be seen at low values of $\Sigma_{\rm SFR}$. Every dot represents one sampling point. Note that our simulations do not model the formation of molecular hydrogen. The {\it blue empty dots} show the pixel-by-pixel $\Sigma_{\rm SFR}$ data as a function of $\Sigma_{\rm HI}$ (at 750 pc resolution) for 7 spiral galaxies from the THINGS survey \citep{bigiel08}. The same THINGS data are plotted against the total gas surface density $\Sigma_{\rm gas}=\Sigma_{\rm HI}+\Sigma_{\rm H_2}$ ({\it gray empty dots}). {\it Bottom right:} Evolution of the baryon fraction within the virial radius for Eris ({\it blue filled dots}) and ErisLT ({\it red filled squares}). In the adopted cosmology, the cosmic baryon fraction is $\Omega_b/\Omega_M=0.175$.} \label{fig6} \vspace{+0.3cm} \end{figure*} \subsection{Star Formation and Kinematic Decomposition} The top left panel of Figure \ref{fig6} shows the star formation history of all star particles identified within Eris' virial radius at $z=0$ (regardless of whether they formed within the main host or in satellites), and of its kinematically-decomposed present-day {\it disk} and {\it spheroid}. 
The decomposition technique follows \citet{scannapieco09}, and is based on the distribution of orbital circularity parameters, $j_z/j_c$, of the simulated stars introduced by \citet{abadi03}. Here, $j_z$ is the angular momentum of each star in the $z$-direction (i.e. the direction defined by the total angular momentum of all gas particles within 5 kpc from the host center) and $j_c$ is the angular momentum of a circular orbit at the same radius. Spheroidal stars are defined as those that are not part of the rotationally-supported disk and therefore typically include bulge and stellar halo stars, as well as stellar bars if they are present. The distribution of circularity parameters, shown in the top right panel of Figure \ref{fig6}, is characterized by two peaks: one at $j_z/j_c\simeq 1$ that is indicative of the presence of a dominant cold disk in rotational support, and a second one at $j_z/j_c\simeq 0$ corresponding to a modest hot spheroidal component dominated by velocity dispersion. The vast majority of disk stars in the Milky Way reside in a thin disk component with exponential scale height $h_z=300\pm 60$ pc \citep{juric08}. A study of the vertical structure of edge-on spiral galaxies finds that the scale height of their thin disk increases systematically with circular velocity as $z_0=610$ $(V_c/100\,\,{\rm km\,s^{-1}})^{0.9}$ pc \citep{yoachim06}, where $z_0$ is the scale height of a sech$^2$ profile, $z_0\approx 2h_z$ at large heights above the disk plane. Eris' kinematically-decomposed stellar disk agrees well with the above scaling: by fitting an exponential (sech$^2$) profile to the simulation data, we derive a scale height of $h_z=490$ pc ($z_0=860$ pc) at a galactocentric distance of 8 kpc. Today, Eris is forming stars at a rate of ${\rm SFR}= 1.1\,\,{\rm M_\odot\,yr^{-1}}$, comparable to the value of ${\rm SFR} = 0.68-1.45\,\,{\rm M_\odot\,yr^{-1}}$, recently inferred for the Milky Way by \citet{robitaille10} using {\it Spitzer} data. 
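The kinematic decomposition described above can be sketched in a few lines. This is an illustrative toy version with made-up arrays and a hypothetical circularity cut (the paper follows \citet{scannapieco09} and does not quote a specific threshold here):

```python
import numpy as np

# Toy sketch of the disk/spheroid decomposition by orbital circularity.
# A star with j_z/j_c near 1 is on a near-circular, disk-like orbit;
# near 0 it is dispersion-supported (spheroid).

def decompose(jz, jc, disk_cut=0.8):
    """Label stars 'disk' or 'spheroid' from the circularity j_z/j_c.
    disk_cut is an illustrative threshold, not a value from the paper."""
    circ = jz / jc
    return np.where(circ > disk_cut, "disk", "spheroid")

jz = np.array([0.95, 0.1, -0.3, 0.85])  # toy z-angular momenta
jc = np.ones(4)                         # circular-orbit values at same radius
print(decompose(jz, jc))                # ['disk' 'spheroid' 'spheroid' 'disk']
```

A counter-rotating star ($j_z<0$) naturally falls in the spheroid, matching the peak at $j_z/j_c\simeq 0$ in Figure \ref{fig6}.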
The star formation rate declines rapidly with redshift from a plateau value of $\sim 10\,\,{\rm M_\odot\,yr^{-1}}$ maintained between $z=2$ and $z=5$. SN feedback and photoheating by the ultraviolet radiation background efficiently quench star formation at $z>5$. The rate of formation of spheroidal stars fades rapidly after redshift 3, while disk stars nearly triple their mass from $z=2$ to the present. The star formation rate surface densities $\Sigma_{\rm SFR}$ and \hbox{H~$\scriptstyle\rm I$}\ gas surface densities $\Sigma_{\rm HI}$ (Kennicutt-Schmidt law) of Eris' disk are shown in the bottom-left panel of Figure \ref{fig6}. The simulation data were averaged over square patches 750 pc on the side for comparison with the pixel-to-pixel observations at 750 pc resolution of 7 spiral galaxies from the THINGS survey \citep{bigiel08}. The same THINGS data are also plotted against the total gas surface density $\Sigma_{\rm gas}=\Sigma_{\rm HI}+\Sigma_{\rm H_2}$. The observed relationship between $\Sigma_{\rm gas}$ and $\Sigma_{\rm SFR}$ varies dramatically among and within spiral galaxies, and most galaxies show little or no correlation between $\Sigma_{\rm HI}$ and $\Sigma_{\rm SFR}$ \citep{bigiel08}. Rather, there is a clear correlation between $\Sigma_{\rm H_2}$ and $\Sigma_{\rm SFR}$ (molecular Schmidt law), and gas at densities in excess of $\sim 10 \,\,{\rm M_\odot}$ pc$^{-2}$ is observed to be fully molecular. At solar metallicities, this corresponds to the column of atomic hydrogen required to shield a molecular region against photodissociation \citep{krumholz09}. While the total mass of cold gas in Eris is comparable to the sum of the atomic and molecular gas mass of Sb galaxies such as the Milky Way, we do not model directly the formation of H$_2$ molecules. Indeed, as clearly shown in Figure \ref{fig6}, even at Eris's high resolution we do not capture kpc-sized regions with gas surface densities in excess of the characteristic shielding column. 
Furthermore, our gravitational softening is still large compared to the size of typical molecular clouds, which are a few tens of pc in size, and our gas density threshold for star formation is still somewhat below the density of real molecular clouds. Both effects contribute to further smooth out the high density tail of the total gas distribution. Yet the figure shows that our simulated disk is forming stars in the same range of $\Sigma_{\rm SFR}-\Sigma_{\rm HI}$ values observed in spiral galaxies. The global atomic gas depletion timescale, i.e. the time needed for the present rate of star formation to consume the existing atomic gas reservoir, is $t_{\rm HI}\equiv M_{\rm HI}/{\rm SFR}=1.9\times 10^9\,\,{\rm M_\odot}/1.1\,\,{\rm M_\odot\,yr^{-1}}=1.7$ Gyr in Eris today. As this is significantly smaller than the Hubble time, star formation rates and gas fractions must be set by the balance between gas accretion from the halo and stellar feedback. The depletion time for atomic gas observed in the COLD GASS survey \citep{saintonge11} is on average 3 Gyr, with a large scatter from one galaxy to another. \section{Discussion} Approximately 70\% of bright spirals have $B/T\le 0.2$ \citep{weinzirl09}. Recent attempts to generate such disk-dominated galaxies in cosmological simulations have failed to reproduce simultaneously their observed morphologies as well as their baryonic/stellar content \citep{scannapieco10,agertz11}. Our very high-resolution Eris simulation appears to form a close analog of a Milky Way disk galaxy by capturing a realistic inhomogeneous ISM in which star formation occurs in high density regions of mass comparable to that of giant cloud complexes. Contrary to the ``inefficient star formation'' prescription adopted by \citet{agertz11}, this is achieved with a strong localized SN feedback and a high (10\%) Schmidt-law efficiency. 
We stress that our efficiency parameter is simply phenomenological, and does not have a direct relation to the true star formation efficiency within giant molecular clouds, which is the end result of all the processes regulating star formation, including feedback, and concerns scales (tens of parsecs) that are not yet resolved in cosmological simulations. While the chosen star formation and feedback prescriptions combine to expel a significant amount of baryons from the system, they preferentially remove low angular momentum material and leave plenty of cold gas available for disk star formation \citep[cf.][]{scannapieco09}, thus allowing a good fit to the Tully-Fisher relation \citep[cf.][]{piontek11}. It is interesting to compare the properties of Eris and ErisLT at redshift 1 (see Table 1): \begin{itemize} \item The ErisLT control run produces a galaxy resembling an early-type Sa spiral with $B/D=0.42$ (cf. Eris' $B/D=0.31$), closer to (but still on the low side of) the typical outcome of previously published cosmological simulations of disk formation \citep[e.g.][]{governato10,scannapieco10}. Its rotation curve peaks at $308\,\,{\rm km\,s^{-1}}$ (cf. Eris' $237\,\,{\rm km\,s^{-1}}$) and declines steeply within a few kpc of the center (Fig. \ref{fig1}). \item The baryon fraction in ErisLT is higher than in Eris and close to the universal value. The difference in the baryon content of the two simulations is established at very early times (see the bottom right panel of Fig. \ref{fig6}). The baryon content of ErisLT increases from $z=10$ to $z=5$, as its dark matter halo grows from $M_{\rm vir} = 3\times 10^9\,\,{\rm M_\odot}$ to $M_{\rm vir} = 6\times 10^{10}\,{\rm M_\odot}$. At higher redshifts (smaller progenitor mass), the collapse of baryons is heavily suppressed by the ultraviolet radiation background. Eris, by contrast, maintains a relatively low baryon fraction, between 64\% and 73\% of the cosmic value at all redshifts $z<10$.
Note that these values are higher than the value measured at the present epoch in the bulgeless dwarf galaxy simulation of \citet{governato10}, which is only 30\% of the cosmic fraction. \item While ErisLT was run with a star formation efficiency that was half of that adopted for Eris, its stellar content at $z=1$ is 20\% higher, showing that the most important parameter in the star formation recipe is not the efficiency but rather the star formation density threshold. Indeed, by allowing the gas to reach higher densities before turning into stars, the ISM develops a more inhomogeneous structure, with important consequences for the large-scale pattern of star formation in the galaxy and, as a byproduct, for the effect of supernova feedback \citep[e.g.][]{governato10}. \begin{figure*} \centering \includegraphics[width=.47\textwidth]{fig7a.pdf} \includegraphics[width=.47\textwidth]{fig7b.pdf} \includegraphics[width=.47\textwidth]{fig7c.pdf} \includegraphics[width=.47\textwidth]{fig7d.pdf} \includegraphics[width=.47\textwidth]{fig7e.pdf} \includegraphics[width=.47\textwidth]{fig7f.pdf} \caption{Properties of the gas distribution for different star formation thresholds and different redshifts. The local gas density measured around each SPH particle is plotted as a function of its radial distance from the galaxy center for Eris ({\it left panels}) and ErisLT ({\it right panels}) at $z =$ 1, 3, and 5 (from bottom to top). Horizontal green lines mark the minimum gas density for star formation in each run. The color coding highlights cold ($T<3\times 10^4$ K, {\it blue}), warm ($3\times 10^4\,{\rm K}<T<3\times 10^5$ K, {\it yellow}), and hot ($T>3\times10^5$ K, {\it red}) gas particles. Star formation only occurs in blue gas particles above the horizontal green lines. } \label{fig7} \vspace{+0.cm} \end{figure*} \item At $z=1$, about 14\% of the total gas content in ErisLT is in a cold phase, 19\% is warm, and 67\% is in a hot component.
The same fractions are 30\%, 18\%, and 52\% in Eris, i.e. there is twice as much cold gas in Eris as in ErisLT. The difference in the density distribution and thermodynamic state of the ISM in the two simulations is clearly seen in Figure \ref{fig7}, which shows the local gas density versus radius of each gas particle at different epochs. The horizontal green line in each panel marks the adopted star formation threshold, and particles are color coded according to their temperature. Stars are born only in the blue gas particles above the green lines. Because of the increased threshold, fewer regions are forming stars at any given time in Eris. According to equation (\ref{eq:KS}), however, within these cold dense clumps the number of massive stars per unit gas mass born at threshold scales as $\sqrt{n_{\rm SF}}$, i.e. it is a factor $(5/0.1)^{1/2}\simeq 7$ times larger in Eris versus ErisLT. As these stars explode as SNe, the energy injected at high redshift in Eris's ISM is enough to disrupt and unbind the star forming regions, generating strong outflows that leave the galaxy and reduce its baryonic content. \item Figure \ref{fig7} also shows that, at $z=5$, star formation can only occur in Eris' inner densest regions within 10 kpc from the center. By contrast, in ErisLT, star formation is spread out over a significant fraction of the host galaxy, extending well beyond 20 kpc. At $z=3$, when outflows are stronger due to the high star formation rates triggered by mergers, it is the cold, low angular momentum gas at the center that is preferentially removed in Eris, as previously found for dwarf galaxies by \citet{governato10}. Such outflows are weaker in the low threshold simulation, where star formation is more diffuse and the energy injection from SNe is more evenly deposited in the ISM: cold gas accumulates in the inner regions already at very high redshifts and continues to turn into stars unabated by feedback \citep[see also][]{ceverino09,robertson08}.
At lower redshift ErisLT has now consumed its low-angular momentum cold gas in star formation, while Eris has preserved a higher cold gas fraction due to the more effective regulation of star formation via feedback. By $z=1$ both galaxies have formed roughly the same amount of stars, but 90\% of the gas within 20 kpc from the center is still cold in Eris, compared to only 57\% in ErisLT. The distributions of stellar angular momenta are also different in the two cases (see Fig. \ref{fig6}). Gas being blown out at high redshift has a systematically lower angular momentum than the gas that gets accreted at later times \citep{brook10}, and this explains the flatter rotation curve of Eris relative to ErisLT. \end{itemize} It is fair to point out that, while the success of our Eris simulation in matching the observations appears to be linked to the ability to correctly follow star formation in an inhomogeneous ISM and regulate it with SN feedback, many avenues remain unexplored and require further investigation. Simulations of even higher resolution, approaching the true gas densities reached in star forming giant molecular cloud complexes (about a factor of 10 higher than adopted here), are needed to test the convergence and robustness of our results, and are in the making. The density threshold should depend on metallicity and therefore on redshift \citep{gnedin09}, which may affect the structure of the ISM and SN feedback differently in progenitors at different epochs. The feedback model that we use is still phenomenological, and the actual mechanism of outflow generation may require the combination of more than one effect to support a large-scale blastwave \citep[e.g.][]{ceverino09}. Lastly, the simulations presented in this paper neglect cooling by metal lines at temperatures above $10^4$ K. 
At the mass scale considered here, cold flows are mostly responsible for the assembly of the star forming disk even at low redshift, as opposed to the cooling flow of the hot halo mode \citep{brooks09}. This suggests that the details of the cooling function for gas above $10^4$ K, namely in the hot mode, are not important for the assembly of the disk. \citet{piontek11} find that metal cooling gives rise to a stronger burst of star formation at high redshift in the progenitors of Milky-Way sized galaxies and thus to bigger bulges. This result, however, was obtained with the canonical low star formation density threshold ($n_{\rm SF}=0.1$ atoms cm$^{-3}$), and it is unclear in which temperature range the effect of metal cooling is most crucial. These authors also show that the augmented bulge can be suppressed by boosting the effect of (thermal and kinetic) SN feedback. It is conceivable that, by further increasing the density threshold for star formation toward actual giant molecular cloud densities, along with modeling the formation of molecular hydrogen, the ISM may become increasingly clumpy and dense, populating the high surface density tail of the Kennicutt-Schmidt relation that remains currently unresolved in Eris. As a result, one may expect that heating and outflows from even more localized SN feedback may become stronger in such high density regions, and eventually offset the impact of increased cooling, as suggested by \citet{piontek11}. We plan to explore these issues in the next generation of simulations with increased resolution, a higher star formation density threshold, and the inclusion of molecular phase physics. In addition to the effect on the disk and bulge, metal cooling at $T>10^4$ K may also have an impact on the density profile and temperature of the hot halo. \citet{piontek11} find a significant reduction of the mass of the hot halo when metal cooling is included, under the assumption of a gas metallicity of $Z = 0.5 Z_{\odot}$. 
This is higher than the metallicity measured in spiral galaxies with extended X-ray and H$\alpha$ emission around their disk, which is in the range $Z= 0.01-0.1 Z_{\odot}$ \citep{rasmussen09}, suggesting that the effect may have been overestimated. The mean metallicity of hot gas in Eris closely matches the latter observations, being on average $Z=0.08 Z_{\odot}$ (with $Z_\odot=0.0194$, \citealt{anders89}). The last major merger in our simulations occurs at $z\sim3$, and therefore Eris is expected to show some offset from the observed structural parameters of the average spiral galaxy. The same likely applies to the Milky Way itself, which indeed closely resembles Eris. Whether or not the good match with the properties of typical spiral galaxies is related to the fact that we have selected a particularly quiet merging history, in which outflows shut off early as the rate of star formation drops after the last major merger, will have to be investigated. At $z<3$, the evolution of the B/D ratio is non-monotonic and it is seen to decrease following a major merger at $z=3$ and a minor merger at $z=1$, and to grow secularly at lower redshifts. The details of this evolution will be the subject of a forthcoming paper (Guedes et al. 2011c, in preparation). More typical galaxies undergoing significant mergers at later times may develop smaller bulges due to the more prolonged effect of supernovae outflows, which could explain why 11 out of 19 nearby massive spirals show no evidence for a classical bulge \citep{kormendy10}. If true, the trend would be at odds with the standard picture in which mergers lead to earlier type objects. This would be an important new ingredient, perhaps complementary to the finding that gas-rich mergers assist the growth of larger disks \citep{hopkins09,governato09}. 
\acknowledgements This research was funded by NASA through grant NNX09AJ34G and by the NSF through grant AST-0908910 (PM), by the Swiss National Foundation (SNF), and by an ARCS Foundation Fellowship to JG. Simulations were carried out on NASA's Pleiades supercomputer, the UCSC Pleiades cluster, and the Swiss National Supercomputing Center's ROSA Cray-XT5. We thank the referee for useful comments that improved this paper and acknowledge useful discussions on the topic of this paper with Oscar Agertz, Alyson Brooks, Valentino Gonz\'alez, Fabio Governato, Du\v{s}an Kere\v{s}, Andrey Kravtsov, Mark Krumholz, Brant Robertson, Sijing Shen, and Romain Teyssier. We are indebted to Frank Bigiel for helping with THINGS data, Charlie Conroy for providing a photometric stellar mass estimate for Eris, and Elena D'Onghia for helping in generating the initial conditions and selecting the Eris halo. LM thanks the Aspen Center of Physics for hospitality during the early stages of the work.
Editorials from around New York Recent editorials of statewide and national interest from New York's newspapers: The Rochester Democrat and Chronicle on the need for Gov. Andrew Cuomo to sign legislation that would ensure patients have access to abuse deterrent medication. Make no mistake, Gov. Andrew Cuomo is doing many, many things to combat opioid abuse and addiction in New York, and we are leading the country in a number of ways. But, there is another powerful step he could take that would require nothing more than a pen. Cuomo is sitting on legislation that would ensure patients have access to abuse deterrent medication. The bill passed in both the Republican-led Senate and the Democratic Assembly - not just once, but twice. Last year, Cuomo vetoed a very similar measure passed by state legislators. Abuse deterrent opioids are pain pills that either can't be crushed or dissolved, or don't work very well if they are. Users don't get the same kind of high that can come from traditional opioids, so the new formulas are less likely to cause addiction or lead to an overdose. It is uncertain what impact that will have on our nation's opioid epidemic. These drugs are very new to the market, having just surfaced in 2010. There is not a comprehensive body of public health research showing that they are an effective tool. But, there are signs that they might be. In one of the first studies of these new drugs, a Harvard team of researchers found that when the drugs hit the market, the nation's overdose rate caused by prescription drugs decreased by 20 percent over the next two years. There were other things happening at the same time, such as the discontinuance of a much-prescribed painkiller, so questions remain. Still, the technology shows promise. Promise, alone, is usually reason for caution. In an ideal world, we would not recommend a policy change without more substantial data and a better understanding of potential, unintended consequences. 
For example, in the Harvard study mentioned above, the decrease in opioid prescription overdoses was met with a 23 percent increase in heroin overdoses as addicts switched over. But we are in crisis mode, and lives are being destroyed every day. Abuse deterrent opioids might not be a good option for patients who are already addicted. They might, however, be appropriate for others who are at risk of becoming addicted, or who have family members who are. However, insurance companies aren't letting doctors and patients make that decision. The drugs can be more than twice as expensive as traditional opioids. So some insurance plans require patients to use a non-abuse deterrent opioid first, increase the co-payment, or don't cover the cost of the new drugs at all. The legislation sitting on Cuomo's desk would stop those practices - putting the interests of patients and physicians first - and quite likely adding another weapon to fight this enormous, and expensive, public health battle. The law would cost New York taxpayers more to cover state benefits. New Jersey's governor concluded a similar measure could cost that state more than $11 million, but it's not clear how much would be saved if fewer people were to become addicted. We're spending millions in federal and state dollars for treatment and enforcement. An ongoing cost-benefit analysis is definitely in order. But, ensuring consumer access to these drugs is supported by the legislature, as well as law enforcement, substance abuse organizations, civil rights groups and even the governor's own Heroin and Opioid Task Force. Cuomo should add his name to the list. http://on.rocne.ws/2biJZk7 The Syracuse Post-Standard on the state Public Service Commission ratifying Gov. Andrew Cuomo's Clean Energy Standard, which requires 50 percent of the state's electricity to come from renewable energy sources by 2030. On Monday, the state's Public Service Commission unanimously ratified Gov. Andrew Cuomo's Clean Energy Standard. 
The standard requires 50 percent of New York's electricity to come from renewable energy sources by 2030. It was the right thing to do. It puts New York on an aggressive path to reduce greenhouse emissions. The PSC's decision also shows that nuclear power is as vital as wind and solar in meeting that goal. It makes nuclear power an environmental asset. It shows you can't reduce greenhouse gases without nuclear, and it puts in place a nuclear subsidy for 12 years. Support for nuclear power does not sit well with all environmentalists, but it was telling that the governor's staff could quickly circulate statements from a number of environmental organizations supporting not only the 50 percent goal, but the maintenance of nuclear power. Opponents ignore the fact that 31 percent of the energy produced in New York is from nuclear. If the only mission were to push renewable energy sources, the PSC's nuclear subsidy would not be praiseworthy. But it's not that simple. We would not favor building new nuclear power plants. But the three in Oswego County already exist, and the issue of safely containing radioactivity remains even if the plants shut down. If the plants were to close soon, the generating capacity would be filled by other sources, probably natural gas. While fracking has made natural gas cheaper in the short term, switching to it does not reduce greenhouse emissions. It does not move New York in the right direction. We do not favor a blank check for nuclear subsidies. The state must evaluate them regularly and Monday's action sets up a review at least every three years. When the price of natural gas or competing fuels goes up, making nuclear competitive on its own, the subsidy should go down or end. Subsidies should be a bridge to a worthy public goal, not a permanent entitlement. In Central New York, the PSC decision is praiseworthy for a more parochial concern as well. The three nuclear generating plants in Oswego County employ more than 1,500 people. 
They are skilled, highly trained and well paid. Last year, Entergy announced it would close its FitzPatrick plant in January. Efforts by business leaders and state officials to keep it generating electricity have failed. Now Exelon, owner of the two neighboring Nine Mile plants, is trying to buy FitzPatrick from Entergy. The PSC action on Monday makes Exelon believe FitzPatrick is worth operating. We urge Exelon and Entergy to reach a deal. There is urgency because FitzPatrick refueling needs to happen soon. The economic impact of closing would be harsh in Oswego County. The PSC decision dealt with other issues. One is the transmission of power generated Upstate to markets that need it downstate. The state is wise to turn attention to this issue. Whatever the source of electricity, New York's economic future should not teeter on old bottlenecks. The Clean Energy Standard positions New York as a leader in new ways of generating electricity. It makes manufacturing of emerging technologies more likely in New York. It forces the state to explore or expand energy sources - like wind farms off the coast of Long Island or geothermal pumps. Some day, fossil fuel will no longer be the planet's chief energy source. America - and New York - should be inventing and building new sources. We should not let that fall to Europe, China or other international competitors. America should lead this industry, and it is wise of Gov. Cuomo and the Public Service Commission to put New York in the forefront. http://bit.ly/2ayvhHn The Oneonta Daily Star on Republican presidential nominee Donald Trump's words regarding the U.S. military. During the Vietnam War, about 648,000 Americans were drafted into our armed forces. Donald J. Trump was not one of them. Trump, the Republican nominee for president, received four student deferments beginning in 1964 while he attended Fordham University and the Wharton School of the University of Pennsylvania. 
After he graduated in 1968, Trump would have become eligible for the draft, except this son of a multimillionaire was granted a 1-Y medical deferment and later 4-F status because, he told The New York Times, he had bone spurs in one of his heels. "I had a doctor that gave me a letter - a very strong letter on the heels," Trump said. In an interview last year, he said he could not recall which heel it was. During the Vietnam War, John McCain, now the senior senator from Arizona, served more than five years in Hanoi as a prisoner of war. This son of a prominent admiral had not waited to be drafted. He enlisted in the Navy and his plane was shot down over Vietnam. McCain was tortured, but refused to give his captors anything but name, rank and serial number. He turned down an early release because he felt prisoners who had been there longer should go first. To this day, his injuries in service to his country make it impossible to raise his arms above his head. "He's not a war hero," Trump said at a campaign event in June 2015. "He's a war hero because he was captured. I like people that weren't captured." McCain, in a tough primary and general election battle for his Senate seat, has criticized Trump, but has not withdrawn his endorsement. This week, Trump gave him the back of his hand by falsely claiming McCain had not been there for veterans. Republican Jeff Flake, Arizona's junior senator, said Trump's comments were "just laughable, frankly, if it weren't so serious." Taken to task at the Democratic Convention by retired four-star general John Allen for advocating illegally torturing prisoners and "taking out" the families of terrorists, Trump lashed out at the former commander of our troops in Afghanistan. "You know who he is? He's a failed general," Trump said. "He was the general fighting ISIS. I would say he hasn't done so well, right?" Clearly, when it pertains to military matters, Trump doesn't know what he's talking … and talking … and talking about. 
He describes an American military that is underfunded and in disarray, when the U.S. is the mightiest nation on Earth and spends more on defense than the next seven countries combined. He threatens our nation's most important alliance by saying he would not necessarily come to the aid of some NATO countries should they come under attack unless they pay their full dues. He also blusters about pulling American troops out of Japan, South Korea and other important foreign posts where they've honorably served for decades to foster peace. He has made simplistic promises to wipe out ISIS without any explanation of how he would do it. And perhaps most egregiously, after being called out at the Democratic Convention for his anti-Muslim immigration policy by the Muslim parents of an Army captain who died heroically in Iraq in 2004, Trump could not help entering into a six-day war of words with this couple who know a heartache that Trump apparently cannot even fathom. Veterans in our community, local men and women serving in our military defending our liberty, and all of us who are protected from - as our country's Oath of Allegiance states - "enemies foreign and domestic" deserve far better than insults from Donald Trump. Every American has need to question whether we want a candidate with this history of disrespecting our military personnel and purpose as our next commander-in-chief of the world's most powerful fighting force. http://bit.ly/2aG9k5L The Albany Times Union on the need for Congress to reconvene and vote on a Zika bill as the Obama administration warns that money to fight the virus is about to run out. The number of confirmed cases of Zika in the continental U.S. now has exceeded 1,800, making the fight against the dangerous virus even more urgent. Yet funds needed to defeat the disease are drying up - another casualty of our dysfunctional Congress. 
Members of Congress started their summer recess last month in another nasty deadlock, this time over President Obama's request for $1.9 billion in emergency funds to halt the virus' spread and to fast-track development of a vaccine. Nearly all the U.S. cases are people who traveled here after being infected elsewhere but, ominously, last week 15 cases believed spread by mosquito bites were reported in Florida, according to the Centers for Disease Control and Prevention. In February the mosquito-borne illness was declared an international public health emergency after outbreaks across Latin America. Heart-wrenching images of babies born with the birth defect microcephaly after their mothers were infected with Zika were followed by predictions that the virus would soon make its way to the U.S. Instead of taking necessary action, some members of Congress played their usual games. The Republican-led Congress pared down Obama's request to $1.1 billion, then attached some objectionable strings to the bill - like cutting Ebola protection funds by $107 million, restricting Planned Parenthood's ability to advise expectant mothers infected with Zika and loosening environmental regulations on pesticides that have nothing to do with controlling the spread of Zika. In what can only be described as bizarre, language was also added to reverse a ban on displaying the Confederate flag at federal cemeteries. Perhaps GOP leaders figured they could remove that ridiculous rider and boast that they had compromised. This was all unacceptable to Democrats, who said the imminent public health crisis meant members of Congress should stop their usual partisan dithering. The stalemate continued until July, when lawmakers left town for seven weeks. Now the Obama administration warns that money to fight Zika is about to run out, which will mean Americans would have to wait longer for a vaccine. 
A partisan fight over critical funding for such a real and present threat to our country is inexcusable. Congress set aside politics two years ago when the Ebola outbreak was exploding across West Africa, allocating $5.4 billion to support our emergency response. Where is that spirit now? Citing the urgency for the emergency money, many Democrats are urging Speaker Paul Ryan to reconvene the House to act on a bipartisan Zika bill that had passed the Senate. Some may call it partisan posturing; to us, it looks more like a legitimate call. It won't hurt House members to interrupt their summer recess (or re-election campaigns) for a day to return to Washington - to do what they were elected to do in the first place. http://bit.ly/2biO0VJ Newsday on how the Olympic games in Rio de Janeiro celebrate athletic achievement and common humanity, but also offer a sometimes grim reflection of the world. Don't open your mouth. That's advice usually given to criminal defendants, protesters in totalitarian countries and headstrong politicians. Nowadays, it's what health experts say must be done by Olympic sailors, rowers and open-water swimmers competing in the human waste-laden waters of Rio de Janeiro. On that disquieting note, the opening ceremony takes place tonight. The contaminated water that officials promised to treat but never did is a metaphor for a disturbing run-up to these Olympic Games - staged by Brazil, a country in political and economic crisis, and by a city struggling with street crime and poor infrastructure. At their best, the games celebrate athletic achievement and our common humanity amid a marvelous panoply of cultural diversity. But as we see time and again, the Olympics are not an escape from the world but a reflection of it. Friday night's opening ceremony, featuring 10,000 athletes marching into Maracanã Stadium, will be a glorious display of pride, patriotism and swirling colors. 
But you also will notice: The first Muslim American woman to compete in a hijab. Ibtihaj Muhammad is a fencer who wears the head scarf under her mask. She's from New Jersey, trains in Manhattan and could win a medal. She also is a potent reminder of the contributions Muslims make to this country. The first-ever refugee Olympic team. The 10 athletes come from Syria, South Sudan, Ethiopia and Congo, countries contributing to the worst refugee crisis since World War II. Yusra Mardini, an 18-year-old Syrian swimmer, jumped into the Mediterranean Sea when her boat broke down short of Greece and, with her sister, swam more than three hours alongside to guide it ashore safely. These athletes are true portraits in courage and the definition of inspirational. Missing Russians. Nearly 120 were banned because of their country's massive state-run doping program. Drugs have been a scourge on sports. But Russian President Vladimir Putin blamed the ban on politics intruding on sport, a reminder of current geopolitical tensions. The United States contingent, including 30 from New York. Flag-bearer Michael Phelps, the swimmer and most-decorated Olympian of all time, is coming back from retirement - and from a second guilty plea to drunken driving, in 2014. Our sports icons are mortals, too. Security. It might not be noticeable in the stadium, but will be impossible to miss on the streets - more than 100,000 security personnel, an Olympic record. These are uncertain times. We hope Rio pulls it off. We also hope the city derives some lasting benefits in housing, infrastructure and the like. But the modern Olympic legacy of facilities that sit idle afterward makes the games' cost - Brazil is spending about $11 billion - indefensible. Perhaps the time has come to consider a permanent location. Once the games begin, the action often takes over the story. We hope the competition is fierce but fair, everyone comes home safe and no one gets sick from that water. 
Then we hope the Olympic movement reflects in earnest on how to make itself better. http://nwsdy.li/2b6fCj2
jessetalexis (New member):
What is the lateral area, in square inches, of the regular triangular prism shown in the picture?

Subhotosh Khan (Super Moderator, Staff member):
What is the definition of Lateral Surface?

Gamedev3 (New member):
Okay, well, I didn't know what lateral area is, so I googled this definition: "The sum of the areas of the lateral (vertical) faces of a cylinder, cone, frustum or the like." So it seems you just need the total area of those 3 long vertical sides, which should be easy enough: just the area of a vertical side (and you know the dimensions of the base) times 3.

nasi112 (Full Member):
It is a regular triangular prism. This means all triangle sides are equal. Therefore, it would be easy to find the area of the two triangles and the area of the three rectangles.

Subhotosh Khan (Super Moderator, Staff member):
The three rectangles that make up the lateral surface of the given solid - what are their dimensions?
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
 * Copyright (c) 2014 Universita' di Firenze
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Author: Tommaso Pecorella <tommaso.pecorella@unifi.it>
 */

#ifndef PACKET_SOCKET_SERVER_H
#define PACKET_SOCKET_SERVER_H

#include "ns3/application.h"
#include "ns3/event-id.h"
#include "ns3/ptr.h"
#include "ns3/packet-socket-address.h"

namespace ns3 {

class Socket;
class Packet;

/**
 * \ingroup socket
 *
 * \brief A server using PacketSocket.
 *
 * Receives packets using PacketSocket. It does not require (or use) IP.
 * The application has the same requirements as the PacketSocket for
 * what concerns the underlying NetDevice and the Address scheme.
 * It is meant to be used in ns-3 tests.
 *
 * Provides a "Rx" Traced Callback (received packets, source address)
 */
class PacketSocketServer : public Application
{
public:
  /**
   * \brief Get the type ID.
   * \return the object TypeId
   */
  static TypeId GetTypeId (void);

  PacketSocketServer ();
  virtual ~PacketSocketServer ();

  /**
   * \brief set the local address and protocol to be used
   * \param addr local address
   */
  void SetLocal (PacketSocketAddress addr);

protected:
  virtual void DoDispose (void);

private:
  virtual void StartApplication (void);
  virtual void StopApplication (void);

  /**
   * \brief Handle a packet received by the application
   * \param socket the receiving socket
   */
  void HandleRead (Ptr<Socket> socket);

  uint32_t m_pktRx;                   //!< The number of received packets
  uint32_t m_bytesRx;                 //!< Total bytes received

  Ptr<Socket> m_socket;               //!< Socket
  PacketSocketAddress m_localAddress; //!< Local address
  bool m_localAddressSet;             //!< Sanity check

  /// Traced Callback: received packets, source address.
  TracedCallback<Ptr<const Packet>, const Address &> m_rxTrace;
};

} // namespace ns3

#endif /* PACKET_SOCKET_SERVER_H */
Q: difficult pointer to function I am reading the book "Thinking in C++" by Bruce Eckel. Chapter 3, page 164 (Polish edition), is about pointers to functions. Examples from the book: void * (*(*fp1)(int))[10] float (*(*fp2)(int,int,float))(int) double (*(*(*fp3)())[10])() int (*(*f4())[10])() Can you tell me how I should interpret these and what is created by these examples, because I do not understand the book's solution? A: I hope this tricky rule will help you to unwind such conundrums: http://c-faq.com/decl/spiral.anderson.html A: Let's take 4: int (*(*f4())[10])() It reads: f4 evaluated (f4()) and then dereferenced ((*f4())) can be subscripted ((*f4())[10]), then dereferenced ((*(*f4())[10])) and evaluated to give an int (int (*(*f4())[10])()). It is thus a function returning a pointer to an array of 10 pointers to functions returning int.
{"url":"https:\/\/www.r-bloggers.com\/2018\/03\/exploring-the-underlying-theory-of-the-chi-square-test-through-simulation-part-2\/","text":"[This article was first published on ouR data generation, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)\nWant to share your content on R-bloggers? click here if you have a blog, or here if you don't.\n\nIn the last post, I tried to provide a little insight into the chi-square test. In particular, I used simulation to demonstrate the relationship between the Poisson distribution of counts and the chi-squared distribution. The key point in that post was the role conditioning plays in that relationship by reducing variance.\n\nTo motivate some of the key issues, I talked a bit about recycling. I asked you to imagine a set of bins placed in different locations to collect glass bottles. I will stick with this scenario, but instead of just glass bottle bins, we now also have cardboard, plastic, and metal bins at each location. In this expanded scenario, we are interested in understanding the relationship between location and material. A key question that we might ask: is the distribution of materials the same across the sites? (Assume we are still just counting items and not considering volume or weight.)\n\n### Independence\n\nIf we tracked the number of items for a particular day, we could record the data in a contingency table, which in this case would be $$4 \\times 3$$ array. If we included the row and column totals, it might look like this:\n\nOne way to inspect the data would be to calculate row- and column-specific proportions. From this (albeit stylized example), it is apparent that the proportion of each material is constant across locations \u2013 10% of the items are glass, roughly 30% are cardboard, 40% are plastic, and 20% are metal. 
Likewise, for each material, about 17% are in location 1, 33% in location 2, and 50% in location 3:

This consistency in proportions across rows and columns is the hallmark of independence. In more formal terms, $$P(M = m | L = l) = P(M = m)$$ and $$P(L = l | M = m) = P(L = l)$$. The conditional probability (what we see in a particular row or column) is equal to the overall probability (what we see in the marginal (total) row or column).

The actual definition of statistical independence with respect to materials and location is

$P(M=m \ \& \ L=l) = P(M=m) \times P(L=l)$

The probability on the left is the cell-specific proportion (the count of the number of items with $$M=m$$ and $$L=l$$ divided by $$N$$, the total number of items in the entire table). The two terms on the right side of the equation are the marginal row and column probabilities, respectively. The table of overall proportions gives us an example of data generated from two characteristics that are independent:

There are 116 plastic items in location 3, 19% of the overall items ($$116 \div 600$$). The overall proportion of plastic items is 40%, the overall proportion of items in location 3 is 50%, and $$0.19 \approx 0.4 \times 0.5$$. If we inspect all of the cells, the same approximation will hold.

### Dependence

In the case where the distributions of materials differ across locations, we no longer have independence. Here is an example, though note that the marginal totals are unchanged:

Looking across the row- and column-specific proportions, it is clear that something unique might be going on at each location:

It is apparent that the formal definition of independence might be violated: $$P(M=m \ \& \ L=l) \ne P(M=m)P(L=l)$$.
Look again at plastics in location 3: $$0.30 \ne 0.4 \times 0.5$$.

### The chi-square test of independence

I have been making declarations about independence with my made-up contingency tables, just because I was the all-knowing creator who made them up. Of course, when we collect actual data, we don't have that luxury. That is where the chi-square test of independence helps us.

Here's the general idea. We start off by making the initial assumption that the rows and columns are indeed independent (this is actually our null hypothesis). We then define a test statistic $$X^2$$ as

$X^2 = \sum_{m,l} \frac{(O_{ml} - E_{ml})^2}{E_{ml}}.$

This is just a slight modification of the test statistic we saw in part 1, which was presented as a summary of a $$k \times 1$$ array. In this context, $$X^2$$ is just a summary of the $$M \times L$$ table. As previously discussed, $$X^2$$ has a $$\chi^2$$ distribution with a parameter specifying the degrees of freedom.

The question is, how can we calculate $$X^2$$? The observed data $$O_{ml}$$ are just the observed data. But we don't necessarily know $$E_{ml}$$, the expected value of each cell in the contingency table. These expected values can be defined as $$E_{ml} = P(M=m \ \& \ L=l) \times N$$. If we assume that $$N$$ is fixed, then we are halfway there. All that remains is the joint probability of $$M$$ and $$L$$, $$P(M=m \ \& \ L=l)$$. Under independence (which is our starting, or null, assumption) $$P(M=m \ \& \ L=l) = P(M=m)P(L=l)$$. If we make the additional assumption that the row and column totals (margins) $$R_m$$ and $$C_l$$ are fixed, we can calculate $$P(M=m) = R_m/N$$ and $$P(L=l) = C_l/N$$. So now,

$E_{ml} = \frac{R_m \times C_l}{N}.$

Where does that leave us?
We calculate the test statistic $$X^2$$ and evaluate that statistic in the context of the theoretical sampling distribution suggested by the assumptions of independence and fixed marginal totals. That theoretical sampling distribution is $$\chi^2$$ with some degrees of freedom. If the observed $$X^2$$ is very large and defies the theoretical distribution (i.e. seems like an outlier), we will reject the notion of independence. (This is just null hypothesis testing using the $$X^2$$ statistic.)

### Chi-square tests of our two tables

The test statistic from the first table (which I suggested comes from a scenario where material and location are independent) is relatively small. We would not conclude that material and location are associated:

##                 Sum
##       8  23  29  60
##      28  61  91 180
##      39  85 116 240
##      25  31  64 120
## Sum 100 200 300 600

chisq.test(im)
##
##  Pearson's Chi-squared test
##
## data:  im
## X-squared = 5.0569, df = 6, p-value = 0.5365

In the second case, the test statistic $$X^2$$ is quite large, leading us to conclude that material and location are indeed related, which is as we suspected:

##                 Sum
##      51   5   4  60
##      22  99  59 180
##      21  40 179 240
##       6  56  58 120
## Sum 100 200 300 600

chisq.test(dm)
##
##  Pearson's Chi-squared test
##
## data:  dm
## X-squared = 314.34, df = 6, p-value < 2.2e-16

### Degrees of freedom

The paramount question is: what $$\chi^2$$ distribution does $$X^2$$ have under the independence assumption? If you look at the results of the chi-square tests above, you can see that, under the null hypothesis of independence and fixed margins, these tables have six degrees of freedom, so $$X^2 \sim \chi^2_6$$. But how do we get there? What follows is a series of simulations that start with an unconditional data generation process and end with the final set of marginal conditions.
The idea is to show that by progressively adding stricter conditions to the assumptions, we continuously reduce variability and lower the degrees of freedom.

### Unconditional contingency tables

If we start with a data generation process based on the $$4 \times 3$$ table that has no conditions on the margins or total number of items, the statistic $$X^2$$ is a function of 12 independent Poisson variables. Each cell has an expected value determined by row and column independence. It should follow that $$X^2$$ will have 12 degrees of freedom. Simulating a large number of tables under these conditions and evaluating the distribution of the calculated $$X^2$$ statistics will likely support this.

The initial (independent) table specified above is our starting point:

addmargins(im)
##                 Sum
##       8  23  29  60
##      28  61  91 180
##      39  85 116 240
##      25  31  64 120
## Sum 100 200 300 600

row <- margin.table(im, 1)
col <- margin.table(im, 2)
N <- sum(row)

These are the expected values based on the observed row and column totals:

(expected <- (row/N) %*% t(col/N) * N)
##      [,1] [,2] [,3]
## [1,]   10   20   30
## [2,]   30   60   90
## [3,]   40   80  120
## [4,]   20   40   60

Each randomly generated table is a collection of 12 independent Poisson random variables with $$\lambda_{ml}$$ defined by the "expected" table. The tables are first generated as a collection of columns and stored in a matrix. Here, I am creating 10,000 tables, and I print out the first two in column form:

set.seed(2021)

(lambdas <- as.vector(t(expected)))
##  [1]  10  20  30  30  60  90  40  80 120  20  40  60

condU <- matrix(rpois(n = 10000 * length(lambdas),
                      lambda = lambdas),
                nrow = length(lambdas))

condU[, 1:2]
##       [,1] [,2]
##  [1,]    9   15
##  [2,]   22   11
##  [3,]   31   37
##  [4,]   31   25
##  [5,]   66   71
##  [6,]   71   81
##  [7,]   41   50
##  [8,]   74   87
##  [9,]  138   96
## [10,]   15   20
## [11,]   36   30
## [12,]   71   53

Now, I convert each column to a table and create a "list" of tables.
Here are the first two tables with the row and column margins; you can see that even the totals change from table to table:

condUm <- lapply(seq_len(ncol(condU)),
  function(i) matrix(condU[, i], length(row), length(col), byrow = T))

##                 Sum
##       9  22  31  62
##      31  66  71 168
##      41  74 138 253
##      15  36  71 122
## Sum  96 198 311 605

##                 Sum
##      15  11  37  63
##      25  71  81 177
##      50  87  96 233
##      20  30  53 103
## Sum 110 199 267 576

A function avgMatrix estimates the average and variance of each of the cells (code can be made available if there is interest). The average of the 10,000 tables mirrors the "expected" table. And since all cells (including the totals) are Poisson distributed, the variance should be quite close to the mean table:

sumU <- avgMatrix(condUm, addMarg = T, sLabel = "U")

round(sumU$sampAvg, 0)
##      [,1] [,2] [,3] [,4]
## [1,]   10   20   30   60
## [2,]   30   60   90  180
## [3,]   40   80  120  240
## [4,]   20   40   60  120
## [5,]  100  200  300  600

round(sumU$sampVar, 0)
##      [,1] [,2] [,3] [,4]
## [1,]   10   19   30   60
## [2,]   30   61   90  180
## [3,]   40   80  124  244
## [4,]   20   40   60  122
## [5,]  100  199  308  613

The function estX2 calculates the $$X^2$$ statistic for each contingency table:

estX2 <- function(contMat, expMat) {
  X2 <- sum( (contMat - expMat)^2 / expMat )
  return(X2)
}

X2 <- sapply(condUm, function(x) estX2(x, expected))

## [1] 11.819444 23.162500 17.681944  3.569444 31.123611 14.836111

Comparing the mean and variance of the 10,000 simulated $$X^2$$ statistics with the mean and variance of data generated from a $$\chi^2_{12}$$ distribution indicates that the two are quite close:

trueChisq <- rchisq(10000, 12)

# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)
## [1] 12 12

# Comparing variance
round(c( var(X2), var(trueChisq)), 1)
## [1] 24.5 24.3

### Conditioning on N

If we assume that the total number of items remains the same from day to day (or sample to sample), but we allow the totals to vary by location
and materials, we have a constrained contingency table that looks like this:

The table total is highlighted in yellow to indicate that $$N$$ is fixed. The "metal/location 3" cell is also highlighted because once $$N$$ is fixed and all the other cells are allowed to be randomly generated, that last cell is automatically determined as

$C_{metal, 3} = N - \sum_{ml \ne (metal \ \& \ 3)} C_{ml}.$

The data generation process that reflects this constraint is the multinomial distribution, which is the multivariate analogue of the binomial distribution. The cell probabilities are set based on the proportions of the independence table:

round(probs <- expected/N, 2)
##      [,1] [,2] [,3]
## [1,] 0.02 0.03 0.05
## [2,] 0.05 0.10 0.15
## [3,] 0.07 0.13 0.20
## [4,] 0.03 0.07 0.10

As before with the unconditional scenario, let's generate a large number of tables, each conditional on N. I'll show two tables so you can see that N is indeed constrained:

condN <- rmultinom(n = 10000, size = N, prob = as.vector(t(probs)))
condNm <- lapply(seq_len(ncol(condN)),
  function(i) matrix(condN[, i], length(row), length(col),
                     byrow = T))

##                 Sum
##      12  16  30  58
##      26  67  83 176
##      36  91 119 246
##      21  40  59 120
## Sum  95 214 291 600

##                 Sum
##       8  20  19  47
##      30  64  97 191
##      36  84 112 232
##      21  52  57 130
## Sum  95 220 285 600

And here is the key point: if we look at the mean of the cell counts across the samples, they mirror the expected values. But the variances are slightly reduced. We are essentially looking at a subset of the samples generated above that were completely unconstrained, and in this subset the total across all cells equals $$N$$.
As I demonstrated in the last post, this constraint effectively removes samples with more extreme values in some of the cells, which reduces the variance of each cell:

sumN <- avgMatrix(condNm, sLabel = "N")

round(sumN$sampAvg, 0)
##      [,1] [,2] [,3]
## [1,]   10   20   30
## [2,]   30   60   90
## [3,]   40   80  120
## [4,]   20   40   60

round(sumN$sampVar, 0)
##      [,1] [,2] [,3]
## [1,]   10   19   29
## [2,]   28   53   76
## [3,]   37   70   95
## [4,]   19   39   54

We lost one degree of freedom (the one cell highlighted in grey in the table above), so it makes sense to compare the distribution of $$X^2$$ to a $$\chi^2_{11}$$:

X2 <- sapply(condNm, function(x) estX2(x, expected))

trueChisq <- rchisq(10000, 11)

# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)
## [1] 11 11

# Comparing variance
round(c( var(X2), var(trueChisq)), 1)
## [1] 21.7 22.4

### Conditioning on row totals

We go one step further and condition on the row totals (I am going to skip conditioning on the column totals, because conceptually it is the same thing). Now, the row totals in the table are highlighted, and all of the cells in "location 3" are grayed out. Once a row total is set and the first two elements in that row are generated, the last cell in the row is determined. We are losing four degrees of freedom.

These tables can again be generated using the multinomial distribution, but each row of the table is generated individually. The cell probabilities are all based on the overall column proportions:

round(prob <- col/N, 2)
## [1] 0.17 0.33 0.50

The rows are generated individually based on the fixed total number of items in the row.
Two of the tables are shown again to confirm that the generated tables have the same row totals:

condRow <- lapply(seq_len(length(row)),
  function(i) t(rmultinom(10000, size = row[i], prob = prob)))

condRm <- lapply(seq_len(10000),
  function(i) {
    do.call(rbind, lapply(condRow, function(x) x[i, ]))
  }
)

##                 Sum
##       5  19  36  60
##      32  55  93 180
##      39  76 125 240
##      19  42  59 120
## Sum  95 192 313 600

##                 Sum
##      11  19  30  60
##      36  52  92 180
##      44  74 122 240
##      16  41  63 120
## Sum 107 186 307 600

This time around, the variance of the cells is reduced even further:

sumR <- avgMatrix(condRm, sLabel = "R")

round(sumR$sampAvg, 0)
##      [,1] [,2] [,3]
## [1,]   10   20   30
## [2,]   30   60   90
## [3,]   40   80  120
## [4,]   20   40   60

round(sumR$sampVar, 0)
##      [,1] [,2] [,3]
## [1,]    8   14   15
## [2,]   26   41   46
## [3,]   34   53   59
## [4,]   17   27   31

And let's compare the distribution of the sample $$X^2$$ statistics with the $$\chi^2_8$$ distribution (since we now have $$12 - 4 = 8$$ degrees of freedom):

X2 <- sapply(condRm, function(x) estX2(x, expected))

trueChisq <- rchisq(10000, 8)

# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)
## [1] 8.1 8.0

# Comparing variance
round(c( var(X2), var(trueChisq)), 1)
## [1] 16.4 16.4

### Conditioning on both row and column totals

Here we are at the grand finale, the actual chi-square test of independence, where we condition on both the row and column totals. The whole point of this is to show that once we set this condition, the variance of the cells is reduced far below the Poisson variance. As a result, we must use a $$\chi^2$$ distribution with fewer degrees of freedom when evaluating the $$X^2$$ test statistic.

This final table shows both the constraints on the row and column totals, and the impact on the specific cells. The six grayed-out cells are determined by the column totals once the six other cells are generated. That is, we lose six degrees of freedom.
(Maybe you can now see where $$degrees \ of \ freedom = (\# \ rows - 1) \times (\# \ cols - 1)$$ comes from?)

The process for generating data for a table where both row and column totals are fixed is interesting. I actually wrote some pretty inefficient code based on a simple algorithm tied to the multivariate hypergeometric distribution, which was described here. Luckily, just as I started writing this section, I stumbled upon the R function r2dtable. (Not sure why I didn't find it right away, but I was glad to have found it in any case.) So, with a single line, 10,000 tables can be generated very quickly.

condRCm <- r2dtable(10000, row, col)

Here are the first two generated tables:

addmargins(condRCm[[1]])
##                 Sum
##      14  12  34  60
##      24  64  92 180
##      38  79 123 240
##      24  45  51 120
## Sum 100 200 300 600

##                 Sum
##       7  23  30  60
##      30  60  90 180
##      38  78 124 240
##      25  39  56 120
## Sum 100 200 300 600

And with this most restrictive set of conditioning constraints, the variances of the cell counts are considerably lower than when conditioning on row or column totals alone:

sumRC <- avgMatrix(condRCm, sLabel = "RC")

round(sumRC$sampAvg, 0)
##      [,1] [,2] [,3]
## [1,]   10   20   30
## [2,]   30   60   90
## [3,]   40   80  120
## [4,]   20   40   60

round(sumRC$sampVar, 0)
##      [,1] [,2] [,3]
## [1,]    8   12   13
## [2,]   18   28   31
## [3,]   20   32   36
## [4,]   13   22   24

And take a look at the mean and variance of the $$X^2$$ statistic as it compares to the mean and variance of the $$\chi^2_6$$ distribution:

X2 <- sapply(condRCm, function(x) estX2(x, expected))

trueChisq <- rchisq(10000, 6)

# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)
## [1] 6 6

# Comparing variance
round(c( var(X2), var(trueChisq)), 1)
## [1] 11.8 11.7

I'll leave you with a plot of the cell counts for each of the 10,000 tables generated in each of the conditioning scenarios: unconditional (U), conditional on N (N), conditional on row totals (R),
and conditional on both row and column totals (RC). This plot confirms the point I've been trying to make in this post and the last: adding more and more restrictive conditions progressively reduces variability within each cell. The reduction in degrees of freedom in the chi-square test is the direct consequence of this reduction in within-cell variability.
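As a closing sanity check of the expected-count formula $$E_{ml} = R_m C_l / N$$ and the $$X^2$$ statistic used throughout, the calculation for the first ("independent") table can be reproduced outside of R. Here is a quick sketch in Python rather than the post's R, just restating the same arithmetic:

```python
import numpy as np

# The "independent" contingency table from above (4 materials x 3 locations).
im = np.array([[ 8, 23,  29],
               [28, 61,  91],
               [39, 85, 116],
               [25, 31,  64]])

row = im.sum(axis=1)          # R_m, the row totals: 60, 180, 240, 120
col = im.sum(axis=0)          # C_l, the column totals: 100, 200, 300
N   = im.sum()                # 600

# Expected counts under independence: E_ml = R_m * C_l / N
expected = np.outer(row, col) / N

# X^2 = sum over cells of (O - E)^2 / E
X2 = ((im - expected) ** 2 / expected).sum()
print(round(X2, 4))           # 5.0569, matching chisq.test(im) above
```

The same few lines with the second table reproduce the large statistic (X-squared = 314.34) from the dependent scenario.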
Q: Creating a structure from a file. Please help: is it possible to build a structure from a file and fill in its fields (Marshal.PtrToStructure)? The file is XML and contains lines such as

<ID>b7131367-82e3</ID>
<Layer>Channel</Layer>
<MinFrameLength>20</MinFrameLength>
<...>???</...>
<MaxFrameLength>65535</MaxFrameLength>

How can I turn <ID>, <Layer>, <MinFrameLength>, <...>, <MaxFrameLength> into fields of a structure and assign them the values b7131367-82e3, Channel, 20, ???, 65535, respectively? Input: an XML file (the number of lines, their order, and their names may vary, i.e., they are not known in advance). Task 1: build a structure from the file, where each line becomes a field of the structure. Task 2: fill the structure.

A: The Marshal class is intended for working with unmanaged code. What you need here is ordinary XML deserialization. For the file

<?xml version="1.0" encoding="utf-8"?>
<SomeRootNode>
  <ID>b7131367-82e3</ID>
  <Layer>Channel</Layer>
  <MinFrameLength>20</MinFrameLength>
  <MaxFrameLength>65535</MaxFrameLength>
</SomeRootNode>

the code would look roughly like this:

using System.IO;
using System.Xml.Serialization;

namespace ConsoleApplication1
{
    [XmlRoot("SomeRootNode")]
    public struct MyStruct
    {
        public string ID { get; set; }
        public string Layer { get; set; }
        public byte MinFrameLength { get; set; }
        public ushort MaxFrameLength { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            XmlSerializer serializer = new XmlSerializer(typeof(MyStruct));
            using (var fileStream = File.OpenRead(@"C:\Temp\2.xml"))
            {
                var instance = (MyStruct)serializer.Deserialize(fileStream);
            }
        }
    }
}
#import <UIKit/UIKit.h>

@class SelectModel;

@interface SelectTableViewCell : UITableViewCell

@property (nonatomic, strong) SelectModel *model;

@end
That's right! The Pete's Basement Crew is abdicating their rule of Westeros! We're giving away the Iron Throne!! Well, a 7" die-cast metal replica, at least! From our pals at Dark Horse Comics, we are proud to present our Iron Throne Giveaway Contest! Make sure you tune into this week's episode for details on how to win!
\section{Introduction}
\IEEEPARstart{A}{cquisition} and compression are two crucial stages in imaging systems. The acquired data, describing the visual scene in a degraded form, is further processed to reconstruct images that can be compressed for storage or transmission. The decompressed images are the important outcome of this overall process, and ideally should be of good quality considering the bit-rate required for their binary compressed representations. In practice, this ultimate goal of optimizing the entire processing chain is often neglected, leaving the individual tasks unaware of each other (Fig.~\ref{fig:decoupled_process}) and, thereby, inducing sub-optimal performance with respect to the complete system.
\begin{figure*}[]
\centering
{\subfloat[(a) Decoupled reconstruction and compression]{\label{fig:decoupled_process}\includegraphics[width=0.9\textwidth]{figures/decoupled_process.png}}}
\\
{\subfloat[(b) Joint reconstruction and compression]{\label{fig:joint_process}\includegraphics[width=0.9\textwidth]{figures/joint_process.png}}}
\caption{Two general settings for an MRI processing system: (a) based on decoupled reconstruction and compression stages, and (b) based on a joint reconstruction and compression stage.}
\end{figure*}
Recently there has been significant progress in the joint optimization of multiple tasks within typical imaging systems. This is mainly due to the emerging utilization of optimization techniques relying on variable splitting methods (e.g., the alternating direction method of multipliers (ADMM) \cite{boyd2011distributed}), promoting numerically tractable designs that have improved state-of-the-art performance for various multi-task goals.
Contemporary examples of the benefits of multi-task optimization in the context of MRI include joint reconstruction and segmentation \cite{corona2019enhancing,adler2018task}, reconstruction and registration \cite{corona2019multi,Royuela-del-Val2016,Lingala::2015}, as well as reconstruction and motion estimation \cite{odille2016joint,Rank20174DRM,aviles2018compressed}. The idea in these joint models is to exploit complementary information coming from the different imaging tasks to enhance the image reconstruction quality and ultimately improve the overall performance of the joint optimization. By careful modeling and coupling of the multiple tasks, the authors show accuracy boosts as well as reduced error propagation. Another recent research line addresses multi-task problems involving image and video compression \cite{dar2018optimized,dar2018system,dar2018compression,dar2019benefiting}. This new design philosophy enables optimization of compression with respect to complete acquisition-rendering systems \cite{dar2018optimized,dar2018system,dar2018compression}, multimedia distribution networks \cite{dar2018compression}, and reliable storage systems \cite{dar2019benefiting}. These architectures are based on ADMM and also have the important property of modularity, where a repeated sub-problem in the iterative optimization process is identified as a standard image compression task and, accordingly, replaced with a repeated black-box application of an existing compression method. The improved compression ability (namely, the better rate-distortion trade-offs achieved) is provided in binary compressed data compatible with standard decompression processes that do not require additional post-decompression processing.
This modular optimization approach for compression \cite{dar2018optimized,dar2018system,dar2018compression,dar2019benefiting} relates to the Plug-and-Play Priors framework \cite{venkatakrishnan2013plug,sreehari2016plug}, addressing image reconstruction and restoration problems using ADMM-based iterative procedures that employ denoisers as black-box modules. This utilization of denoisers is motivated by associating a sub-problem in the process with the task of denoising an image contaminated by additive white Gaussian noise. Similar benefits of denoising-based modularity were also obtained by additional architectures employing optimization techniques beyond ADMM; a prominent example is the Regularization-by-Denoising (RED) framework \cite{romano2017little}, also suggesting powerful solutions to intricate restoration tasks based on iterative applications of state-of-the-art denoisers. This recent branch of research attracts great interest, as reflected in a long line of studies presenting new modular algorithms for restoration \cite{venkatakrishnan2013plug,sreehari2016plug,romano2017little,dar2016postprocessing,rond2016poisson,dar2018restoration} and compression \cite{dar2018optimized,dar2018system,dar2018compression,dar2019benefiting} tasks. Lossy compression of magnetic resonance (MR) images is a topic of current interest; see, for example, the overviews given in recent papers \cite{liu2019fast,kumar2020versatile}. Compatibility with existing image compression standards is also desired. For example, note the wavelet-based extension proposed for JPEG2000 in \cite{bruylants2015wavelet}. Also note the region-of-interest based method in \cite{yee2017medical} that is compatible with the Better Portable Graphics (BPG) state-of-the-art image compression format. As explained next, our approach is also compatible with existing image compression standards.
Also, in contrast to the existing studies that focus on the single task of MRI lossy compression, the main contribution of this paper is a framework for the joint optimization of MRI reconstruction and lossy compression. In this paper we propose a new modular optimization approach that brings together the tasks of MRI reconstruction and lossy data compression (see Fig.~\ref{fig:joint_process}). Specifically, given raw MRI measurements obtained from the acquisition stage, we use the ADMM-based compression approach from \cite{dar2018optimized,dar2018system,dar2018compression,dar2019benefiting} in conjunction with the relevant MRI acquisition model and obtain a process that jointly optimizes the tasks of MRI reconstruction and the lossy compression of the reconstructed MR image. The resulting binary compressed data is compatible with an image compression standard of choice that is employed as a black box in the modular optimization process. Moreover, this paper is the first to propose a modular optimization process that includes an explicit, common regularizer (total variation \cite{rudin1992nonlinear} in our case) in modular optimization for lossy compression purposes. This essentially improves the generic image model used in the standard compression by adjusting it to image models that better suit the considered image types. Nicely, this modular optimization still provides compressed data compatible with the compression standard and does not require additional post-decompression processing. We present experiments that consider a noisy linear model for MRI acquisition at several levels of subsampling of coefficients in the Fourier domain (also known as the K-space in the context of MRI). Our experiments show how the state-of-the-art image compression technique, BPG \cite{hevc_software_bpg}, is adjusted and improved for the purpose of MRI reconstruction and compression.
We examine two settings of the proposed method: joint optimization of MRI reconstruction and lossy compression \textit{without any additional regularization}, and joint optimization of MRI reconstruction and lossy compression \textit{with total-variation regularization}. Note that the added total-variation regularization is only active at the compression optimization stage and not at (nor after) the decompression stage. Our experiments show that our joint optimization approach \textit{with total-variation regularization} often achieves significant PSNR gains of 4 to 9 dB at high bit-rates compared to the joint optimization approach \textit{without any additional regularization}. Moreover, our joint optimization approach \textit{with total-variation regularization} provides PSNR gains of 0.5 to 1 dB at high bit-rates compared to a decoupled approach (Fig.~\ref{fig:decoupled_process}) where total-variation based MRI reconstruction is followed by standard lossy compression without any joint optimization process. These impressive PSNR gains at high bit-rates are of great relevance for medical image compression. In addition, we show that lossy compression can significantly improve the MRI reconstruction quality even with respect to total-variation based reconstruction without any compression (i.e., an ideal lossless compression). This paper is organized as follows. In Section \ref{sec:The Proposed Method} we present the development of the modular optimization approach for joint reconstruction and compression of MRI data. In Section \ref{sec:Experimental Results} we provide the experimental results. Section \ref{sec:Conclusion} concludes this paper.
\section{The Proposed Method}
\label{sec:The Proposed Method}
\subsection{Problem Formulation}
\label{subsec:Problem Formulation}
We consider an imaging system model (see Fig.~\ref{fig:joint_process}) where an unknown visual signal, denoted as the $ N $-length column-vector $ \vec{x} \in \mathbb{R}^N $, goes through an MRI acquisition stage modeled via
\begin{IEEEeqnarray}{rCl}
\label{eq:MRI acquisition model}
\vec{y} = \mtx{A} \vec{x} + \vecgreek{\eta} .
\end{IEEEeqnarray}
Here $\mtx{A}$ is an $M\times N$ matrix that acts as a linear operator that subsamples $\vec{x}$ in the discrete Fourier domain (also known as the K-space in the context of MRI) to return ${M \le N}$ measurements of the visual scene embodied in $\vec{x}$. Also, ${\vecgreek{\eta}\in \mathbb{R}^M}$ is a Gaussian noise vector with i.i.d.~components that are zero mean and have variance $\sigma_{\eta}^2$. Then, $\vec{y}$ is a column vector containing $M$ degraded samples in the discrete Fourier domain. The data in $\vec{y}$ should be reconstructed in order to obtain a visually-meaningful image. The linear operator in the acquisition can be formulated as $\mtx{A}\triangleq\mtx{S}\mtx{F}$, where $\mtx{F}$ is the Discrete Fourier Transform matrix of $N\times N$ size, and $\mtx{S}$ is a subsampling operator in the form of an $M\times N$ matrix that selects $M\le N$ samples from its $N$-dimensional input based on a desired binary sampling pattern. The subsampling ratio $N/M$ defines the MRI acceleration factor; e.g., an acceleration factor of 4x keeps and utilizes only 25\% of the coefficients in the Fourier domain. Also note that since we consider a true image $\vec{x}$ which is real valued, we can adapt $\mtx{A}$ and its adjoint $\mtx{A}^{*}$ to be $\mathbb{R}^{N}\rightarrow\mathbb{C}^{M}$ and $\mathbb{C}^{M}\rightarrow\mathbb{R}^{N}$ linear operators, respectively, as described in \cite{ehrhardt2016multicontrast}.
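To make the acquisition model concrete, the operator $\mtx{A}=\mtx{S}\mtx{F}$ and its adjoint can be sketched in a few lines of NumPy. This is only an illustration of the model (not the implementation used in the experiments), and the random mask below is a placeholder for an actual K-space sampling pattern:

```python
import numpy as np

def make_mask(shape, acceleration, seed=0):
    """Binary K-space sampling pattern keeping roughly 1/acceleration
    of the coefficients (real MRI masks are more structured)."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < 1.0 / acceleration

def A(x, mask):
    """Forward operator A = S F: orthonormal 2-D FFT of the real image,
    then keep only the sampled coefficients (a complex vector of length M)."""
    return np.fft.fft2(x, norm="ortho")[mask]

def A_adjoint(y, mask):
    """Adjoint A*: zero-fill the unsampled K-space locations, inverse FFT,
    and keep the real part (the true image is assumed real-valued)."""
    k = np.zeros(mask.shape, dtype=complex)
    k[mask] = y
    return np.real(np.fft.ifft2(k, norm="ortho"))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.random((64, 64))                     # stand-in for the image
    mask = make_mask(x.shape, acceleration=4)    # ~25% of K-space kept
    eta = 0.01 * rng.standard_normal(int(mask.sum()))
    y = A(x, mask) + eta                         # y = A x + eta
    x_zf = A_adjoint(y, mask)                    # zero-filled reconstruction
```

With the orthonormal FFT, the pair satisfies the adjoint relation $\mathrm{Re}\langle \mtx{A}\vec{x}, \vec{y}\rangle = \langle \vec{x}, \mtx{A}^{*}\vec{y}\rangle$, which is the real-valued adaptation mentioned above.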
Before continuing to the reconstruction of $\vec{x}$, one should note that $\vec{y}$ includes complex values that cannot be kept in a binary format with infinite precision. Accordingly, digital storage and/or transmission of $\vec{y}$ requires lossy (or at least near-lossless) compression. Let us describe a general lossy compression procedure by the function $ C: \mathbb{R}^N \rightarrow \mathcal{B} $ that maps the $ N $-dimensional signal domain to a discrete set $ \mathcal{B} $ of compressed representations in binary forms of various lengths. The compression of $ \vec{w}\in \mathbb{R}^N $ is denoted by $\textit{b} = C \left( \vec{w} \right)$, where $ \textit{b} \in \mathcal{B} $ is the binary compressed data to store or transmit. The decompression of $\textit{b}$ is done via $\vec{v} = F \left( \textit{b} \right)$, where $ F: \mathcal{B} \rightarrow \mathcal{S} $ maps binary compressed representations from $ \mathcal{B} $ to their respective decompressed signals in the discrete set $ \mathcal{S} \subset \mathbb{R}^N $. In the case of image compression, the decompressed signal $ \vec{v} $ may be displayed. Moreover, in our MRI acquisition setting, the goal is to get a decompressed image $\vec{v}$ that approximates well the unknown visual signal $\vec{x}$, where the approximation quality is constrained by the number of bits utilized for the compression. Now we turn to define the optimization problem for the joint task of MRI reconstruction and compression.
Since $\vec{x}$ is unknown and only its degraded measurement $\vec{y}$ is available, the suggested distortion metric resembles the fidelity term in inverse problems (see also the discussion in \cite{dar2018system}), taking here the form of \begin{IEEEeqnarray}{rCl} \label{eq:network structure - expected distortion} D_{\mtx{A}}\left( \vec{y}, \vec{v} \right) \triangleq \frac{1}{N} \left\| { \vec{y} - \mtx{A} \vec{v} } \right\|_2^2 \end{IEEEeqnarray} where the matrix $\mtx{A}$ was defined in the MRI acquisition model in (\ref{eq:MRI acquisition model}). The idea in (\ref{eq:network structure - expected distortion}) is that the decompressed image $\vec{v}$, after applying the subsampling operator $\mtx{A}$, should be close to the given measurements $\vec{y}$ up to the noise term. While the minimal desired value of $D_{\mtx{A}}\left( \vec{y}, \vec{v} \right)$ should be a positive value that depends on the noise level \cite{dar2018system}, this value is not required for our method. Our goal is to optimize the rate-distortion performance of the joint reconstruction and compression of the MRI measurements given in $ \vec{y} $. For this, we formulate the task as \begin{IEEEeqnarray}{rCl} \label{eq:rate-distortion optimization - Lagrangian} \hat{ \vec{v}} = \underset{ \vec{v}\in\mathcal{S} }{\text{argmin}} ~~ { R \left( \vec{v} \right) + \lambda D_{\mtx{A}}\left( \vec{y}, \vec{v} \right) + \alpha {\rm TV}\left( \vec{v} \right) } \end{IEEEeqnarray} where $R \left( \vec{v} \right)$ evaluates the length of the binary compressed description $ \textit{b} \in \mathcal{B} $ matched to the decompressed signal $ \vec{v}\in\mathcal{S} $, and $D_{\mtx{A}}\left( \vec{y}, \vec{v} \right)$ is the overall distortion as defined in (\ref{eq:network structure - expected distortion}).
The term ${\rm TV}\left( \vec{v} \right)$ is the total variation of the decompressed image, which is defined in the discrete setting as \begin{equation*} \operatorname{TV}(\vec{v}) = \sum_{(i,j)\in \Omega} \sqrt{| \nabla_1 v(i,j) |^2 + | \nabla_2 v(i,j) |^2} \label{eq:dtv} \end{equation*} where $v$ is the two-dimensional organization of the vector $\vec{v}$ on the discrete two-dimensional image grid $\Omega$. Then, $\nabla_1$ and $\nabla_2$ are the discrete gradient operators in the horizontal and vertical directions of $\Omega$, respectively. The parameters $\lambda\ge 0$ and $\alpha\ge 0$ reflect the relative importance of minimizing the distortion and of the TV regularization, respectively. Moreover, the values of $\lambda$ and $\alpha$ induce the actual number of bits used for the compressed binary representation that is coupled with $\hat{ \vec{v}}$ (such coding without an explicitly specified bit-rate constraint is common, e.g., in video coding \cite{sullivan1998rate,sullivan2012overview}). The unconstrained Lagrangian optimization form in (\ref{eq:rate-distortion optimization - Lagrangian}) resembles the contemporary compression formulations via rate-distortion optimizations (see, e.g., \cite{ortega1998rate,sullivan1998rate,sullivan2012overview}). Yet, it is important to note that in our case we do not only optimize the compression rate and distortion but also include an additional term for (total variation) regularization of the decompressed result. This is a significant feature that regularizes the decompression result already at the compression stage, and not at the post-decompression stage as usually done in compression-artifact reduction methods (e.g., \cite{dar2016postprocessing}). Also note that our proposed method resolves compression artifacts together with the degradation originating in the MRI acquisition.
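For illustration, the discrete isotropic TV above can be computed as in the sketch below. Forward differences with a replicated last row/column are one common convention; the text does not pin down the boundary handling, so treat that choice as an assumption.

```python
import numpy as np

def tv(v):
    """Isotropic discrete total variation of a 2-D image: the sum over
    pixels of sqrt(|horizontal gradient|^2 + |vertical gradient|^2),
    using forward differences with zero gradient at the last row/column."""
    d1 = np.diff(v, axis=1, append=v[:, -1:])  # horizontal differences
    d2 = np.diff(v, axis=0, append=v[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(d1 ** 2 + d2 ** 2)))
```

A constant image has zero TV, while piecewise-constant images with few jumps keep TV small, which is why this regularizer favors cartoon-like reconstructions.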
We consider compression of high-dimensional signals (namely, $ N $ is large); hence, the discrete set $ \mathcal{S} $ is extremely large, such that a direct solution of the discrete optimization form in (\ref{eq:rate-distortion optimization - Lagrangian}) is impractical for non-trivial forms of $ \mtx{A} $ and for $\alpha>0$. In contrast, for $ \mtx{A} = \mtx{I} $ and $\alpha=0$, the optimization in (\ref{eq:rate-distortion optimization - Lagrangian}) reduces to a standard compression form \cite{ortega1998rate,sullivan1998rate} without any reconstruction or regularization aspects and, therefore, can be practically solved using block-based designs that decompose the problem into a set of independent optimizations on non-overlapping blocks of sufficiently low dimensions. \subsection{Modular Optimization Procedure} \label{subsec:Modular Optimization Procedure} We address the computationally challenging, discrete optimization problem (\ref{eq:rate-distortion optimization - Lagrangian}) using the alternating direction method of multipliers (ADMM) technique \cite{boyd2011distributed}. First, we split the optimization variable such that (\ref{eq:rate-distortion optimization - Lagrangian}) becomes \begin{IEEEeqnarray}{rCl} \label{eq:rate-distortion optimization - variable splitting} && \hat{ \vec{v}} = \underset{ \vec{v}\in\mathcal{S} , {\vec{z}}\in\mathbb{R}^N }{\text{argmin}} ~~ { R \left( \vec{v} \right) + \lambda D_{\mtx{A}}\left( \vec{y}, \vec{z} \right) + \alpha {\rm TV}\left( \vec{z} \right) } \nonumber \\ && \text{subject to} ~~ \vec{z} = \vec{v} \end{IEEEeqnarray} where $ \vec{z} \in \mathbb{R}^N $ is an auxiliary variable that is not constrained to the discrete set $ \mathcal{S} $. Then, the scaled form of the augmented Lagrangian and the method of multipliers \cite[Ch.
2]{boyd2011distributed} translate (\ref{eq:rate-distortion optimization - variable splitting}) into the iterative process \begin{IEEEeqnarray}{rCl} \label{eq:rate-distortion optimization - augmented Lagrangian} && \left\{{ \hat{ \vec{v}}^{(t)}, \hat{\vec{z}}^{(t)} }\right\} = \mathop {{\text{argmin}}}\limits_{ \vec{v}\in\mathcal{S} , {\vec{z}}\in\mathbb{R}^N } \Bigg\lbrace R \left( \vec{v} \right) + \lambda D_{\mtx{A}}\left( \vec{y}, \vec{z} \right) \nonumber \\ && \qquad\qquad\qquad + \alpha {\rm TV}\left( \vec{z} \right) + \frac{\beta}{2}{\left\| { \vec{v} - \vec{z}} + \vec{u}^{(t)} \right\|_2^2} \Bigg\rbrace ~~~~ \\ && \vec{u}^{(t+1)} = \vec{u}^{(t)} + \left( \hat{ \vec{v}}^{(t)} - \hat{\vec{z}}^{(t)} \right), \end{IEEEeqnarray} where $ t $ is the iteration index, $\vec{u}^{(t)} \in \mathbb{R}^N$ is the scaled dual variable, and $ \beta $ is an auxiliary parameter induced by the augmented Lagrangian. Then, applying one iteration of alternating minimization on (\ref{eq:rate-distortion optimization - augmented Lagrangian}) provides the ADMM form of the problem that includes a sequence of simpler optimizations \begin{IEEEeqnarray}{rCl} \label{eq:rate-distortion optimization - ADMM - compression} && \hat{\vec{v}}^{(t)} = \mathop {{\text{argmin}}}\limits_{ \vec{v}\in\mathcal{S} } R \left( \vec{v} \right) + \frac{\beta}{2}{\left\| { \vec{v} - \tilde{ \vec{z}}^{(t)} } \right\|_2^2} \\ \label{eq:rate-distortion optimization - ADMM - deconvolution} && \hat{ \vec{z}}^{(t)} = \mathop {\text{argmin}}\limits_{{\vec{z}}\in\mathbb{R}^N } \lambda D_{\mtx{A}}\left( \vec{y}, \vec{z} \right) + \alpha {\rm TV}\left( \vec{z} \right) + \frac{\beta}{2}{\left\| { \vec{z} - \tilde{ \vec{v}}^{(t)} } \right\|_2^2} ~~~~~~ \\ \label{eq:rate-distortion optimization - ADMM - u update} && \vec{u}^{(t+1)} = \vec{u}^{(t)} + \left( \hat{ \vec{v}}^{(t)} - \hat{\vec{z}}^{(t)} \right) \end{IEEEeqnarray} where $ \tilde{ \vec{z}}^{(t)} = \hat{\vec{z}}^{(t-1)} - \vec{u}^{(t)} $ and $ \tilde{ 
\vec{v}}^{(t)} = \hat{\vec{v}}^{(t)} + \vec{u}^{(t)} $. Importantly, the ADMM form decouples the compression architecture $ \left\lbrace \mathcal{S}, R \right\rbrace $ from the acquisition model and the total variation regularizer. The optimization task in (\ref{eq:rate-distortion optimization - ADMM - compression}) corresponds to the Lagrangian rate-distortion optimization form as in standard compression problems with a squared-error distortion metric. Therefore, similarly to \cite{dar2018optimized,dar2018system,dar2018compression,dar2018restoration}, we replace the solution of (\ref{eq:rate-distortion optimization - ADMM - compression}) with an application of a standard compression (and decompression) method, which does not have to exactly solve the Lagrangian optimization in (\ref{eq:rate-distortion optimization - ADMM - compression}). The standard compression and decompression are denoted here as \begin{IEEEeqnarray}{rCl} \label{eq:rate-distortion optimization - ADMM - compression - standard compression} {\textit{b}}^{(t)} = {\rm StandardCompress} \left( \tilde{ \vec{z}}^{(t)}, \theta \right) \\ \label{eq:rate-distortion optimization - ADMM - compression - standard decompression} \hat{\vec{v}}^{(t)} = {\rm StandardDecompress} \left( {\textit{b}}^{(t)} \right) \end{IEEEeqnarray} where the parameter $ \theta $ generalizes the Lagrange multiplier part in inducing the rate-distortion tradeoff. The proposed procedure (see Algorithm \ref{Algorithm:Proposed Method}) has a generic structure that can employ any compression method together with the MRI reconstruction task. The optimization in (\ref{eq:rate-distortion optimization - ADMM - deconvolution}) is over a continuous domain and its cost includes two quadratic terms and a total variation regularizer.
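As a toy illustration of this modular structure (not the paper's HEVC-based implementation), the sketch below plugs a uniform scalar quantizer in place of StandardCompress/StandardDecompress and takes $\mtx{A}=\mtx{I}$ with $\alpha=0$, so that the $\vec{z}$-update has a closed form; all parameter values are illustrative assumptions.

```python
import numpy as np

def quantize(w, step):
    """Stand-in for StandardCompress followed by StandardDecompress:
    uniform scalar quantization (any black-box codec could replace this)."""
    return step * np.round(w / step)

def admm_joint(y, step=0.25, lam=1.0, beta=1.0, iters=30):
    """ADMM iterations for the split problem with A = I and no TV term:
    the v-update is the "compression" of z - u, and the z-update is the
    closed-form minimizer of (lam/N)||y - z||^2 + (beta/2)||z - (v + u)||^2."""
    n = y.size
    z = y.astype(float).copy()
    u = np.zeros_like(z)
    for _ in range(iters):
        v = quantize(z - u, step)                  # compression step
        v_tilde = v + u
        z = (2.0 * lam / n * y + beta * v_tilde) / (2.0 * lam / n + beta)
        u = u + (v - z)                            # dual-variable update
    return v, z
```

The point of the sketch is the interface: the compressor only ever sees $\tilde{\vec{z}}^{(t)}$, so swapping in a standardized codec changes nothing else in the loop.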
Hence, (\ref{eq:rate-distortion optimization - ADMM - deconvolution}) can be numerically solved using primal-dual algorithms \cite{chambolle2011first,chambolle2016introduction,esser2010general}, e.g., see \cite{corona2019enhancing}. Since each of the three terms in (\ref{eq:rate-distortion optimization - ADMM - deconvolution}) has its own parameter, we can eliminate one of the parameters (e.g., fix $\lambda=1$), as can be observed in Algorithm \ref{Algorithm:Proposed Method}. \begin{algorithm} \caption{Joint Optimization of MRI Reconstruction and Compression with Total Variation Regularization} \label{Algorithm:Proposed Method} \begin{algorithmic}[1] \State Inputs: $ \vec{y} $, $ \theta $, $ \alpha $, $ \beta $. \State Initialize $t = 0, {\hat{\vec{z}}}^{(0)} = \mtx{A}^{*}\vec{y}, \vec{u}^{(1)} = \vec{0}$. \Repeat \State $ t \gets t + 1$ \State $ \tilde{ \vec{z}}^{(t)} = \hat{\vec{z}}^{(t-1)} - \vec{u}^{(t)} $ \State $ {\textit{b}}^{(t)} = {\rm StandardCompress} \left( \tilde{ \vec{z}}^{(t)}, \theta \right) $ \State $ \hat{\vec{v}}^{(t)} = {\rm StandardDecompress} \left( {\textit{b}}^{(t)} \right) $ \State $ \tilde{ \vec{v}}^{(t)} = \hat{\vec{v}}^{(t)} + \vec{u}^{(t)} $ \State \resizebox{.85\hsize}{!}{$ \hat{ \vec{z}}^{(t)} = \mathop {\text{argmin}}\limits_{{\vec{z}}\in\mathbb{R}^N } D_{\mtx{A}}\left( \vec{y}, \vec{z} \right) + \alpha {\rm TV}\left( \vec{z} \right) + \frac{\beta}{2}{\left\| { \vec{z} - \tilde{ \vec{v}}^{(t)} } \right\|_2^2} $} \State $\vec{u}^{(t+1)} = \vec{u}^{(t)} + \left( \hat{ \vec{v}}^{(t)} - \hat{\vec{z}}^{(t)} \right)$ \Until{stopping criterion is satisfied} \State Output: The binary compressed data $ {\textit{b}}^{(t)} $. \end{algorithmic} \end{algorithm} \section{Experimental Results} \label{sec:Experimental Results} In this section we demonstrate the potential of our approach for the joint optimization of MRI reconstruction and compression.
We evaluate our reconstruction quality using the Peak Signal-to-Noise Ratio (PSNR) defined as \begin{equation*} {\rm PSNR} = 10 \log_{10} \left( \frac{P^2}{ \frac{1}{N}\|\vec{x} - \hat{\vec{v}}\|_2^2}\right), \end{equation*} where $\vec{x}$ is the groundtruth image, $\hat{\vec{v}}$ is the decompressed MRI reconstruction, $P$ is the maximal value attainable by an image pixel and $N$ is the number of pixels in the image. We evaluate the performance of our proposed approach using the state-of-the-art image (and video) compression standard HEVC \cite{sullivan2012overview} in its BPG implementation \cite{hevc_software_bpg}. We consider different compression levels by setting quantization parameter (QP) values from 4 to 49 in steps of 3. In the implementation of our proposed approach we empirically set $\beta$ to a value depending on the specific QP value given to the HEVC compression, specifically, \begin{equation*} \beta = 5.5 - 0.1\text{QP} \end{equation*} where QP values are integers between 0 and 51 (where a lower QP value corresponds to a higher reconstruction quality). The stopping criterion in our experiments was reaching a maximum of 40 iterations, or earlier when $\hat{\vec{v}}^{(t)}$ and $\hat{\vec{z}}^{(t)}$ are detected to converge or diverge. The convergence/divergence is defined based on the total absolute difference between $\hat{\vec{v}}^{(t)}$ and $\hat{\vec{z}}^{(t)}$ in each iteration, i.e., $w^{(t)}\triangleq \|\hat{\vec{v}}^{(t)} - \hat{\vec{z}}^{(t)}\|_{1}$. Specifically, we determine convergence when $|w^{(t)} - w^{(t-1)}| < 0.5$ for three consecutive iterations, whereas divergence is identified when $|w^{(t)} - w^{(t-1)}|> 50$. The datasets used in our experiments are the following: \begin{itemize} \item Brain\footnote{\url{https://github.com/veronicacorona/multicontrastSegmentation}}: the dataset is taken from \cite{CoronaHBM}.
The data is a T$_1$-weighted volume acquired using a 3D MPRAGE sequence with the following scan parameters: Inversion time = 1,100 ms, flip angle $\alpha=7^{\circ}$, echo time (TE) = 4.37 ms, receiver bandwidth (RB) = 140 Hz per pixel, echo spacing = 11.1 ms, repetition time (TR) = 2,500 ms; $256 \times 256 \times 192$ matrix dimensions. We refer to these images as Brain 1, 2 and 3. \item Liver\footnote{\url{http://www.vision.ee.ethz.ch/~organmot/chapter_download.shtml}}: The dataset consists of 4D MRI data acquired during free-breathing of the right liver lobe \cite{Siebenthal2007}. It was acquired on a 1.5T Philips Achieva system using a T$_1$-weighted gradient echo sequence, TR=3.1 ms, slices=25, matrix size = $195\times166$, over roughly one hour on 22 to 30 sagittal slices and a temporal resolution of $2.6-2.8$ Hz. \end{itemize} \subsection{Results for Settings Without Total-Variation Regularization} Let us start by examining the experimental results for the following methods that do not include total-variation regularization: \begin{itemize} \item \textit{Proposed joint optimization without TV-regularization}: This approach is obtained by Algorithm \ref{Algorithm:Proposed Method} with $\alpha=0$. This setting will provide joint optimization of the MRI reconstruction and lossy compression such that, effectively, the MRI reconstruction benefits from the implicit regularization introduced by the lossy compression (see \cite{dar2018restoration} for additional details on this type of complexity regularization). The PSNR versus bit-rate results for this setting appear in blue curves in Figures \ref{fig:all_4x_option1}-\ref{fig:option1_brain1}. \item \textit{Decoupled MRI reconstruction and compression without TV-regularization}: in this competing method, the MR image is reconstructed from the measurements (without regularization), and then the reconstructed MR image goes through a standard compression (see the general description of the flow in Fig.~\ref{fig:decoupled_process}).
The unregularized reconstruction stage is done by setting the missing K-space samples to zeros and then computing the inverse Fourier transform of the zero-filled data to get an MR image. The PSNR versus bit-rate results for this setting appear in black curves in Figures \ref{fig:all_4x_option1}-\ref{fig:option1_brain1}. \item \textit{No compression, only zero-filled MRI reconstruction (without TV regularization)}: in this setting we evaluate the PSNR of the MR image obtained in a reconstruction process (without TV regularization) operated in 64-bit numerical resolution. In this reconstruction process the missing K-space samples are set to zeros and then inverse Fourier transform is applied to obtain the MR image. In our comparisons we will use only the PSNR values of this setting that appear as constant red-lines in Figures \ref{fig:all_4x_option1}-\ref{fig:option1_brain1}. \end{itemize} \begin{figure*} \centering \subfloat[]{ \raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{Without TV Reg.}}}} \subfloat[(a) Brain 1]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_mprage140_25samp_psnr.png}}~ \subfloat[(b) Brain 2]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_mprage130_25samp_psnr.png}}~ \subfloat[(c) Brain 3 ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_mprage120_25samp_psnr.png}}~ \subfloat[(d) Liver]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_liver_25samp_psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization without TV-regularization (blue lines) to the decoupled MRI reconstruction and compression without TV-regularization (black lines), and to no compression, only zero-filled MRI reconstruction (without TV regularization, red lines). All results are for acceleration factor 4x. Each subfigure is for a different dataset. 
} \label{fig:all_4x_option1} \end{figure*} \begin{figure*}[t] \centering \subfloat[]{ \raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{Without TV Reg.}}} } \subfloat[(a) Full]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_liver_100samp_psnr.png}}~ \subfloat[(b) 2x]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_liver_50samp_psnr.png}}~ \subfloat[(c) 4x ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_liver_25samp_psnr.png}}~ \subfloat[(d) 8x]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_liver_12samp_psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization without TV-regularization (blue lines) to the decoupled MRI reconstruction and compression without TV-regularization (black lines), and to no compression, only zero-filled MRI reconstruction (without TV regularization) (red lines). All results are for the dataset Liver. Each subfigure is for a different acceleration factor.} \label{fig:option1_liver} \end{figure*} \begin{figure*} \centering \subfloat[]{\raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{Without TV Reg.}}} } \subfloat[(a) Full]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_mprage140_100samp_psnr.png}}~ \subfloat[(b) 2x]{\includegraphics[width=0.22\textwidth]{figures_psnr/option1_mprage140_50samp_psnr.png}}~ \subfloat[(c) 4x ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_mprage140_25samp_psnr.png}}~ \subfloat[(d) 8x]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option1_mprage140_12samp_psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization without TV-regularization (blue lines) to the decoupled MRI reconstruction and compression without TV-regularization (black lines), and to no compression, only zero-filled MRI reconstruction (without TV regularization) (red lines). All results are for the dataset Brain 1. 
Each subfigure is for a different acceleration factor.} \label{fig:option1_brain1} \end{figure*} Considering the three options that do not include total-variation regularization, in Fig.~\ref{fig:all_4x_option1} we show PSNR-bitrate curves for our four datasets and acceleration factor 4x (i.e., keeping 25\% of the Fourier-domain coefficients). In these results (in Fig.~\ref{fig:all_4x_option1}) we can note that both the decoupled approach and the proposed joint optimization (i.e., Algorithm \ref{Algorithm:Proposed Method} with $\alpha=0$) perform significantly better than the ``no compression'' option, suggesting that the lossy compression in our approach acts as a regularizer. Moreover, note that our proposed joint optimization further outperforms the decoupled approach, especially in the mid-range of bit rates. In Fig.~\ref{fig:option1_liver}-\ref{fig:option1_brain1}, we also report the PSNR-bitrate curves for two datasets (Liver and Brain 1) for several acceleration factors. These results demonstrate that our method achieves significant gains across various acceleration factors. \begin{figure*} \centering \subfloat[(a) Ground truth]{\includegraphics[width=0.22\textwidth]{figures_visual/mprage140.png}}~ \subfloat[(b) No compression, only zero-filled reconstruction \protect\\ PSNR=21.72]{\includegraphics[width=0.22\textwidth]{figures_visual/mprage140_50samp_xinv.png}}~ \subfloat[(c) Decoupled without TV reg. \protect\\ PSNR=22.77]{\includegraphics[width=0.22\textwidth,]{figures_visual/option1_mprage140_50samp_reg_31.png}}~ \subfloat[(d) Proposed without TV reg.\protect\\ PSNR=24.78]{\includegraphics[width=0.22\textwidth,]{figures_visual/option1_mprage140_50samp_ours_31.png}} \caption{Visual results for data Brain 1 with compression QP 31 and acceleration factor 2x.
We show the (a) ground truth image, (b) no-compression zero-filled reconstruction, (c) decoupled MRI reconstruction and compression \textit{without} TV-regularization, and (d) the proposed joint optimization \textit{without} TV-regularization. } \label{figu:visual_option1_brain1} \end{figure*} In \autoref{figu:visual_option1_brain1} we present visual results for the Brain 1 image for the settings without TV regularization. These results clearly show the gains in visual quality obtained using lossy compression in settings that do not include other regularization types (e.g., total variation). \subsection{Results for Settings With Total-Variation Regularization} Now we proceed to the main part of the results, and evaluate methods with total-variation regularization: \begin{itemize} \item \textit{Proposed joint optimization with TV-regularization}: This approach is obtained by Algorithm \ref{Algorithm:Proposed Method} with $\alpha>0$. Then, the joint optimization of the MRI reconstruction and lossy compression includes TV-regularization in addition to the implicit regularization provided by the lossy compression. The PSNR versus bit-rate results for this setting are represented by the blue curves in Figures \ref{fig:all_4x_option2}-\ref{fig:option2_brain1}. \item \textit{Decoupled MRI reconstruction and compression with TV-regularization}: in this competing approach, the MR image is reconstructed (using TV regularization) from the measurements, and then compressed by a standard technique (see the general description of the process in Fig.~\ref{fig:decoupled_process}). The PSNR versus bit-rate results for this setting are represented by the black curves in Figures \ref{fig:all_4x_option2}-\ref{fig:option2_brain1}. \item \textit{No compression, only MRI reconstruction using TV regularization}: in this setting we measure the PSNR of the MR image reconstructed using TV regularization in a 64-bit numerical resolution. 
In our comparisons we will refer only to the PSNR values of this setting that are represented by constant red-lines in Figures \ref{fig:all_4x_option2}-\ref{fig:option2_brain1}. \end{itemize} The three options that use total-variation regularization provide the PSNR-bitrate curves in Figures \ref{fig:all_4x_option2}-\ref{fig:option2_brain1} for various datasets and acceleration factors. These results show that the proposed joint optimization with TV regularization (i.e., Algorithm \ref{Algorithm:Proposed Method} with $\alpha>0$) significantly outperforms, in particular in the medium and high bit-rate range, the two other alternatives (that by themselves utilize TV regularization, as explained above). These PSNR gains are summarized in Table \ref{table}, which shows the average PSNR difference between performance curves. The average PSNR differences between curves were calculated using the BD-PSNR metric \cite{bjontegaard2001calculation,BDPSNR_Matlab}, for the entire bit-rate range (i.e., the complete curves generated for the QP values) and for curve segments corresponding to high bit-rates (defined by QP values 4, 7, 13 and 19). Specifically, the results in Table \ref{table} show that our joint optimization method (with TV regularization) is able to achieve PSNR gains of up to about 1 dB at high bit-rates over the decoupled approach that utilizes TV regularization. Moreover, Table \ref{table} exhibits the importance of the TV regularization to the proposed method: the PSNR gains of Algorithm \ref{Algorithm:Proposed Method} using TV regularization over Algorithm \ref{Algorithm:Proposed Method} without TV regularization (i.e., with $\alpha=0$) range from 4 to 9 dB. This is strong evidence for the effectiveness of the proposed joint optimization that includes a novel combination of TV regularization and lossy compression. \begin{table*}[t!]
\caption{~~~~~~~~~~~~~ Average PSNR Gains (DB, measured using BD-PSNR) of Algorithm \ref{Algorithm:Proposed Method} with TV-regularization over Two Alternative Methods: Decoupled Reconstruction and Compression with TV-regularization, and Algorithm \ref{Algorithm:Proposed Method} without TV-regularization.} \renewcommand{\arraystretch}{1.1} \label{table} \centering \begin{tabular}{|c|c||c|c||c|c|} \hline \multirow{2}{*}{\bfseries \shortstack{Image}} & \multirow{2}{*}{\bfseries \shortstack{MRI\\Acceleration\\ Factor }} & \multicolumn{2}{|c|}{\bfseries \shortstack{All Bit-Rates}} & \multicolumn{2}{|c|}{\bfseries\shortstack{High Bit-Rates}} \\ \cline{3-6} & & \shortstack{~\\Proposed with TV-Reg.\\over\\Decoupled with TV Reg.} & \shortstack{~\\Proposed with TV-Reg.\\over\\Proposed without TV Reg.} & \shortstack{~\\Proposed with TV-Reg.\\over\\Decoupled with TV Reg.} & \shortstack{~\\Proposed with TV-Reg.\\over\\Proposed without TV Reg.} \\ \hline\hline Brain 1 & 2x & 0.2127 & 5.3815 & 0.4311 & 9.3681 \\ \cline{2-6} & 4x & 0.3862 & 4.9906 & 0.9572 & 7.1224 \\ \cline{2-6} & 8x & 0.5048 & 3.7934 & 0.9414 & 4.5415 \\ \hline\hline Brain 2 & 2x & 0.2884 & 5.1590 & 0.6876 & 9.1983 \\ \cline{2-6} & 4x & 0.2754 & 4.4242 & 1.1127 & 6.6918 \\ \cline{2-6} & 8x & 0.4560 & 3.3232 & 0.8553 & 4.0410 \\ \hline\hline Brain 3 & 2x & 0.4075 & 4.8432 & 0.999 & 8.8626 \\ \cline{2-6} & 4x & 0.4296 & 4.5707 & 1.1753 & 6.6334 \\ \cline{2-6} & 8x & 0.4177 & 3.2280 & 1.0315 & 4.1089 \\ \hline\hline Liver & 2x & -0.1110 & 5.0050 & 0.1190 & 9.8255 \\ \cline{2-6} & 4x & 0.1247 & 4.3601 & 0.4618 & 7.0811 \\ \cline{2-6} & 8x & 0.0262 & 3.1015 & 0.1728 & 4.0402 \\ \hline\hline \hline \end{tabular} \end{table*} \begin{figure*}[t] \centering \subfloat[]{\raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{With TV Reg.}}} } \subfloat[(a) Brain 1]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_mprage140_25samp_psnr.png}}~ \subfloat[(b) Brain 
2]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_mprage130_25samp_psnr.png}}~ \subfloat[(c) Brain 3 ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_mprage120_25samp_psnr.png}}~ \subfloat[(d) Liver]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_liver_25samp_psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization using TV-regularization (blue lines) to the decoupled MRI reconstruction and compression using TV-regularization (black lines), and to no compression, only TV-regularized MRI reconstruction (red lines). All results are for acceleration factor 4x and regularization parameter $\alpha=0.01$. Each subfigure is for a different dataset. } \label{fig:all_4x_option2} \end{figure*} \begin{figure*}[t!] \centering \subfloat[]{\raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{With TV Reg.}}} } \subfloat[(a) Full]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_liver_100samp_0003psnr.png}}~ \subfloat[(b) 2x]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_liver_50samp_0003psnr.png}}~ \subfloat[(c) 4x ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_liver_25samp_0003psnr.png}}~ \subfloat[(d) 8x]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_liver_16samp_0003psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization using TV-regularization (blue lines) to the decoupled MRI reconstruction and compression using TV-regularization (black lines), and to no compression, only TV-regularized MRI reconstruction (red lines). All results are for the dataset Liver and regularization parameter $\alpha=0.01$. Each subfigure is for a different acceleration factor.} \label{fig:option2_liver} \end{figure*} \begin{figure*}[t!] 
\centering \subfloat[]{ \raisebox{1.5cm}{\rotatebox[origin=t]{90}{\colorbox{gray!10}{With TV Reg.}}} } \subfloat[(a) Full]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_mprage140_100samp_psnr.png}}~ \subfloat[(b) 2x]{\includegraphics[width=0.22\textwidth]{figures_psnr/option2_mprage140_50samp_psnr.png}}~ \subfloat[(c) 4x ]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_mprage140_25samp_psnr.png}}~ \subfloat[(d) 8x]{\includegraphics[width=0.22\textwidth,]{figures_psnr/option2_mprage140_12samp_psnr.png}} \caption{PSNR-bitrate curves comparing the proposed joint optimization using TV-regularization (blue lines) to the decoupled MRI reconstruction and compression using TV-regularization (black lines), and to no compression, only TV-regularized MRI reconstruction (red lines). All results are for the dataset Brain 1 and regularization parameter $\alpha=0.01$. Each subfigure is for a different acceleration factor.} \label{fig:option2_brain1} \end{figure*} In \autoref{fig:visual_option2_brain3} we provide visual results for the Brain 3 image for the methods that utilize TV regularization. These results clearly show the gains in visual quality (and PSNR) obtained by adding the TV regularization to the lossy compression. \begin{figure*} \centering \subfloat[(a) Ground truth]{\includegraphics[width=0.22\textwidth]{figures_visual/mprage120.png}}~ \subfloat[(b) No compression, only TV-reg.~reconstruction \protect\\ PSNR=32.56]{\includegraphics[width=0.22\textwidth]{figures_visual/mprage120_50samp_TVrec.png}}~ \subfloat[(c) Decoupled with TV reg. \protect\\ PSNR=32.50]{\includegraphics[width=0.22\textwidth,]{figures_visual/option2_mprage120_50samp_reg_16.png}}~ \subfloat[(d) Proposed with TV reg. \protect\\ PSNR=33.74]{\includegraphics[width=0.22\textwidth,]{figures_visual/option2_mprage120_50samp_ours_16.png}} \caption{Visual results for data Brain 3 with compression QP 16 for regularized optimization and acceleration factor 2x. 
We show the (a) ground truth image, (b) no-compression TV-regularized reconstruction ($\alpha=0.02$), (c) decoupled MRI reconstruction and compression \textit{using} TV-regularization ($\alpha=0.02$), and (d) the proposed joint optimization \textit{using} TV-regularization ($\alpha=0.01$). } \label{fig:visual_option2_brain3} \end{figure*} \section{Conclusion} \label{sec:Conclusion} In this paper we proposed a new modular optimization method for joint reconstruction and lossy compression of MRI data. The method addresses the degradations in the MRI acquisition stage and provides a compressed representation compatible with a compression standard of choice. We demonstrated that lossy compression can improve the reconstruction quality compared to settings based on lossless compression. An additional novelty is the consideration of a total variation regularizer at the compression stage, leading to a decompressed image of better quality without any processing at (or after) the decompression stage. The proposed method significantly outperforms the relevant competing methods at medium and high bit-rates, showing strong potential for clinical applications. Future research directions may explore our approach in conjunction with other regularizers, as well as with automated parameter optimization (e.g., using bi-level optimization \cite{calatroni2017bilevel}). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,884
{"url":"http:\/\/talkstats.com\/threads\/disjoint-and-dependent-events.76153\/","text":"disjoint and dependent events\n\nosamastat\n\nNew Member\nHello\ncan we say every disjoint events are dependent events ?\n\nfed2\n\nActive Member\nmy vote is yes, they are dependant. independance means prob A and B = prob A * prob B. so disjoint then we have 0 = prob A * Prob B, which won't hold except in stupid situations.","date":"2021-02-27 01:31:13","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9384920001029968, \"perplexity\": 3722.2268308818816}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178358033.38\/warc\/CC-MAIN-20210226234926-20210227024926-00447.warc.gz\"}"}
THE NEW RAY BRADBURY REVIEW - announced for October 2016

I've been editing issue five of the annual New Ray Bradbury Review, and it has now been listed in the publisher's catalogue for 2016. October sounds a long way off, but with any luck, copies may become available earlier than this date; they sometimes do. The issue is entirely devoted to articles related to the Francois Truffaut film of Fahrenheit 451, which is fifty years old in 2016. I managed to pull together contributors from four continents for a wide-ranging look at the film, its contexts, its influence and its curious strengths and weaknesses. The film is usually considered to be flawed - and indeed Truffaut scholars often rate it as one of his lesser works. But it remains just about the only film made from a Bradbury work by a major figure in world cinema. It's fun to speculate what a Kurosawa, a Fellini or a David Lean might have made of a Bradbury story - and Bradbury tried to work with all of these directors and more - but we do at least have a Truffaut version of Bradbury.

The New Ray Bradbury Review is edited at the Center for Ray Bradbury Studies in Indianapolis under the general editorship of the Director, Jon Eller, and is published by Kent State University Press. The publisher's catalogue page for the Review can be viewed here: http://issuu.com/dcrosby/docs/2016_catalog_complete_web/15?e=2256225/31544935

Posted by Phil at 2:53 pm
Labels: Fahrenheit 451, New Ray Bradbury Review, Truffaut

Connor Sondergeld said...
I personally have never read a Ray Bradbury review, but am very interested in reading this one. I am working on a school project called the author study, and am researching Ray Bradbury. One of his books that I am reading is Fahrenheit 451, and I plan on watching the movie when I finish. I now plan to read this article in the Ray Bradbury Review. I will probably finish the book and the movie before the article comes out, but hopefully I can cite the article in my project.
How did you get into Ray Bradbury, and are there any authors similar to him that I should read?

Hi Connor, I hope you're enjoying FAHRENHEIT 451. I think you're doing the right thing by reading the book first, and watching the film second. I think it will be interesting that way round. I got into Ray Bradbury at school, when my class was given a Bradbury short-story collection to read, called THE GOLDEN APPLES OF THE SUN. Bradbury is a master of the short story, and every story in GOLDEN APPLES is different. There's science fiction, fantasy, mystery, and a couple of "realist" stories, too. Other authors who are similar in some way: you could try Richard Matheson, who was a friend of Bradbury; or Jack Finney; or Harlan Ellison. If you're enjoying F451, you should also read 1984 by George Orwell. Thanks for your comment, and I hope you get to read THE NEW RAY BRADBURY REVIEW when it comes out next year. - Phil

Thank you for the recommendations, Phil! They're very useful to me. Your response was very thorough and well written. Is writing a part of your career? I also have some questions about F451. I don't quite understand the concept of the 'parlor', and the 'families' that are there. I wonder if this could be a representation of something that Bradbury is not directly stating. Or will I find out more towards the end of the book? Also, I don't understand what the earpieces are meant to look like, or what they represent. The book sometimes describes them as a moth, buzzing in Montag's ear. Is this just a metaphor? Or have they done something futuristically scientific to make things work this way, similar to the shells in Guy's wife's ears that broadcast the radio? Any additional insight is welcome. Thanks, Connor.

The "parlor" is the room in the house where Mildred entertains her friends. She has had it fitted with giant TV screens which take up three of the walls (and she desperately wants to add one to the fourth wall, to complete the immersive experience).
The "family" are the fictional characters in the soap operas that Mildred watches. In FAHRENHEIT 451, soap operas are larger than life, interactive dramas. Mildred gets more pleasure from escaping into these dramas than she gets from real-life interactions with people. It's all intended as a satire on how we get absorbed in meaningless trivia on TV (and nowadays in computer games etc). Bradbury usually speaks in metaphors. He rarely tells you what a piece of technology actually looks like. He prefers to give you an impression of it. So yes, the earpiece buzzes like a moth. Imagine yourself sitting next to someone on a bus, and they're listening to music on earphones. You won't hear the music clearly, but you will hear an annoying, endless buzzing. In other places, he describes the earpieces as "thimbles". That probably gives the best impression of what they may look like. The remarkable thing is that Bradbury was describing these phenomena decades before they were as pervasive as they are in our present-day lives.
Lab 2: Working with let and lists

This is the archived website of SI 413 from the Fall 2013 semester. Feel free to browse around; you may also find more recent offerings at my teaching page.

This lab is due at 2359 on Thursday, 5 September. It should be submitted as "413 lab 02", with the files named properly according to the submit script. See the submit page for details. You are highly encouraged to submit whatever you have done by the end of this lab. Remember that duplicate submissions are fine; I will just grade the latest one when the time comes.

# Testing!

A major theme of this class is understanding careful specifications, and a consequence is that, for the programming parts at least, you should be able to know the quality of your solution before submitting it. To this end, you will be given the time and the credit for carefully testing your code. Remember that the purpose of testing is to find bugs in code. Therefore a good test is one that fails a piece of code. Don't feel bad about this; that's the point of testing! Easy tests might make you feel good, but are useless. Careful testing helps you find bugs before they cause problems.

Starting with this lab, you will write and submit (unit) tests with each of your labs. The full details of how these tests are submitted are outlined on the submit page, but here is a general overview.

You will write and submit at least one test for each of the exercises below. Each test case must be in a separate file, and each one is in two parts: First, there is a single Scheme expression (possibly following some definitions) that will be evaluated in the context of the file that's being tested. Then you have another Scheme expression for the "expected" result. These are separated by a single line that contains just a semicolon ; and nothing else.

For example, to test the to-celcius function from Lab 1, the following file would work:

(to-celcius 77)
;
25

Notice that the tests don't have to be long or complicated! You write each of these tests into a different file, and put all those files in the tests subdirectory of your submission.

Important tip: You need to be careful when testing functions that return inexact numbers, that you account for round-off errors in the calculation. The easiest way to do this is to use the built-in round function; for example, the public test for the first exercise below looks like:

(round (* (to-usdollar 500 'yen) 100))
;
509.0

An easy script is provided to test your solution against the tests you have written. If you have set up your environment properly, you can run the command

schemetest your_file.scm test1.scm test2.scm ...

to test the Scheme program your_file.scm against those test files. In fact, this is the same script I run to do the sanity tests and ultimately to grade your work!

There is also a collection of "public tests" (basically corresponding to the examples in the text of the lab) that I have written for you. There is another script provided to test your submission against all your tests as well as all the public tests. If you pass all these tests, you can feel reasonably confident that you have understood the basic intent of all the exercises and named your functions correctly. This "sanity check" against your tests and the public tests is also automatically run every time you submit.

After the deadline, I will test your code and I will test your tests! First, your tests are run against a correct sample solution, and against a very incorrect anti-solution. You will lose credit for any tests you write which my (correct) solution fails, or which my (incorrect) anti-solution passes. These bad tests are also discarded for the next round, in which your code is tested with everyone else's "good tests". Here, you will receive credit for other students' tests that you pass, as well as your tests that other students' code fails. So you will be rewarded for writing difficult tests, and for passing the difficult tests that everyone else writes!

My hope is that taking the time to do this testing makes you more confident in your own code, and encourages you to think carefully about what mistakes you might make before you make them. I would love to hear any feedback you have about the testing process in our labs.

# Symbols

Recall that a symbol is just a quoted identifier, such as 'si413, or 'wombats. We can use the function symbol? to find out whether something is a symbol, and eqv? to compare two symbols.

#### Exercises

For these exercises, use the following exchange rates:

1 US dollar = 0.76 Euros (euro)
1 US dollar = 98.18 Japanese Yen (yen)
1 US dollar = 1109.85 South Korean Won (won)

1. Conversion to US dollars
Create a function (to-usdollar amt cur) that takes an amount of money amt in some foreign currency cur and returns that amount in US dollars. The parameter cur will be one of the symbols 'euro, 'yen, 'won, or 'usd.
So for instance (to-usdollar 500 'yen) produces 500/98.18, which comes out to 5.0927....

2. Conversion from US dollars
Create a function (from-usdollar amt cur) that does the opposite: takes an amount in US dollars and converts it to the named currency.
If you're clever, you can write a helper function for this problem and the previous one that will avoid your having to enter the conversion rates twice.

3. Any kind of conversion!
Create a function (convert amt fromCur toCur) that takes an amount in currency fromCur and converts it to currency toCur.
Use your functions from parts 1 and 2 and life will be easy!

# Lists

Recall the "big 4" of list processing:

• '(): the name for an empty list.
• (cons a L): Take an item a and a list L and return the new list with a inserted in the front of L.
• (car L): Returns the first item in L.
• (cdr L): Returns the list containing all the items in L after the first item.

If you're faced with nested lists, you sometimes need cars of cdrs, and cdrs of cars, and so forth. The shortcut for a bunch of these in a row is cXXXr, where each X is either a or d, corresponding to car and cdr.

There are two more extremely useful shortcuts:

• list: Takes an arbitrary number of items and makes a single list containing them.
• append: Takes an arbitrary number of lists and makes a single list containing all their items.

Make sure you understand the difference between these two!

In class we looked at the general pattern for a recursive function on a list. When that function is also producing a list, it extends the pattern by (usually) returning '() in the base case, and (usually) returning a cons in the general case. For example, here is a simple recursive function that multiplies every number in a list by 2:

; alon must be a list of numbers.
(define (mulby2 alon)
  (if (null? alon)
      '()
      (cons (* 2 (car alon))
            (mulby2 (cdr alon)))))

See what's happening? Make sure you can identify the base case condition, the base case return value, and the recursive call. Notice what gets multiplied by 2, and where it happens. If it's confusing, now is the time to ask your able and handsome instructor!

#### Exercises

1. List of Squares
Write a function squares that takes integers i and j and returns the list of the squares i^2, (i+1)^2, ..., (j-1)^2, j^2.
> (squares 2 12)
(4 9 16 25 36 49 64 81 100 121 144)

2. Length Comparison
In class we saw that there is a built-in Scheme function length which returns the length of a list. Using this, we could write a function longer? to test whether the first list is longer than the second:

; L1 and L2 must both be lists
(define (longer? L1 L2)
  (> (length L1) (length L2)))

Now I want you to write your own version of longer? that doesn't use length calculation as a subroutine. Instead, you will have to recursively process both L1 and L2 at the same time until one of them is empty.

(I really mean it - no length, even if you write it yourself! I will test your code using an infinite list as one of the arguments. Yes, those exist in Scheme. Look up "immutable cyclic data" in the manual if you're really really curious.)

3. Count cash!
Write a function called sum-cash that returns the value in US dollars of a collection of amounts of different currencies (same currencies as above). The amounts will be given in a list L, such that each element of L is itself a list, whose first element is an amount and whose second element is a currency name. So, for example,

'((12.20 usd) (340 yen) (850 won))

as an argument to sum-cash would mean adding 12.20 dollars, 340 yen and 850 won, and giving the total in dollars. Here's an example:
> (sum-cash '((12.20 usd) (340 yen) (850 won)))
16.4289...

4. Write a function called std-dev that takes a list of numbers and returns their standard deviation. (Recall: the standard deviation of x1, x2, ..., xn is

sqrt( ((x1 - u)^2 + (x2 - u)^2 + ... + (xn - u)^2) / (n - 1) )

where u is the average of x1, x2, ..., xn.) When you write this function, use top-down design. In Scheme this means writing the std-dev function using functions you wish were already defined, then going back and defining them later. It can be useful to quote these function calls before you implement them, so you can even do top-down testing!
Example:
> (std-dev '(34 18 25 23 29 11 28 24 27 29))
6.460134157533676

5. Uniquefication (OPTIONAL)
Write a function uniquefy that takes a list of numbers, and returns a list with all the distinct numbers in the original, i.e., with all the duplicate numbers removed. For example:
> (uniquefy '(1 2 4 1 2 2 3 15))
(1 2 4 3 15)

At the very least, you will probably want to also define at least one nice helper function. Try thinking about the efficiency of your function. Could you make it faster?

# Let

The let construct in Scheme allows you to give a name to a common subexpression. For example, consider the expression

(33 * (501 - 33)) / (1 - (33 * (501 - 33)))

The natural way to code this in Scheme is probably

(/ (* 33 (- 501 33)) (- 1 (* 33 (- 501 33))))

But you could say "let a = 33 * (501 - 33), and return a / (1 - a)". That's what let allows you to do:

(let ((a (* 33 (- 501 33))))
  (/ a (- 1 a)))

Essentially, let is the Scheme way of getting local variable functionality. What you've got is the let keyword, followed by a list of variable-name/value pairs, followed by an expression (presumably using the new names) that provides the value of the whole let expression. For example, the following code produces 12 as a result:

(let ((a 2)
      (b 4)
      (c 6))
  (+ a b c))

There is power in using let in functions. Suppose I want to define a function called shifted-cube, which computes (x + 1)^3 for a value x.

(define (shifted-cube x)
  (let ((a (+ x 1)))
    (* a a a)))

Then running (shifted-cube 2) produces the value 27.

#### Exercises

1. Let for a common sub-expression
Using what you just learned, write a function called test-sin that computes 1/((sin x)^2 + 1) + sqrt((sin x)^2 + 1) - ((sin x)^2 + 1)^2

2. Write a function dist that computes the difference (in inches) between two lengths (given in feet and inches).
Example:
> (dist 3 7 2 11) ;;; difference between 3'7'' and 2'11''
8

Write this function using a let expression to create values L1 and L2 for the lengths in inches of the input feet-and-inches lengths.

# Functions as parameters to other functions

As you have perhaps noticed, you never actually tell Scheme what the type of a function parameter is. Thus, if I define some function (f x y), there's nothing to stop me from calling it like this: (f sqrt 4). In other words, there's nothing to stop me from passing f a function as one of its arguments. This is something very special about functional programming languages: functions can be used like any other kind of data. Take a moment to allow this to sink in.

Now check out this example. f uses its first argument, x, as a function. So x should be a function!

> (define (f x y)
    (* y (x y)))
> (f sqrt 4)
8

What happened here? Well, since x is sqrt and y is 4, the evaluation produces (* 4 (sqrt 4)), which is 8. So f is the "apply function x to argument y and then multiply by y" function. Passing functions to functions like this is very powerful. This is part of what we mean when we say that "functions are first class objects" in a functional language.

#### Exercises

1. Finite difference.
Given a function g(x), the finite difference of g(x) at x=n is g(n + 1) - g(n). Define a function (fd-at g n) that takes a function g and a value n and returns the finite difference of g at n. For example:
> (define (f x) (* x x))
> (fd-at f 3)
7

# Functions and lists: map and apply

map is a really useful function that takes functions as arguments. The expression (map f L) applies the function f to each element of the list L and puts the results together in a new list. For example:

> (map abs '(-4 12 -3 -8 11 0))
(4 12 3 8 11 0)

If you have a function with k arguments, then you give map k lists, and it will take the first argument from list1, the second from list2, etc.

> (map * '(2 3 4) '(6 5 4))
(12 15 16)

Another useful function of this type is apply. The expression (apply f L) calls the function f with arguments the elements of L. Here are some examples:

> (apply max '(4 6 2)) ; same as (max 4 6 2)
6
> (apply - '(3 7)) ; same as (- 3 7)
-4

Combining map and apply can be very interesting.

> (define (sqr x) (* x x))
> (map sqr '(1 2 3 4 5 6 7 8 9 10))
(1 4 9 16 25 36 49 64 81 100)
> (apply + (map sqr '(1 2 3 4 5 6 7 8 9 10)))
385

Yet another useful function-that-takes-a-function is filter. The expression (filter f? L) applies the predicate function f? to each element in the list L, and returns the list containing only the elements for which the predicate returns true. For example:

> (filter number? '(a b 2 #t + 4))
(2 4)

Unfortunately, the filter function is not built in to the version of Scheme that we are using. So you will have to copy and paste the function into your code:

(define (filter pred? L)
  (cond ((null? L) '())
        ((pred? (car L))
         (cons (car L) (filter pred? (cdr L))))
        (else (filter pred? (cdr L)))))

#### Exercises

For these, you will probably want to use this helper function. Just copy its definition into your code.

; Returns a list containing integers a, a+1, ..., b.
(define (range a b)
  (if (> a b)
      '()
      (cons a (range (+ a 1) b))))

1. Product of square roots
Create a function sqrt-prod that takes a number n and computes the product of the square roots of the integers from 1 up to n. For example:
> (sqrt-prod 10)
1904.94...

2. Find the special numbers
Define a function special-nums that takes an integer n and computes a list of all integers between 1 and n that are both divisible by 3 and are perfect squares. For example:
> (special-nums 100)
(9 36 81)

Hint: you should make some helper functions that are predicates.

3. Polygonal transformation (OPTIONAL)
Make the following definition for P1:

(define P1 '((3 5) (9 2) (11 6) (8 8) (4 6)))

P1 represents a polygon whose vertex coordinates are given by the pairs in P1. Write a function (translate p d) that takes a polygon (in the sense of P1, i.e., a list of coordinate pairs) and an offset d (represented by a pair, the x offset and the y offset) and translates the given polygon by the given offset.

> (define P1 '((3 5) (9 2) (11 6) (8 8) (4 6)))
> (translate P1 '(1 2))
((4 7) (10 4) (12 8) (9 10) (5 8))

Depending on your approach, the following function might be useful to you:

; Creates a list of k copies of x
(define (repeat x k)
  (if (= k 0)
      '()
      (cons x (repeat x (- k 1)))))

For full credit, your function should also work in higher dimensions. By this I mean, if each point has x, y, and z coordinates, for example, everything should still work, by providing a list of 3 offsets to the translate function.
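As a cross-language aside (my addition, not part of the lab): the map/apply/filter pattern above has direct analogues in Python, which may help if you are coming from that language.

```python
# Rough Python analogues of the Scheme map/apply/filter examples above.
# (This comparison is an illustration added by the editor, not lab material.)

# (map sqr '(1 2 ... 10)) -> map with a lambda
squares = list(map(lambda x: x * x, range(1, 11)))

# (apply + (map sqr ...)) -> folding a list into one value
total = sum(squares)

# (filter number? ...) -> filter with a predicate
evens = list(filter(lambda x: x % 2 == 0, range(1, 11)))

print(squares)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
print(total)    # 385
print(evens)    # [2, 4, 6, 8, 10]

# Scheme's (apply max '(4 6 2)) corresponds to argument unpacking:
print(max(*[4, 6, 2]))  # 6
```

Note that Python's `*` unpacking plays the role of Scheme's apply: both call a function with the elements of a list as individual arguments.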
---
layout: page
title: "Allison Scott Pruitt"
comments: true
description: "blanks"
keywords: "Allison Scott Pruitt,CU,Boulder"
---

<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="https://dl.dropboxusercontent.com/s/pc42nxpaw1ea4o9/highcharts.js?dl=0"></script>
<!-- <script src="../assets/js/highcharts.js"></script> -->
<style type="text/css">
@font-face {
  font-family: "Bebas Neue";
  src: url(https://www.filehosting.org/file/details/544349/BebasNeue Regular.otf) format("opentype");
}
h1.Bebas {
  font-family: "Bebas Neue", Verdana, Tahoma;
}
</style>
</head>

#### TEACHING INFORMATION

**College**: College of Arts and Sciences
**Classes taught**: SOCY 1004, SOCY 2034

#### SOCY 1004: Deviance in U.S. Society

**Terms taught**: Fall 2015, Spring 2016
**Instructor rating**: 5.59
**Standard deviation in instructor rating**: 0.02
**Average grade** (4.0 scale): 3.16
**Standard deviation in grades** (4.0 scale): 0.03
**Average workload** (raw): 2.14
**Standard deviation in workload** (raw): 0.29

#### SOCY 2034: Drugs in United States Society

**Terms taught**: Fall 2016, Spring 2017
**Instructor rating**: 4.84
**Standard deviation in instructor rating**: 0.66
**Average grade** (4.0 scale): 3.57
**Standard deviation in grades** (4.0 scale): 0.07
**Average workload** (raw): 2.07
**Standard deviation in workload** (raw): 0.07
/// XmlHitBox.cs
/// https://github.com/Battenburg/Lysithea

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Serialization;
using Lysithea.Mapping.Xml;
using Lysithea.Mapping;
using System.Xml;
using System.Xml.Linq;

namespace Lysithea.Collisions.Xml
{
    public class XmlHitBox : HitBox, IXmlSerializable
    {
        public XmlHitBox(IPose pose, IEnumerable<IShape> shapes)
            : base(pose, shapes)
        {
        }

        public System.Xml.Schema.XmlSchema GetSchema()
        {
            return null;
        }

        public void ReadXml(XmlReader reader)
        {
            XElement xElement = XElement.Load(reader);
            if (xElement.Name != "HitBox")
            {
                throw new XmlException();
            }
            foreach (XElement descendant in xElement.Elements())
            {
                switch (descendant.Name.ToString())
                {
                    case (@"Pose"):
                    {
                        pose = XmlPose.FromXmlReader(descendant.CreateReader());
                        break;
                    }
                    case (@"Shapes"):
                    {
                        shapeSet.Clear();
                        shapeSet.AddRange(descendant.ToShapes());
                        break;
                    }
                    default:
                    {
                        throw new XmlException();
                    }
                }
            }
        }

        public void WriteXml(XmlWriter writer)
        {
            writer.WriteStartElement("HitBox");
            Pose.ToXmlPose().WriteXml(writer);

            IEnumerable<IXmlShape> xmlShapes = Shapes.ToXmlShape();
            writer.WriteStartElement("Shapes");
            foreach (IXmlShape xmlShape in xmlShapes)
            {
                xmlShape.WriteXml(writer);
            }
            writer.WriteEndElement();
            writer.WriteEndElement();
        }
    }
}
Is there any way to test the skip logic when the skip is based on a value of one of the variables being uploaded into the email campaign with the sample? The only way to effectively test this is to do so via a 'live' campaign. I often will make a copy of my campaign and upload myself as well as some colleagues as test live contacts. I then send the campaign to myself and the testers live to simulate how the survey would behave in an actual send.
import { Vector2 } from "../../../build/three.module.js";

/**
 * Triangle blur shader
 * based on glfx.js triangle blur shader
 * https://github.com/evanw/glfx.js
 *
 * A basic blur filter, which convolves the image with a
 * pyramid filter. The pyramid filter is separable and is applied as two
 * perpendicular triangle filters.
 */

var TriangleBlurShader = {

    uniforms: {

        "texture": { value: null },
        "delta": { value: new Vector2( 1, 1 ) }

    },

    vertexShader: [

        "varying vec2 vUv;",

        "void main() {",

        "    vUv = uv;",
        "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",

        "}"

    ].join( "\n" ),

    fragmentShader: [

        "#include <common>",

        "#define ITERATIONS 10.0",

        "uniform sampler2D texture;",
        "uniform vec2 delta;",

        "varying vec2 vUv;",

        "void main() {",

        "    vec4 color = vec4( 0.0 );",
        "    float total = 0.0;",

        // randomize the lookup values to hide the fixed number of samples
        "    float offset = rand( vUv );",

        "    for ( float t = -ITERATIONS; t <= ITERATIONS; t ++ ) {",

        "        float percent = ( t + offset - 0.5 ) / ITERATIONS;",
        "        float weight = 1.0 - abs( percent );",

        "        color += texture2D( texture, vUv + delta * percent ) * weight;",
        "        total += weight;",

        "    }",

        "    gl_FragColor = color / total;",

        "}"

    ].join( "\n" )

};

export { TriangleBlurShader };
# American Institute of Mathematical Sciences

Inverse Problems & Imaging, February 2016, 10(1): 131-163. doi: 10.3934/ipi.2016.10.131

## The enclosure method for inverse obstacle scattering using a single electromagnetic wave in time domain

Masaru Ikehata
Laboratory of Mathematics, Institute of Engineering, Hiroshima University, Higashi Hiroshima 739-8527

Received October 2014. Revised July 2015. Published February 2016.

In this paper, a time domain enclosure method for an inverse obstacle scattering problem of electromagnetic waves is introduced. The wave, as a solution of Maxwell's equations, is generated by an applied volumetric current having an orientation and supported outside an unknown obstacle, and is observed on the same support over a finite time interval. It is assumed that the obstacle is a perfect conductor. Two types of analytical formulae which employ a single observed wave and explicitly contain information about the geometry of the obstacle are given. In particular, an effect of the orientation of the current is captured in one of the two formulae. Two corollaries concerning the detection of the points on the surface of the obstacle nearest to the centre of the current support, and the curvatures at those points, are also given.

Citation: Masaru Ikehata. The enclosure method for inverse obstacle scattering using a single electromagnetic wave in time domain. Inverse Problems & Imaging, 2016, 10 (1): 131-163. doi: 10.3934/ipi.2016.10.131
Ikehata, The enclosure method for inverse obstacle scattering problems with dynamical data over a finite time interval,, Inverse Problems, 26 (2010).\u00a0 doi:\u00a010.1088\/0266-5611\/26\/5\/055010. \u00a0Google Scholar [15] M. Ikehata, The enclosure method for inverse obstacle scattering problems with dynamical data over a finite time interval: II. Obstacles with a dissipative boundary or finite refractive index and back-scattering data,, Inverse Problems, 28 (2012).\u00a0 doi:\u00a010.1088\/0266-5611\/28\/4\/045010. \u00a0Google Scholar [16] M. Ikehata, An inverse acoustic scattering problem inside a cavity with dynamical back-scattering data,, Inverse Problems, 28 (2012).\u00a0 doi:\u00a010.1088\/0266-5611\/28\/9\/095016. \u00a0Google Scholar [17] M. Ikehata, The enclosure method for inverse obstacle scattering problems with dynamical data over a finite time interval: III. Sound-soft obstacle and bistatic data,, Inverse Problems, 29 (2013).\u00a0 doi:\u00a010.1088\/0266-5611\/29\/8\/085013. \u00a0Google Scholar [18] M. Ikehata, Extracting the geometry of an obstacle and a zeroth-order coefficient of a boundary condition via the enclosure method using a single reflected wave over a finite time interval,, Inverse Problems, 30 (2014).\u00a0 doi:\u00a010.1088\/0266-5611\/30\/4\/045011. \u00a0Google Scholar [19] M. Ikehata and H. Itou, On reconstruction of a cavity in a linearized viscoelastic body from infinitely many transient boundary data,, Inverse Problems, 28 (2012).\u00a0 doi:\u00a010.1088\/0266-5611\/28\/12\/125003. \u00a0Google Scholar [20] M. Ikehata and M. Kawashita, On the reconstruction of inclusions in a heat conductive body from dynamical boundary data over a finite time interval,, Inverse Problems, 26 (2010).\u00a0 doi:\u00a010.1088\/0266-5611\/26\/9\/095004. \u00a0Google Scholar [21] V. Isakov, Inverse obstacle problems,, Topical review, 25 (2009).\u00a0 doi:\u00a010.1088\/0266-5611\/25\/12\/123002. \u00a0Google Scholar [22] P. D. Lax and R. S. 
Phillips, The scattering of sound waves by an obstacle,, Comm. Pure and Appl. Math., 30 (1977), 195.\u00a0 doi:\u00a010.1002\/cpa.3160300204. \u00a0Google Scholar [23] H. Liu, M. Yamamoto and J. Zou, Reflection principle for the Maxwell equations and its application to inverse electromagnetic scattering,, Inverse Problems, 23 (2007), 2357.\u00a0 doi:\u00a010.1088\/0266-5611\/23\/6\/005. \u00a0Google Scholar [24] A. Majda and M. Taylor, Inverse scattering problems for transparent obstacles, electromagnetic waves, and hyperbolic systems,, Comm. in Partial Differential Equations, 2 (1977), 395.\u00a0 doi:\u00a010.1080\/03605307708820035. \u00a0Google Scholar [25] J.-C. N\u00e9d\u00e9lec, Acoustic and Electromagnetic Equations, Integral Representations for Harmonic Problems,, Springer, (2001).\u00a0 doi:\u00a010.1007\/978-1-4757-4393-7. \u00a0Google Scholar [26] B. O'Neill, Elementary Differential Geometry,, Revised, (2006).\u00a0 \u00a0Google Scholar\n [1] Marion Darbas, J\u00e9r\u00e9my Heleine, Stephanie Lohrengel. Numerical resolution by the quasi-reversibility method of a data completion problem for Maxwell's equations. Inverse Problems & Imaging, 2020, 14 (6) : 1107-1133. doi: 10.3934\/ipi.2020056 [2] Gunther Uhlmann, Jian Zhai. Inverse problems for nonlinear hyperbolic equations. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 455-469. doi: 10.3934\/dcds.2020380 [3] Gang Bao, Mingming Zhang, Bin Hu, Peijun Li. An adaptive finite element DtN method for the three-dimensional acoustic scattering problem. Discrete & Continuous Dynamical Systems - B, 2020\u00a0 doi: 10.3934\/dcdsb.2020351 [4] Adel M. Al-Mahdi, Mohammad M. Al-Gharabli, Salim A. Messaoudi. New general decay result for a system of viscoelastic wave equations with past history. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934\/cpaa.2020273 [5] Jie Li, Xiangdong Ye, Tao Yu. Mean equicontinuity, complexity and applications. 
Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 359-393. doi: 10.3934\/dcds.2020167 [6] Peng Luo. Comparison theorem for diagonally quadratic BSDEs. Discrete & Continuous Dynamical Systems - A, 2020\u00a0 doi: 10.3934\/dcds.2020374 [7] Li-Bin Liu, Ying Liang, Jian Zhang, Xiaobing Bao. A robust adaptive grid method for singularly perturbed Burger-Huxley equations. Electronic Research Archive, 2020, 28 (4) : 1439-1457. doi: 10.3934\/era.2020076 [8] Pierre-Etienne Druet. A theory of generalised solutions for ideal gas mixtures with Maxwell-Stefan diffusion. Discrete & Continuous Dynamical Systems - S, 2020\u00a0 doi: 10.3934\/dcdss.2020458 [9] Yi-Hsuan Lin, Gen Nakamura, Roland Potthast, Haibing Wang. Duality between range and no-response tests and its application for inverse problems. Inverse Problems & Imaging, , () : -. doi: 10.3934\/ipi.2020072 [10] Kha Van Huynh, Barbara Kaltenbacher. Some application examples of minimization based formulations of inverse problems and their regularization. Inverse Problems & Imaging, , () : -. doi: 10.3934\/ipi.2020074 [11] Mehdi Bastani, Davod Khojasteh Salkuyeh. On the GSOR iteration method for image restoration. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 27-43. doi: 10.3934\/naco.2020013 [12] Antoine Benoit. Weak well-posedness of hyperbolic boundary value problems in a strip: when instabilities do not reflect the geometry. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5475-5486. doi: 10.3934\/cpaa.2020248 [13] Mehdi Badsi. Collisional sheath solutions of a bi-species Vlasov-Poisson-Boltzmann boundary value problem. Kinetic & Related Models, , () : -. doi: 10.3934\/krm.2020052 [14] Mokhtar Bouloudene, Manar A. Alqudah, Fahd Jarad, Yassine Adjabi, Thabet Abdeljawad. Nonlinear singular $p$ -Laplacian boundary value problems in the frame of conformable derivative. Discrete & Continuous Dynamical Systems - S, 2020\u00a0 doi: 10.3934\/dcdss.2020442 [15] Wenbin Li, Jianliang Qian. 
Simultaneously recovering both domain and varying density in inverse gravimetry by efficient level-set methods. Inverse Problems & Imaging, , () : -. doi: 10.3934\/ipi.2020073 [16] Hong Niu, Zhijiang Feng, Qijin Xiao, Yajun Zhang. A PID control method based on optimal control strategy. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 117-126. doi: 10.3934\/naco.2020019 [17] Ahmad Z. Fino, Wenhui Chen. A global existence result for two-dimensional semilinear strongly damped wave equation with mixed nonlinearity in an exterior domain. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5387-5411. doi: 10.3934\/cpaa.2020243 [18] Jerry L. Bona, Angel Dur\u00e1n, Dimitrios Mitsotakis. Solitary-wave solutions of Benjamin-Ono and other systems for internal waves. I. approximations. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 87-111. doi: 10.3934\/dcds.2020215 [19] Hai-Feng Huo, Shi-Ke Hu, Hong Xiang. Traveling wave solution for a diffusion SEIR epidemic model with self-protection and treatment. Electronic Research Archive, , () : -. doi: 10.3934\/era.2020118 [20] Omid Nikan, Seyedeh Mahboubeh Molavi-Arabshai, Hossein Jafari. Numerical simulation of the nonlinear fractional regularized long-wave model arising in ion acoustic plasma waves. 
Discrete & Continuous Dynamical Systems - S, 2020\u00a0 doi: 10.3934\/dcdss.2020466\n\n2019\u00a0Impact Factor:\u00a01.373","date":"2020-11-27 03:37:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6859656572341919, \"perplexity\": 8636.768197310324}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141189038.24\/warc\/CC-MAIN-20201127015426-20201127045426-00464.warc.gz\"}"}
null
null
WTF is happening?! and with less than 10 minutes left to go! This "movie" is just a bunch of random scenes. Did they have to cut filming short or something? Because this is a sloppy, choppy mess. At this point, I'm getting the feeling that this entire movie was spliced together from whatever they could get out of Lindsay when she was sober/coherent/co-operative/conscious and plot be damned. IA. maybe we should APPLAUD the editor?! So I take it that the confessional interview scenes they have aren't really supposed to be them in the same room? It's done at different times? This Liz and Burton even had an interview together after their divorce(s)? I heard it's supposed to be a dream sequence. I left for a bit, has MJ appeared yet? ugh these pampers ads make me want kids. I've only been watching intermittently between The Walking Dead but beyond the poor editing and Lindsay's voice it's decent. Lindsay was obviously miscast but her performance is solid for what it is, I guess.
Jaron Ennis goes the distance vs. Karen Chukhadzhian, wins wide unanimous decision for interim title

[Photo: Jaron Ennis punches Karen Chukhadzhian in their Interim IBF Welterweight Championship bout at Capital One Arena on January 7, 2023 in Washington, DC. (Photo by Patrick Smith/Getty Images)]

It was hardly the performance Jaron Ennis was looking for on Saturday when he met Ukrainian Karen Chukhadzhian for the interim IBF welterweight title at the Capital One Arena in Washington, D.C. in the co-main event of a Showtime Pay-Per-View card. Ennis, one of the welterweight division's bright young stars, couldn't get his offense untracked and never got to show the power that led him to 27 knockouts in his first 29 fights, all of which were wins. But Ennis is a multi-faceted fighter, and managed to find a way to win going away despite not being able to get off his best stuff. Chukhadzhian fought an extraordinarily defensive fight, and proved good at it, frustrating Ennis, who was never really able to open up. But Ennis never quit working and pressuring and he won a wide unanimous decision over the game but outmanned Chukhadzhian. Ennis won all 12 rounds on all three judges' scorecards, claiming a 120-108 decision across the board as he was forced to go 12 full rounds for the first time. Chukhadzhian's movement gave Ennis some issues, but to be fair, he also took a lot of clean shots from Ennis. The punch statistics showed that Ennis landed 46 percent of his power shots, but there was never anything remotely close to a knockdown. Ennis did a workmanlike job of winning the rounds and avoiding a big mistake that would have gotten him in trouble. "I learned just to take my time," Ennis said. "I wasn't rushing." It didn't earn him many new fans, but it kept him unbeaten and gave him a belt, albeit one that doesn't carry a lot of clout with it.
Errol Spence holds the unified title, with the IBF, WBA and WBC belts, and Terence Crawford, the pound-for-pound king, is the WBO champion. Ennis said he wants to fight all of the best, but it's unclear what his path will be. He showed he was in good condition and his defense was good enough to all but nullify Chukhadzhian's jab. Chukhadzhian connected on a woeful 5 of 200 jabs and on just 92 of 373 of his power shots. That's the kind of bout that so many aspiring stars have to go through, and if it was Ennis' least impressive win, that says something about him. On his worst night, he won all 12 rounds against a strong and slick opponent.
## anonymous (5 years ago)

Some psychologists believe that a "genius" should be defined as anyone having an IQ over 140. If IQ scores are normally distributed with a mean of 100 and a standard deviation of 17, and if the population of the world is 6,575,000,000, how many geniuses are there in the world today?

1. amistre64: the bell curve is divided into what; 96% at the right?

2. amistre64: i got an a in typing....honest lol

3. amistre64: thats the reverse of what you wanted .... gotta learn to read :)

4. anonymous: lol so this won't help me? thanks anyway :)

5. anonymous: We make the curve standard normal: $(X-\mu)/\sigma$. Here $(140-100)/17$ gives a z-score of about 2.353. The probability of scoring that high or higher can be looked up in a standard normal table; it is about 0.0094. Multiply this by the population of 6,575,000,000 to get the number of geniuses. I get 61,805,000 geniuses.
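The arithmetic in the final answer can be checked without a normal table; Python's standard library exposes the complementary error function, which gives the standard normal upper tail directly. (A direct computation gives a tail of roughly 0.0093 rather than the rounded table value 0.0094, i.e. about 61 million geniuses either way.)

```python
import math

def normal_tail(z: float) -> float:
    """P(Z > z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

mean, sd, cutoff = 100.0, 17.0, 140.0
population = 6_575_000_000

z = (cutoff - mean) / sd        # z-score of the genius cutoff, about 2.353
p = normal_tail(z)              # upper-tail probability, about 0.0093
geniuses = p * population       # about 61 million

print(f"z = {z:.3f}, P(Z > z) = {p:.4f}, geniuses = {geniuses:,.0f}")
```

The small gap between this figure and the thread's 61,805,000 comes entirely from rounding the tail probability to two significant figures when reading it off a table.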
This is a list of the mayors of Alba Iulia.

Novak Ferencz – mayor (February 1912 – May 1914)
Roska Miklós – mayor (June 1914 – December 1918)
Camil Velican – mayor (December 1918 – October 1919) – the first Romanian mayor of the city
Aurel Sava – mayor/president (October 1919 – June 1931)
Andrei Floașu – president/mayor (June 1931 – August 1931)
Danil Tecău – president (September 1931 – June 1932)
Bogdan Dumitru – president/mayor (June 1932 – November 1933)
Ioan Colbazi – president/mayor (November 1933 – January 1938)
Petre P. Vasiliu – mayor (February 1938 – September 1938)
Victor Constantinescu – mayor (October 1938 – October 1940)
Aurel Bozdog – mayor/president (October 1940 – January 1941)
Dumitru Bogdan – mayor/president (February 1941 – November 1944)
Traian Mârza – mayor (November 1944 – April 1945)
Dumitru Constantin – mayor (April 1945 – February 1946)
Iosif Socaciu – mayor/president (February 1946 – December 1947)
Constantin Lahman – mayor/president (January 1948 – December 1948)
Mihai Oara – president (January 1949 – December 1950)
Pavel Stoia – president (December 1950 – January 1956)
Iendrușak Carol – president (January 1956 – March 1961)
Emil Comșa – president (March 1961 – September 1963)
Ioan Moldovan – president (October 1963 – July 1964)
Emil Comșa – president (August 1964 – February 1968)
Nicolae Roșu – president (February 1968 – December 1978)
Vasile Purdea – mayor (December 1978 – 1986)
Maria Bunea – mayor (1986 – 1990)
Adam Drăgoi – mayor (1990)
Vasile Ursu – mayor (1990)
Nicolae Todea – mayor (1990)
Ioan Radu – mayor (1990 – 1992)
Ioan Timiș – mayor (1992 – 1996)
Mircea Hava – mayor (1996 – 2019)
Gabriel Pleșa – mayor (2020 – present)
MATEMATIČKI VESNIK / МАТЕМАТИЧКИ ВЕСНИК

A NEW TYPE OF WEIGHTED ORLICZ SPACES

A. R. Bagheri Salec, S. M. Tabatabaie

Abstract: In this paper, by some group action, we introduce a new type of weighted Orlicz spaces $L^\Phi_{w,v}(\Omega)$, where $w$ and $v$ are weights on $\Omega$ and $\Phi$ is a Young function. We study conditions under which $L^\Phi_{w,v}(G)$ is a convolution Banach algebra, where $G$ is a locally compact group.

Keywords: locally compact group; weighted Orlicz algebra; Young function; convolution; spaceability; inclusion.

MSC: 46E30, 47B37, 43A15

Pages: 1–9
\section{INTRODUCTION} Consider the task of treating a disease characterized by some outlying biological marker. Often the medication necessary for treatment causes adverse side effects on other biological functionalities. During treatment, it is important to monitor such undesirable side effects by conducting various medical tests, while augmenting the treatment with other medications to alleviate these adverse effects, and jointly calibrating the dosage of all these medications. Conducting tests may be expensive, so it is desirable to find, with efficient testing, a treatment that optimally affects only the desired biomarker and causes no side effects. Such concerns are widespread in the treatment of disease - patients often receive multiple medications and the mitigation of drug-related problems is a common concern, especially in the presence of comorbidities \cite{Hasniza2020Diabetes}. Optimal blood pressure control, for instance, is described as a challenge in the treatment of type 2 diabetes \cite{Hasniza2020Diabetes}, and antipsychotics prescribed for schizophrenia can result in side effects such as obesity, dyslipidemia and type 2 diabetes \cite{mackenzie2018Schizophrenia}. Combination therapy (where a variety of medications are jointly prescribed) is often used to reduce the impact of adverse effects \cite{garcia2018combination}. Our work abstractly considers the problem of finding an optimal combination therapy guided by sequential testing during the course of a patient's treatment to ensure recovery with the least cumulative side effects. We approach this problem as an online decision making problem in which the results of various tests of biomarkers are regarded as bilinear functions of treatments and patient characteristics. At each round the physician may take an action (specify a therapy, dosages, schedules, etc.) $A_t$ chosen from some given set $\mathcal{A}_t\subset \mathbb{R}^d$.
The physician has access to a test to monitor the result of the therapy, the result of which is given by $X_t = \langle \theta_0, A_t\rangle+\eta_t$ with $\eta_t$ representing some sub-gaussian noise, for some $\theta_0\in \mathbb{R}^d$. There may be other tests which should not be affected by the therapy (these test for side effects). Such tests are represented by $\theta_i\in \mathbb{R}^d, i\in [L]$, and their outcomes are similarly sub-gaussian with mean $\langle \theta_i, A_t\rangle$. In this setting, while the expected \textit{feedback} from an action $A_t$ is given by $X_t$ above, the learner's \textit{reward} depends only on the components orthogonal to the protected space. That is, the reward, given by $\langle A_t, \theta_\perp\rangle$ where $\theta_\perp$ is the component of $\theta_0$ orthogonal to $\{\theta_i\}_{i\in [L]}$, is unseen. In some sense, this is the component of the therapy that does not contribute to side effects. The objective is to minimize \textit{pseudo-regret}, which is the total difference (that is, summed over all the rounds) between the \textit{expected} reward obtained by a genie who knows the means of the outcomes of every test exactly for each therapy, and that obtained by the learner. Surprisingly, despite a similarity to the standard stochastic linear bandit problem, the partial information model makes this problem considerably more difficult. A key property that is used to upper bound regret in the linear bandit model is that under linear transformations, subgaussian random variables remain subgaussian. Speaking broadly, this allows us to use Hoeffding-style bounds to get confidence sets for unknown parameters under subgaussian assumptions on the noise. However, our rewards are not linear functions of the unknown parameters, thus we require additional techniques to propagate estimates on confidence sets as samples are adaptively acquired over time. \subsection{Contributions} \label{section:contrib} \begin{itemize} \item[1.]
\textbf{Model:} We introduce the protected linear bandit as a model for online decision making with incomplete bandit feedback in which some subspace is considered to be protected, meaning that projections onto that space are subtracted from our reward. The optimal action is thus not the one that aligns most with the target vector, but rather the one which aligns most with the component of the target vector orthogonal to the unknown protected subspace. It is important to note that we do not have direct access to these projections, but only to inner products with some fixed set of individual vectors in the subspace. \item[2.] \textbf{Algorithm and Regret Upper Bounds:} We propose an algorithm (Algorithm \ref{algorithm:protectedLUCB}) for the above and derive an upper bound for its regret that grows as $\tilde{O}(sd\sqrt{T})$ in the number of rounds $T$, similar to the best possible linear bandit regret for the case when the action space is the unit ball. The algorithm consists of two parts. First, we remove redundancy in the set of protected vectors with a uniform exploration phase. We then restrict our attention to this independent set of constraints and play optimistically using an \textit{upper confidence bound} based algorithm. Typically in OFUL, confidence regions are maintained around the unknown parameters of interest. However, in this case, the projection operator on the protected space is the object of interest, and it is unclear what it even means to have a confidence region for this object. To circumvent this, we only maintain individual confidence ellipsoids around an observed basis that spans the protected subspace.
Translating sub-gaussian tail-based confidence regions on the basis vectors to an appropriate confidence region on the projection operator involves a non-linear transformation and hence destroys sub-gaussianity, so instead we directly construct a confidence interval on the reward {\em only} for the optimistic action (see Section \ref{section:key_difficulties}). \item[3.] \textbf{Regret Lower Bounds:} This new model comes with an interesting difficulty: even if we play an action infinitely many times, observing any number of noisy inner products with all of the protected vectors, we may not be able to find a good high-probability confidence interval for the reward from that action. For general action spaces, Example~\ref{example:linear_regret} shows an instance where naive optimism can lead to linear regret with this partial feedback model, and in Section \ref{section:lower_bound} we show an $\Omega(T^{\frac{3}{4}})$ regret lower bound for any algorithm on a finite (time-varying) action space. \end{itemize} \textbf{Notation} We will denote by $\Proj_{\{\theta_i\}_{i\in [L]}}$ the projection operator onto the space spanned by $\{\theta_i\}_{i\in [L]}$ and by $\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}$ the projection onto the orthogonal subspace. We use $[L]=\{1, 2, \cdots, L\}$ to denote the set of the first $L$ integers. We use $||x||_V$ to refer to the weighted norm $\sqrt{x^\top Vx}$. Given a matrix $P\in \mathbb{R}^{d\times L}$ (respectively, vector $x\in \mathbb{R}^{L}$) and a set $S\subseteq [L]$, we denote by $P_S\in \mathbb{R}^{d\times |S|}$ (respectively, $x_S$), the submatrix (vector) whose columns are the ones in $P$ ($x$) indexed by $S$. We denote the minimum non-zero eigenvalue by $\lambda_{\min}(\cdot)$. We denote by $\mathcal{B}_2^d$ the unit $2$-norm ball in $\mathbb{R}^d$. A notation table is given in Appendix~\ref{appendix:notation}.
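As a concrete illustration of this notation, the weighted norm and the submatrix convention can be written in a few lines of NumPy. This is only a sketch; the sizes and the matrix $V$ below are arbitrary examples, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, L = 4, 3

# Weighted norm ||x||_V = sqrt(x^T V x) for a positive-definite V.
A = rng.normal(size=(d, d))
V = A @ A.T + d * np.eye(d)        # construct a positive-definite V
x = rng.normal(size=d)
weighted_norm = float(np.sqrt(x @ V @ x))

# With V = I the weighted norm reduces to the usual 2-norm.
assert np.isclose(np.sqrt(x @ np.eye(d) @ x), np.linalg.norm(x))

# Submatrix notation: P_S keeps exactly the columns of P indexed by S.
P = rng.normal(size=(d, L))
S = [0, 2]
P_S = P[:, S]                      # shape (d, |S|)
assert P_S.shape == (d, len(S))
```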
\section{RELATED WORK}\label{section:related_work} Multi-armed bandits have been studied for decades, at least since \cite{Robbins1952Bandits}, and optimism in the face of uncertainty (OFUL) has proved to be an effective strategy in low regret algorithm design~\cite{Auer2002MAB,yadkori2011OFUL}. We point an interested reader to \cite{LatSze20} and references therein for works which are not directly related to ours. Linear bandits, where observable rewards are generated as the noisy inner products of actions and a hidden vector, were analyzed in \cite{dani2008linbandits, yadkori2011OFUL} and the regret of an optimistic algorithm was shown to grow as $O(\sqrt{T}\log T)$ depending only on the dimension of the representation, \textit{independently} of the number of arms. In our model, in addition to the hidden vector (as in linear bandits), we have a hidden protected space (spanned by multiple hidden constraint vectors). Our reward is the inner product of the action and the component of the hidden vector orthogonal to the hidden protected space. Further, we do not observe the reward directly; instead, we are allowed to make partial queries which, for a diverse enough action space, can be used to infer the optimal action. Bandits with indirect access to rewards are studied under partial monitoring with finite~\cite{bartok2014PM, lattimore2019cleaning} and infinite~\cite{bartok2014PM} action spaces. However, inferring the optimal action in our model requires the use of additional structure that is absent in \cite{bartok2014PM}. From a motivational standpoint, we share similarities with linear bandits with safety constraints, where a learner is required to be {\em safe}. In \cite{amani2019safety}, the authors study a linear bandit with {\em known linear constraints} where the actions should not violate these constraints. They propose an optimistic algorithm with initial safe exploration.
This setting has been studied extensively, through the design of Thompson sampling based techniques~\cite{moradipari2020safe}, and extensions to safe generalized linear bandits~\cite{amani2020generalized}, safe contextual bandits~\cite{daulton2019thompson}, and safe reinforcement learning~\cite{hasanzadezonuzy2020learning}. In a different model, \cite{kazerouni2017conservative} studied online learning where regret is constrained to be small compared to a {\em known baseline}. We differ from these works technically, as the constraints are unknown to us. Further, we treat the protected space as reward-shaping parameters rather than {\em hard} constraints. Additionally, in the probably approximately correct (PAC) learning framework, safety-constrained optimization with unknown constraints and objectives, with access to zeroth-order oracles, is studied in another line of work \cite{usmanova2019log, usmanova2019safe,fereydounian2020safe}. However, the convergence results in the PAC-learning framework do not translate into regret minimization directly, as the former do not consider balancing exploration and exploitation. We expand on the connections with linear bandits, safety-constrained linear bandits, and partial monitoring in Section~\ref{section:model_comparison}. \section{MODEL}\label{section:model} We consider a game between a player and a stochastic environment in which we have query access to $L+1$ unknown vectors $\theta_0, \theta_1, \cdots, \theta_L \in \mathbb{R}^d$ with $||\theta_i||_2\le M$ for all $i$. The vectors $\theta_1, \theta_2,\cdots, \theta_L$, the \textit{protected} vectors, span the protected subspace. In the context of our motivating problem, these represent low-dimensional linear embeddings of the various tests for the biomarkers associated with side-effects. We are given a large number, $L$, of them; however, they may span a lower-dimensional protected subspace of $\mathbb{R}^d$. 
We refer to $\theta_0$ as the target vector. We would like to play arms that align as well as possible with $\theta_\perp = \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0$, that is, the orthogonal projection of $\theta_0$ onto the subspace orthogonal to the protected subspace. In the absence of the protected vectors, this would just be a stochastic linear bandit problem parameterized by $\theta_0$. At every round $t$, the player can choose any action $A_t\in \mathcal{A}_t$ and an index $I_t\in \{0\}\cup [L]$, and receive the corresponding feedback $X_t = \langle A_t, \theta_{I_t}\rangle +\eta_t$, where $\eta_t$ is a conditionally $R$-subgaussian zero-mean noise. \textbf{Regret: }The sub-optimality of action $a\in \mathcal{A}_t$, $\Delta_a$, is given by $$\Delta_a = \langle a^*_t-a, \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0\rangle$$ where $$a^*_t=\argmax_{a\in \mathcal{A}_t} \langle a, \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0\rangle$$ is the optimal action. The goal is to minimize pseudo-regret with respect to a genie who is aware of the true vectors $\{\theta_i\}_{i\in \{0\}\cup [L]}$ (and so would play $a^*_t$ at each round): $$\mathcal{R}_{[T]} = \sum_{t\in [T]}\Delta_{A_t} = \sum_{t\in [T]} \langle a^*_t-A_t, \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0\rangle.$$ \textbf{Assumptions: }We now discuss the assumptions we make and the motivations for them. \begin{assumption}\label{assumption:action_space} The action space $\mathcal{A}_t$ at all times is the unit ball, i.e. $\mathcal{A}_t =\mathcal{B}_2^d$. \end{assumption} This assumption is helpful due to the nature of the reward function. Finite action spaces with optimistic algorithms can sometimes lead to problems such as the one in Example \ref{example:linear_regret}. In fact, we show in Section \ref{section:lower_bound} that a particularly bad action space \textit{must} result in $\Omega(T^{\frac{3}{4}})$ regret for any consistent algorithm. 
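To make the reward and regret definitions concrete, here is a small numerical sketch (our own illustration with arbitrary hypothetical vectors, not from the paper):

```python
import numpy as np

# Toy protected linear bandit instance: d = 3, one protected vector (L = 1).
theta0 = np.array([1.0, 1.0, 1.0])   # target vector
theta1 = np.array([1.0, 0.0, 0.0])   # protected vector

# theta_perp: projection of theta0 onto the subspace orthogonal to theta1.
P = np.outer(theta1, theta1) / (theta1 @ theta1)
theta_perp = theta0 - P @ theta0     # equals [0, 1, 1]

# With A_t the unit ball (Assumption 1), the optimal action a*_t is the
# normalized theta_perp, and the gap of an action a is <a* - a, theta_perp>.
a_star = theta_perp / np.linalg.norm(theta_perp)
a = np.array([1.0, 0.0, 0.0])        # an action lying entirely in the protected space
gap = (a_star - a) @ theta_perp      # equals ||theta_perp||_2, since <a, theta_perp> = 0
print(a_star, gap)
```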
What we really require is that any vector we desire to play as an action be available to us in the action space. In the setting in which an action corresponds to a therapy (as in the example of the introduction), this just means that the physician is able to decide upon a therapy rather than prescribe one from a predetermined set. Because of Assumption \ref{assumption:action_space}, the optimal action $a^*_t$ is just $\frac{\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0}{||\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0||_2}$ at all times. This is the normalized projection of $\theta_0$ onto the space orthogonal to the protected subspace. \begin{assumption} There exists a subset $S\subseteq [L]$ of size $|S| = s$ such that $\lambda_{\min}(\sum_{i\in {S}}\theta_i\theta_i^\top )>0$, while any larger set $S'$ has $\lambda_{\min}(\sum_{i\in {S'}}\theta_i\theta_i^\top )=0$. We assume knowledge of $s$. \end{assumption} This says that there is an $s$-dimensional subspace that contains all of the protected vectors. Our regret bounds will be in terms of $s$ rather than $L$. We denote the greatest such $\lambda_{\min}(\sum_{i\in {S}}\theta_i\theta_i^\top )$ (over all choices of $S$ with $|S|=s$) simply as $\lambda_{\min}$. This corresponds to the best spanning set of protected vectors. If we instead know $\lambda_{\min}(\sum_{i\in {S}}\theta_i\theta_i^\top )$, we can remove this assumption and use Algorithm \ref{algorithm:CORE-SET_alternative} instead of Algorithm \ref{algorithm:CORE-SET}. This alternative is discussed in Appendix \ref{appendix:unknown_s}. Let $\mathcal{F}_t = \sigma(A_1, A_2,\cdots, A_t, \eta_1,\eta_2,\cdots, \eta_t)$ denote the $\sigma$-algebra generated by all actions and noises up to and including time $t$. \begin{assumption}\label{assumption:noise} The noise on the observed feedback, $\eta_t$, is conditionally zero-mean $R$-subgaussian, meaning $\E[\eta_t|\mathcal{F}_{t-1}] = 0$ and $\E[e^{\lambda\eta_t}|\mathcal{F}_{t-1}] \le e^{\frac{1}{2}\lambda^2R^2}$ for all $\lambda\in \mathbb{R}$. 
\end{assumption} This is standard, and is used to derive concentration bounds for the confidence sets of the unknown parameters. \subsection{Differences from Related Models}\label{section:model_comparison} \textbf{Linear Bandits: } The standard linear bandit problem considers minimizing regret while learning a single unknown vector \cite{dani2008linbandits}, \cite{yadkori2011OFUL}, {\em without} other protected directions. In our setting, the regret depends on several unknown vectors; however, in each round, we only get a signal from one. As such, the noisy feedback observed from the player's actions, $\{X_s\}_{s\in [T]}$, does not immediately give us the sub-optimality of an action. When a player plays action $(A_t, I_t)$, it observes $X_t = \langle A_t, \theta_{I_t}\rangle+\eta_t$ and incurs regret $\langle a^*_t-A_t, \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0\rangle$. In particular, the player does \textit{not} see a noisy version of $\langle A_t, \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0\rangle$. Aside from choosing the arm to pull, a player must also choose which vector to query with that arm. The analysis is further obscured by the fact that the rewards are a non-linear function of the unknown parameters. Finally, letting the set of protected vectors be empty ($L = 0$) recovers the standard linear bandit, so our setting is a generalization. \textbf{Safety-constrained Linear Bandits: } Safety-constrained bandits, studied, for instance, in \cite{kazerouni2017conservative}, \cite{amani2019safety}, typically require that a safety constraint be satisfied with high probability at each round. For instance, \cite{kazerouni2017conservative} require that the cumulative regret of a learner not exceed the regret of a baseline learner by more than a small multiplicative factor. \cite{amani2019safety} impose a geometric safety constraint on the arms that can be played at each round. 
Aside from maximizing the cumulative reward $a_t^\top \theta_0$, they have a known matrix $B$ and known constant $c$ such that the arm $a_t$ pulled at each round must satisfy $a_t^\top B\theta_0<c$ with high probability, for the safety threshold $c$. Both of these are essentially constraints on the exploration of a learner. In contrast, we do not enforce any explicit exploration constraint. Rather, the difficulty of our problem is to \textit{learn} the safety constraints simultaneously with the objective. Moreover, the aforementioned works (i) typically consider a single safety constraint as opposed to multiple, unknown directions $\{\theta_i\}_{i \in [L]}$, and (ii) crucially assume `free' access to an observation of the constraint violation at each action round, leading to very rapid learning of the linear constraint halfspace; in our setting, the exploration of the constraint/protection is partial (we learn about one of the $\theta_i$) and has to be adaptively decided. \textbf{Linear Partial Monitoring: } A reduction to the linear partial monitoring framework in~\cite{kirschner2020information}, although possible, results in linear regret with existing guarantees. \cite{kirschner2020information} provide a regret spectrum based on how informative the action space is, and derive a linear minimax bound for regret on games that are not \textit{globally observable}. The following is a reduction to the linear partial monitoring setting. Let $\theta_\perp=\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0$. We may take $\theta = e_{L+1}\otimes \theta_\perp + \sum_{i=0}^L e_i\otimes \theta_i$. An action $(i, a)\in [L]\times \mathcal{A}$ is encoded as $\textbf{e}_{L+1}\otimes a$, while $A_{(i, a)}$ is taken to be $e_i\otimes a$. 
The partial monitoring game described here is not globally observable, and hence gives linear regret, since for all $a_1, a_2\in \mathcal{A}$, we have $\textbf{e}_{L+1}\otimes (a_1-a_2) \not\in \text{Span}_{i\in [L], a\in \mathcal{A}}A_{(i, a)}$. Here $\otimes$ refers to the Kronecker product. To overcome this difficulty, we crucially leverage structure in $\theta$ (specific to our problem): the first $d$ coordinates of $\theta$ are actually a known function of the last $(L+1)d$ coordinates (a projection). \section{PROTECTED LIN-UCB}\label{section:confidence_sets} In this section, we present an algorithm for the regret minimization problem described in Section~\ref{section:model}. Our algorithm, Algorithm~\ref{algorithm:protectedLUCB}, is developed following the Optimism in the Face of Uncertainty (OFU) principle \cite{yadkori2011OFUL}, where we play optimistic actions that maximize the reward with high probability. For that purpose, we maintain and continually refine high-probability confidence sets for a subset of protected vectors that spans the protected space, namely the {\em core set}. Since the dimension of the protected space is assumed known to be $s$, it is possible to find a set of $s$ protected vectors that span the space, and any additional vectors need not be considered. In the first phase of the algorithm, we use Algorithm~\ref{algorithm:CORE-SET} to reduce the number of relevant unknown vectors in an approximately optimal way. The method described in \cite{yadkori2011OFUL} to construct confidence sets, which we will use, is as follows. After $t$ rounds, suppose we have queried $\theta$ with arms $\{A_s\}_{s\in [t]}$ and received feedback $\{X_s = \theta^\top A_s+\eta_s\}_{s\in [t]}$. 
We use these to determine the regularized maximum likelihood estimate (for regularizer $\rho$) \begin{equation}\label{eqn:MLE}\hat{\theta}_t = (\sum_{s\in [t]}A_sA_s^\top +\rho I)^{-1}(\sum_{s\in [t]}A_sX_s). \end{equation} In our setting, if the actions $a\in \mathcal{A}_t$ also satisfy $||a||_2\le M$, then Theorem 2 of \cite{yadkori2011OFUL} establishes that with probability $1-\delta$, $\theta_i$ lies in the set $\Theta_i$ defined as \begin{equation}\label{eq:confidence_set} \Theta_i = \{\theta : ||\hat{\theta}_i-\theta||_{V_i}\le \sqrt{\beta_{T_i}}\} \end{equation} where $T_i = \sum_{s\le t}\mathbbm{1}_{I_s=i}$ is the number of times we sample $\theta_i$, $V_i = \sum_{s\le t}\mathbbm{1}_{I_s = i}A_sA_s^\top +\rho I$, and $ \sqrt{\beta_t} = R\sqrt{d\log\left(\frac{1+tM^2/\rho}{\delta}\right)}+\sqrt{\rho}M$. We refer to this set $\Theta_i$ as the confidence set for each unknown $\theta_i$. The dependence on $t$ is implicit: these confidence sets generally shrink over time as we learn about the unknown parameters, and at each time $t$ there is a well-defined $\Theta_i$ corresponding to each unknown in the way prescribed above. {\bf Coreset Estimation:} Because we need only concern ourselves with a spanning set of protected vectors, we first use the CORE-SET procedure to prune the set of protected vectors. We cannot simply pick $s$ of the protected vectors arbitrarily, as these may not span the whole protected space, and even if they do, they may span the space inefficiently. We do this with a deterministic, isotropic phase in which we sample every unknown vector uniformly in every direction in a round-robin manner until we are certain that some subset is within a multiplicative factor of being optimal. \begin{algorithm}[th!] 
\caption{CORE-SET for rank $k$} \label{algorithm:CORE-SET} \begin{algorithmic} \STATE $\{e_i: 1\leq i\leq d\} \gets$ the standard basis\; \STATE $t\leftarrow 1$\; \WHILE{$\forall S\subseteq [L]$ with $|S| = k$:\\ $\lambda_{\min}(\sum_{i\in S}\hat{\theta}_i\hat{\theta}_i^\top )\le\frac{16LR(M+R)(d\log 6+\log\frac{1}{\delta})}{\sqrt{t}}$} \FOR{$i\in [d]$} \FOR{$p\in [L]$} \STATE Play $(e_i, \theta_p)$, observe feedback $x$\; \STATE Update $\Theta_p$ with $(e_i, x)$ following Eq.~\ref{eqn:MLE} and Eq.~\ref{eq:confidence_set}\; \ENDFOR \ENDFOR \STATE $t\leftarrow t+1$\; \ENDWHILE \STATE \textbf{return} $(\argmax_{S \subseteq [L], |S|=k} \lambda_{\min}(\sum_{i\in S}\hat{\theta}_i\hat{\theta}_i^\top ), t)$ \end{algorithmic} \end{algorithm} From this we get a set $\tilde{S}$ for which, with high probability, $$\lambda_{\min}(\sum_{i\in \tilde{S}} \theta_i\theta_i^\top )\ge \frac{1}{3}\max_{S'\subseteq [L]}\lambda_{\min}(\sum_{i\in S'} \theta_i\theta_i^\top ).$$ This is our notion of being optimal within a multiplicative factor. We restrict our attention to this set. Note that this phase occurs only once per instance of the problem: once we know which protected vectors are representative, we need not learn anything about the others. In the context of our motivation (treatment of disease while reducing the impact of adverse effects), this corresponds to picking a set of tests in advance for a particular ailment. The fact that we are doing this only once per ailment and not once per patient might alleviate ethical concerns related to providing experimental (exploratory) treatments to patients. Furthermore, this phase only adds a constant to the regret; that is, the regret contribution of CORE-SET does not depend on $T$. {\bf Protected LinUCB:} \begin{algorithm}[th!] 
\caption{Protected LinUCB} \label{algorithm:protectedLUCB} \begin{algorithmic} \STATE {\bfseries Input:} protected subspace dimension $s$ \STATE $\tilde{S}, t_0\leftarrow$ CORE-SET($s$)\; \STATE $t\leftarrow t_0$\; \STATE $V_i=\rho I_d$ \FOR{$t \in [T]$} \STATE $(A_t, \{\overline{\theta}_i\}) = \argmax\limits_{a\in \mathcal{A}_t, \{\overline{\theta}_i\in \Theta_i\}_{i\in \tilde{S}}, \overline{\theta}_0\in \Theta_0} \langle a, \Proj_{\{\overline{\theta}_i\}_{i\in \tilde{S}}}^{\perp}\overline{\theta}_0 \rangle$\; \label{line:optimization} \STATE $I_t = \argmax_{i\in \tilde{S}} ||A_t||_{V_i^{-1}}\sqrt{\beta_{T_i}}$\; \STATE Play $(A_t, I_t)$, observe $X_t$\; \STATE Update $\Theta_{I_t}$ with $(A_t, X_t)$ following Eq.~\ref{eqn:MLE} and Eq.~\ref{eq:confidence_set}\; \STATE $V_{I_t}\leftarrow V_{I_t}+A_tA_t^\top$\; \STATE Increment $T_{I_t}$\; \ENDFOR \end{algorithmic} \end{algorithm} For each $i\in \{0\}\cup\tilde{S}$, we maintain such an ellipsoid $\Theta_i$ centered at $\hat{\theta}_i$, in the manner of the OFUL algorithm from \cite{yadkori2011OFUL}. We use these to infer a confidence interval for $\langle a_t, \theta_\perp\rangle$. These sets are such that each of the unknown vectors is contained in its respective confidence set at every round with high probability. We refer to \cite{yadkori2011OFUL} for a detailed discussion of how such confidence sets are constructed. To keep track of the exploration of each unknown $\theta_i$ until time $t$, we denote by $T_i$ the number of times we have queried vector $i$, $T_i=\sum_{s\le t} \mathbbm{1}_{I_s = i}$, and set $V_i=\rho I+\sum_{s\le t}\mathbbm{1}_{I_s=i}A_sA_s^\top $. We then play optimistically with respect to these confidence sets. 
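The bookkeeping behind these confidence sets, the regularized least-squares estimate and the radius $\sqrt{\beta_t}$, can be sketched in a few lines (an illustrative implementation of ours with hypothetical parameter values, not the authors' code):

```python
import numpy as np

def ridge_estimate(actions, rewards, rho=1.0):
    """Regularized MLE: (sum_s A_s A_s^T + rho I)^{-1} (sum_s A_s X_s)."""
    A = np.asarray(actions)                       # t x d
    X = np.asarray(rewards)                       # length t
    V = A.T @ A + rho * np.eye(A.shape[1])
    return np.linalg.solve(V, A.T @ X), V

def beta_sqrt(t, d, R=1.0, M=1.0, rho=1.0, delta=0.01):
    """Radius sqrt(beta_t) = R sqrt(d log((1 + t M^2/rho)/delta)) + sqrt(rho) M."""
    return R * np.sqrt(d * np.log((1 + t * M**2 / rho) / delta)) + np.sqrt(rho) * M

# Noiseless sanity check: with diverse actions, the estimate recovers the
# unknown vector, and the true vector lies well inside the V-norm ball.
rng = np.random.default_rng(1)
theta = np.array([0.5, -0.5])
A = rng.normal(size=(500, 2))
X = A @ theta                                     # eta_t = 0 for simplicity
theta_hat, V = ridge_estimate(A, X)
err = theta_hat - theta
print(theta_hat, np.sqrt(err @ V @ err) <= beta_sqrt(500, 2))
```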
Concretely, we maximize over all actions $a\in \mathcal{A}_t$ and all possible $\overline{\theta}_i\in \Theta_i, i\in \tilde{S}$ and $\overline{\theta}_0\in \Theta_0$ the value of $\langle a, \Proj_{\{\overline{\theta_i}\}_{i\in \tilde{S}}}^{\perp}\overline{\theta}_0\rangle$. Note that the confidence set for $\theta_\perp$ is not a geometric ellipsoid, and characterizing its shape exactly is quite difficult (see also Section~\ref{section:key_difficulties}). In each round a player must also choose an index determining the particular protected vector to be queried, and we make this decision based on which vector is least explored in the direction of the selected action. \section{REGRET UPPER BOUND FOR $\mathcal{A}_t=\mathcal{B}_2^d$}\label{section:regret_bound} In this section, we derive an upper bound on the regret of Algorithm~\ref{algorithm:protectedLUCB}. The algorithm begins by constructing a core-set $\tilde{S}$ of the protected vectors that optimally spans the protected subspace. This core-set has cardinality $|\tilde{S}| = s$, the known dimension of the protected subspace, and is constructed by paying a constant exploratory regret. Here we assume $M = R = \rho = 1$, but the results presented in the appendix are such that the dependence on these parameters is explicit. We have the following theorem, which allows us to get a set of protected vectors that spans the protected space near-optimally. \begin{theorem}\label{thm:CORE-SET} CORE-SET terminates in at most $t_0 = \nicefrac{2304L^2\big(d\log 6+\log\frac{L}{\delta}\big)^2}{\lambda_{\min}^2}$ iterations of the outer loop and returns a subset $\tilde{S}$ such that, with probability at least $1-\delta$, $\lambda_{\min}(\sum_{i\in {\tilde{S}}}\theta_i\theta_i^\top ) \ge \frac{\lambda_{\min}}{3}$. 
\end{theorem} \begin{proof}[Proof sketch] We establish error bounds on the protected vectors in Lemma \ref{lemma:uniform_lambda_bounds} and use these to bound the perturbation of the eigenvalues from a spanning set in Lemma \ref{lemma:PPT_perturbation} (see Appendix~\ref{section:core-set_appendix} for details). \end{proof} Once the core-set is found, we play optimistically with respect to confidence sets derived from estimates that only include the core-set vectors, reducing the number of parameters we need to learn. We have the following high-probability regret bound for Algorithm~\ref{algorithm:protectedLUCB}: \begin{theorem} \label{thm:known_subspace_dim} If we have $\mathcal{A}_t=\mathcal{B}_2^d$, the regret of Algorithm \ref{algorithm:protectedLUCB} satisfies \small \begin{align*} \mathcal{R}_{[T]} &\le 12\sqrt{2}\frac{s+1}{\lambda_{\min}}\sqrt{Td\log(1+\frac{TL}{d})}\sqrt{\beta_T\left(\frac{\delta}{2(L+1)}\right)}\\&+\underbrace{\frac{4608L^3d\big(d\log 6+\log\frac{2L}{\delta}\big)^2}{\lambda_{\min}^2}}_{\text{CORE-SET estimation}} \end{align*} with probability $1-\delta$, where $$\sqrt{\beta_t(\delta)}=R\sqrt{d\log(\tfrac{1}{\delta}+\tfrac{tM^2}{\delta\rho})}+M\rho^{\frac{1}{2}}.$$ \end{theorem} \subsection{Key Difficulties}\label{section:key_difficulties} We now describe the reasons we cannot straightforwardly apply results from the linear bandit literature. Given only stochastic zero-order access to the vectors $\{\theta_i\}_{i\in \{0\}\cup [L]}$, we must play the arm $a\in \mathcal{A}_t$ which maximizes $\langle a, \Proj^{\perp}_{\{\theta_i\}_{i\in [L]}}\theta_0\rangle.$ Suppose for all $i \in [L]$, we know that the unknown vector $\theta_i$ lies in some confidence set $\Theta_i$ with high probability. Then, let the set of all possible $\theta_\perp$ be denoted $\Theta_\perp$, where each member is derived from a specific choice of $\{\theta_i\}_{i\in [L]}$ consistent with $\{\Theta_i\}_{i\in [L]}$. 
Clearly, this contains the true $\theta_\perp$ with high probability. Meanwhile, if we choose to play the action that gives the maximum reward under some choice of $\theta_\perp\in \Theta_\perp$, then the sub-optimality of an action is \textit{upper bounded by the uncertainty in the mean reward for that action}, so a complete characterization of $\Theta_\perp$ would directly lead to a regret bound. However, explicitly constructing a high-probability confidence set for $\theta_\perp$, denoted by $\Theta_\perp$, presents new problems. The key issue is that one cannot get meaningful confidence regions on the object of interest, namely the projection of $\theta_0$ given by $\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}\theta_0$. To see this, observe that $\{\Theta_i\}_{i\in [L]}$ are confidence ellipsoids obtained from sub-Gaussian tail bounds. However, the map $\{\theta_i\} \rightarrow \Proj_{\{\theta_i\}_{i\in [L]}}^{\perp} \theta_0 $ is not linear in $\theta_i$, and hence sub-Gaussianity is not preserved through this transformation. There is another way of seeing this difficulty. In the standard linear bandit, for an arm $a$ and the unknown parameter $\theta_0$, pulling arm $a$ repeatedly reduces our uncertainty about $\langle \theta_0, a\rangle$. However, the object of our interest is $\Proj_{\{\theta_i\}_{i\in [L]}}^{\perp}$, i.e. the {\em space} orthogonal to the protected vectors. Thus, {\em (i)} the component of $a$ that lies in the protected space is not informative, because reducing the variance of our estimate of a protected vector along directions within the protected space does not change the variance of our estimate of the protected space itself, and {\em (ii)} the true reward depends on the protected vectors only through the space they span and not the vectors themselves. As such, even infinitely many samples from an arm need not allow us to compute its mean reward with high confidence. 
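This non-identifiability is easy to see numerically. In the following sketch (our own hypothetical illustration, not from the paper), two instances induce identical observation distributions for a fixed query, yet have different mean rewards, so no number of samples from that query can separate them:

```python
import numpy as np

def mean_reward(a, theta0, theta1):
    """Mean reward <a, Proj^perp_{theta1} theta0> for a single protected vector."""
    P = np.outer(theta1, theta1) / (theta1 @ theta1)
    return a @ (theta0 - P @ theta0)

theta0 = np.array([1.0, 1.0])
a = np.array([1.0, 0.0])             # the only arm we (repeatedly) query

# Two instances whose protected vectors give identical feedback <a, theta1>
# (and identical <a, theta0>), hence identical observation distributions...
theta1_A = np.array([1.0, 0.0])
theta1_B = np.array([1.0, 0.5])
assert a @ theta1_A == a @ theta1_B  # both queries look the same from arm a

# ...but the spans differ, so the mean rewards differ.
print(mean_reward(a, theta0, theta1_A))
print(mean_reward(a, theta0, theta1_B))
```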
Instances in which this fundamentally changes the regret bounds are presented in Sections \ref{section:failure} and \ref{section:lower_bound}. \subsection{Proof idea} We consider the unknown linear operator $C$ that maps $\theta_i$ to $\overline{\theta}_i$ (the choice corresponding to the optimistic action) for each $i$ in the coreset, and replace $\Proj_{\{\overline{\theta_i}\}}\theta_0=\Proj_{\{C \theta_i\}}\theta_0$ (the true projection of the target vector on the optimistic space) by $C\Proj_{\{\theta_i\}}\theta_0$ (here we have switched the order of $\Proj$ and $C$). Note that $C$ and $\Proj$ need not commute, but self-adjointness and idempotence of projection operators allow us to do this specifically for the optimistic action when $\mathcal{A}_t=\mathcal{B}_2^d$ (actually, all we need is that the optimistic vector at each step is available in the action space). We can now propagate the errors in the protected vectors \textit{linearly} through our estimates of the subspace, thus crucially {\em preserving sub-Gaussianity of the noise.} Concretely, in Lemma \ref{lemma:delta_upperbound} we show an upper bound on the suboptimality of the player's action using Algorithm \ref{algorithm:protectedLUCB} as $$\Delta_{A_t}\le 2\underbrace{(3\frac{\sqrt{s}}{\lambda_{\min}}M+1)}_{(*)}||A_t||_{V_{i_t, t}^{-1}}\sqrt{\beta_{T}(\delta)}.$$ Here the ($*$) multiplicative term comes from an online subspace estimation and can be thought of as a condition number for the operator $C$ above. We can use this to get a regret bound similarly to \cite{yadkori2011OFUL}. \subsection{Remarks} Here we discuss some of the key terms of the regret bound presented in Theorem~\ref{thm:known_subspace_dim}. 
\begin{remark}[Comparison with the OFUL algorithm of \cite{yadkori2011OFUL}] The regret of the OFUL algorithm satisfies $$R^{L-UCB}_{[T]}\le 4\sqrt{Td\log(1+TL/d)}\sqrt{\beta_t(\delta)}$$ with probability $1-\delta$, where $$\sqrt{\beta_t(\delta)} = R\sqrt{d\log(\tfrac{1}{\delta}+\tfrac{TM^2}{\rho\delta})}+M\rho^{\frac{1}{2}}.$$ In comparison, our regret has a multiplicative $\frac{s+1}{\lambda_{\min}}$ factor. This comes from the fact that our rewards now depend on $s+1$ distinct unknown vectors. The dependence on $\lambda_{\min}$ comes from the way perturbations of vectors affect perturbations of the space they span. \end{remark} \begin{remark}[Knowledge of $s$]\label{remark:dimension_known} If $s<L$, it is desirable to have regret that scales as $s$ and not $L$. This raises an additional difficulty, as demonstrated by the following example. Suppose in the first instance, $\theta_0=[1,1,1], \theta_1=[1,0,0], \theta_2=[1,0,0]$, while in the second $\theta_0=[1,1,1], \theta_1=[1,0,0], \theta_2=[1,\Delta, 0]$. The true subspace dimension in the first is $1$, while in the second it is $2$. The ideal direction, $\theta_{\perp}$, is $[0,1,1]$ in the first, while it is $[0,0,1]$ in the second. For small $\Delta$, it is difficult to decide between these, and deciding incorrectly leads to a sub-optimality that does not go to $0$ as $\Delta\rightarrow 0$. Note that this is very different from the analogous issue in the multi-armed bandit (MAB), where a separation of $\Delta$ leads only to a sub-optimality of $\Delta$. To further complicate matters, such a suboptimality in an MAB is addressed as directly as possible by sampling the relevant arms of the bandit. In our case, the separation is in a direction orthogonal to $\theta_{\perp}$, the direction we need to exploit. \end{remark} \subsection{A Failure of Optimism}\label{section:failure} A study of this algorithm reveals an interesting phenomenon. 
While Theorem \ref{thm:known_subspace_dim} demonstrates a regret bound that scales in $T$ as $\tilde{O}(sd\sqrt{T})$ if we set the action space $\mathcal{A}_t$ to always be the unit ball $\mathcal{B}^d_2$, we also note in Theorem \ref{thm:hardness} that no consistent algorithm can do better than $\Omega(T^{\frac{3}{4}})$ with no restriction on the action space. In fact, the naive optimism of Algorithm \ref{algorithm:protectedLUCB} can get stuck with linear regret, as demonstrated in the following example. \begin{example}\label{example:linear_regret} For ease of notation, let $u_\alpha$ denote the point $(\cos \alpha, \sin\alpha)$. Consider a problem with $d=2, L=1$, where $\theta_0 = u_{\frac{\pi}{4}}, \theta_1 = u_0$. For simplicity, suppose the player knows $\theta_0$ exactly. Suppose that at all times the player is given the choice of actions $\mathcal{A}_t=\{a_1, a_2\}$ where $a_1=u_{\frac{\pi}{4}}$ and $a_2=u_{\frac{\pi}{2}}$. Suppose at round $t$, $\theta_1$ has been queried $T_{1, t}$ times and the vector $\overline{\theta}_1=u_0+u_{-\frac{\pi}{4}}$ is in the confidence set for $\theta_1$, that is, $$||\overline{\theta}_1-\theta_1||_{V_{1, t}} = ||u_{-\frac{\pi}{4}}||_{V_{1, t}}\le \sqrt{\beta_{T_{1, t}}}.$$ Then an optimistic evaluation of $a_1$ is at least as good as the evaluation that uses $\overline{\theta}_1=u_0+u_{-\frac{\pi}{4}}$. With this as the protected vector, the evaluation of $a_1$ is $\cos^2 \frac{\pi}{8}$. Meanwhile, the evaluation of action $a_2$ can never exceed $\cos^2 \frac{\pi}{8}$. An optimistic player will play $a_1$ at round $t+1$. 
There is no hope of the player learning any better in the future, since $\overline{\theta}_1$ remains in the confidence ellipsoid: \begin{align*} ||\overline{\theta}_1-\theta_1||_{V_{1, t+1}} &=||u_{-\frac{\pi}{4}}||_{V_{1, t+1}}\\ &=\sqrt{||u_{-\frac{\pi}{4}}||^2_{V_{1, t}}+\langle u_{-\frac{\pi}{4}}, u_{\frac{\pi}{4}}\rangle^2} \\ &= ||u_{-\frac{\pi}{4}}||_{V_{1, t}}\le \sqrt{\beta_{T_{1, t}}} \le \sqrt{\beta_{T_{1, t+1}}}, \end{align*} and so the learner will just play $a_1$ again. Such a learner suffers linear regret under a naively optimistic policy. \begin{figure}[ht!] \centering \includegraphics[width=0.15\linewidth]{images/failure.pdf} \caption{Instance described in Example \ref{example:linear_regret}} \end{figure} \end{example} \section{REGRET LOWER BOUND FOR FINITE ACTION SPACE}\label{section:lower_bound} In this section, we establish the difficulty of the protected linear bandit problem. Note that Section~\ref{section:regret_bound} provides an $O(\sqrt{T}\log T)$ upper bound on the regret of Algorithm \ref{algorithm:protectedLUCB} when the action space is $\mathcal{B}_2^d$. We suggested in Section~\ref{section:confidence_sets} that an adversarial action space could make the problem much harder. Here we provide a lower bound on the regret of any algorithm on a specially chosen instance. \begin{theorem}\label{thm:hardness} There is an instance of the Protected Linear Bandit problem such that any algorithm incurs a regret of $\Omega(T^{\frac{3}{4}})$. \vspace{-1em} \end{theorem} \begin{proof}[Proof sketch] Consider a pair of instances, denoted with superscripts $(1)$ and $(2)$. For both, we set our ambient space to have dimension $d=2$, and set $s=L=1$. We denote by $u_\alpha\in \mathbb{R}^2$ the vector $(\cos \alpha, \sin \alpha)$. Take $\alpha = T^{-\frac{1}{4}}$. We set $\theta^{(1)}_0=\theta^{(2)}_0=u_{\frac{\pi}{2}-\alpha}$. In instance $(1)$, we set $\theta^{(1)}_1=u_0$, while in instance $(2)$, we set $\theta^{(2)}_1=u_{-\alpha}$. 
In both instances, in each round, we allow the player an action space that consists of either the actions $\{u_{\pi-\alpha}, u_{2\alpha}\}$ or $\{u_{\pi-\alpha}, u_{2\alpha}, u_{\pi-3\alpha}\}$ with equal probability. These instances are chosen such that $u_{2\alpha}$ is always optimal for the second instance, while whenever $u_{\pi-3\alpha}$ is available, it is optimal for the first instance. The event in which $u_{\pi-3\alpha}$ is picked more than half the times it is available must thus have a high probability under the interaction between the algorithm and the first instance, and a low probability in the interaction with the second instance. The Bretagnolle-Huber inequality \cite{LatSze20} allows us to control the maximum difference in these probabilities by the KL divergence induced by the different interactions, which we prove to be bounded by a constant. The complete proof is given in Appendix \ref{appendix:KL_upperbound}. \end{proof} \section{EXPERIMENTS} \label{sec:experiment} In this section, we validate our theoretical results with simulations on a synthetic instance, and on an instance derived from the Warfarin dataset~\cite{WarfarinData}, which consists of clinical and pharmacogenetic data on Warfarin dosage in the presence of other medications. For all experiments, we perform $10$ parallel runs, and report the cumulative regrets (average, and average $\pm$ one standard deviation). \textbf{Baseline Algorithms:} Because this is a new model, there is no previously studied baseline that we are aware of. As mentioned in Section \ref{section:related_work}, the prior work on safety-constrained bandits \textit{requires} safety with high probability in each round, and assumes a known relationship between the target vector and the protected actions. We simulate against two natural baselines. Complete algorithms are listed in the Appendix. \textbf{Round Robin LinUCB/LinUCB2: } Here we learn each of the protected vectors as a separate instance of LinUCB. 
We dedicate each round as a ``subspace learning'' round with probability $\epsilon = \frac{1}{\sqrt{t}}$ (for Round Robin LinUCB2 we use $\epsilon = t^{-\frac{1}{4}}$), where $t$ is the round number, and iterate over the protected vectors playing exactly the OFUL algorithm of \cite{yadkori2011OFUL}. Otherwise we play the same arm as specified in Algorithm \ref{algorithm:protectedLUCB} but query the target vector $\theta_0$. Pseudocode is provided in Algorithm \ref{algorithm:RRLinUCB} in the Appendix. \textbf{$\epsilon$ greedy: } Here with probability $\frac{\epsilon}{\sqrt{t}}$ the algorithm plays a uniformly random protected vector and a uniformly random arm. We use these samples to estimate (using MLE) the protected and target vectors, and otherwise play a pure exploitation strategy based on a subspace estimate derived from these MLE estimates. We manually optimize the hyper-parameter $\epsilon$. Pseudocode is provided in Algorithm \ref{algorithm:EG} in the Appendix. \textbf{Synthetic Data:} A problem instance was generated randomly: in each round, action vectors are drawn from $\mathcal{N}(0, I_d)$ with $d=5$ and then normalized. We set $L=3$ and $s=2$. We set the regularization parameter $\rho=0.1$ and the failure probability $\delta=0.001$. We plot the regret from the interaction of the player and the instance over $T=1000$ rounds, across 4 parallel runs. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=0.95\linewidth]{images/synthetic_data_2.pdf} \caption{Regret of $\epsilon$-Greedy, and Algorithm~\ref{algorithm:protectedLUCB} with $\rho=0.1, \delta=0.001, R=0.001$. 
We have $s=2$, $L=4$, $d=6$, and $100$ arms randomly drawn on the unit sphere at each round.} \label{fig:synthetic_comparison} \end{subfigure}\hfill% \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=0.95\linewidth]{images/warfarin_data_3.pdf} \caption{Regret of $\epsilon$-Greedy, and Algorithm~\ref{algorithm:protectedLUCB} with $\rho=0.1, \delta=0.001, R=0.001$ on Warfarin dataset. We have $s=1$, $L=1$, $d=8$, and $1832$ fixed arms.} \label{fig:warfarin_comparison} \end{subfigure}% \end{figure} \textbf{Warfarin Dataset:} We consider the Warfarin dataset~\cite{WarfarinData} and construct an instance to optimize Warfarin dosage in our setting. This dataset consists of dosages of Warfarin (an anticoagulant prescribed for Deep Vein Thrombosis, Stroke, Cardiomyopathy, etc.) and other medications (`Simvastatin', `Atorvastatin', `Fluvastatin', etc.) as well as the resulting INR (International Normalized Ratio, which indicates susceptibility to bleeding; this is provided as a number between roughly $1$ and $4$) and stability of Warfarin therapy (this is provided as a Boolean). In this context, we consider the task of optimizing a therapy consisting of some combination of these medications to get optimal Stability while minimally affecting deviation from the normal range of INR (defined to be 2.5). We model the therapy (combination of medications) as a unit norm vector $a\in \mathcal{A}_t\subset \mathbb{R}^8$ (interpreted as the dosages of each of $8$ medications). We assume the following model, and learn the unknown parameters $\theta_0$ and $\theta_1$ from the data. \begin{align*} \text{INR} & \leftarrow \text{Subgaussian}(\theta_0^\top a, R) && a\in \mathcal{A}_t\\ \text{Stability} & \leftarrow \text{Bernoulli} (\theta_1^\top a) \end{align*} We then construct a Protected Linear Bandit instance, where all the available therapy records comprise the action space (i.e.
we interpret each therapy as an element of $\mathcal{A}$ (unchanging in time) which is large enough to approximate as $\mathbb{R}^d$, with $d=8$ and $1832$ elements (arms)), the INR test vector acts as the protected vector $\theta_1$ (i.e. $L=s=1$), and the Stability test vector acts as the reward vector $\theta_0$. We set $\rho=1$, and $\delta = 0.001$ in Algorithm~\ref{algorithm:protectedLUCB} and simulate the system for $5$ parallel runs each with $T=1000$ time steps. \begin{remark}[Solving the optimization problem in Line \ref{line:optimization} of Algorithm \ref{algorithm:protectedLUCB}] This is a maximization of a function that is not concave. In Appendix \ref{appendix:optimizer} we describe a simple way to solve this optimization explicitly for a fixed arm (that is, how to get the optimal $\overline{\theta}_0$ and $\overline{\theta}_i$ for a fixed $a_t$) if $\mathcal{A}_t=\mathcal{B}_2^d$. Even though $\mathcal{A}_t\ne \mathcal{B}_2^d$ for the above experiments, we use this optimizer as a heuristic for all of the algorithms above. \end{remark} \pagebreak \bibliographystyle{plain}
\section{Introduction} Mobile phones are ubiquitous today. Billions of images are captured every day for personal and social use. This has driven the need for image understanding tasks such as object detection and optical character recognition (OCR). OCR is one of the most renowned and widely discussed Computer Vision tasks, used to convert the text in images to electronic form in order to analyze digitized data. The recognition of text has a wide range of applications such as image retrieval, scene understanding, keyword-based image hashing, document analysis, and machine translation. Though OCR pipelines work well to retrieve text from scanned documents, these traditional methods fail to work on images occurring in natural scenes. We focus on analyzing and extracting text from real-world scene images, commonly known as Scene Text Recognition (STR). \begin{figure}[!b] \centerline{\includegraphics[scale=0.35]{vertOtherImg.png}} \caption{1 and 2 are orthogonally rotated images, whereas 3 is a type of vertical image which is not handled by conventional OCR.} \label{vertVsRot} \end{figure} STR poses a great challenge due to the large variance of text patterns and fonts, complex foreground-background variations, imperfect imaging conditions, and highly complicated backgrounds. This makes it a much more complex task than conventional OCR. With the advent of deep learning, deep neural networks have been used to solve the task of scene text recognition. These networks are computationally expensive, aiming to achieve the best possible recognition results on popular benchmarks. Some of them also require the use of lexicon dictionaries or language-model-based corrections to improve the prediction accuracy. This further increases the memory consumption of the network. But due to the nature of downstream applications, it becomes necessary to process images in real-time, using the limited computation power of smartphones.
We aim to build an efficient and compact general-purpose text recognition system, supporting diverse symbols and characters and, consequently, multiple languages. Other than dealing with perturbations such as blurriness, contrast mismatch, altered brightness, and so on, a general-purpose OCR system must also be able to deal with variations in orientation. \cite{shi2016robust, shi2018aster} have proposed transformation modules to normalize the text image into a straight line. But vertical text, in which horizontal characters are stacked vertically, is not handled by current OCR systems. This orientation is different from orthogonally rotated top-to-bottom or bottom-to-top text, which can be handled by clockwise or anti-clockwise rotation of the text. From here on, for ease of understanding, we will denote the vertically stacked horizontal text as Vertical Text and the orthogonally rotated vertical text as simply Rotated Text. The distinction between these two types of images is shown in Fig. \ref{vertVsRot}. Vertical texts are predominantly found in East-Asian scripts including Chinese, Korean, Vietnamese and Japanese. Some methods recognize vertical text by detecting character boundaries and recognizing each character in the word. But due to improper character boundaries in scene text scenarios, high latency, and limited word-level context in the case of character-level recognition, this approach is neither very efficient nor accurate. In this paper, we propose STRIDE: Scene Text Recognition In-Device, a lightweight CNN-LSTM based network, which performs real-time text recognition of multi-oriented, multi-scale scene text images on-device. We develop 4 models to support 4 different scripts and cumulatively support 34 languages. Each network has around 0.88M parameters, which is at least 10x smaller than existing models with comparable precision. These optimizations are covered in more detail in further sections.
The main contributions of this paper are as follows: \begin{itemize} \item[1] Simultaneous recognition of horizontal and vertical text \item[2] Addition of convolution attention blocks and analysis of their impact on OCR systems \item[3] Development of a device-ready text recognition solution, with state-of-the-art inference speed and comparable accuracy\end{itemize} The rest of the paper is organized in the following way. Section \ref{sec:related} discusses related work. We elucidate the working of our pipeline in Section \ref{sec:network}. Section \ref{sec:experiments} concentrates on the experiments we conducted and the corresponding results we achieved. The final section \ref{sec:future} considers the future improvements which can be incorporated. \section{Related Works} \label{sec:related} Though the OCR problem has received attention for many decades, the main focus was document images \cite{nagy2000twenty}. Although OCR on document images is now well-developed, these methods fail utterly when applied to natural scene images due to the large number of variations in scene images. In the last decade, the advent of deep learning has led to substantial advancements in STR. Earlier methods were segmentation-based \cite{chen2020text}; they aimed to locate the characters, use a character classifier to recognize them, and then group the characters into text lines. PhotoOCR \cite{bissacco2013photoocr} was one such segmentation-based method that used a deep neural network trained on extracted histogram of oriented gradients (HOG) features for character classification. The performance of segmentation-based methods is constrained by their dependence on accurate detection of individual characters, which is a very challenging problem. Also, most of these methods \cite{wang2012end, liu2016scene} rely on post-OCR correction, such as lexicon sets or language models, to capture context beyond a character.
This leads to an increase in their time and memory consumption. Segmentation-free methods focus on mapping the entire text image to a target string. Jaderberg et al.~\cite{jaderberg2014synthetic} treated the word recognition problem as a multi-class classification problem, where each class represents a different word. But such an approach is not scalable. Recent methods have cast the problem of scene text recognition as image-based sequence recognition. CRNN \cite{shi2016end} was the first combined application of CNN and RNN for text recognition. It consisted of the fully-convolutional part of VGG \cite{simonyan2014very}, followed by 2 Bi-LSTM layers. For lexicon-free prediction, the network was trained using connectionist temporal classification (CTC) \cite{graves2006connectionist} to find the label sequence with the highest probability. Multiple variants have hence been proposed to improve the performance of CRNN. Image processing techniques such as background removal, super-resolution \cite{jain2020device}, and image rectification \cite{shi2016robust} have been employed to reduce the load on the downstream stages for learning complex features. Better convolutional feature extractors such as ResNet \cite{he2016deep} and RCNN \cite{lee2016recursive} have been used to better extract the text features from complex images. But these feature extractors are very deep and have high latency. \begin{figure*}[htbp!] \centerline{\includegraphics[scale=0.25]{architecture3.png}} \caption{STRIDE Network Pipeline: The word boxes detected from the text localization network are passed to the feature extractor, after applying selective rotation. The orientation of each word is classified separately and passed to the sequence model with the temporal word features extracted. } \label{architecture} \end{figure*} Along with deep feature extractors, attention mechanisms \cite{shi2016robust, cheng2017focusing, liu2016star} are often combined with RNNs for character-sequence decoding.
This can help in extracting text from irregular scene text crops and can achieve better performance on regular word crops too. The authors of \cite{baek2019wrong} conduct a fair comparison to identify the impact of variations in the different modules. Although the newer variants have achieved better performance on various benchmarks, this has come at a cost in memory and computation. Convolution-based attention modules such as the convolutional block attention module \cite{woo2018cbam} and global squeeze-excite blocks \cite{hu2018squeeze} have been shown to increase the performance of convolutional networks on benchmarks such as Imagenet for detection and segmentation tasks. The effect of these attention modules has not been explored in the domain of text recognition until now. The Global Squeeze-Excite blocks provide channel attention to the network and can be easily integrated with any recognition feature extractor. They recalibrate channel-wise feature responses by explicitly modeling inter-dependencies between channels. Global Squeeze-Excite blocks have been shown to increase the search space with very little effect on time and memory \cite{tan2019mnasnet}. Alternatively, the Convolutional Block Attention Module (CBAM) proposes a modification to the SE block, which helps to provide spatial attention in addition to channel attention. CBAM contains two sequential sub-modules called the Channel Attention Module (CAM) and the Spatial Attention Module (SAM), which are applied in that particular order. Most of the current STR models assume the input image is horizontal. Even the irregular scene text recognizers fail on vertical text images due to the structure of the network \cite{li2019show}. Some models use vertical information \cite{ijcai2017-458, cheng2018aon, choi2018simultaneous}. But these models use attention mechanisms, which makes them infeasible to be used on-device due to their computation requirements.
The demand for on-device OCR has led to prominent firms offering OCR, such as Google's ML Kit\footnote{\url{developers.google.com/ml-kit/vision/text-recognition} } and Apple's text recognition in the Vision framework \footnote{\url{developer.apple.com/documentation/vision/recognizing_text_in_images}}. But these products support a minimal number of languages on-device. PP-OCR \cite{du2020pp} is an open-source on-device OCR supporting multiple scripts. But we outperform this network in terms of both accuracy and inference speed. \section{Network Architecture} \label{sec:network} In this section, we describe our network, which takes an image and the bounding box of a word as input and converts it into a sequence of labels. The network is based on the Convolutional Recurrent Neural Network (CRNN) architecture \cite{shi2016end}. We incorporate certain modifications to the network to perform simultaneous recognition of horizontal and vertical text. We also focus on building a compact network, suitable for on-device inferencing without compromising on accuracy. The network architecture is shown in Fig. \ref{architecture}. The proposed model consists of four components: selective rotation, feature extraction, sequence modeling, and prediction. Each component of the network is carefully designed and optimized to minimize the overall network size. \begin{figure*} \centerline{\includegraphics[scale=0.22]{feature_extractor_R.png}} \caption{Feature Extractor and Orientation Classifier Module. CBAM is used to get channel and character region attention information. The detected orientation is concatenated to the extracted features and fed to the LSTM. Input Image is of size: 16*width*3, where height = 16 and 3 denotes the RGB channels. } \label{feature_extractor} \end{figure*} \subsection{Selective Rotation} Text in scene images comes in diverse shapes and varieties. The natural text present on signboards, restaurant boards, banners, etc. is generally skewed or rotated.
The recognition model is made to handle perspective information, text skewness, and rotation up to a certain level. But as the skew increases, it starts impacting the recognition results. Rectifying and pre-processing each word crop adds to the computation cost of the network. To handle this trade-off between accuracy and inference time, we extract rotation angle and skewness information from the localization module \cite{telcos2021} and pass only the highly rotated words through a computer vision-based perspective transformation. The model is made invariant to small amounts of distortion in the text. \subsection{Feature Extraction} The feature extractor block consists of CNN and max-pooling layers which extract attributes related to the text and learn feature representations for both horizontal and vertical text. It has 4 convolution layers with a kernel size of 3x3. The first two convolutions operate on the full word crop width and are followed by an average pool layer across the width dimension. So, the features fed to the following convolution layers and the sequence model are represented by half the original word width. Low-level features play a huge role in distinguishing subtle changes in characters due to diacritics in Latin script and in correctly identifying minute differences in Chinese and Korean compound characters. So, to preserve the important low-level details, we pass a residual skip-connection from the second convolution layer and concatenate it with the output of the fourth layer. All crops are scaled to a fixed height of 16 while preserving the aspect ratio. Thus, all input word-crops are of size $16*width*3$, where the $width$ of the input is determined from the aspect ratio of the original word crop. Each convolution layer is followed by a max-pool layer across the height dimension. So, the final output from the feature extractor is of size $1*(width/2)*164$, with 164 being the number of channels fed to the sequence modeler.
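As a sanity check on the dimensions above, a minimal Python sketch of the shape bookkeeping (the pooling placement follows the description; the channel count of 164 is taken as given, and the helper name is ours):

```python
def feature_shape(width, height=16, channels=164):
    """Output shape of the feature extractor for a 16 x width x 3 word crop.

    The four 3x3 convolutions preserve spatial size; each is followed by a
    max-pool across the height dimension (16 -> 8 -> 4 -> 2 -> 1), and a
    single average pool after the second convolution halves the width.
    """
    h = height
    for _ in range(4):       # one height pool per convolution layer
        h //= 2
    w = width // 2           # width halved once by the average pool
    return (h, w, channels)  # channels fed to the sequence model
```

For example, a crop of width 64 yields a $1*32*164$ feature map, so the sequence model sees 32 time steps.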
An attention mechanism provides localized information and generally boosts the recognition accuracy when used in the encoder or decoder blocks of an OCR network. But this comes with additional time and network complexity. \cite{baek2019wrong} shows the increase in inference time when an attention module is used in conjunction with the base network. This additional overhead of conventional attention methods and transformer modules makes them unsuitable to be directly deployed for on-device inference. To overcome this drawback and take advantage of the improved feature discriminability, we propose the use of convolution attention blocks in the encoder networks for recognition. These provide attention information to the sequence modeling block with very minimal computational overhead, and can be integrated with any recognition network. To the best of our knowledge, the effect of these attention modules has not been explored in the domain of text recognition. \begin{figure}[b!] \centerline{\includegraphics[scale=0.65]{CBAM_feature.png}} \caption{Feature maps extracted after the third convolution layer. CBAM blocks are able to clearly separate the characters from their background. } \label{CBAM feature} \end{figure} We experiment with Global Squeeze-Excite (GSE) Blocks and Convolution Block Attention Modules, both of which provide channel or region-specific localized information as described in Section 2. The GSE blocks squeeze global spatial information into a channel descriptor, and this channel attention information is mapped onto the input feature. The Convolution Block Attention Module (CBAM), in contrast, provides both spatial and channel attention, through its two sequential sub-modules called the Channel Attention Module (CAM) and the Spatial Attention Module (SAM). Fig. \ref{feature_extractor} shows the components of the CBAM block.
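To illustrate the channel recalibration performed by the GSE block, a minimal numpy sketch (the weight shapes and reduction ratio are illustrative, not the on-device implementation):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Apply a Global Squeeze-Excite gate to a feature map x of shape (H, W, C).

    w1: (C // r, C) bottleneck weights and w2: (C, C // r) expansion weights,
    where r is a (hypothetical) reduction ratio.
    """
    z = x.mean(axis=(0, 1))              # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)          # excitation MLP with ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # sigmoid gate in (0, 1) per channel
    return x * s                         # recalibrate each channel of x
```

Each channel is scaled by a learned factor in $(0, 1)$, so the block re-weights channel magnitudes without changing the spatial layout of the features.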
In a semi-supervised way, SAM tries to learn the region of interest for each character in the word, which helps in providing a better background-foreground separation. The CAM module, on the other hand, focuses on learning the discriminative features between characters, which leads to excitation of the desired channels. Fig. \ref{CBAM feature} shows the feature maps from the third convolution layer of the network with and without the CBAM block. The feature maps of the network using CBAM provide better attention and character separation, which, when fed to the sequence modeling block, leads to better recognition results. An ablation study of the SE and CBAM blocks for the text recognition model is presented in Section \ref{sec:experiments}. Empirically, CBAM provides the best overall result, with minimal computational overhead. \subsection{Orientation Classifier} Due to the frequent use of vertical texts in East-Asian scripts including Chinese, Korean, Vietnamese, and Japanese, there is a need to recognize vertical text along with the horizontal text. To solve this problem, we aim to recognize horizontal and vertical text simultaneously with a single model, without any additional overhead. Predicting orientation at the character level is difficult due to the orthogonal nature of character pairs like Z and N or H and I. Hence, the orientation of the word requires a global word-level context, rather than a character-level context. To get this word-level information, we apply Global Average Pooling (GAP) across the width dimension. This information is then fed to a fully connected layer with sigmoid activation to predict a single orientation per word, instead of the per-pixel value of orientation across the width. Fig. \ref{feature_extractor} shows the working of our orientation classifier.
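A minimal numpy sketch of the classifier just described (weight shapes are illustrative; the loss is the standard binary cross-entropy used to train it):

```python
import numpy as np

def predict_orientation(features, w, b):
    """Predict one horizontal/vertical probability per word.

    features: (width, C) temporal features from the first CBAM block;
    w: (C,) weights and b: bias of the fully connected layer (illustrative).
    """
    pooled = features.mean(axis=0)       # global average pool across width
    logit = float(w @ pooled + b)        # dense layer -> single logit
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> orientation probability

def orientation_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy over a batch of word-level orientation labels."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))
```

Because the width dimension is pooled away before the dense layer, the prediction is a single probability per word regardless of the word's width.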
The prediction of the orientation classifier can be represented as: \begin{equation} y_{i}^{p} = \sigma\left(\mathrm{Dense}\left(\mathrm{GAP}_{axis=2}( CB_{1})\right)\right) \end{equation} where $CB_{1}$ is the output from the first CBAM block and $y_{i}^{p}$ is the prediction of the orientation classifier. During training, we jointly optimize the recognition network and the orientation classifier. The orientation classifier is trained using binary cross-entropy loss, defined as: \begin{equation} \label{eq:orientation} L_{o} = -\frac{1}{N} \sum ^{N}_{i=1} \left[ y_{i} \log( y_{i}^{p}) + ( 1 - y_{i}) \log( 1 - y_{i}^{p}) \right] \end{equation} where $y_{i}$ is the actual orientation, $N$ is the batch size and $L_{o}$ defines the orientation loss. \subsection{Sequence modeling} Sequence modeling captures contextual information within a sequence of characters, for the next stage to predict each character. Our sequence modeling stage consists of 1 Bi-LSTM layer. LSTM has shown strong capability for learning meaningful structure from an ordered sequence. Another important property of the LSTM is that the rate of change of the internal state can be finely modulated by the recurrent weights, which contributes to its robustness against localized distortions of the input data. The LSTM receives input from both the orientation classifier and the feature extractor. The orientation information is a piece of non-temporal information that is fed to the LSTM with the temporal word features extracted from the convolution layers, the width of the image being the time dimension. There are a few ways to feed non-temporal data to the bi-directional LSTM. It can be fed as the first and last timestamps of the temporal sequence. The disadvantage of this approach is that if the input sequence is long enough, the orientation information may not be properly propagated to all the timestamps, and the LSTM might forget the conditioning data.
The approach we follow is to append the orientation information to each timestamp, effectively adding the feature across the width dimension of the convolution features. We are able to retain the accuracy of horizontal text on the benchmark dataset, in addition to supporting vertical word recognition, as shown in Table \ref{tab:Vertical Text Classification}. For faster inference, we use an LSTM with a recurrent projection layer \cite{sak2014long} to decrease the computational complexity. The recurrent projection layer projects hidden states at every time step to a lower dimension. The projected hidden states in both directions are then concatenated and fed to the prediction stage. \subsection{Prediction} The prediction stage outputs a sequence of characters from the identified features of the word crop. The per-frame predictions made from the LSTM are fed to a fully connected layer to obtain a per-frame probability distribution over the labels. Finally, Connectionist Temporal Classification (CTC) \cite{graves2006connectionist} is used to transform frame-wise classification scores to a label sequence. If $x_{t}$ is the per-frame probability distribution over the set $L'$, where $L'$ represents the set of all labels in addition to the blank symbol, and the ground truth label sequence is represented by $y^{*}$, then CTC defines the objective function to be minimized as follows: \begin{equation} L_{c} = -\sum \log p\left( y^{*} \mid x\right) \end{equation} Combining this with the orientation loss defined in Eq. \ref{eq:orientation}, the total loss is: \begin{equation} L = L_{c} + \lambda L_{o} \end{equation} where $\lambda$ is a hyper-parameter controlling the trade-off between the two losses. $\lambda$ is set to 1 in our experiments. For fast inference, we use the greedy decoder, which assumes the most probable path to be the most probable labeling to estimate the character sequence. \begin{figure}[t!]
\centerline{\includegraphics[scale=1]{2.jpg}} \caption{Synthetic dataset created by rendering fonts on varied backgrounds and using data augmentation techniques.} \label{fig:synth_samples} \end{figure} \section{Experiments} \label{sec:experiments} \subsection{Datasets} \label{subsec:datasets} The diversity of training datasets plays an important role in creating a robust model with high performance. But the size of the existing real-world datasets is too small to train a highly accurate scene text recognizer. The real-world vertical images are also limited in number, and a few were collected from the datasets mentioned below. Labeling scene text images is costly. Thus, we rely on synthetic datasets for training our network.\\ \textbf{HVSynth}: We create our own synthetic dataset for horizontal and vertical text, by generating text images with basic data augmentation techniques such as rotation, perspective distortion, blurring, etc. We also apply text blending, to add background noise to the text. We create a total of 5L samples for horizontal text and 1L for vertical text. Fig. \ref{fig:synth_samples} and Fig. \ref{vertical_Examples} show some samples of synthetic horizontal and vertical text. \begin{figure}[t!] \centerline{\includegraphics[scale=0.65]{vertical_Examples.png}} \caption{Vertical Natural and Synthetic Text Samples } \label{vertical_Examples} \end{figure} Along with our synthetic datasets, we use standard synthetic datasets such as: \\ \textbf{MJSynth} (MJ) \cite{jaderberg2014synthetic} is a synthetic dataset designed for Scene Text Recognition. It contains 8.9M word box images generated using 90k alpha-numeric words. We use this dataset to train our Latin model. \\ \textbf{SynthText} (ST) \cite{gupta2016synthetic} is another synthetically generated dataset. Even though SynthText was designed for detection, it has also been used for Scene Text Recognition by cropping off word boxes.
SynthText has around 6M training samples once we render the text samples as word boxes. We also use the same method to generate word crops for all the other scripts. We use the following real-world datasets for training and evaluation:\\ \textbf{ICDAR 2013 (IC13)} \cite{karatzas2013icdar} was created for the ICDAR 2013 Robust Reading competition for reading camera-captured scene texts. The dataset has 848 images for training and 1095 images for testing. \\ \textbf{ICDAR 2015 Incidental Text (IC15)} \cite{karatzas2015icdar} consists of a lot of irregular text and highly blurred images. It has 4468 training images and 2077 testing images. \\ \textbf{IIIT5k-Words (IIIT5k)} \cite{mishra2012scene} consists of images crawled from Google search results. The dataset consists of 5000 images that are split into a training set of 2000 images and an evaluation set of 3000 images. \\ \textbf{Street View Text (SVT)} \cite{wang2011end} consists of 247 images for training and 647 images for evaluation. The images are outdoor street images collected using Google Street View. Some images are severely corrupted by noise, blur, and low resolution.\\ \textbf{ICDAR 2019 multi-lingual scene text (MLT) } \cite{nayef2017icdar2017} consists of around 90k word crops belonging to 7 scripts. Though the dataset was created for script identification, the word crops have been annotated and can be used for training STR. \subsection{Training details} We implement the network using the TensorFlow 2.3 \cite{abadi2016tensorflow} framework. All experiments are carried out on a GeForce GTX 1080 Ti GPU with 16 GB RAM. The training batch size is 64. We use the Adam optimizer \cite{kingma2014adam} to train the models with an initial learning rate of $10^{-3}$. The learning rate was halved when the validation loss did not decrease for more than 2 epochs, and the training was stopped once the learning rate reached $10^{-5}$. Training CTC is hard and the models take a lot of time to converge \cite{borisyuk2018rosetta}.
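The learning-rate schedule described above can be sketched as a small helper (a hypothetical function, not from our training code; the halving rule and the floor follow the description):

```python
def next_lr(lr, epochs_since_improvement, patience=2, floor=1e-5):
    """One step of the schedule: halve the learning rate once validation
    loss has not improved for more than `patience` epochs, and signal
    that training should stop once the rate reaches the floor."""
    if epochs_since_improvement > patience:
        lr = lr / 2.0
    return lr, lr <= floor
```

Starting from $10^{-3}$, the rate can be halved at most a handful of times before the $10^{-5}$ floor ends training.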
For improving stability during training, we explore curriculum learning strategies. Curriculum learning was proposed to address the challenges of training deep neural networks under non-convex training criteria. For a few epochs, we train our network on a subset of training images consisting of shorter words and clear text. After every epoch, we increase the maximum length of the words along with the degree of perturbations. Adopting curriculum training strategies significantly sped up the training of the network. Both synthetic and natural scene text contain a few wrongly annotated images and highly complex background images. Also, many images of the MJSynth, SynthText, and IIIT5k datasets have case-insensitive labeling, which makes it difficult for the model to learn case information. To tackle these data-related issues, we weakly supervise our model by monitoring per-image loss. We reduce the training data instances that have a very high loss on a trained model, which could possibly be due to wrong annotations or hard examples. We also correct wrong case annotations in the dataset by taking input from the model in the case of incorrect case predictions. The dropped images are fed back to the data pipeline after a few epochs, to make up for the dropped hard examples. \subsection{Results on benchmark datasets} Though there are a few benchmark recognition datasets for Latin, there are no such datasets for other scripts. There are end-to-end benchmarks for Chinese, but none for recognition. The recognition word accuracies on the public real-world datasets mentioned in Section \ref{subsec:datasets}, obtained by some state-of-the-art techniques and our proposed model STRIDE, are given in Table \ref{tab:Experimental Results}. The results of the other models are taken from \cite{baek2019wrong}, which compares different STR models. \begin{table}[t!]
\centering \caption{Experimental Results} \label{tab:Experimental Results} \begin{tabular}{| M{1.35cm} | M{1cm} | M{0.9cm} | M{0.9cm} | M{0.9cm} | M{0.9cm} |} \hline \bfseries{Model} & \bfseries{Params} & \bfseries{IIIT} & \bfseries{SVT} & \bfseries{IC13} & \bfseries{IC15} \\ \hline CRNN & 8.3M & 82.9 & 81.6 & 89.2 & 64.2 \\ RARE & 10.8M & 86.2 & 85.8 & 91.1 & 68.9 \\ STAR-Net & 48.7M & 87.0 & 86.9 & 91.5 & 70.3 \\ Rosetta & 44.3M & 84.3 & 84.7 & 89.0 & 66.0 \\ Our Model & 0.88M & 82.5 & 81.3 & 88.4 & 68.7 \\ \hline \end{tabular} \end{table} Though our network is a minuscule version of CRNN, having around a tenth of its parameters, its recognition accuracies are on par with CRNN. We perform better on IC15, since it consists of a few vertical and orthogonally rotated images which our model can handle. By using computationally intensive modules such as attention for decoding (in RARE) and ResNet for feature extraction (in STAR-Net), the performance gap widens. But due to the on-device constraints, we cannot integrate these modules in our model. \begin{table}[b!] \centering \caption{Vertical Text Classification and Recognition Result on Latin synthetic dataset} \label{tab:Vertical Text Classification} \begin{tabular}{| M{3.0cm} | M{1.25cm} | M{1.25cm} | M{1.25cm} |} \hline \bfseries{Network} & \bfseries{Horizontal Word Accuracy} & \bfseries{Vertical Word Accuracy} & \bfseries{Orientation Classifier Accuracy} \\ \hline Base & 94.77 & - & - \\ Base + Orientation Classifier & 94.25 & 96.12 & 97.34 \\ \hline \end{tabular} \end{table} \subsection{Result on HVSynth dataset} The performance of our model on the HVSynth dataset is shown in Table \ref{tab:Vertical Text Classification}. Our model predicts well on both horizontal and vertical word crops, with a negligible loss in horizontal word crop performance compared to our base model.
The orientation classifier is able to learn the distinction between horizontal and vertical text and predicts the orientation with high accuracy. \begin{table}[b!] \centering \caption{Ablation Study of GSE and CBAM modules on IC13 data}\label{tab:attention} \begin{tabular}{| M{1.85cm} | M{1.2cm} | M{1.2cm} | M{1.25cm} | M{1.08cm} |} \hline \bfseries{Module} & \bfseries{Word Accuracy} & \bfseries{Char Accuracy} & \bfseries{Parameters} & \bfseries{Time}\\ \hline Base Network & 86.5 & 92.2 & 850k & 2.2 ms \\ 1 GSE Block & 87.4 & 92.8 & 859k & 2.28 ms\\ 2 GSE Blocks & 87.7 & 93.1 & 868k & 2.35 ms\\ 2 CBAM Blocks & 88.4 & 93.4 & 886k & 2.44 ms \\ \hline \end{tabular} \end{table} \subsection{Ablation study on attention modules} Table \ref{tab:attention} shows the impact of attention modules on the recognition accuracies of our Latin model on IC13. As we can see, convolutional attention blocks can boost the performance of the network with little effect on latency and size. Thus, using CBAM, which provides both spatial and channel attention, helps in better extracting the features of the characters. \begin{figure}[t!] \centerline{\includegraphics[scale=0.65]{pass_cases.jpg}} \caption{Complex Pass Cases} \label{Pass Cases} \end{figure} \subsection{Results Analysis} Our model performs substantially well on complex images like those shown in Fig. \ref{Pass Cases}. We also investigate the failure cases of our model, some of which are shown in Fig. \ref{Fail_Cases}. A few common reasons for failure are elucidated below: \begin{figure}[b!] \centerline{\includegraphics[scale=0.45]{fail_cases1.jpg}} \caption{Fail Cases} \label{Fail_Cases} \end{figure} \begin{itemize} \item \textbf{Text Merging with Background:} The model fails to handle cases where the text merges with the background.
Better feature extractors may improve performance in this domain. \item \textbf{Shadow and Blur:} Selective failure cases are observed due to reflections in low-resolution textured images. \item \textbf{Smudged Text:} Existing models do not explicitly handle smudged text in low-resolution cases; super-resolution modules or image pyramids might improve performance. \item \textbf{Occluded Text:} Current research methods do not substantially exploit contextual information to overcome occlusion. Future research may utilize superior language models to make maximal use of context. \end{itemize} \subsection{On-Device Statistics} The model details for various scripts are shown in Table \ref{tab:ondevice}. The number of parameters of the model varies across scripts due to the difference in the number of characters supported, which affects the number of parameters in the last fully connected layer. For speed comparisons, we measure the time taken to process a word crop of size $16\times64$ on a Samsung Galaxy S20, which has an Exynos 990 chipset and 8 GB of RAM. \begin{table} \centering \caption{On-Device Statistics} \label{tab:ondevice} \begin{tabular}{| M{1.0cm} | M{2.0cm} | M{2.0cm} | M{2.0cm} |} \hline \bfseries{Script} & \bfseries{No. of characters supported} & \bfseries{No. of parameters (in mil)} & \bfseries{Time taken (in ms)}\\ \hline Latin & 236 & 0.88 & 2.44 \\ Korean & 1330 & 0.99 & 2.81 \\ Japanese & 2647 & 1.11 & 3.21 \\ Chinese & 5949 & 1.43 & 3.93\\ \hline \end{tabular} \end{table} \section{Future Work} \label{sec:future} Though the model performs well on regular datasets, it does not perform well on irregular datasets containing curved and distorted text, due to model limitations. \cite{baek2019wrong} has clearly shown that such images can be deciphered by using computationally expensive modules such as 2-D attention \cite{li2019show}.
Other solutions to read irregular word crops involve developing efficient pre-processing modules that can transform and normalize the image \cite{shi2016robust}. We plan to explore both avenues to develop a robust on-device OCR that can handle both types of word crops. Another interesting direction for future work is to target challenging scripts like Arabic, which is written right to left. The model also faces difficulty in extracting text written in calligraphic fonts and handwritten text. The model can be fine-tuned with such datasets, and it remains to be seen how the same architecture will perform in such cases. \section{Conclusion} OCR is one of the most important and popular areas of research, especially for scene text, given the popularity of high-resolution cameras in recent times and the billions of images captured daily. In this paper, we have presented STRIDE, our novel, lightweight, lexicon-free, on-device OCR solution with real-time results. We have demonstrated that the architecture of our proposed system is universal, and scales to document and scene text, and to other scripts and languages as well. The system can be used to recognize both horizontally and vertically aligned text. We also introduce the use of convolutional attention blocks in STR networks and show their impact on accuracy through ablation studies. Additionally, we have compared our methods to previous works. We have shown how we improve upon previous approaches and handle the complex scenarios and challenges of scene text, while at the same time optimizing the network for real-time performance on-device. The number of LSTM layers was chosen to achieve a trade-off between model performance and model size; stacking more LSTM layers did not give any significant boost in model accuracy.
We are able to achieve comparable results with the current SOTA recognition models with only 0.88M parameters and an on-device inference time of 2.44 ms. We hope this will enable the use of OCR on camera images and scene text as a starting point for various downstream vision and NLP tasks, entirely on-device, giving real-time results while protecting the user's privacy. \bibliographystyle{IEEEtran}
Q: Show that T is diagonalizable if nullity$(T) + m = n.$ $V$ is a vector space of dimension $n > 0$, $T: V \to V$ is a linear operator, and $\{0, \lambda_{1}, \dots, \lambda_{m}\}$ is the set of distinct eigenvalues of $T$. Show that $T$ is diagonalizable if nullity$(T) + m = n$. My try: I think we can directly claim that $T$ is diagonalizable since its characteristic polynomial splits into distinct linear factors. Thank you A: Remember that a linear map (or a matrix...) is diagonalizable iff for each and every eigenvalue, its algebraic and geometric multiplicities are equal, and this is the same as saying that the sum of all the geometric multiplicities of all the eigenvalues equals the space's dimension, i.e. $\;n\;$ For the eigenvalue zero: its geometric multiplicity equals the dimension of the corresponding eigenspace, which of course is $\;\ker T\;$ : $\;\dim\ker T=\text{nullity}\,T\;$, but then we get the above mentioned condition for diagonalizability as we're given $$\overbrace{\text{nullity}\,T+m}^{\text{sum of geom. mult.}}=n$$ A: Let $u_1, \dots,u_k$ be a basis of $\ker T$. Let $v_1, \dots,v_m$ be eigenvectors for $\lambda_1, \dots,\lambda_m$ respectively. Then $u_1, \dots,u_k, v_1, \dots,v_m$ form a basis for $V$ because eigenvectors of distinct eigenvalues are linearly independent and $k+m=n$. With respect to this basis, $T$ has a diagonal matrix: $\operatorname{diag}(0,\dots,0,\lambda_1, \dots,\lambda_m)$. A: Let $\delta_0,\delta_1,\dots,\delta_m$ be the geometric multiplicities for the eigenvalues $0,\lambda_1,\dots,\lambda_m$. Then $\delta_0=\operatorname{nullity}(T)$ by definition. Let $\mu_0,\mu_1,\dots,\mu_m$ be the algebraic multiplicities. You know that $1\le\delta_i\le\mu_i$, for $i=0,1,\dots,m$. In particular $$ \delta_0+\delta_1+\dots+\delta_m\ge\delta_0+m $$ and so $$ n=\mu_0+\mu_1+\dots+\mu_m\ge \delta_0+\delta_1+\dots+\delta_m\ge\delta_0+m=n $$ the rightmost relation holding by assumption. Can you show this forces $\delta_i=\mu_i$ for $i=0,1,\dots,m$? 
By the way, what can you say about $\mu_i$ for $i=1,\dots,m$? (This is however not relevant for diagonalizability.)
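To spell out the final step of the last answer (a sketch in that answer's notation): the chain of inequalities is pinned at $n$ on both ends, so it collapses to equalities, and the termwise bounds $\delta_i\le\mu_i$ then force termwise equality:

```latex
\sum_{i=0}^{m}\mu_i \;=\; n \;=\; \sum_{i=0}^{m}\delta_i
\quad\Longrightarrow\quad
\sum_{i=0}^{m}(\mu_i-\delta_i)=0
\quad\text{with each}\quad \mu_i-\delta_i\ge 0
\quad\Longrightarrow\quad
\mu_i=\delta_i \ \text{for } i=0,1,\dots,m.
```

Hence every eigenvalue has equal algebraic and geometric multiplicity, which is exactly the diagonalizability criterion quoted in the first answer.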
Inbreeding Inbreeding is breeding between close relatives, whether plant or animal. If practiced repeatedly, it often leads to a reduction in genetic diversity. A concomitant increase in homozygosity of recessive traits can, over time, result in inbreeding depression. This may result in inbred individuals exhibiting reduced health and fitness and lower levels of fertility. Livestock breeders often practice inbreeding to "fix" desirable characteristics within a population. However, they must then cull unfit offspring, especially when trying to establish the new and desirable trait in their stock. In plant breeding, inbred lines are used as stocks for the creation of hybrid lines to make use of the heterosis effect. Inbreeding in plants also occurs naturally in the form of self-pollination. Results of inbreeding Inbreeding may result in a far higher phenotypic expression of deleterious recessive genes within a population than would normally be expected.[1] As a result, first-generation inbred individuals are more likely to show physical and health defects, including:
reduced fertility, both in litter size and sperm viability
increased genetic disorders
fluctuating facial asymmetry
lower birth rate
higher infant mortality
slower growth rate
smaller adult size
loss of immune system function
Natural selection works to remove individuals who acquire the above types of traits from the gene pool. Therefore, many more individuals in the first generation of inbreeding will never live to reproduce. Over time, with isolation such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental stresses, the deleterious inherited traits are culled.
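The rise in homozygosity that drives inbreeding depression can be quantified with Wright's inbreeding coefficient F, the probability that an individual carries two alleles identical by descent. A minimal sketch of the standard path-counting formula, assuming the common ancestors are themselves non-inbred (the function name is ours):

```python
def inbreeding_coefficient(paths):
    """Wright's path-counting formula F = sum((1/2)**(n1 + n2 + 1)).

    `paths` lists one (n1, n2) pair per common ancestor: the number of
    generations from the sire and from the dam back to that ancestor.
    Common ancestors are assumed to be non-inbred themselves.
    """
    return sum(0.5 ** (n1 + n2 + 1) for n1, n2 in paths)

# Full siblings share two parents, each one generation away: F = 1/4.
full_siblings = inbreeding_coefficient([(1, 1), (1, 1)])
# First cousins share two grandparents, two generations away: F = 1/16.
first_cousins = inbreeding_coefficient([(2, 2), (2, 2)])
```

Repeated close matings compound this coefficient across generations, which is why breeders who practice inbreeding must cull aggressively, as noted above.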
The cheetah was once reduced by disease, habitat restriction, overhunting of prey, competition from other predators (primarily lions), competition from human land use, etc. to a very small number of individuals.[2][3] All cheetahs now come from this very small gene pool. Should a virus appear that none of the cheetahs have resistance to, extinction is always a possibility. Currently, the threatening virus is feline infectious peritonitis, which has a disease rate in domestic cats of 1%-5%; in the cheetah population it ranges between 50% and 60%. The cheetah is also known, in spite of its small gene pool, for few genetic illnesses. Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work upon their population. This type of isolation may result in the formation of a race or even speciation, as the inbreeding first removes many deleterious genes and allows expression of genes that let a population adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce.[4] The reduced genetic diversity that results from inbreeding may mean a species is not able to adapt to changes in environmental conditions. Each individual will have a similar immune system, as immune systems are genetically based. Where a species becomes endangered, the population may fall below a minimum whereby the forced interbreeding between the remaining animals will result in extinction. In the South American sea lion, there was concern that recent population crashes would reduce genetic diversity. Historical analysis indicated that a population expansion from just two matrilineal lines was responsible for most individuals within the population.
Even so, the diversity within the lines allowed for great variation in the gene pool that may inoculate the South American sea lion against extinction.[5] Natural breeding includes inbreeding by necessity, and most animals only migrate when necessary. In many cases, the closest living mate is a mother, sister, grandmother, father, grandfather... In all cases, the environment presents stresses that remove from the population those individuals who cannot survive because of illness. In lions, prides are often followed by related males in bachelor groups. When the dominant male is killed or driven off by one of these bachelors, a father may be replaced with his son. There is no mechanism for preventing inbreeding or for ensuring outcrossing. In the prides, most lionesses are related to one another. If there is more than one dominant male, the alpha males are usually related to one another. Two lines are then being "line bred". Also, in some populations, such as the Crater lions, it is known that a population bottleneck has occurred. Far greater genetic heterozygosity than expected was found.[6] In fact, predators are known for low genetic variance, along with most of the top portion of the trophic levels of an ecosystem.[7] Additionally, the alpha males of two neighboring prides can potentially be from the same litter; one brother may come to acquire leadership over another's pride, and subsequently mate with his 'nieces' or cousins. However, killing another male's cubs upon the takeover allows the newly selected gene complement of the incoming alpha male to prevail over that of the previous male. There are genetic assays being scheduled for lions to determine their genetic diversity. The preliminary studies show results inconsistent with the outcrossing paradigm based on the individual environments of the studied groups.[8] There was an assumption that wild populations do not inbreed; this is not what is observed in some cases in the wild.
However, in species such as horses, animals in wild or feral conditions often drive off the young of both sexes, which is thought to be a mechanism by which the species instinctively avoids some of the genetic consequences of inbreeding.[9] Inbreeding in domestic animals Breeding in domestic animals is primarily assortative breeding (see selective breeding). Without the sorting of individuals by trait, a breed could not be established, nor could poor genetic material be removed. Inbreeding is used by breeders of domestic animals to fix desirable genetic traits within a population or to attempt to remove deleterious traits by allowing them to manifest phenotypically from the genotypes. Inbreeding is defined as the use of close relations for breeding, such as mother to son, father to daughter, or brother to sister. Breeders must suppress the breeding of, or cull, unfit individuals and individuals who demonstrate either homozygosity or heterozygosity for genetically based diseases.[10] The issue of casual breeders who inbreed irresponsibly is discussed in the following quote on cattle: Meanwhile, milk production per cow per lactation increased from 17,444 lbs to 25,013 lbs from 1978 to 1998 for the Holstein breed. Mean breeding values for milk of Holstein cows increased by 4,829 lbs during this period (http://aipl.arsusda.gov/main/data.html#gtrend). High producing cows are increasingly difficult to breed and are subject to higher health costs than cows of lower genetic merit for production (Cassell, 2001). Intensive selection for higher yield has increased relationships among animals within breed and increased the rate of casual inbreeding. Many of the traits that affect profitability in crosses of modern dairy breeds have not been studied in designed experiments. Indeed, all crossbreeding research involving North American breeds and strains is very dated (McAllister, 2001) if it exists at all.
Linebreeding, a specific form of inbreeding, is accomplished through breedings of cousins, aunt to nephew, or half brother to half sister. This was used to isolate breeds within the companion and livestock industry. For instance, an animal with a desirable colour is bred back within the lines with identified selection traits, whether it be milk production or adherence to a breed standard of appearance or behavior. Breeders must then cull unfit individuals, and in some cases the breeders will then outbreed to increase the level of genetic diversity. Again, casual breeding is problematic, as it is practiced without the requisite culling of individuals who are either maladaptive, not to breed standard, or carriers of poor genetic material that must be removed from a healthy breeding program.[12] Outcrossing is where two unrelated individuals are crossed to produce progeny. In outcrossing, unless there is verifiable genetic information, one may find that all individuals are distantly related to an ancient progenitor. If the trait carries throughout a population, all individuals can have this trait. This is called the founder effect. In the well-established breeds that are commonly bred, a large gene pool is present. For example, in 2004, over 18,000 Persian cats were registered.[13] A possibility exists for a complete outcross, if no barriers exist between the individuals to breed. However, it is not always the case, and a form of distant linebreeding occurs. Again, it is up to the assortative breeder to know what sorts of traits, both positive and negative, exist within the diversity of one breeding. This diversity of genetic expression, within even close relatives, increases the variability and diversity of viable stock.
[14] The two dog sites above also point out that in the registered dog population, the onset of large numbers of casual breeders has corresponded with an increase in the number of genetic illnesses of dogs, through a failure to understand how, why, and which traits are inherited. The dog sites indicate that the largest percentage of dog breeders in the US are casual breeders. Therefore, the investment in a papered animal, with an expected short-term profit, motivates some to ignore the practice of culling. Casual breeders in companion animals often ignore breeding restrictions within their contracts with source companion animal breeders. The casual breeders breed the very culls that a genetics-based breeder has released as a pet. The casual breeder was also cited in the quote above on cattle raising. Inbreeding is also deliberately induced in laboratory mice in order to guarantee a consistent and uniform animal model for experimental purposes. Inbreeding in humans The taboo of incest has been discussed by many social scientists. Anthropologists attest that it exists in most cultures. As inbreeding within the first generation often produces expression of recessive traits, the prohibition has been discussed as a possible functional response to the requirement of culling those born deformed, or with undesirable traits.[citation needed] Some anthropologists like Charles Davenport advocated the traditional forms of assortative breeding to form better human stock. Ancient Egypt Some Egyptian Pharaohs married their sisters; in such cases we find a special combination of endogamy and polygamy. Normally the son of the old ruler and the ruler's oldest (half-)sister became the new ruler. Cleopatra VII and Ptolemy XIII, married and named co-rulers of ancient Egypt following their father's death, were brother and sister. Not only this, but all rulers of the Ptolemaic dynasty from Ptolemy II on engaged in inbreeding among brothers and sisters, so as to keep the Ptolemaic blood "pure".
Royalty and nobility The royal and noble families of Europe have close blood ties which are strengthened by royal intermarriage; the most discussed instances of inbreeding relate to European monarchies. Examples abound in every royal family; in particular, the ruling dynasties of Spain and Portugal were in the past very inbred. Several Habsburgs, Bourbons and Wittelsbachs married aunts, uncles, nieces and nephews. Even in the British royal family, which is very moderate in comparison, there has scarcely been a monarch in 300 years who has not married a (near or distant) relative. Indeed, Queen Elizabeth II and her husband Prince Philip, Duke of Edinburgh are second cousins once removed, both being descended from King Christian IX of Denmark. They are also third cousins as great-great-grandchildren of Queen Victoria of the United Kingdom. European monarchies did avoid brother-sister marriages, though Jean V of Armagnac was an exception. It is not necessarily the case that there was a greater amount of inbreeding within royalty than there is in the population as a whole: it may simply be better documented. Among genetic populations that are isolated, opportunities for exogamy are reduced. Isolation may be geographical, leading to inbreeding among peasants in remote mountain valleys. Or isolation may be social, induced by the lack of appropriate partners, such as Protestant princesses for Protestant royal heirs. Since the late Middle Ages, it is the urban middle class that has had the widest opportunity for outbreeding. It has long been debated whether inbreeding caused some of the problems among some of the family members of some royal lines, most notably centered around Charles II of Spain, who was mentally handicapped and could not properly chew food. As there was no genetic testing back then, it will remain unclear whether these defects were naturally occurring or were due to inbreeding.
Other examples of royal family intermarriage include:
Some Peruvian Sapa Incas married their sisters; in such cases we find a special combination of endogamy and polygamy. Normally the son of the old ruler and the ruler's oldest (half-)sister became the new ruler. The Inca had an unwritten rule that the new ruler must be a son of the Inca and his wife and sister. He then had to marry his sister (not half-sister), which ultimately led to the catastrophic reign of Huáscar, culminating in a civil war and then the fall of the empire.
The House of Habsburg married within the family particularly often. Famous in this case is the Habsburger (Unter) Lippe (Habsburg jaw/Habsburg lip/"Austrian lip"), typical of many Habsburg relatives over a period of six centuries.[15] The condition progressed through the generations to the point that the last of the Spanish Habsburgs, Charles II of Spain, could not properly chew his food.[16] (See mandibular prognathism.)
Charles V, Holy Roman Emperor, King of Spain, and Infanta Isabella of Portugal were first cousins.
John, Crown Prince of Portugal and Joan of Habsburg were double first cousins.
Mary, Queen of Scots and Henry Stuart, Lord Darnley were half first cousins, and 3rd cousins once removed.
King Louis XIV of France and Infanta Maria Theresa of Spain were double first cousins.
King William III and Queen Mary II of England were first cousins.
King George I of Great Britain and Princess Sophia Dorothea of Celle were paternal first cousins.
King Philip V of Spain and Princess Maria Luisa of Savoy were double second cousins.
King Gustav III of Sweden and Princess Sophia Magdalena of Denmark were second cousins.
King Christian VII of Denmark and Princess Caroline Matilda of Great Britain were first cousins.
King George IV of the United Kingdom and Princess Caroline of Brunswick were first cousins.
William I, German Emperor and Princess Augusta of Saxe-Weimar were second cousins.
Queen Victoria of the United Kingdom and Prince Albert of Saxe-Coburg and Gotha were first cousins.
Emperor Franz Joseph I of Austria and Princess Elisabeth of Bavaria were first cousins.
King George V of the United Kingdom and Princess Mary of Teck were second cousins once removed.
Prince Gustav Adolf, Duke of Västerbotten and Princess Sibylla of Saxe-Coburg and Gotha, parents of the present King Carl XVI Gustaf of Sweden, were second cousins.
Prince Nicola Pignatelli (1648–1730) and Princess Giovanna Pignatelli (1666–1723) were half great-granduncle and half great-grandniece, representing a peculiar alliance between two relatives. Nicola was a son of Giulio Pignatelli, Prince of Noia (1587–1658), through his third wife, and Giovanna a great-great-granddaughter through his first marriage. A similar alliance was the marriage between Princess Sophie of Sweden and Grand Duke Leopold of Baden, half-brother of her maternal grandfather.
Intermarriage in European royal families is no longer practiced as often as in the past. This is likely due to changes in the importance of marriage as a method of forming political alliances through kinship ties between nobility, as well as an awareness of modern medical science. These ties were often sealed only upon the birth of progeny within the arranged marriage. Marriage was seen as a union of lines of nobility, not as a contract between individuals as it is seen today. More now marry for "love", best illustrated by the second marriage of Prince Charles of the United Kingdom. During the tumult of the removal, sometimes by revolution, of most lines of nobility from state government, it became less important to marry for the good of the respective monarchies and the states they governed.
See also:
Prohibited degree of kinship
Cousin couple
Outbreeding depression
Inbreeding depression
Coefficient of relationship
Consanguinity
Exogamy
Self-incompatibility in plants (how some plants avoid inbreeding)
F-statistics
Heterozygote advantage
References:
^ Griffiths, Anthony J. F.; Jeffrey H. Miller, David T. Suzuki, Richard C. Lewontin, William M. Gelbart (1999). An Introduction to Genetic Analysis. New York: W. H. Freeman, 726-727. ISBN 0-7167-3771-X.
^ Cheetahs
^ M. Menotti-Raymond and S. J. O'Brien. "Dating the genetic bottleneck of the African cheetah." Proc Natl Acad Sci U S A. 1993 April 15; 90(8): 3172–3176.
^ Charles F. Leck. "Establishment of new population centers with changes in migration patterns." Vol. 51, No. 2. http://elibrary.unm.edu/sora/JFO/v051n02/p0168-p0173.pdf
^ http://www.dur.ac.uk/anthropology.journal/vol13/iss1/posters/freilich.pdf
^ http://services.oxfordjournals.org/cgi/tslogin?url=http://www.oxfordjournals.org%2Fjnls%2Flist%2Fjhered%2Ffreepdf%2F82-378.pdf
^ http://www.iupac.org/publications/pac/1998/pdf/7011x2079.pdf
^ "ADVS 3910 Wild Horses Behavior," web page accessed June 22, 2007 at http://www.advs.usu.edu/academics/pdf/ADVS3910WildHorses.pdf
^ http://muextension.missouri.edu/explore/agguides/ansci/g02036.htm
^ http://www.nimss.umd.edu/homepages/home.cfm?trackID=2354
^ http://showcase.netins.net/web/royalair/libreeding.htm
^ http://www.petplace.com/cats/top-cat-breeds-for-2004/page1.aspx
^ http://www.bulldoginformation.com/breeding-quality.html
^ "The Hapsburg Lip." Topics in the History of Genetics and Molecular Biology, Fall 2000. http://www.msu.edu/course/lbs/333/fall/hapsburglip.html
^ "The Imperial House of Hapsburg: Chapter 5." Web page accessed September 23, 2007. http://www.hapsburg.com/menu5.htm
Categories: Genetics | Population genetics | Breeding
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Inbreeding". A list of authors is available in Wikipedia.
https://www.bionity.com/en/encyclopedia/Inbreeding.html
2547 Hubei is a main-belt asteroid, discovered on 9 October 1964 by the Purple Mountain (Zijinshan) Observatory. External links 2547 Hubei in the JPL Small-Body Database 2547 Hubei in the Minor Planet Center database Orbit diagram of the asteroid 2547 Hubei (JPL) Astronomical objects discovered in 1964 Asteroid belt
What annoys you? Traffic jams, car alarms, flight delays, phone trees, junk mail? People who cut in line? People who talk loudly on cellphones? People who eat noisily and clip their nails in public? You're not alone. These are just some of the irksome things we confront daily. Since annoyances are ubiquitous, and so many people are annoyed so much of the time, you might think that science could offer some insights about why we find certain things so annoying and what we can do about them. In fact, science can. But don't ask a scientist. As a group, they have almost completely dropped the ball on the topic. In fact, if you talk to scientists, you might get the idea that there is no such thing as annoyance at all. "From my perspective, annoyance is mild anger," says James Gross, a psychologist at Stanford University. Paul Rozin, a psychologist at the University of Pennsylvania, warns, "You have to be careful to distinguish annoyance from aversion." And University of Florida psychologist Clive Wynne says, "It's hard to distinguish annoyance from frustration." It's true that annoyance shares qualities with anger, aversion and frustration. There is also some overlap with disgust, irritation, even ennui. But annoyance, as we who've felt it can attest, is its own thing. It captures elements of other emotions but belongs exclusively to none. So it fell to us, two science journalists, to probe the findings of science for clues to what annoyance is all about. After talking with researchers in a variety of disciplines including (but not limited to) psychology, physics, acoustics, chemistry, molecular genetics, animal behavior and neuroscience, we came up with a formula for what makes something annoying. First, to be annoying, something must be unpredictable. That may be the heart of why cellphone conversations are so grating. Research shows that when we listen to speech, our brains try to predict the words that will come next. 
But it's hard to predict the next words out of someone's mouth when you're only hearing one side of a conversation. Research on the topic indicates you get drawn in more; you get more distracted from what you'd rather be doing or thinking about; the annoyance level rises. Next, it must be unpleasant. That's a giant category, and often subjective. Some people are annoyed when someone picks a piece of lint off a garment they are wearing; others are grateful. Some are annoyed when radio announcers leave the "g" off words such as "going" and "rolling"; others hardly notice. Though there's no accounting for taste, there may be a way to account for aversion. Detecting something unpleasant is among the first things that biological organisms learned to do. Thanks to a receptor that evolved half a billion years ago, certain chemical irritants — like the active ingredient in tear gas or the compound that makes up wasabi — have been annoying life on Earth since before the dinosaurs. Some smells and sounds also seem to be intrinsically unpleasant. For instance, the annoying component in skunk smell is a sulfur-based compound. Sulfur-rich environments tend to be oxygen-poor environments, so it makes sense for creatures that need oxygen to avoid them. There may also be a biological component to why most of us can't abide the sound of fingernails on a blackboard. Some researchers suggest the fingernail noise resembles the acoustic signature of a primate warning call; others liken it to a human scream. We may be programmed to register that sound as a danger signal. The final ingredient in the annoyance recipe is that it must be of uncertain duration. A cellphone call will end eventually; you just don't know when. That uncertainty, combined with a desire that it be over soon, feeds your annoyance. You can't craft a logical plan of action for dealing with an unpleasant, unpredictable situation if you don't know how long it will last.
The good news is that taking control of an annoyance seems to alleviate the feeling — and sometimes even the source of the annoyance itself. Take a baby's wail. The annoying sound of crying prompts you to take action — you shut off the sound by changing a diaper or providing a meal. And dealing with your annoyance sometimes prevents your having to confront something worse later. If you'd ignored the crying and the wet diaper had stayed next to your baby's sensitive skin longer, you might have had to deal with diaper rash — and an even fussier baby. Annoyances are, by definition, trivial. If the sensory assault were putting you in real danger, it would no longer be annoying; it would be a crisis. That seems to be the essence of annoyance's special role in the emotional arena: Unlike something that makes you angry or sad, in which you might be rightly justified in your feeling, an annoyance is so petty that you're expected to put up with it, even though you don't like it. Your logical mind tells you that it makes no sense for your blood to boil when the guy next to you starts smacking his gum. If you become aware that your reaction is out of proportion with the stimulus causing it, you become at risk for what we call "terminal annoyance." This is where you become annoyed with yourself for being annoyed, and then you become annoyed with yourself for being annoyed with yourself. You've entered the annoyance spiral. It's a bad state. But there is a small positive side to the times when you start sputtering and tearing your hair out because someone sitting next to you won't stop clipping his nails: It usually makes for a good story later.
Azad Bon (, also Romanized as Āzād Bon; also known as Āzād Bon Maḩalleh) is a village in Siyahrud Rural District, in the Central District of Juybar County, Mazandaran Province, Iran. At the 2006 census, its population was 622, in 164 families.

References

Populated places in Juybar County
package org.buddycloud.channelserver.sync;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.buddycloud.channelserver.Configuration;
import org.buddycloud.channelserver.channel.ChannelManager;
import org.buddycloud.channelserver.channel.ChannelManagerFactory;
import org.buddycloud.channelserver.packetHandler.iq.IQTestHandler;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
import org.xmpp.packet.Packet;

public class ServerSyncTest extends IQTestHandler {

    private BlockingQueue<Packet> outQueue = new LinkedBlockingQueue<Packet>();
    private BlockingQueue<Packet> inQueue = new LinkedBlockingQueue<Packet>();
    private ChannelManagerFactory channelManagerFactory;
    private ChannelManager channelManager;
    private ServerSync serverSync;
    private Configuration configuration;

    @Before
    public void setUp() throws Exception {
        configuration = Mockito.mock(Configuration.class);
        channelManager = Mockito.mock(ChannelManager.class);
        channelManagerFactory = Mockito.mock(ChannelManagerFactory.class);
        Mockito.when(channelManagerFactory.create()).thenReturn(channelManager);
        serverSync = new ServerSync(channelManagerFactory, outQueue, inQueue, configuration);
    }

    @Test
    public void testChannelManagerIsCreated() {
        Mockito.verify(channelManagerFactory, Mockito.times(1)).create();
    }

    @Test
    public void ifUserChoosesToPurgeRemoteDataThenMethodIsCalled() throws Exception {
        Mockito.when(configuration.getProperty(Mockito.eq(Configuration.PURGE_REMOTE_ON_START),
                Mockito.eq("false"))).thenReturn("true");
        serverSync.start();
        Mockito.verify(channelManager, Mockito.times(1)).deleteRemoteData();
    }

    @Test
    public void ifUserChoosesNotToPurgeRemoteDataThenMethodIsNotCalled() throws Exception {
        Mockito.when(configuration.getProperty(Mockito.eq(Configuration.PURGE_REMOTE_ON_START),
                Mockito.eq("false"))).thenReturn("false");
        serverSync.start();
        Mockito.verify(channelManager, Mockito.times(0)).deleteRemoteData();
    }

    @Test
    public void ifNoChoiceIsMadeAboutPurgingRemoteDataThenMethodIsNotCalled() throws Exception {
        Mockito.when(configuration.getProperty(Mockito.eq(Configuration.PURGE_REMOTE_ON_START),
                Mockito.eq("false"))).thenReturn(null);
        serverSync.start();
        Mockito.verify(channelManager, Mockito.times(0)).deleteRemoteData();
    }
}
Q: INSERT statement syntax with condition and two tables

I would like to insert the same entries into 2 different tables (which are structurally the same) on the condition that the current value of a_text in one of the tables is not already present anywhere in that table. Here is my first try:

    cur.execute('''IF NOT EXISTS (SELECT * FROM checktble WHERE a_text = %s) ''', (a_text))THEN INSERT INTO tble1 AND INSERT INTO tble2 (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag) VALUES (%s,%s,%s,%s,%s,%s,%s)''', (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag))

but could someone clean up and fix this for me?

A: You probably want something like the following (pseudo code, as I do not have your table structure):

    IF NOT EXISTS (SELECT * FROM checktble WHERE a_text = 'yourtexthere')
    BEGIN
        BEGIN TRANSACTION doubleInsert
        INSERT INTO tble1 (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag)
        VALUES ('blabla', 'more blabla', 'linkgoeshere', 'Bla!', 'Source', GETDATE(), 'tag');
        INSERT INTO tble2 (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag)
        VALUES ('blabla', 'more blabla', 'linkgoeshere', 'Bla!', 'Source', GETDATE(), 'tag');
        COMMIT TRANSACTION doubleInsert
    END

A: Another option that checks in checktable and inserts into the two tables t1 and t2 in a single statement (hence a single transaction):

    insert into t1 (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag)
    output inserted.a_text, inserted.a_fulltext, inserted.a_link, inserted.a_title,
           inserted.a_source, inserted.a_date, inserted.a_tag
    into t2 (a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag)
    SELECT a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag
    from (values ( 'myvalue for a_text',... a_fulltext,a_link,a_title,a_source,a_date,a_tag ))
         as T(a_text,a_fulltext,a_link,a_title,a_source,a_date,a_tag)
    where not exists (select 1 from checktable where checktable.a_text = T.a_text)
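The same check-then-double-insert pattern can also be expressed client-side in one transaction. Below is a minimal Python/sqlite3 sketch — not from the thread; the table and column names merely mirror the question's checktble/tble1/tble2 layout, and the schema is an assumption for illustration:

```python
import sqlite3

# In-memory database with three illustrative tables (names are assumptions,
# mirroring the checktble/tble1/tble2 layout from the question).
conn = sqlite3.connect(":memory:")
cols = ("a_text TEXT, a_fulltext TEXT, a_link TEXT, a_title TEXT, "
        "a_source TEXT, a_date TEXT, a_tag TEXT")
for name in ("checktble", "tble1", "tble2"):
    conn.execute(f"CREATE TABLE {name} ({cols})")

def insert_if_new(conn, row):
    """Insert the same row into tble1 and tble2 only if its a_text is not
    already present in checktble; both inserts share one transaction."""
    with conn:  # commits on success, rolls back on exception
        already_there = conn.execute(
            "SELECT 1 FROM checktble WHERE a_text = ?", (row[0],)
        ).fetchone()
        if already_there:
            return False
        for table in ("tble1", "tble2"):
            conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?, ?, ?, ?, ?)", row)
        return True

row = ("hello", "full text", "link", "title", "source", "2019-01-01", "tag")
print(insert_if_new(conn, row))  # True: "hello" is not yet in checktble
```

Wrapping both inserts in `with conn:` mirrors the `BEGIN TRANSACTION` / `COMMIT` pairing in the first answer: either both tables get the row or neither does.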
using System.Web.Mvc;

namespace Frapid.Account.Cors
{
    public class AllowCorsAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Add CORS headers only for cross-origin requests (an Origin header is
            // present) that have not already been given an Access-Control-Allow-Origin.
            if (!string.IsNullOrWhiteSpace(filterContext.RequestContext.HttpContext.Request.Headers["Origin"]) &&
                string.IsNullOrWhiteSpace(filterContext.RequestContext.HttpContext.Response.Headers["Access-Control-Allow-Origin"]))
            {
                filterContext.RequestContext.HttpContext.Response.AddHeader("Access-Control-Allow-Origin", "*");
                filterContext.RequestContext.HttpContext.Response.AddHeader("Access-Control-Allow-Headers", "*");
                // Caveat: per the CORS specification, browsers reject
                // Access-Control-Allow-Credentials: true when the allowed origin is the
                // wildcard "*"; echo the request's Origin header instead if credentialed
                // requests are needed.
                filterContext.RequestContext.HttpContext.Response.AddHeader("Access-Control-Allow-Credentials", "true");
            }

            base.OnActionExecuting(filterContext);
        }
    }
}
Q: How to send data with load function in jquery

I want to send string data with the jQuery load function, but it's not being sent. My code is:

    function dialog(data) {
        $(function () {
            alert(data);
            var ph = $("#Org1");
            ph.empty();
            ph.load("/FrontEnd/DocsDownload", data, function () {
                ph.dialog({
                    width: 500,
                    modal: true,
                    show: 'slide',
                    closeText: 'hide',
                    draggable: false,
                    resizable: false,
                    title: "Download"
                });
            });
        });
    }

The alert shows me the data, but when the request reaches the controller and I read the data parameter, it is null. My controller code is:

    public ActionResult DocsDownload(string data)
    {
    }

What may be the problem?

A: The data parameter must be a name/value pair, so that the key matches the controller's parameter name:

    .load('/FrontEnd/DocsDownload', { data: 'Hello World' }, function () {

And to send multiple parameters:

    .load('/FrontEnd/DocsDownload', { param1: 'value1', param2: 'value2' }, function () {
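The reason the bare string fails is the wire format: jQuery serializes an object of name/value pairs into form-encoded `key=value` pairs, and ASP.NET MVC model binding matches each key to a controller parameter name. A bare string has no key, so nothing binds. The encoding can be illustrated with Python's standard library (Python is used here only to show the wire format; it is not part of the original code):

```python
from urllib.parse import urlencode

# { data: 'Hello World' } is sent as a form-encoded pair whose key "data"
# matches the controller parameter `string data`.
print(urlencode({"data": "Hello World"}))  # data=Hello+World

# Multiple parameters become multiple key=value pairs joined by "&".
print(urlencode({"param1": "value1", "param2": "value2"}))  # param1=value1&param2=value2
```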