"A little bit of social warmth in every box of clamps or sleeves - not only for Christmas" Pipelife Austria and the sheltered workshop of the non-profit organization Lebenshilfe in Berndorf, Lower Austria, have been writing a very special success story for 30 years now, because this is how long this great cooperation has existed. Its purpose is to help intellectually disabled people claim their right to a meaningful life through meaningful work and activities. Thus, for three decades, the workshop has been handling the secondary packaging of small electrical parts such as sleeves and clamps for Pipelife. "Solidarity is very strong on both sides," comments Horst Schraml, internal sales employee for Pipelife Elektro and coordinator of the cooperation, on the long-standing partnership. A conscious decision for social commitment Although the production of small parts has been relocated from Austria to the Netherlands, the cooperation with Lebenshilfe has continued. It may be cheaper to pack the items in small boxes directly during production, but the long-standing partnership with the Lebenshilfe sheltered workshop in Berndorf far outweighs economic considerations. With its long-term order, Pipelife Austria contributes to the basic capacity utilization of the workshop, and the sheltered employees are pleased with the meaningful and valuable work – this cannot be offset financially. It was a conscious decision in favor of social responsibility, one that Franz Grabner, Managing Director of Pipelife Austria, fully supports. "When you see what this work means for the sheltered employees of Lebenshilfe, you realize that you are doing the right thing. We will continue placing orders for secondary packaging with the Berndorf workshop even if automatic packaging would be cheaper. This is part of our sense of responsibility for disadvantaged people in our region. We hope that this is also appreciated by our customers. 
There is a little bit of social warmth in every box of clamps or sleeves, and not only at Christmas time," explains Franz Grabner. A typical "Pipelife" day: teamwork and maximum commitment It all starts on Wednesday at 9 o'clock in the morning: the truck loaded with sleeves and clamps drives into the yard of the workshop in Berndorf, Lower Austria, and parks in front of the big gate. The excitement and joy of the sheltered employees is not only visible but can be felt throughout. The process is highly professional and well organized: Sabine Landl, affectionately referred to by a sheltered employee as "Mrs. Pipelife", keeps a close eye on everything, while the truck is unloaded pallet by pallet in teamwork between Lebenshilfe caregivers and employees. She checks which items are delivered and in what quantities. All pallets are taken to the warehouse and stored on the shelves with the forklift. Everybody helps - as much as they can and want to. "I always take part in the Pipelife job," says Fritz, an employee. From holding the warehouse door open to transporting the pallet on the pallet truck – everyone is productive and makes an important contribution. Some items are needed quite urgently, so they have priority. The items are transported from the warehouse to the open, light-flooded workshop together with white outer boxes. There is a separate "Pipelife" desk here with a label printer and computer to handle the orders, as well as large work tables and sufficient space for pallets. Sabine prints the labels while everything is being prepared for the next work steps in teamwork. A sheltered employee adjusts the counting scale which is used to determine the correct quantity of sleeves and clamps. Others assemble the outer boxes themselves and affix labels to them, supporting each other in doing so. Everyone performs the work step that gives them pleasure and to which they are best suited. The small parts are then counted, weighed and repacked into the white, handy boxes. 
After sealing, they are placed in a larger outer box for transport. Sheltered employees undertake this work absolutely voluntarily – and yet more and more of them come along to help. This is once again proof of the enormous commitment and joy meaningful work can bring. Franz Grabner comments: "For us, this joy is the best testimony to how important the cooperation with the Lebenshilfe workshop is". Everybody helps and is motivated. All work steps are performed conscientiously. It is a flowing process: everything goes very swiftly and every step fits. The employees are a well-integrated team and are proud of their performance and valuable contribution. "Every company can only wish for such motivated and friendly staff," says Karl Traun, the workshop manager. The new workshop in Berndorf: space for meaningful work The new workshop in Berndorf was opened in January 2018 and is equipped for 60 sheltered employees. The open, bright and barrier-free building offers employees many opportunities for activity: whether crafts or services, anyone can do the work that pleases and interests them. The fact that a warehouse for Pipelife orders was incorporated into the new concept demonstrates once again the durability of the cooperation and the strong solidarity. The cooperation between Pipelife Austria and Lebenshilfe is a story of unshaken success for 30 years. About Lebenshilfe Lower Austria Lebenshilfe Lower Austria was founded in 1967. What started off as a parents' initiative has turned into a structured organization. Lebenshilfe Lower Austria is a human rights organization with the aim of fully realizing the rights of the UN Convention for people with disabilities. The organization offers help and support for people with intellectual disabilities of all ages and in whatever situation they may be. Everybody at Lebenshilfe Lower Austria works within the framework of an inclusive society and stands up for the interests of people with intellectual disabilities. 
More information about Lebenshilfe Lower Austria. The original German article can be found on Pipelife Austria's website. Proud: Horst Schraml (middle) with sheltered employees Each work step is carried out in teamwork
The shareholders of NCAB Group AB (publ), reg. no. 556733-0161 ("Company"), are hereby convened to the annual general meeting on Monday 13 May 2019 at 12.00 noon. The general meeting will be held at the Company premises at Mariehällsvägen 37 A in Bromma. The right to participate at the general meeting etc. Shareholders who wish to participate at the general meeting shall i) be registered in the share register kept by Euroclear Sweden AB on the record day, which is Tuesday 7 May 2019, and ii) notify the Company of their intention to participate at the general meeting no later than Tuesday 7 May 2019 by mail to NCAB Group AB (publ), "Annual General Meeting", Mariehällsvägen 37 A, 168 65 Bromma or by e-mail to agm@ncabgroup.com. To be entitled to participate at the general meeting, shareholders with nominee-registered shares through a bank or other nominee must register their shares in their own name with Euroclear Sweden AB. Shareholders requesting such registration must notify their nominee well before Tuesday 7 May 2019, by when such registration shall have been executed. The notification shall set out name/company name, personal ID number/registration number, number of shares held and, when applicable, the number of advisors, which may not exceed two. Shareholders who are represented by proxy should submit a power of attorney concurrently with the notice of participation. The power of attorney shall be in writing, dated and signed. The original power of attorney shall be brought to the general meeting. Power of attorney forms are available on the Company's website www.ncabgroup.com and are sent free of charge to those shareholders who so request and state their postal address or e-mail address. Representatives of legal entities shall also enclose a copy of the registration certificate or equivalent document which indicates the persons authorised to represent the legal entity. 
Presentation of the annual report and the auditor's report and of the consolidated accounts and the auditor's report for the group. Resolution on the number of directors of the board to be appointed. Resolution to establish the remuneration for the directors of the board and the auditor. Appointment of the board of directors and the chairman of the board of directors. Resolution to adopt the remuneration policy for executive management. Resolution on authorisation for the board of directors to issue shares. The nomination committee has before the meeting consisted of Jannis Katsakis (representing Fjärde AP-Fonden and acting as chairman of the nomination committee), Christian Salamon (chairman of the board of directors), Per Hesselmark (R12 Kapital AB), Sofia Aulin (Länsförsäkringar) and Gunnar Blix (Tredje AP-Fonden). The nomination committee's complete proposal and explanatory statement will be held available at the Company's website, www.ncabgroup.com. The nomination committee proposes that attorney at law Emma Norburg from Advokatfirma DLA Piper is appointed chairman of the general meeting. The board of directors and the managing director propose that the general meeting resolves that a dividend of SEK 4.50 per share shall be paid to the Company's shareholders. The board of directors proposes that 15 May 2019 shall be the dividend record date. Provided that the general meeting resolves in accordance with the proposal, payment of the dividend is expected to be made on 20 May 2019. The remaining amount of the year's result is proposed to be carried forward. It is the opinion of the board of directors that the allocation of the Company's result is justified on the basis of the requirements on the Company's and the group's equity in the light of the nature, scope and risks associated with the business as well as the Company's and the group's need to strengthen its balance sheet, liquidity and financial position in general. 
The nomination committee proposes that the board shall consist of seven directors without deputy directors. The nomination committee proposes that the remuneration of the board of directors remains unchanged and is set to SEK 2 775 000 in total, to be allocated with SEK 700 000 to the chairman of the board and SEK 350 000 to each of the directors of the board who are not employees of the group, SEK 150 000 to the chairman of the audit committee and SEK 50 000 to each of the members of the audit committee who are not employees of the group, and SEK 25 000 to each of the members of the remuneration committee who are not employees of the group. The nomination committee proposes, for the period until the next annual general meeting has been held, re-election of Christian Salamon, Jan-Olof Dahlén, Per Hesselmark, Magdalena Persson, Hans Ramel, Gunilla Rudebjer and Hans Ståhl as directors of the board and re-election of Christian Salamon as chairman of the board of directors. The proposed directors of the board will be presented on the Company's website, www.ncabgroup.com. The nomination committee proposes re-election of Öhrlings PricewaterhouseCoopers AB. The auditing firm has declared that if the general meeting resolves in accordance with the proposal, Johan Engstam will be appointed as auditor in charge. The nomination committee proposes that the general meeting resolves that the nomination committee shall be appointed in accordance with the following principles. The nomination committee shall consist of representatives of the four largest shareholders according to Euroclear's register as of the last business day in August 2019. The chairman of the board of directors shall in September contact these shareholders in order to convene the nomination committee. The chairman of the board of directors shall be part of the nomination committee. The nomination committee appoints its chairman amongst its members. 
If a member leaves the nomination committee or in the event of a change in ownership resulting in the represented shareholder not being one of the largest shareholders, the nomination committee's composition shall, if the nomination committee finds it appropriate, be changed as the nomination committee decides. The composition of the nomination committee shall be made public as soon as the members and the chairman of the nomination committee have been appointed. There shall be no remuneration for the work performed in the nomination committee. The board of directors proposes that the annual general meeting resolves to adopt the following remuneration policy for the managing director and other persons in the Company's executive management for the period until the next annual general meeting. The group applies market standard salaries and remuneration based on a fixed part and a variable part. The total remuneration shall reflect market practice and be competitive, but not necessarily market-leading, and reflect the individual's performance and responsibilities. Remuneration to the Chief Executive Officer (CEO) and other senior executives consists of a basic salary, variable salary and pension. Executive management refers to those persons who together with the CEO constitute the group management. The allocation between basic salary and variable remuneration shall be proportionate to the executive's responsibilities and authorities. The yearly remuneration shall be based on financial goals linked to NCAB's development. The yearly variable salary to the CEO shall not exceed 100 percent of the fixed yearly salary. Other senior executives may receive yearly variable salary in an amount not exceeding the equivalent of 40-100 percent of the yearly fixed salary. Senior executives may in addition receive benefits customary for their respective countries, such as a company car, occupational health care etc. 
Senior executives shall be entitled to pension benefits according to a defined contribution plan with premiums of up to 20 percent of the executive's salary or according to applicable occupational pension scheme. The CEO shall have a notice period of no more than 12 months if termination is made by the Company and 6 months if termination is made by the CEO. No severance pay shall be made. Other senior executives shall have a notice period of no more than 9 months if termination is made by the Company and 6 months if the termination is made by the senior executive. No severance pay shall be made. The board shall have the right to deviate from the guidelines adopted by the annual general meeting, if there are special reasons for this in an individual case. The board of directors proposes that the general meeting resolves to authorize the board of directors to, until the next annual general meeting, with or without deviation from the shareholders' preferential rights, on one or several occasions resolve to issue new shares. The increase of the share capital may – where it entails a deviation from the shareholders' preferential rights – correspond to a dilution of a maximum of 10 percent of the share capital at the time of the first use of the authorisation. Payment shall be made in cash. The authorisation shall primarily be used for the purpose of acquisitions or financing. A valid resolution by the general meeting requires that shareholders holding not less than two-thirds of both the votes cast and the shares represented at the general meeting vote in favour of the proposal. The total amount of shares and votes in the Company at the time of issue of this notice was 16 847 124 shares. All shares carry equal voting rights. The Company does not hold any own shares. 
The annual report, auditor's report and complete proposals in accordance with above will be available at the Company (address as above) and on the Company's website, www.ncabgroup.com, not less than three weeks before the general meeting. The aforementioned documents will be sent to those shareholders who so request and submit their postal address or e-mail address. Shareholders are reminded of their right pursuant to chapter 7, section 32 of the Swedish Companies Act to request that the board of directors and managing director provide information at the general meeting in respect of any circumstances which may affect the assessment of a matter on the agenda or any circumstances which may affect the assessment of the Company's or a group company's financial position. The obligation to provide information also applies to the Company's relationship to other group companies. Information must be provided if it can be done without significant harm to the Company.
big picture racing Valli Hilaire — June 28, 2007 Welcome to The Fast and the Fabulous! This was a blog based on one woman's thoughts, opinions and experiences involving NASCAR and IndyCar. As of February 20, 2015 this blog is no longer active. I've been meaning to write about this for some time: I love the new look & feel of NASCAR.COM. I guess it was at the start of this racing season that they launched the revamped site(?). My favorite part is when you click into an article and the huge photo at the top zooms out slowly, revealing itself. I love that feature; it's so simple but so dramatic at the same time. If you haven't seen it yet, you should. Use this press release regarding Tony Eury Sr.'s move to JR Motorsports as Director of Competition as an example. I think it's awesome that Eury Sr. will be taking on this role for Dale Earnhardt Jr.'s company. He obviously cherishes his family bonds and trusts the Eurys in general. 
I think this can only help JR Motorsports to grow and become successful. Shane Huffman, driver of the #88 Navy Chevrolet in the Busch Series, has been doing a good job for JR Motorsports so far this year. He's moved up to 12th place in the Busch Series standings, moving up 3 spots after last weekend's race in Milwaukee. Shane writes a blog on his InfieldParking.com profile page. He updates it, it seems, at least once a week. — I'm curious about what Carl Edwards has up his sleeve for this weekend's race at Loudon in New Hampshire. Says Edwards, "This weekend we are also running a special New England-themed paint scheme. I can't say just yet what it is, but I know the fans will get a real kick out of the No. 99's new look for the weekend." A big Boston Red Sox logo? (He does race for Roush Fenway after all) A photo of a giant bowl of clam chowder? What?? — Buy Kings of Leon's latest CD "Because of the Times." It's incredibly good rock music, like their two previous CDs. I recommend listening to the songs "Knocked Up" and "On Call" first; if you like those you'll love all the rest. — Do me a favor and fill out my 'Fast and Fabulous' survey. Tags: Carl Edwards, Dale Earnhardt Jr., JR Motorsports, Shane Huffman, Tony Eury Sr. 
About The Fast and the Fabulous: The Fast and the Fabulous is a motorsports blog dedicated to NASCAR and IndyCar, written by Valli Hilaire.
Q: Joint distribution of $X-Y$ and $Y$ using the conditional distribution of $X$ given $Y=y$ Given $X \sim \mathrm{Gamma}(2,\lambda)$ and the conditional distribution of $Y$ given $X=x$ being $U(0,x)$. I have already solved for the following joint, marginal, and conditional density functions: $f_{X,Y}(x,y) = f_Y(y|X=x)f_X(x) = {\lambda^2}e^{-\lambda x}$ if $0\leq y\leq x$, $0$ otherwise. $f_Y(y) = \lambda e^{-\lambda y}$ if $0\leq y$, $0$ otherwise. $f_X(x|Y=y) = \lambda e^{\lambda y-\lambda x}$ if $y\leq x$, $0$ otherwise. I need to use the conditional distribution of $X$ given $Y = y$ to describe the joint distribution of $Y$ and $X-Y$. However, I am confused as to how to proceed from here. I have calculated the joint distribution as $F_{{X-Y},{Y}}(x,y) = \lambda e^{-\lambda y}\cdot({1-e^{-\lambda x}})\cdot({1-e^{-\lambda y}})$ A: The conditional density alone is not enough to compute the joint distribution of $X-Y$ and $Y$. You need the joint distribution of $X$ and $Y$... The joint density is $$f_{X,Y}(x,y)=\begin{cases} \lambda^2e^{-\lambda x}&\text{ if } 0\leq y\leq x\\ 0& \text{ otherwise}. \end{cases}$$ In order to calculate the joint distribution function of $X-Y$ and $Y$ we need to calculate the probability below: $$F_{X-Y,Y}(u,v)=P(X-Y<u\cap\ Y<v).$$ Let's consider the region $$A_{u,v}=\{x,y\geq0\ : x-y<u, y<v\}=\{x,y\geq 0\ : y>x-u, y<v\}.$$ The probability we are looking for can be obtained by integrating the joint density over $A_{u,v}$: $$F_{X-Y,Y}(u,v)=\iint_{A_{u,v}}f_{X,Y}(x,y)\ dxdy.$$ Taking into account that $f_{X,Y}(x,y)=0$ if $y>x$, the following figure helps evaluate the integral above. 
If $u\leq0$ or $v\leq0$ then $F_{X-Y,Y}(u,v)=0$; otherwise, if $u<v$ then $$F_{X-Y,Y}(u,v)=$$ $$=\lambda^2\left[\int_0^u\int_0^xe^{-\lambda x}\ dy \ dx+\int_u^v\int_{x-u}^xe^{-\lambda x}\ dy \ dx+\int_v^{u+v}\int_{x-u}^ve^{-\lambda x}\ dy \ dx\right].$$ If however $u\geq v$ then $$F_{X-Y,Y}(u,v)=$$ $$=\lambda^2\left[\int_0^v\int_0^x e^{-\lambda x}\ dy\ dx+\int_v^u\int_0^ve^{-\lambda x}\ dy \ dx + \int_u^{u+v}\int_{x-u}^ve^{-\lambda x} \ dy \ dx\right].$$ From this point on, there are only trivial definite integrals to be evaluated.
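Evaluating either case of the integrals above gives $F_{X-Y,Y}(u,v)=(1-e^{-\lambda u})(1-e^{-\lambda v})$ for $u,v>0$, i.e. $X-Y$ and $Y$ turn out to be independent $\mathrm{Exp}(\lambda)$ variables. A short Monte Carlo sketch to sanity-check this (pure Python; the sample size, seed, and test points are arbitrary choices):

```python
import math
import random

def sample_pairs(lam, n, seed=0):
    """Draw n samples of (X, Y) with X ~ Gamma(2, lam), Y | X=x ~ U(0, x)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # Gamma(2, lam) is the sum of two independent Exp(lam) variables.
        x = rng.expovariate(lam) + rng.expovariate(lam)
        pairs.append((x, rng.uniform(0.0, x)))
    return pairs

def empirical_cdf(pairs, u, v):
    """Empirical estimate of P(X - Y < u, Y < v)."""
    return sum(1 for x, y in pairs if x - y < u and y < v) / len(pairs)

lam = 1.5
pairs = sample_pairs(lam, 200_000)
for u, v in [(0.5, 1.0), (2.0, 0.3)]:
    closed_form = (1 - math.exp(-lam * u)) * (1 - math.exp(-lam * v))
    print(f"F({u},{v}): MC={empirical_cdf(pairs, u, v):.4f}  exact={closed_form:.4f}")
```

The two printed columns should agree to within Monte Carlo noise, which also confirms that the OP's guessed $F_{X-Y,Y}$ has one factor too many.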
\section{Conclusion} \vspace{-0.05in} \label{Conclusion} Due to the requirement of low bandwidth and a superior user experience for panoramic VR videos, interactive streaming technology has drawn intensive attention in recent years. It has been put into practice by many corporations such as Facebook \cite{Facebook}, Google, Microsoft, and DWANGO Co., Ltd. \cite{basicflowswitching}. However, the technology is far from mature because it brings about copy switching and degrades the user experience. The focus-based interactive streaming framework (FISF) points out a novel approach to addressing the problem by predicting user behavior according to the characteristics of videos. It consists of a data-based video focus detection (VFD), two versions of FIST, and two optimizations. Experimental results show that FISF significantly improves the user experience and reduces transmission bandwidth. To the best of our knowledge, \textit{this is the first time} that video focus detection based on real data analysis is used to optimize panoramic VR video transmission. There is still extensive future work on FIST, such as parameter choice strategies and dynamic copy production. We believe that FISF will be implemented and widely used to provide user-friendly and bandwidth-friendly transmission for panoramic VR videos in the near future. \eject \section{Experimental Evaluation} \vspace{-0.05in} \label{Experiment} \subsection{Experimental Setup} \noindent\textbf{User-watching Traces:} We use real user-watching traces collected by \texttt{Kandao Technology Co., Ltd.} \cite{Kandao} to simulate the runtime bandwidth and flow switching behaviors. To objectively compare the performance of FISF and the state-of-the-art, we select five different kinds of VR videos for our experiments. These five videos are available at the website \cite{v0}. The first video is a group dance, which has multiple static focuses. The second is a video of constructing a bridge, which has a moving focus. 
The third is a VR broadcast, which has only one static focus. The fourth is a travel advertisement, which has no obvious focus. The fifth is a solo dance, which has a static focus. We have released our source code at GitHub \cite{Github}. \noindent\textbf{Computer Setting:} We run simulation experiments on an HP OMEN Notebook PC 15 with 8 CPU cores and 16 GB memory. \vspace{0.05in} \noindent\textbf{Metrics:} We define four metrics to evaluate the transmission performance. \begin{itemize}[leftmargin=1\parindent] \vspace{-0.09in} \item \texttt{Switching number:} defined as the number of switchings among copies. \vspace{-0.09in} \item \texttt{Standstill Time:} defined as the total duration of the lag phases during which the video is standstill. \vspace{-0.09in} \item \texttt{High quality rate:} defined as $T_h/T_t$, where $T_h$ denotes the time during which the users watch high-resolution areas, and $T_t$ denotes the total time of the watching trace. \vspace{-0.09in} \item $\mathbb{\alpha}$: $\alpha$ is defined as $n/N$, where $n$ denotes the number of bytes transmitted using a given algorithm, and $N$ denotes the number of bytes when transmitting the $360^\circ\times180^\circ$ high-resolution videos. \end{itemize} \vspace{-0.09in} \vspace{-0.07in} \subsection{Experimental Results} \vspace{-0.05in} \vspace{0.05in} \noindent\textbf{Parameter selection:} As mentioned above, DBSCAN has two main parameters: \texttt{eps} and \texttt{min-samples}. The choice of these two parameters significantly affects the accuracy of focus detection and the performance of FISF. Figure \ref{eva:fnp} shows the relation between the focus number and eps. We can see that for each video, the focus number declines as eps increases. Our experiments also show that the results have no clear relation to min-samples, so we omit the corresponding experimental results. 
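The role of eps and min-samples in VFD's viewpoint clustering can be illustrated with a self-contained DBSCAN sketch (pure Python; the toy trace, the normalized $[0,1]^2$ coordinates, and the parameter values here are illustrative assumptions, not the paper's data):

```python
from collections import deque

def dbscan(points, eps, min_samples):
    """Minimal DBSCAN: returns one label per point (-1 = noise).

    points: list of (x, y) viewpoint coordinates normalized to [0, 1].
    eps / min_samples play the same role as in the paper's tuning.
    """
    n = len(points)
    labels = [None] * n  # None = unvisited

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_samples:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = deque(seeds)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # border point: join, but don't expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = neighbors(j)
            if len(j_neighbors) >= min_samples:
                queue.extend(j_neighbors)  # core point: keep expanding
    return labels

# Two tight groups of viewpoints (two focuses) and one stray point:
trace = [(0.10, 0.10), (0.11, 0.10), (0.10, 0.12),
         (0.80, 0.80), (0.81, 0.79), (0.80, 0.81),
         (0.50, 0.95)]
labels = dbscan(trace, eps=0.05, min_samples=3)
focus_count = len(set(l for l in labels if l != -1))
print(labels, focus_count)  # prints: [0, 0, 0, 1, 1, 1, -1] 2
```

Raising eps merges nearby viewpoint clusters, which matches the observation above that the detected focus number declines as eps increases.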
According to the experimental results, we preset eps as 0.3 and min-samples as 100 for static FIST, and eps as 0.2 and min-samples as 30 for dynamic FIST. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=0.4\textwidth]{eps} \caption{Focus Number vs. eps.} \label{eva:fnp} \vspace{-0.05in} \end{figure} \noindent\textbf{Switching Number vs. Video ID:} Our results show that \textit{our static version can reduce the number of copy switchings by $[23.4\%, 49.1\%]$, with a mean of $37.3\%$, and our dynamic version can reduce it by $[15.3\%, 41.7\%]$, with a mean of $28.3\%$}, compared to the classic algorithm. As shown in Figure \ref{eva:fsn}, the x-axis represents the video ID and the y-axis represents the number of copy switchings. Copy switching may cause additional lag phases and computation cost, so the switching number should be reduced as much as possible. For most videos, the number of focuses is usually small, and users tend to keep their eyes around the focuses. In this way, the number of copy switchings is reduced, the standstill time is shortened, and computation resources are saved. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=0.4\textwidth]{3_id_SwitchTime} \caption{Video ID vs. Switching Number.} \label{eva:fsn} \vspace{-0.05in} \end{figure} \vspace{0.05in} \noindent\textbf{Standstill time vs. Video ID:} Our results show that \textit{our static version can shorten standstill time by $[14.9\%, 53.8\%]$, with a mean of $35.8\%$, and our dynamic version can shorten it by $[15.3\%, 40.9\%]$, with a mean of $26.9\%$}, compared to the classic algorithm. As shown in Figure \ref{eva:bandwidth2}, the x-axis represents the bandwidth limitation and the y-axis represents standstill time. Note that standstill time is relative, taking the naive version as the benchmark. 
The bandwidth limitation is also relative: we define $1.0$ as the bandwidth with which users can watch the $360^\circ\times180^\circ$ high-resolution videos with no lag phases. The bandwidth requirement of our algorithm is lower and the number of copy switchings is smaller as well, so our algorithm can significantly reduce standstill time and provide a more fluent watching experience for users. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=0.4\textwidth]{2_bandwidth_StackTime} \caption{Bandwidth vs. Standstill time.} \label{eva:bandwidth2} \vspace{-0.05in} \end{figure} \noindent\textbf{High Quality Rate vs. Video ID:} Our results show that \textit{our static version can improve the high quality rate by $[9.7\%, 21.9\%]$ with a mean of $16.9\%$, and our dynamic version can improve it by $[12.1\%, 19.9\%]$ with a mean of $16.4\%$}, compared to the classic algorithm. As shown in Figure \ref{eva:ue}, the x-axis represents the video ID and the y-axis represents the high quality rate. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=0.4\textwidth]{4_id_HighqualityRate} \caption{Video ID vs. High quality rate.} \label{eva:ue} \vspace{-0.05in} \end{figure} \vspace{0.05in} \noindent\textbf{$\alpha$ vs. Bandwidth:} Our results show that \textit{our static version can reduce $\alpha$ by $[10.5\%, 20.1\%]$ with a mean of $15.1\%$, and our dynamic version can reduce it by $[14.6\%, 20.4\%]$ with a mean of $18.2\%$}, compared to the classic algorithm. As shown in Figure \ref{eva:bandwidth1}, the x-axis represents the bandwidth and the y-axis represents $\alpha$. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=0.4\textwidth]{1_id_flow} \caption{Video ID vs. 
Ratio of transmitted bytes $\alpha$.} \label{eva:bandwidth1} \vspace{-0.05in} \end{figure} \section{Introduction} \vspace{-0.05in} \vspace{-0.07in} \subsection{Background and Motivation} \vspace{-0.05in} Panoramic video is widely used to build virtual reality (VR) applications and is expected to be one of the next-generation killer apps. Owing to its strong interactivity, it appears in online multimedia \cite{googlestreet}, video surveillance \cite{videosurveillance}, and robotics \cite{robotics1} applications. As shown in Figure \ref{draw:coordinate}, a panoramic VR video is typically a two-dimensional rectangular video. Video players map it onto a mesh (typically a sphere or skybox \cite{skybox}) and render it on the users' screens, such as head-mounted displays (HMDs) or mobile phones. When watching a panoramic VR video, users can navigate the scene interactively by changing their viewpoints and viewing directions. A point in a video is described by a quadruple $(t, x, y, z)$: $t$ is the timing of the video, $(x, y)$ are the spatial coordinates, and $z$ is the user's viewing direction. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{coordinate} \caption{Illustration of Coordinate.} \label{draw:coordinate} \vspace{-0.05in} \end{figure} Transmitting panoramic VR videos is challenging for two reasons: 1) panoramic VR videos are typically much larger than normal videos (fluent transmission requires 10+ Mbps), yet they must be transmitted over the limited bandwidth of mobile networks; 2) high-resolution views should be provided to guarantee a superior user experience and avoid side effects such as dizziness and nausea.
\vspace{-0.04in} \vspace{-0.07in} \subsection{Limitations of Prior Art} \vspace{-0.05in} The naive solution directly transmits $360^\circ\times180^\circ$ high-resolution panoramic VR videos, which consumes much bandwidth and causes \texttt{lag phases}, periods during which the users' screens remain frozen or blurry for seconds. Facebook provides a solution called interactive streaming technology \cite{Facebook}. It produces 32 copies of the video, each of which contains only a \texttt{fixed} high-resolution area (\emph{e.g.}\xspace, a $90^\circ\times120^\circ$ scene; see the blue rectangle in Figure \ref{draw:coordinate}), while the other areas are low-resolution. It chooses the most appropriate copy and transmits it to the user based on the user's current viewpoint. In this way, the transmission bandwidth is significantly reduced. However, when the user's viewpoint changes, a new copy needs to be transmitted, and the transmission latency inevitably incurs lag phases on the user's screen. These lag phases, caused by the transmission latency (typically seconds), significantly degrade the user experience because only low-resolution panoramic VR video can be watched during them. Given this method's wide influence and practicality, we consider it the state of the art. A similar solution is proposed by Ochi Daisuke and several other researchers \cite{basicflowswitching}. Several other cutting-edge research efforts aim at addressing the bandwidth and user-experience issues, such as object-based transcoding \cite{ObjectBasedEncoding}, perception-based scheduling \cite{PerceptionBasedSchedualing}, and active video annotations \cite{MultiplePerspectives, VideoAnnotation}. They are based on different technologies, such as object detection, annotation, \emph{etc.}\xspace Our proposed solution differs from these works in that it is based on the analysis of real-world user-watching traces.
\vspace{-0.07in} \subsection{Our Proposed Solutions} \vspace{-0.05in} In this paper, we propose a \texttt{focus-based interactive streaming framework} (FISF). FISF consists of a \texttt{video focus detection} (VFD) algorithm based on user data analysis, a static and a dynamic \texttt{focus-based interactive streaming technology} (FIST), and two further optimizations: focus merging and a prefetch strategy. FISF achieves far fewer lag phases and lets users enjoy high-resolution views with low bandwidth. FISF is based on our key observation from tests and analysis of real panoramic videos: when users watch a panoramic video, some viewpoints, which we call \texttt{focuses}, are more likely to be watched for a long time. The framework of FISF is shown in Figure \ref{draw:hier}. First, we propose a VFD algorithm to detect video focuses, where VFD leverages the well-known DBSCAN \cite{dbscan} algorithm. Second, similar to the state-of-the-art \cite{Facebook}, we also produce multiple video copies, each containing a small high-resolution area. In the state-of-the-art solution, the high-resolution area is unrelated to the video focuses, whereas in our solution, the high-resolution area of each copy contains one or more video focuses. The state-of-the-art \cite{Facebook} saves bandwidth at the high cost of a large switching number and long lag-phase time. In contrast, FISF reduces the switching number and lag-phase time, and saves even more bandwidth. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{prin} \caption{Hierarchy of FISF.} \label{draw:hier} \vspace{-0.05in} \end{figure} \vspace{-0.07in} \subsection{Key Contributions} \vspace{-0.05in} \begin{itemize}[leftmargin=1\parindent] \vspace{-0.09in} \item We propose the idea of detecting video focuses by analyzing user data, and use it to improve panoramic VR video transmission.
We also propose a framework to augment video transmission with video focus detection. \vspace{-0.09in} \item We present a concrete algorithm for data-based video focus detection, two versions of focus-based interactive streaming technology, a static version and a dynamic version, and two further optimizations. \vspace{-0.09in} \item We simulate our framework and perform extensive experiments using real user-watching traces to evaluate the improvement in terms of user experience and bandwidth. \end{itemize} \section{Methodology} \vspace{-0.05in} \label{Methodology} In this section, we present the three parts of FISF. First, we present a video focus detection method, called \texttt{video focus detection based on user data analysis} (VFD), which uses the DBSCAN clustering algorithm \cite{dbscan}. Second, to improve the user experience and save bandwidth for VR videos, we propose an algorithm, namely \texttt{Focus-based Interactive Streaming Technology} (FIST), with a static version and a dynamic version. Third, we propose further optimizations, including focus merging and a prefetch strategy. \vspace{-0.07in} \subsection{Part I: VFD} \label{VFD} \vspace{-0.05in} In this subsection, we present the first part of FISF, namely VFD. It serves to detect the focuses of videos and provides the focus information that helps the video processor produce different copies of videos. As there was no existing algorithm for detecting VR video focuses from user data, we tried several classic clustering algorithms, and DBSCAN \cite{dbscan} exhibited the best performance. According to experimental results on real user data, the focuses detected by VFD conform well with empirical results and serve FIST well. \noindent\textbf{Rationale:} Our key observation is that users tend to focus on only a few specific points and ignore the other parts when watching a video, especially a panoramic VR video, because only part of the video can be seen.
The intuition is further confirmed by a simple analysis of real user data. For example, when users look at the picture shown below, the empirical probability distribution of attention is shown in Figure \ref{draw:APD}. The local maxima with the highest probability of being focused on are the focuses. There are two approaches to focus detection: 1) based on computer graphics and 2) based on data analysis. In this paper, we choose the second for two reasons: 1) it is more accurate because it is based on real user-watching traces; 2) it is more time-efficient in terms of algorithm complexity. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{pd} \caption{Probability Distribution of Attention. Red area means high attention probability. Blue area means low attention probability.} \label{draw:APD} \vspace{-0.05in} \end{figure} \noindent\textbf{DBSCAN:} VFD uses DBSCAN to detect focuses. The DBSCAN algorithm views clusters as areas of high density separated by areas of low density, where density is defined by two parameters, \texttt{min-samples} and \texttt{eps} \cite{dbscan}. The algorithm examines every sample and finds its neighbors (samples within a distance of eps). If the number of neighbors is larger than min-samples, the area around the examined sample is dense and the sample is called a \texttt{core sample}. If a sample is not a core sample but is a neighbor of a core sample, we still put it in the cluster. However, if a non-core sample does not have any core-sample neighbors, it is not part of any cluster. The key challenge in using DBSCAN for focus detection is the selection of its parameters. We address this problem with a validation step in VFD. \vspace{0.09in} \noindent\textbf{VFD:} Figure \ref{draw:VFD} shows the flow chart of VFD and Table \ref{table:watchingrecorddata} shows the user-watching trace format.
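As a hedged illustration of the clustering step described above, the following minimal pure-Python DBSCAN groups 2-D gaze samples into candidate focuses (cluster centroids). The parameter names eps and min-samples follow the paper, but all function names and the centroid summary are our own assumptions, not the authors' implementation:

```python
def _neighbors(points, i, eps):
    """Indices of samples within distance eps of points[i] (including i itself)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (qx - px) ** 2 + (qy - py) ** 2 <= eps * eps]

def dbscan(points, eps, min_samples):
    """Label each point with a cluster id (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = _neighbors(points, i, eps)
        if len(seeds) < min_samples:
            labels[i] = -1          # tentatively noise; may later become a border point
            continue
        labels[i] = cid             # i is a core sample: start a new cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # border point: joins the cluster but is not expanded
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = _neighbors(points, j, eps)
            if len(jn) >= min_samples:
                seeds.extend(jn)    # j is also a core sample: keep expanding
        cid += 1
    return labels

def detect_focuses(gaze_points, eps=0.3, min_samples=100):
    """Return one (x, y) centroid per detected cluster -- the candidate focuses."""
    labels = dbscan(gaze_points, eps, min_samples)
    clusters = {}
    for p, l in zip(gaze_points, labels):
        if l >= 0:
            clusters.setdefault(l, []).append(p)
    return [(sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
            for pts in clusters.values()]
```

This naive neighbor search is $O(n^2)$; a production version would use a spatial index (e.g., scikit-learn's `DBSCAN`), but the labeling logic is the same.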
The VFD algorithm is composed of the following three steps. 1) Data filtering. Data filtering preprocesses the user-watching traces and filters out ``\texttt{dirty data}'' such as a long lag phase without any interactive behavior. The ``\texttt{clean data}'' are then divided into two sets: a training set and a validation set. 2) Clustering. DBSCAN is applied to the training set with preset parameters to provide preliminary focuses. 3) Validation. These focuses, combined with the validation set, are used to simulate real user behaviors. This produces feedback for the DBSCAN algorithm, and new parameters are chosen according to the feedback. The procedure stops upon convergence or when it reaches the preset upper bound on iterations. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{VFD} \caption{Workflow of VFD.} \label{draw:VFD} \vspace{-0.05in} \end{figure} \begin{table}[htbp] \begin{tabular}{|c|c|c|c|c|} \hline \textbf{ID}&\textbf{Timing}&\textbf{x}&\textbf{y}&\textbf{z}\\ \hline $101$&0.329&-1.542456&-0.2082523&0.2079071\\ \hline $101$&1.328&-1.54239&-0.2015937&0.2011556\\ \hline $102$&0.045&-1.495437&-0.02360264&1.607887\\ \hline $101$&2.336&-1.541883&-0.198082&0.1975058\\ \hline $\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$\\ \hline \end{tabular} \vspace{-0.05in} \centering\caption{Watching Record Data Table.} \label{table:watchingrecorddata} \end{table} The finite state machine (FSM) of DBSCAN and validation is shown in Figure \ref{draw:FSM}. The initial parameters for DBSCAN are preset, so the detected focuses are likely to be inaccurate and thus to perform poorly in saving bandwidth and improving the user experience. To address this issue, we add a validation step that verifies the performance by simulating real user behavior on the validation set.
If the performance improves, we change the parameters in the same direction with a fixed step length (on the first validation, the direction is chosen randomly). Otherwise, we change the parameters in the reverse direction with a reduced step length. The procedure stops either when it converges or when the number of iterations reaches the preset upper bound. In Section \ref{Experiment}, we carry out an experiment to determine appropriate preset parameters (see Figure \ref{eva:fnp}). \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{FSM} \caption{FSM of VFD.} \label{draw:FSM} \vspace{-0.05in} \end{figure} \vspace{-0.07in} \subsection{Part II: FIST} \vspace{-0.05in} In this subsection, we propose two versions of FIST. We first introduce the basic framework of FIST and then its two versions, a static one and a dynamic one, suitable for different videos. \vspace{0.09in} \noindent\textbf{Basic FIST Framework:} The basic framework of FIST is shown in Figure \ref{draw:FIST}. FIST consists of two main parts: 1) a video processor used to produce different copies of videos; 2) a \texttt{selector} that chooses which copy to transmit according to the current user viewpoint. Focus information from VFD is passed to the video processor, and several copies are produced. Each copy, namely an \texttt{fcopy}, has a high-resolution area covering one or more focuses, while the other parts are low-resolution. When users watch videos, devices such as head-mounted displays or mobile phones detect the users' viewpoints and report them to the selector, which chooses the corresponding copy to transmit. Note that the video processor also produces four copies, namely \texttt{bcopies}, each with a $90^\circ$ high-resolution area, together covering the whole video. When the user's viewpoint is outside every focus, the selector transmits one of these four copies according to the viewpoint.
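The selector logic just described can be sketched as follows. This is a simplified illustration under our own assumptions, not the paper's code: high-resolution areas are modeled as axis-aligned boxes on the equirectangular frame rather than spherical regions, and all names are ours:

```python
import random

def covers(copy, viewpoint):
    """True if the copy's high-resolution box (x0, x1, y0, y1) contains the viewpoint."""
    x0, x1, y0, y1 = copy["area"]
    x, y = viewpoint
    return x0 <= x <= x1 and y0 <= y <= y1

def select_copy(viewpoint, current, fcopies, bcopies):
    # Keep the current copy while it still covers the viewpoint: no switch, no lag phase.
    if current is not None and covers(current, viewpoint):
        return current
    # Prefer an fcopy whose high-resolution area contains the viewpoint.
    candidates = [c for c in fcopies if covers(c, viewpoint)]
    if candidates:
        return random.choice(candidates)
    # Otherwise fall back to the four bcopies, which jointly cover the whole video.
    return next(c for c in bcopies if covers(c, viewpoint))
```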
\begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{FIST} \caption{Framework of FIST.} \label{draw:FIST} \vspace{-0.05in} \end{figure} This method may introduce extra time and space consumption, but both are negligible compared to the bandwidth savings. First, the preprocessing needs to be done only once. Second, the low-resolution area of a video consumes far less memory than the high-resolution area, so the copies consume only a little more memory than a single $360^\circ\times180^\circ$ high-resolution video. Third, storage is cheap for panoramic VR video providers, so memory usage is not a problem either. \vspace{0.05in} \noindent\textbf{Static FIST:} Static FIST uses static focus information to produce video copies. To provide the focus information, VFD ignores the time dimension ($t$) and applies DBSCAN to the $x$ and $y$ dimensions, so the focuses contain only spatial information. The strategy for selecting which copy to transmit is shown in Algorithm \ref{alg:selection}. \vspace{0.05in} \noindent\textbf{Dynamic FIST:} Dynamic FIST is based on the key observation that in many videos the focuses move in a predictable pattern. For example, in a broadcast video of a basketball game, the focus is likely to follow the ball. If we applied static FIST to such a video, it would switch the transmitted copies back and forth, leading to frequent lag phases. Dynamic FIST addresses this problem by applying DBSCAN to the $x$, $y$, and $t$ dimensions, so the focuses contain time information. When we preprocess the videos to produce copies of the original videos, a copy covering a dynamic focus has a moving high-resolution area. Note that the implementation of dynamic FIST is still ongoing work. The selection algorithm for dynamic FIST is shown in Algorithm \ref{alg:selection}.
\begin{algorithm}[h] \caption{Copy Selection for FIST} \label{alg:selection} \KwIn{~~User viewpoint: $V_u$\\~~~~~~~~~~~~~Current copy being transmitted: $C_c$} \KwOut{Selected Copy} \If{$V_u$ in the high-resolution area of $C_c$} { Keep transmitting $C_c$\\ goto \textbf{Done}\\ } \Else { $\#$ Dynamic version\\ $\mathcal{S}_n$ = find dynamic fcopies containing $V_u$\\ $\#$ Static version\\ $\mathcal{S}_n$ = find static fcopies containing $V_u$\\ \If{$|\mathcal{S}_n|$==0} { $C_n$ = find bcopies containing $V_u$\\ } \Else { $C_n$ = randomly choose a fcopy from $\mathcal{S}_n$ } Switch to transmit $C_n$ } \textbf{Done} \end{algorithm} \begin{comment} \begin{algorithm}[h] \caption{Copy Selection for FIST} \label{alg:selection} \KwIn{User viewpoint: $V_u$\\ Current copy being transmitted: $C_c$} \KwOut{Selected Copy} $\#$ Note that the branch is only active in dynamic FIST\\ \If{$V_u$ locates in the high-resolution area of a dynamic fcopy $C_d$} { $\#$ Note that $C_d$ may be the same as $C_c$\\ Switch to transmit $C_d$ } $\#$ Note that the branch is only active in static FIST\\ \ElseIf{$V_u$ locates in the high-resolution area of a static fcopy $C_s$} { $\#$Note that $C_s$ may be the same as $C_c$\\ Switch to transmit $C_s$ } \ElseIf{$V_u$ locates in the high-resolution area of $C_c$} { Keep transmitting $C_c$ } \Else{ Find the bcopy $C_b$ covering $V_u$\\ Switch to transmit $C_b$ } \end{algorithm} \end{comment} \noindent\textbf{Static Version vs. Dynamic Version:} Note that the two versions suit different situations. When a video has static focuses, the static version performs better due to more accurate focuses. However, when the focuses move, the dynamic version is better, because the static version would switch copies frequently and introduce many lag phases. \vspace{-0.07in} \subsection{Part III: Ongoing Work} \vspace{-0.05in} In this subsection, we propose two optimization approaches: focus merging and a prefetch strategy.
\vspace{0.05in} \noindent\textbf{Focus Merging:} We observe that users sometimes switch frequently between two nearby focuses. Although one focus may be covered by a copy aimed at covering the other, the marginal part of the user's view may be blurry. To address this issue, we produce a copy covering both nearby focuses. Figure \ref{draw:focusmerging} illustrates the focus-merging technique. \begin{figure}[h] \centering \vspace{-0.05in} \includegraphics[width=1\linewidth]{focusmerge} \caption{A Focus Merging Example: Red part means high-resolution area, blue the opposite.} \label{draw:focusmerging} \vspace{-0.05in} \end{figure} \noindent\textbf{Prefetch Strategy:} The second optimization is the prefetch strategy. One can construct a probabilistic model from user data and apply it to predict the movement of the viewpoint. Based on this prediction, the server and the client both reserve some bandwidth to prefetch the predicted copies. If the users behave as predicted, they immediately see the high-resolution part without any lag phase.
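A minimal sketch of the prefetch idea, assuming (our assumption; the text leaves the model unspecified) a first-order model that prefetches the copy most frequently requested after the current one in past traces:

```python
from collections import Counter, defaultdict

def build_transition_model(switch_pairs):
    """switch_pairs: iterable of (current_copy, next_copy) observed in user traces."""
    counts = defaultdict(Counter)
    for cur, nxt in switch_pairs:
        counts[cur][nxt] += 1
    # For each copy, remember its most likely successor.
    return {cur: c.most_common(1)[0][0] for cur, c in counts.items()}

def prefetch_candidate(model, current_copy):
    """Copy worth prefetching with spare bandwidth, or None if the copy is unseen."""
    return model.get(current_copy)
```

If the prediction is right, the next copy is already cached when the viewpoint moves; if it is wrong, only the reserved spare bandwidth was spent.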
#define LOG_TAG "AudioDecoder" #include "audio/android/AudioDecoder.h" #include "audio/android/AudioResampler.h" #include "audio/android/PcmBufferProvider.h" #include <thread> #include <chrono> namespace cocos2d { namespace experimental { size_t AudioDecoder::fileRead(void* ptr, size_t size, size_t nmemb, void* datasource) { AudioDecoder* thiz = (AudioDecoder*)datasource; ssize_t toReadBytes = std::min((ssize_t)(thiz->_fileData.getSize() - thiz->_fileCurrPos), (ssize_t)(nmemb * size)); if (toReadBytes > 0) { memcpy(ptr, (unsigned char*) thiz->_fileData.getBytes() + thiz->_fileCurrPos, toReadBytes); thiz->_fileCurrPos += toReadBytes; } // ALOGD("File size: %d, After fileRead _fileCurrPos %d", (int)thiz->_fileData.getSize(), thiz->_fileCurrPos); return toReadBytes > 0 ? toReadBytes : 0; } int AudioDecoder::fileSeek(void* datasource, int64_t offset, int whence) { AudioDecoder* thiz = (AudioDecoder*)datasource; if (whence == SEEK_SET) thiz->_fileCurrPos = offset; else if (whence == SEEK_CUR) thiz->_fileCurrPos = thiz->_fileCurrPos + offset; else if (whence == SEEK_END) thiz->_fileCurrPos = thiz->_fileData.getSize(); return 0; } int AudioDecoder::fileClose(void* datasource) { return 0; } long AudioDecoder::fileTell(void* datasource) { AudioDecoder* thiz = (AudioDecoder*)datasource; return (long) thiz->_fileCurrPos; } AudioDecoder::AudioDecoder() : _fileCurrPos(0), _sampleRate(-1) { auto pcmBuffer = std::make_shared<std::vector<char>>(); pcmBuffer->reserve(4096); _result.pcmBuffer = pcmBuffer; } AudioDecoder::~AudioDecoder() { ALOGV("~AudioDecoder() %p", this); } bool AudioDecoder::init(const std::string &url, int sampleRate) { _url = url; _sampleRate = sampleRate; return true; } bool AudioDecoder::start() { auto oldTime = clockNow(); auto nowTime = oldTime; bool ret; do { ret = decodeToPcm(); if (!ret) { ALOGE("decodeToPcm (%s) failed!", _url.c_str()); break; } nowTime = clockNow(); ALOGD("Decoding (%s) to pcm data wasted %fms", _url.c_str(),
intervalInMS(oldTime, nowTime)); oldTime = nowTime; ret = resample(); if (!ret) { ALOGE("resample (%s) failed!", _url.c_str()); break; } nowTime = clockNow(); ALOGD("Resampling (%s) wasted %fms", _url.c_str(), intervalInMS(oldTime, nowTime)); oldTime = nowTime; ret = interleave(); if (!ret) { ALOGE("interleave (%s) failed!", _url.c_str()); break; } nowTime = clockNow(); ALOGD("Interleave (%s) wasted %fms", _url.c_str(), intervalInMS(oldTime, nowTime)); } while(false); ALOGV_IF(!ret, "%s returns false, decode (%s)", __FUNCTION__, _url.c_str()); return ret; } bool AudioDecoder::resample() { if (_result.sampleRate == _sampleRate) { ALOGI("No need to resample since the sample rate (%d) of the decoded pcm data is the same as the device output sample rate", _sampleRate); return true; } ALOGV("Resample: %d --> %d", _result.sampleRate, _sampleRate); auto r = _result; PcmBufferProvider provider; provider.init(r.pcmBuffer->data(), r.numFrames, r.pcmBuffer->size() / r.numFrames); const int outFrameRate = _sampleRate; int outputChannels = 2; size_t outputFrameSize = outputChannels * sizeof(int32_t); size_t outputFrames = ((int64_t) r.numFrames * outFrameRate) / r.sampleRate; size_t outputSize = outputFrames * outputFrameSize; void *outputVAddr = malloc(outputSize); auto resampler = AudioResampler::create(AUDIO_FORMAT_PCM_16_BIT, r.numChannels, outFrameRate, AudioResampler::MED_QUALITY); resampler->setSampleRate(r.sampleRate); resampler->setVolume(AudioResampler::UNITY_GAIN_FLOAT, AudioResampler::UNITY_GAIN_FLOAT); memset(outputVAddr, 0, outputSize); ALOGV("resample() %zu output frames", outputFrames); std::vector<int> Ovalues; if (Ovalues.empty()) { Ovalues.push_back(outputFrames); } for (size_t i = 0, j = 0; i < outputFrames;) { size_t thisFrames = Ovalues[j++]; if (j >= Ovalues.size()) { j = 0; } if (thisFrames == 0 || thisFrames > outputFrames - i) { thisFrames = outputFrames - i; } int outFrames = resampler->resample((int *) outputVAddr + outputChannels * i, thisFrames, 
&provider); ALOGV("outFrames: %d", outFrames); i += thisFrames; } ALOGV("resample() complete"); resampler->reset(); ALOGV("reset() complete"); delete resampler; resampler = nullptr; // mono takes left channel only (out of stereo output pair) // stereo and multichannel preserve all channels. int channels = r.numChannels; int32_t *out = (int32_t *) outputVAddr; int16_t *convert = (int16_t *) malloc(outputFrames * channels * sizeof(int16_t)); const int volumeShift = 12; // shift requirement for Q4.27 to Q.15 // round to half towards zero and saturate at int16 (non-dithered) const int roundVal = (1 << (volumeShift - 1)) - 1; // volumePrecision > 0 for (size_t i = 0; i < outputFrames; i++) { for (int j = 0; j < channels; j++) { int32_t s = out[i * outputChannels + j] + roundVal; // add offset here if (s < 0) { s = (s + 1) >> volumeShift; // round to 0 if (s < -32768) { s = -32768; } } else { s = s >> volumeShift; if (s > 32767) { s = 32767; } } convert[i * channels + j] = int16_t(s); } } // Reset result _result.numFrames = outputFrames; _result.sampleRate = outFrameRate; auto buffer = std::make_shared<std::vector<char>>(); buffer->reserve(_result.numFrames * _result.bitsPerSample / 8); buffer->insert(buffer->end(), (char *) convert, (char *) convert + outputFrames * channels * sizeof(int16_t)); _result.pcmBuffer = buffer; ALOGV("pcm buffer size: %d", (int)_result.pcmBuffer->size()); free(convert); free(outputVAddr); return true; } //----------------------------------------------------------------- bool AudioDecoder::interleave() { if (_result.numChannels == 2) { ALOGI("Audio channel count is 2, no need to interleave"); return true; } else if (_result.numChannels == 1) { // If it's a mono audio, try to compose a fake stereo buffer size_t newBufferSize = _result.pcmBuffer->size() * 2; auto newBuffer = std::make_shared<std::vector<char>>(); newBuffer->reserve(newBufferSize); size_t totalFrameSizeInBytes = (size_t) (_result.numFrames * _result.bitsPerSample / 8); for 
(size_t i = 0; i < totalFrameSizeInBytes; i += 2) { // get one short value char byte1 = _result.pcmBuffer->at(i); char byte2 = _result.pcmBuffer->at(i + 1); // push two short value for (int j = 0; j < 2; ++j) { newBuffer->push_back(byte1); newBuffer->push_back(byte2); } } _result.numChannels = 2; _result.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT; _result.pcmBuffer = newBuffer; return true; } ALOGE("Audio channel count (%d) is wrong, interleave only supports converting mono to stereo!", _result.numChannels); return false; } }} // namespace cocos2d { namespace experimental {
\section{Introduction} In this paper we attack the computational complexity of the Conjugacy and Conjugacy Search Problems in free solvable groups. We show that they are both solvable in polynomial time and that the degree of the polynomial is uniform for the class of free solvable groups. Further, we show that the Conjugacy Problem and the Conjugacy Search Problem in wreath products are solvable in polynomial time modulo some natural conditions. Algorithmic problems in group theory were considered as early as 1910, when Dehn introduced the now famous Word and Conjugacy Problems. Briefly, for a finitely generated group $G$, given two words as a product of generators, the Word Problem asks whether they are equal as elements of $G$ and the Conjugacy Problem asks whether they are conjugate to each other in $G$. Both of these decision problems quickly became an active area of research. Novikov (\cite{Novikov:1952}, \cite{Novikov:1958}) gave the first example of a finitely presented group with undecidable Word (and hence Conjugacy) Problem. A beautiful result of Miller exhibits a group which has decidable Word Problem and undecidable Conjugacy Problem \cite{Miller1}. At present, there are many interesting classes of groups where these problems are decidable. Here we mention only a few positive results about non-solvable groups and discuss solvable groups in more detail below. The Word and Conjugacy Problems are decidable in braid groups (Artin, \cite{Artin}), hyperbolic groups (Gromov, \cite{Gromov_hyperbolic}), wreath products of groups under some natural additional conditions (Matthews \cite{Matthews:1966}), the Grigorchuk group (Grigorchuk \cite{Grig98}, Leonov \cite{Leonov}), bi-automatic groups (Gersten and Short, \cite{GerShort}), toral relatively hyperbolic groups, free solvable groups (Remeslennikov, Sokolov \cite{Remeslennikov-Sokolov:1970}). 
Nowadays, while decidability is still an open area of research, the emphasis has shifted to the complexity of decidable problems. It is worth mentioning the work of Lysenok, Miasnikov, and Ushakov, who showed in \cite{LMU} that the Conjugacy Problem in the Grigorchuk group is decidable in polynomial time, the work of Lipton and Zalenstein on the polynomial-time decidability of the Word Problem in linear groups \cite{LipZal}, the work of Marshall, Bridson and Haefliger, and Epstein and Holt, which, through successively improving time bounds, culminates in showing that the Conjugacy Problem in word-hyperbolic groups is decidable in linear time \cite{Epstein_linhyp}, and the work of Cannon, Goodman and Shapiro, and Holt and Rees \cite{Holt} giving a linear-time algorithm for deciding the Word Problem in nilpotent groups. Solvable groups offer a whole new world on their own. An example by Kharlampovich of a solvable group with undecidable Word (and hence Conjugacy) Problem shows that one cannot derive any positive results about the entire class of solvable groups. However, there are many interesting subclasses in which the Conjugacy Problem is decidable, for instance finitely generated metabelian groups (Noskov \cite{Noskov:1982}), nilpotent groups (Blackburn \cite{Blackburn_nilpotent}), polycyclic groups (Remeslennikov \cite{Remeslennikov_CP_polycyclic}) and free solvable groups (Remeslennikov--Sokolov \cite{Remeslennikov-Sokolov:1970}). In all of the above cases, however, the results are about decidability without mention of the time complexity. The complexity of algorithmic problems in solvable groups has recently become an active area of research with a paper by Miasnikov, Roman'kov, Ushakov and Vershik \cite{MRUV} which presents a cubic-time algorithm to decide the Word Problem in free solvable groups. Most complexity results concern a fixed group.
To the knowledge of the author, there is no other studied class of infinite solvable groups for which the Word and Conjugacy Problems can be decided uniformly in polynomial time. Even in the cases where one can solve the given problem using a general description of the group, the algorithm involves heavy pre-computations specific to this group which cannot be generalized to produce a uniformly polynomial-time algorithm. In this paper we use this result in \cite{MRUV} to show that the Conjugacy Problem in free solvable groups is decidable in quintic time. The proof follows the ideas of Remeslennikov and Sokolov (\cite{Remeslennikov-Sokolov:1970}). First, we embed the free solvable group of degree $(d+1)$ and rank $r$ in a wreath product of an abelian group and a free solvable group of degree $d$. The image of a word of length $n$ can be found in time $O(rdn^3)$. Since the images of two words under the Magnus embedding are conjugate if and only if these words are conjugate, we can apply our general result, namely that the Conjugacy Problem in this wreath product is decidable in polynomial time, provided the Conjugacy Problems in each factor (and the Power Problem in the second factor) are decidable in polynomial time. The second factor is a free solvable group of lesser degree, so we proceed by induction. Similarly, we solve the Conjugacy Search Problem. \comment{ The first section gives some general definitions and discusses known algorithmic results on the Magnus embedding, which will be used later on. In Section~\ref{sec:algorithm wreath} the algorithm given by Matthews (\cite{Matthews:1966}) is modified and its complexity is analyzed. In Section~\ref{sec:algorithm free solvable} it is shown that the Conjugacy Problem in a free solvable group of degree $d$ and rank $r$ can be decided in time $O(rdn^3)$, where $n$ is the length of the input words. 
} \section{Preliminaries} \subsection{Wreath products and the Magnus embedding} We start by defining the objects essential to this paper -- wreath products and the Magnus embedding. Let $G$ be a group generated by a fixed finite set of generators $Y$. We represent elements in $G$ by words $w$ over $Y^{\pm}$ and denote by $|w|$ the length of the word $w$. Let $A$ and $B$ be groups. The \emph{restricted wreath product} $A{\mathrm{wr}} B$ is the group formed by the set $$A{\mathrm{wr}} B = \{bf \mid b\in B,\; f\in A^{(B)}\}, $$ with multiplication defined by $bfcg = bcf^cg$, where $f^c(x) = f(x c^{-1})$ for $x \in B$, where $A^{(B)}$ denotes the set of functions from $B$ to $A$ with finite support (i.e., functions from $B$ to $A$ which take non-zero values only for finitely many elements of $B$). Note that $A^{(B)}$ is a group under pointwise multiplication of functions with identity $1: B \rightarrow 1$, so we can view $A {\mathrm{wr}} B$ as the semi-direct product $B \ltimes A^{(B)}$. Let $X = \{x_1, \hdots, x_n\}$ and $Y = \{y_1, \hdots, y_m\}$ be the generating sets for $A$ and $B$, respectively. $A{\mathrm{wr}} B$ is generated by $X, Y$ in the following sense: every function, $f \in A^{(B)}$ can be written as a product $f = \prod_i a_i^{b_i}$. Indeed consider the functions of the form $$f_{a_i,b_i}(x)=\left\{ \begin{array}{rl} a_i & \text{if } x = b_i \\ 1 & \text{otherwise }\\ \end{array} \right.$$ For simplicity, we denote $f_{a_i, 1}$ by $f_{a_i}$. Then for any $f \in A^{(B)}$, one can write $f = \prod_i f_{a_i,b_i} = \prod_i f_{a_i}^{b_i}$. There is clearly an identification between $f_{a_i}$ and $a_i$. \begin{remark} \label{remark: ordering supp(f)} One can rewrite a word $w = b_1 a_1\hdots b_k a_k$ in generators $X$ and $Y$ as $w = bf$ in polynomial time. 
Observe that $$w = b_1\hdots b_k \, a_1^{b_2\hdots b_k} a_2^{b_3\hdots b_k} \hdots a_{k-1}^{b_k} a_k.$$ Here $b = b_1\hdots b_k \in B$ and $a_1^{b_2\hdots b_k} a_2^{b_3\hdots b_k}\hdots a_{k-1}^{b_k} a_k$ corresponds to a function in $A^{(B)}$ as follows. Denote $B_i = b_i\hdots b_k$. For each $1<i<j\leq k$, check whether $B_i=B_j$ in $B$. This amounts to solving ${k-1 \choose 2}$ Word Problems in $B$. For each maximal set of equal values $B_{i_1} = B_{i_2} = \hdots = B_{i_j}$, set $f(B_{i_1}) = a_{i_1-1} a_{i_2-1} \hdots a_{i_j-1}$, the product (in order of increasing index) of those $a_{i-1}$ whose exponent is $B_{i_1}$. This determines $f$ completely and we can change presentations in time $O(|w|^2 T_{WB}(|w|))$, where $T_{WB}$ is the time function for the Word Problem in $B$. Note that if a word is given as a product of generators, converting it to standard (or pair) form gives an ordering for ${\mathrm{supp}}(f) = \{B_i\}_i$ determined by the indices $i$. More precisely, $B_i < B_j$ whenever $i < j$. \end{remark} Fix a free group $F$ of rank $r$ with basis $X$. The terms of the derived series $F^{(d)}$ are defined by induction as follows: $F^{(1)} = F' = [F,F]$ and $F^{(d+1)} = [F^{(d)}, F^{(d)}]$. Define the free solvable group $S_{d,r} = F/F^{(d+1)}$. Let $N$ be a normal subgroup of $F$. Denote by $\mu : F \rightarrow F/N$ the canonical epimorphism. Let $U$ be a free $\mathbb{Z}(F/N)$-module with basis $\{u_1, \hdots, u_r\}$, so $U \simeq \mathbb{Z}(F/N) \oplus \hdots \oplus \mathbb{Z}(F/N) $. Then the set of matrices \begin{equation*} M(F/N) = \left( \begin{array}{cc} F/N & U\\ 0 & 1 \end{array} \right) = \bigg\{ \left( \begin{array}{cc} g & u\\ 0 & 1 \end{array} \right) \mid g\in F/N, u\in U \bigg\} \end{equation*} forms a group with respect to matrix multiplication. One can see (see, for example, \cite{Remeslennikov-Sokolov:1970}) that $M(F/N) \simeq F/F' {\mathrm{wr}} F/N$. 
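To make the pair form concrete, here is a minimal Python sketch (ours, not from the paper) for the special case $A = B = \mathbb{Z}$ written additively, where the Word Problem in $B$ is just integer comparison. Elements $bf$ are stored as pairs `(b, f)` with `f` a dict holding the finitely many non-trivial values; the names `wr_mul` and `to_pair_form` are our own.

```python
# Minimal sketch of Z wr Z (both factors infinite cyclic, additive notation).

def wr_mul(x, y):
    """(b, f)(c, g) = (b + c, f^c g), where f^c(t) = f(t - c)."""
    b, f = x
    c, g = y
    h = {t + c: a for t, a in f.items()}        # f^c: shift supp(f) by c
    for t, a in g.items():
        h[t] = h.get(t, 0) + a                  # pointwise product in A = Z
    return (b + c, {t: a for t, a in h.items() if a != 0})

def to_pair_form(syllables):
    """Rewrite w = b_1 a_1 ... b_k a_k (a list of pairs (b_i, a_i)) as (b, f),
    following the Remark: a_i ends up conjugated by B_{i+1} = b_{i+1} + ... + b_k."""
    k = len(syllables)
    B = [0] * (k + 1)                           # suffix sums B_i, with B_{k+1} = 0
    for i in range(k - 1, -1, -1):
        B[i] = syllables[i][0] + B[i + 1]
    f = {}
    for i, (_, a) in enumerate(syllables):
        f[B[i + 1]] = f.get(B[i + 1], 0) + a    # collect a_i's with equal exponents
    return (B[0], {t: a for t, a in f.items() if a != 0})

t = (1, {})       # the generator of B
a = (0, {0: 1})   # the generator of A, viewed as f_a
# t a t^{-1} a collects to b = 0 and f supported at {-1, 0}:
print(to_pair_form([(1, 1), (-1, 1)]))    # -> (0, {-1: 1, 0: 1})
```

Multiplying out the same syllables with `wr_mul` gives the same pair, which is a quick sanity check on the collection formula.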
The map $\varphi: F(X)\rightarrow M(F/N)$ defined by $$x_i \mapsto \left( \begin{array}{cc} \mu (x_i) & u_i\\ 0 & 1 \end{array}\right), \quad i=1, \hdots, r$$ extends to an injective homomorphism $\varphi : F/N^{\prime} \rightarrow M(F/N)$, called the \emph{Magnus embedding}. In the sequel, for $x \in F$ put $$\varphi(x) = \left( \begin{array}{cc} \mu (x) & u_x\\ 0 & 1 \end{array} \right).$$ \subsection{Algorithmic Results for the Magnus Embedding} Here we present and prove a few preliminary results on the Magnus embedding that we will need in Section \ref{sec:algorithm free solvable}. \begin{theorem}[\cite{Remeslennikov-Sokolov:1970}] \label{prop:f,g conj in S_(d+1) iff in M(S_d)} Let $\bar{f},\bar{g} \in F/N'$, where $N$ is normal in $F$ and $N'$ is torsion-free. Then $\bar{f}$ and $\bar{g}$ are conjugate in $F/N'$ if and only if their images in $M(F/N)$ are conjugate. \end{theorem} In particular, the theorem above holds for the free solvable group $F/F^{(d+1)}$, which is $F/N'$ for $N = F^{(d)}$. \begin{theorem}[\cite{MRUV}] \label{prop:magnus embedding is O(dr w^3)} \label{prop:WP in S_d is O(dr w^3)} The following hold: \begin{enumerate} \item[1)] For a given $w \in S_{d,r}$, one can compute $\varphi(w)$ in time $O(dr|w|^3)$; \item[2)] The Word Problem in $S_{d,r}$ is solvable in time $O(dr|w|^3)$, where $w$ is the input word. \end{enumerate} \end{theorem} \begin{corollary} \label{cor: reduction to CP in wreath is poly} The Conjugacy Problem in $S_{d,r}$ reduces to the Conjugacy Problem in $F/F' {\mathrm{wr}} S_{d-1,r}$ in time $O(rdL^3)$, where $L$ is the length of the input words. \end{corollary} The Power Problem in a group $G$ is the following: given elements $x,y \in G$, determine whether there exists an integer $n\in\mathbb{Z}$ such that $x = y^n$ and, if so, find it. 
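For intuition, the matrices in $M(F/N)$ can be multiplied concretely in the smallest case. The sketch below (our code, not the paper's) takes $r = 1$ and $F/N = \mathbb{Z}$ written additively, so the module $U$ is the group ring $\mathbb{Z}[\mathbb{Z}]$ of Laurent polynomials, stored as a dict `{exponent: coefficient}`; the name `m_mul` is ours.

```python
# A pair (g, u) stands for the matrix [[g, u], [0, 1]] in M(F/N), with
# g in Z (the group part, additive) and u in the group ring Z[Z].

def m_mul(x, y):
    """[[g1,u1],[0,1]] * [[g2,u2],[0,1]] = [[g1+g2, u1 + g1*u2], [0,1]]."""
    g1, u1 = x
    g2, u2 = y
    u = {e + g1: c for e, c in u2.items()}  # g1 * u2: the group element shifts exponents
    for e, c in u1.items():
        u[e] = u.get(e, 0) + c
    return (g1 + g2, {e: c for e, c in u.items() if c != 0})

phi_x = (1, {0: 1})         # varphi(x): group part mu(x), module part u_1
phi_x_inv = (-1, {-1: -1})  # varphi(x^{-1}); multiplying the two gives the identity (0, {})
```

As a sanity check, `m_mul(phi_x, phi_x)` yields `(2, {0: 1, 1: 1})`: the module part $1 + t$ records where the two occurrences of the generator sit, in the spirit of the Fox-derivative description of the Magnus embedding.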
\begin{theorem} \label{prop:power problem is O(dr w^3)} The Power Problem in $F/F^{(d)}$ is decidable in time $O(rdL^6)$, where $r$ is the rank of $F$ and $L=|x|+|y|$ is the length of the input. \end{theorem} \begin{proof} Let $x$ and $y$ be elements in $F/F^{(d)}$ given as products of generators. Consider first the two trivial cases. If $y=1$, which can be checked in time $O(rd|y|^3)$, the problem reduces to a Word Problem, which is decidable in time $O(rd|x|^3)$. If $x=1$, then $n=0$ is always a solution. Hence, after some preliminary computation which can be done in time $O(rdL^3)$, we can assume without loss of generality that both $x$ and $y$ are non-trivial elements in $F/F^{(d)}$. Observe the following. \begin{fact} \begin{enumerate} \item \label{fact1} If there exists $n\in \mathbb{Z}$ such that $x=y^n$ in $F/F^{(d)}$, then $x=y^n$ in $F/F'$. \item \label{fact2} If there exists $n\in \mathbb{Z}$ such that $x=y^n$ in $F/F^{(d)}$, then $n$ is unique with this property. \end{enumerate} \end{fact} The first claim follows easily since $F/F'$ is a quotient of $F/F^{(d)}$, and the second follows from the fact that free solvable groups are torsion-free. We proceed to solve the general case of the Power Problem in a free solvable group $F/F^{(d)}$. \begin{enumerate} \item[Step 1: ] Solve the Power Problem in $F/F'$. It is a free abelian group, so the elements $x$ and $y$ can be uniquely presented in the form $x=x_1^{a_1} \hdots x_r^{a_r}$ and $y=x_1^{b_1} \hdots x_r^{b_r}$, where $X = \{x_1, \hdots, x_r\}$ is the basis for $F$. This decomposition can be found in log-linear time, which is certainly in $O(rL^6)$. Then for each $1 \leq i \leq r$ set $n_i = a_i/b_i$ (coordinates with $a_i = b_i = 0$ are skipped, and if $b_i = 0 \neq a_i$ for some $i$, there is no solution). If all the $n_i$ are equal to the same integer $n$, then $x=y^{n}$, as required. Otherwise, $x \not\in \langle y\rangle$ and we are done. Clearly, this can be done in time $O(r(|x|+|y|))$. Note that the exponent $n$ satisfies $|n| \leq |x|+|y| = L$. 
\item[Step 2: ] Using $n$ from Step~1, check whether the equation \begin{equation} \label{eqn: x=y^n} x=y^n \end{equation} holds in $F/F^{(d)}$. By Theorem~\ref{prop:WP in S_d is O(dr w^3)}, this can be done in time $O\big(rd \big( |x| + n|y|\big)^3 \big) \subseteq O( rd L^6)$. If (\ref{eqn: x=y^n}) does not hold, then $x \neq y^m$ for all integers $m$. Indeed, if there were some $m\in\mathbb{Z}$ for which $x=y^m$ in $F/F^{(d)}$, then by Fact~\ref{fact1} the same equation would hold in $F/F'$. But by the uniqueness of $n$ (Fact~\ref{fact2}), this is impossible. \end{enumerate} \end{proof} \section{Complexity of the Conjugacy Problem in Wreath Products} \label{sec:algorithm wreath} We establish a bound on the complexity of the Conjugacy Problem in wreath products $A{\mathrm{wr}} B$ by giving a bound for a variant of the algorithm developed by Matthews \cite{Matthews:1966}. Let $x=bf, y=cg \in A{\mathrm{wr}} B$, where $b,c \in B$ and $f,g \in A^{(B)}$. Denote ${\mathrm{supp}}(f) = \{b_1, \hdots, b_n\}$ and ${\mathrm{supp}}(g) = \{\beta_1, \hdots, \beta_m\}$, where the $b_i$ and $\beta_j$ are ordered as in Remark~\ref{remark: ordering supp(f)}. Recall that all elements are given as words in generators. Let $\bar{b}$ and $\bar{\beta}$ be the longest elements in ${\mathrm{supp}}(f)$ and in ${\mathrm{supp}}(g)$, and let $\bar{a}$ and $\bar{\alpha}$ be the longest elements in the images of $f$ and of $g$, respectively. For each left $\langle b \rangle$-coset in $B$ that intersects ${\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, choose a coset representative from ${\mathrm{supp}}(f) \cup {\mathrm{supp}}(g)$ and let $T_b=\{t_i\}_{i \in I_1\cup I_2}$, where $I_1$ indexes the coset representatives we just chose and $I_2$ indexes the remaining ones. Deciding whether $b_i, b_j \in {\mathrm{supp}}(f) \cup {\mathrm{supp}}(g)$ are in the same coset is a Power Problem, since $b_i$ and $b_j$ are in the same coset if and only if $b_i b_j^{-1} = b^k$ for some $k$. 
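Both Step 1 of the preceding proof and the coset test just described reduce to Power Problem computations, and in the free abelian case the whole computation is elementary. Below is a sketch (the helper name `abelian_power` and the exponent-vector input are ours): given the vectors $(a_1,\hdots,a_r)$ and $(b_1,\hdots,b_r)$ for $x$ and $y$, it returns the unique $n$ with $x = y^n$, or `None` if no such integer exists.

```python
# Power Problem in the free abelian group F/F', on exponent vectors.

def abelian_power(a_vec, b_vec):
    n = None
    for a, b in zip(a_vec, b_vec):
        if b == 0:
            if a != 0:
                return None        # a_i = n * 0 is impossible when a_i != 0
            continue               # a_i = b_i = 0 puts no constraint on n
        if a % b != 0:
            return None            # n_i = a_i / b_i is not an integer
        q = a // b
        if n is None:
            n = q
        elif n != q:
            return None            # the candidate exponents n_i disagree
    return n                       # None can remain only when y = 1 (excluded earlier)
```

Note that the coordinates with $b_i = 0$, left implicit by the quotients $n_i = a_i/b_i$ in the proof, are handled explicitly here.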
To find $T_b$ one needs to solve the Power Problem ${(n+m) \choose 2}$ times (for all pairs $(b_i, b_j)$). Hence it takes time ${(n+m) \choose 2}T_{PB}( 2|\bar{b}| + 2|\bar{\beta}| + |b|)$, where $T_{PB}$ is the time function for the power problem in $B$. For each $\gamma\in B$ and $i \in I_1 \cup I_2$, associate with $T_b$ the following map $\pi_{t_i}^{(\gamma)}: A^{(B)} \rightarrow A$: \begin{equation*} \pi_{t_i}^{(\gamma)}(f) = \left\{ \begin{array}{rl} \prod\limits_{j=0}^{N-1}f(t_ib^j\gamma^{-1}) & \text{if } b \text{ is of finite order } N, \\ \\ \prod\limits_{j=-\infty}^{\infty}f(t_ib^j\gamma^{-1}) & \text{if } b \text{ is of infinite order. }\\ \end{array} \right. \end{equation*} Note that in the above all the products are finite, since $f$ has finite support. Denote $\pi_{t_i}^{(1)}(f)$ by $\pi_{t_i}(f)$. Matthews gives a condition to check conjugacy, which will be used here. \begin{theorem}[\cite{Matthews:1966}] \label{thm: conjugacy criterion for wreath} Let $A$, $B$ be finitely generated groups. Two elements $x=bf, y=cg \in A{\mathrm{wr}} B $ are conjugate if and only if there exists $d\in B$ such that for all $t_i \in T_b$ the following hold: \begin{enumerate} \item[(1)] $db = cd$, \item[(2)] when the order of $b$ is finite, $\pi_{t_i}^{(d)}(g)$ is conjugate to $\pi_{t_i}(f)$ in A, \item[(3)] when the order of $b$ is infinite, $\pi_{t_i}^{(d)}(g)= \pi_{t_i}(f)$ in A. \end{enumerate} \end{theorem} In order to use this criterion computationally, we need to circumvent the use of the conjugator $d$. \begin{lemma} \label{lemma: conjugacy bar and tilde} Let $\{\bar{s_i}\}_{i\in I}$ and $\{\tilde{s_i}\}_{i\in I}$ be two sets of left $\langle c \rangle$-coset representatives such that $\bar{s_i} \langle c \rangle = \tilde{s_i} \langle c \rangle$. Then $\pi_{\bar{s_i}}(g)$ and $\pi_{\tilde{s_i}}(g)$ are conjugate for any $i\in I$. 
\end{lemma} \begin{proof} Since $\bar{s_i} \langle c \rangle = \tilde{s_i} \langle c \rangle$, there is some integer $k_i$ for which $\bar{s_i} = \tilde{s_i}c^{k_i}$ and hence, $$\pi_{\bar{s_i}}(g) = \prod_j g(\bar{s_i}c^j) = \prod_j g(\tilde{s_i}c^{k_i}c^j) = \prod_j g(\tilde{s_i}c^{k_i+j}).$$ This last product is a cyclic permutation of the factors in $\prod_j g(\tilde{s_i}c^j) = \pi_{\tilde{s_i}}(g)$ and so is conjugate to $\pi_{\tilde{s_i}}(g)$. \end{proof} Using Theorem~\ref{thm: conjugacy criterion for wreath} and Lemma~\ref{lemma: conjugacy bar and tilde}, we show that the time complexity of the Conjugacy Problem in wreath products is polynomial. \begin{theorem}\label{thm: CP in wreath products is poly} Let $A$ and $B$ be finitely generated groups such that the following hold: \begin{enumerate} \item[1)] there are decision algorithms for the Conjugacy Problem in $A$ and in $B$ with polynomial time functions, $T_{CA}$, $T_{CB}$, respectively; \item[2)] there is an algorithm with polynomial time function $T_{PB}$ for the Power Problem in $B$. \end{enumerate} Then the Conjugacy Problem in $A{\mathrm{wr}} B$ is decidable with complexity \begin{equation} \label{eqn: complexity wreath prod} O\big(L^2T_{CA}(L^2) + LT_{CB}(L) + L^2T_{PB}(L) \big), \end{equation} where $L=|x| +|y|$ is the length of the input pair $x,y \in A{\mathrm{wr}} B$. \end{theorem} \begin{remark} \label{remark: CP => WP} Note that the Word Problem ``Is $x=1$?'' is precisely the Conjugacy Problem ``Is $x$ conjugate to $1$?''. To simplify the presentation, the complexities of all Word Problems considered in this section will be bounded by the complexities of the corresponding Conjugacy Problems. \end{remark} \begin{proof} Let $x=bf, y=cg \in A{\mathrm{wr}} B$. The notation from the beginning of this section will be used throughout. 
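Before the complexity bookkeeping, it may help to see the maps $\pi_{t}^{(\gamma)}$ in the simplest infinite-order setting. The sketch below (our code, for $A = B = \mathbb{Z}$ written additively, $b \neq 0$) evaluates the product of $f$ over the coset $\{t + jb - \gamma : j \in \mathbb{Z}\}$; the membership test is exactly a Power Problem instance in $B$.

```python
# pi_t^{(gamma)}(f) for A wr B with A = B = Z (additive) and b of infinite
# order.  f is a dict holding the finitely many non-trivial values.

def pi(t, gamma, b, f):
    total = 0                      # the empty product is the identity of A
    for pos, val in f.items():
        # pos = t + j*b - gamma for some j  <=>  b divides (pos + gamma - t);
        # deciding this is a Power Problem instance in B = Z
        if (pos + gamma - t) % b == 0:
            total += val
    return total
```

Since $A$ is abelian here, the order of the factors is immaterial; in the general case the ordering issue is addressed in the Claim that follows.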
In order to simplify the subsequent treatment of complexity in this section, we will implicitly use the bounds $$|x|, |y|, n, m, |c|, |b|, |\bar{b}|, |t_i|, |\bar{a}| \leq L. $$ \begin{claim} \label{subsec: compute pi_i} There is a polynomial time algorithm which computes $\pi_{t_i}^{(\gamma)}(f)$. More precisely, \begin{itemize} \item $\pi_{t_i}^{(\gamma)}(f)$ can be computed in time $L T_{PB}(L)$. \item $|\pi_{t_i}^{(\gamma)}(f)| \leq L^2$. \end{itemize} \end{claim} \begin{proof} The algorithm is as follows: \begin{description} \item[Step 1: ] For each $b_k \in {\mathrm{supp}}(f)$ check whether there is some $j$ such that $t_i b^j \gamma ^{-1} = b_k$, i.e., $t_i^{-1}b_k \gamma = b^j$. This is an instance of the Power Problem in $B$ and so can be done in time $T_{PB}(2|\bar{b}| + |b| + |\gamma|)$. If such a $j$ exists, look up the corresponding value $a_j = f(b_k)$. Otherwise, $a_j$ does not occur in the product. \item[Step 2: ] There are $n$ elements in ${\mathrm{supp}} (f)$ to perform computations on, so computing $\pi_{t_i}^{(\gamma)}(f)$ takes time $n T_{PB}(2|\bar{b}| + |b| + |\gamma|)$. \item[Step 3: ] Set $\pi_{t_i}^{(\gamma)}(f) = \prod_j a_j$. Note that the order in which the factors are multiplied is a priori determined by the solution $j$ to the Power Problem. However, if the order of $b$ is finite, by the definition of $\pi$ we take $j \mod N$, and if the order of $b$ is infinite, then the solution to the Power Problem is unique because in this case $\langle b \rangle$ is torsion-free. Thus, a fortiori, $\pi_{t_i}^{(\gamma)}(f)$ is indeed equal to $\prod_j a_j$, where the $a_j$ are computed as above. \end{description} Note that $|\pi_{t_i}^{(\gamma)}(f)| \leq n|\bar{a}|$, since each factor in the product $\pi_{t_i}^{(\gamma)}(f)$ is in the image of $f$. \end{proof} We modify the algorithm from \cite{Matthews:1966} so that it runs in polynomial time as follows: \begin{description} \item[Step 1.] Determine whether $b$ and $c$ are conjugate in $B$. 
This takes time $T_{CB}(|x|+|y|) \in O(T_{CB}(L))$. If not, $x$ and $y$ are not conjugate. If $b$ and $c$ are conjugate in $B$, let $d\in B$ be such that $db = cd$ (it is not required to find this $d$). \item[Step 2.] Consider the following three cases. \end{description} \begin{description} \item[Case 1:] $g = 1$. Then $\pi_{t_i}^{(d)}(g) = 1$, so $x$ and $y$ are conjugate if and only if $\pi_{t_i}(f) = 1$. To check this compute $\pi_{t_i}(f)$ as in Claim~\ref{subsec: compute pi_i} and solve the Word Problem in $A$. This takes time \begin{equation}\label{eqn: complexity case 1} O \big( L T_{PB}(L) + T_{CA}(L^2) \big). \end{equation} \item[Case 2:] $g \neq 1$, and $\pi_{t_i}(f) = 1$ for all $i\in I_1$. In order to check the latter, simply compute $\pi_{t_i}(f)$ for all $i\in I_1$. This will take time $O(L^2 T_{PB}(L))$. Then, by Theorem~\ref{thm: conjugacy criterion for wreath}, $x$ is conjugate to $y$ if and only if $\pi_{t_i}^{(d)}(g) = 1$ for all $i\in I_1$ (since the $\pi_{t_i}^{(d)}(g) = 1$ for $i\in I_2$). Note that we need not know what $d$ actually is -- its existence is enough. Indeed, since $db=cd$, $g(t_i b^j d^{-1}) = g(t_i d^{-1}c^j)$ and hence $$\pi_{t_i}^{(d)}(g) = \prod_j g(t_i b^j d^{-1}) = \prod_j g(t_i d^{-1}c^j) = \pi_{t_id^{-1}}(g),$$ where $\{t_id^{-1}\}_{i\in I_1 \cup I_2}$ is a set of left $\langle c \rangle$-coset representatives. Moreover, by Lemma~\ref{lemma: conjugacy bar and tilde}, $\pi_{t_id^{-1}}(g)$ is conjugate to $\pi_{s_i}(g)$ for any other set of left $\langle c \rangle$-coset representatives $\{s_i\}_{i\in I_1 \cup I_2}$ for which $t_id^{-1}\langle c \rangle = s_i\langle c \rangle$. It follows that $\pi^{(d)}_{t_i}(g) = 1$ for all $i\in I_1 \cup I_2$ if and only if $\pi_{s_i}(g) = 1$ for all $i \in I_1\cup I_2$. 
Since $\pi_{s_i}(g)=1$ for all $i\in I_2$, to check whether $x$ and $y$ are conjugate, it is enough to check whether for some set of left $\langle c \rangle$-coset representatives $T_c = \{s_i\}_{i\in I_1}$, $\pi_{s_i}(g) = 1$ for all $i \in I_1$. Choosing $T_c$ can be done in time $O(L^2 T_{PB}(L))$ and by Claim~\ref{subsec: compute pi_i}, checking whether $\pi_{s_i}(g) = 1$ for all $i \in I_1$ can be done in time $L^2T_{CA}(L^2)$. Thus checking whether $x$ and $y$ are conjugate takes time \begin{equation}\label{eqn: complexity case 2} O\big( L^2 T_{PB}(L) + L^2 T_{CA}(L^2) \big). \end{equation} \item[Case 3:] $g \neq 1$ and some $\pi_{t_i}(f) \neq 1$. There are two subcases: \item[1)] \emph{The order of $b$ is finite.} By Theorem~\ref{thm: conjugacy criterion for wreath}, $x$ and $y$ are conjugate if and only if $\pi_{t_i}(f)$ and $\pi_{t_i}^{(d)}(g)$ are conjugate. As in Case~2, $\pi_{t_i}^{(d)}(g) = \pi_{t_id^{-1}}(g)$, which is conjugate to $\pi_{s_i}(g)$ if $t_id^{-1} \langle c \rangle = s_i \langle c \rangle$. This does not have to be the case for the set $T_c = \{s_i\}_{i\in I_1}$ computed in Case~2, but we know that for each $i\in I_1 \cup I_2$ there is a unique $k\in I_1 \cup I_2$ such that $t_id^{-1} \langle c \rangle = s_k \langle c \rangle$. Hence, for each $i\in I_1$, it is enough to check for all $k\in I_1$ whether $$\pi_{t_i}(f) \text{ and } \pi_{s_k}(g) \text{ are conjugate. }$$ If for each $i\in I_1$ there is some $k\in I_1$ for which this is true, then $x$ and $y$ are conjugate. Otherwise, they are not. Note that the above computations amount to solving $L^2$ instances of the Conjugacy Problem in $A$, so determining whether $x$ and $y$ are conjugate can be done in time \begin{equation}\label{eqn: complexity case 3.1} O\big( L^2 T_{PB}(L) + L^2T_{CA}(L^2) \big). \end{equation} \item[2)] \emph{The order of $b$ is infinite.} Let $k$ be a fixed integer such that $\pi_{t_k}(f) \neq 1$ (such a $k$ was already found at the beginning of Case~3). 
We proceed to check that $\pi_{t_k}(f) = \pi_{t_k}^{(d)}(g)$ without finding $d$. Assume that $\pi_{t_k}^{(d)}(g) \neq 1$, as otherwise, by Theorem~\ref{thm: conjugacy criterion for wreath}, we can conclude that $x$ and $y$ are not conjugate. Since $\pi_{t_k}^{(d)}(g) = \prod_j g(t_kb^jd^{-1}) \neq 1$, there is some integer $l$ for which $g(t_kb^ld^{-1}) \neq 1$. Then $t_kb^ld^{-1} = \beta_p$ for some $\beta_p \in {\mathrm{supp}}(g)$ and so $d = \beta_p^{-1} t_k b^l$. It would suffice to check for all $d$ of the form $d=\beta_p^{-1} t_k b^l$ such that $db = cd$ whether $\pi_{t_i}(f) = \pi_{t_i}^{(d)}(g)$. In order to check the former, we need to check for all $\beta_p \in {\mathrm{supp}}(g)$ whether $\beta_p^{-1} t_k b^{l}b=c\beta_p^{-1} t_k b^l$, i.e., it is enough to check whether $\beta_p^{-1} t_k b = c\beta_p^{-1} t_k$. These are $m$ instances of the Word Problem in $B$ which do not involve $l$, so they can be decided in time $mT_{CB}(6L)$. Thus checking whether $d$ satisfies $db = cd$ can be done in time $O\big(L T_{CB}(L) \big)$. It remains to check whether $\pi_{t_i}(f) = \pi_{t_i}^{(d)}(g)$. Notice that \begin{eqnarray*} \pi_{t_i}^{(d)}(g) &=& \prod\limits_{j=-\infty}^{\infty}g(t_i b^jd^{-1})\phantom{t_k^{-1}\beta_p} = \prod\limits_{j=-\infty}^{\infty} g(t_i b^j b^{-l}t_k^{-1}\beta_p) \\ &=& \prod\limits_{j=-\infty}^{\infty} g(t_i b^{j-l} t_k^{-1}\beta_p) \phantom{_k} = \prod\limits_{j=-\infty}^{\infty} g(t_i b^j t_k^{-1}\beta_p) \; = \; \pi_{t_i}^{(\beta_p^{-1}t_k)}(g). \end{eqnarray*} So we need to check whether $\pi_{t_i}^{(\beta_p^{-1}t_k)}(g) = \pi_{t_i}(f)$. Using Claim~\ref{subsec: compute pi_i}, this can be done in time \begin{equation}\label{eqn: complexity case 3.2} O\big( L T_{CB}(L) + T_{CA}(L^2) + LT_{PB}(L)\big). \end{equation} \end{description} The complexity of the Conjugacy Problem in $A {\mathrm{wr}} B$ is $$O\big( L^2T_{CA}(L^2) + L T_{CB}(L) + L^2T_{PB}(L) \big), $$ which is clearly polynomial since $T_{CA}$, $T_{CB}$ and $T_{PB}$ are polynomial. 
\end{proof} \begin{remark} The algorithm described above differs from the algorithm described in \cite{Matthews:1966} in item $2)$ of Case $3$. The original algorithm is not polynomial in this part. \end{remark} \smallskip \section{Complexity of the Conjugacy Search Problem in Wreath Products} \label{sec:algorithm wreath CSP} We use the same notation as in the previous section. The following result is a corollary of several propositions in \cite{Matthews:1966}, together with their proofs. \begin{lemma} \label{lemma: constructing z=dh} Let $A$ and $B$ be finitely generated groups and let $x=bf$, $y=cg$ be conjugate in $A{\mathrm{wr}} B$. Then $z=dh\in A{\mathrm{wr}} B$ conjugates $x$ to $y$ if and only if $z$ satisfies \begin{enumerate} \item $db=cd$ in $B$; \item when the order of $b$ is finite, $h$ satisfies \begin{equation} \label{eqn: h when b is finite order} h(t_ib^k) = \left(\prod\limits_{j=0}^{k} g(t_ib^jd^{-1})\right)^{-1}\alpha_i\prod\limits_{j=0}^{k}f(t_ib^j), \end{equation} where $\alpha_i$ is such that $\pi_{t_i}^{(d)}(g) = \alpha_i\pi_{t_i}(f)\alpha_i^{-1}$; \item when the order of $b$ is infinite, $h$ satisfies \begin{equation} \label{eqn: h when b is infinite order} h(t_ib^k) = \left(\prod\limits_{j=0}^{k} g(t_ib^jd^{-1})\right)^{-1}\prod\limits_{j=0}^{k}f(t_ib^j). \end{equation} \end{enumerate} \end{lemma} Note that it follows from \cite{Matthews:1966} that the formulas (\ref{eqn: h when b is finite order}) and (\ref{eqn: h when b is infinite order}) define $h(\beta)$ for all $\beta \in B$ and do not depend on the choice of coset representatives. With this, we can now prove the following theorem. 
\begin{theorem}\label{thm: CSP in wreath products is poly} Let $A$ and $B$ be finitely generated groups such that the following hold: \begin{enumerate} \item[1)] there are algorithms which solve the Conjugacy Search Problem in $A$ and in $B$ with polynomial time functions, $T_{CSA}$, $T_{CSB}$, respectively; \item[2)] there is an algorithm with polynomial time function $T_{PB}$ for the Power Problem in $B$. \end{enumerate} Then the Conjugacy Search Problem in $A{\mathrm{wr}} B$ is solvable with complexity $$O(T_{CSB}(L) + T_{CSA}(L)+ L^2T_{PB}(L)), $$ where $L=|x| +|y|$ is the length of the input pair $x,y \in A{\mathrm{wr}} B$. \end{theorem} \begin{proof} Let $x=bf$, $y=cg$ be conjugate in $A{\mathrm{wr}} B$ (this can be checked in polynomial time using Theorem~\ref{thm: CP in wreath products is poly}). Using the algorithm to solve the Conjugacy Search Problem in $B$, one can find $d\in B$ such that $db=cd$ in time $T_{CSB}(L)$. It remains to show that the function $h$ described in Lemma~\ref{lemma: constructing z=dh} can be represented by a finite set of pairs $\{(b_i, h(b_i))\}$, computable in polynomial time. First, assume that the order of $b$ in $B$ is infinite. Let $$M = \max\{M' \mid t_ib^{M'} \in {\mathrm{supp}}(f)\cup{\mathrm{supp}}(g) \text{ for some } i\in I_1 \}.$$ We show that $M$ can be found in polynomial time. For each $b_j\in {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$ and each $t_i \in T_b$, compute (if it exists) the integer $M_{ij}$ such that $t_ib^{M_{ij}} = b_j$. This can be done in time $O(L^2T_{PB}(L))$. Then $M = \max\{ M_{ij} \mid i\in I_1\}$, so $M$ can be computed in $O(L^2T_{PB}(L))$ steps. Consider the following cases. \begin{enumerate} \item $k \geq M$. Then $h(t_ib^k) = \big(\pi_{t_i}^{(d)}(g)\big)^{-1} \pi_{t_i}(f) =1$, by Theorem~\ref{thm: conjugacy criterion for wreath}. \item $k<M$. 
\begin{enumerate} \item If $t_i \notin {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$ and $t_id^{-1} \notin {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, then $f(t_ib^j) =1$ and $g(t_id^{-1}b^j) = 1$ for all $j$ and hence $h(t_ib^k)=1$. \item If $t_i\in {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, but $t_id^{-1}\notin {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, then $$h(t_ib^k) = \left( \prod\limits_{j\leq k} g(t_id^{-1} c^j) \right)^{-1} \prod\limits_{j\leq k} f(t_ib^j) = \prod\limits_{j\leq k} f(t_ib^j),$$ which can be computed in time $O\big(MLT_{PB}(L)\big)$. \item If $t_i \notin {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, but $t_id^{-1}\in {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, then $h(t_ib^k) = \left(\prod\limits_{j\leq k} g(t_ib^jd^{-1})\right)^{-1}$, which can be similarly computed in time $O\big(MLT_{PB}(L)\big)$. \item If $t_i, t_id^{-1}\in {\mathrm{supp}}(f)\cup {\mathrm{supp}}(g)$, then $$h(t_ib^k) = \left(\prod\limits_{j \leq k} g(t_ib^jd^{-1})\right)^{-1} \prod\limits_{j\leq k} f(t_ib^j)$$ can be computed in time $O\big(MLT_{PB}(L)\big)$. \end{enumerate} Thus, if $k< M$, $h(t_ib^k)$ can be computed in time $O\big(MLT_{PB}(L)\big)$. It is clear from the definition of $M$ that $M < L$, so one can compute $h(t_ib^k)$ in time $O(L^2T_{PB}(L)). $ \end{enumerate} Assume now that the order of $b$ is finite, say $N$. Using the algorithm to solve the Conjugacy Search Problem in $A$, one can find in time $T_{CSA}(L^2)$, for each $i\in I_1$, an $\alpha_i \in A$ such that $\pi_{t_i}^{(d)}(g) = \alpha_i \pi_{t_i}(f)\alpha_i^{-1}$. Then $h(t_ib^k) = \left(\prod\limits_{j=0}^{k} g(t_ib^jd^{-1})\right)^{-1} \alpha_i \prod\limits_{j=0}^{k}f(t_ib^j)$ can be found in time $O(T_{CSA}(L) + L^2T_{PB}(L))$ by arguing as in the infinite order case (here, instead of $M$, we use the order $N$ of $b$). 
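For the infinite-order case, the partial-product formula for $h$ can be evaluated directly once $d$ is known. Below is a sketch for $A = B = \mathbb{Z}$ in additive notation (the helper name `conjugator_value` is ours, and $b \neq 0$ is assumed); it computes $h(t + kb) = \big(\prod_{j\le k} g(t + jb - d)\big)^{-1}\prod_{j\le k} f(t + jb)$.

```python
# h(t + k*b) for A = B = Z (additive), b of infinite order: the inverse of
# the partial g-product times the partial f-product.  f and g are dicts
# holding the finitely many non-trivial values.

def conjugator_value(t, k, b, d, f, g):
    gsum = sum(v for pos, v in g.items()
               if (pos + d - t) % b == 0 and (pos + d - t) // b <= k)
    fsum = sum(v for pos, v in f.items()
               if (pos - t) % b == 0 and (pos - t) // b <= k)
    return fsum - gsum   # in additive notation, "inverse times" is subtraction
```

As a sanity check, when $g$ is the $d$-shift of $f$, the values vanish once $k$ passes the support, in line with the case $k \geq M$ above.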
Thus the Conjugacy Search Problem in $A{\mathrm{wr}} B$ is solvable in time $$O\big( T_{CSB}(L) + T_{CSA}(L) + L^2T_{PB}(L) \big).$$ \end{proof} \smallskip \section{Complexity of the Conjugacy and Conjugacy Search Problems in Free Solvable Groups} \label{sec:algorithm free solvable} By Corollary~\ref{cor: reduction to CP in wreath is poly}, the Conjugacy Problem in free solvable groups can be reduced in polynomial time to the Conjugacy Problem in a wreath product. Then the result from Section~\ref{sec:algorithm wreath} can be applied to deduce that the Conjugacy Problem in free solvable groups is solvable in polynomial time. Though the bound for the complexity will be polynomial, the degree of the polynomial will depend on the degree of solvability (this is because of the factor of $L$ in front of $T_{CB}(L)$ in (\ref{eqn: complexity wreath prod})). However, by making a modification to the algorithm, the complexity of the Conjugacy Problem in free solvable groups is shown to be a polynomial of degree eight. \begin{theorem}\label{thm: CP in wreath products is poly - modified} The Conjugacy Problem in a wreath product $A{\mathrm{wr}} B$ in which $A$ is abelian is decidable in time $$O\big( T_{CA}(L^2) + T_{CB}(L) + L^2T_{PB}(L) \big),$$ where $L$ is the length of the input pair $(x,y)$. \end{theorem} \begin{proof} The algorithm is similar to the one in Theorem~\ref{thm: CP in wreath products is poly}. The only alteration to be made is in Case~$3$, where the order of $b$ is infinite. Let $\{s_i\}_{i\in I_1}$ be the set of coset representatives computed in Case~$2$. Then $\pi_{s_i}(g)$ is conjugate to $\pi_{t_i}^{(d)}(g)$. Since $A$ is now abelian, $\pi_{s_i}(g) = \pi_{t_i}^{(d)}(g)$. Thus $\pi_{t_i}(f) = \pi_{t_i}^{(d)}(g)$ if and only if $\pi_{t_i}(f) = \pi_{s_i}(g)$. Checking this requires \begin{equation}\label{eqn: complexity case 3.2 modified} O \big( |x|T_{PB}(|x|) + |y|T_{PB}(|y|) + T_{CA}(|x|^2 + |y|^2) \big). 
\end{equation} As a result, the overall complexity of the modified algorithm is \begin{equation*} O\big( T_{CA}(L^2) + T_{CB}(L) + L^2T_{PB}(L) \big). \end{equation*} \end{proof} \begin{theorem} The Conjugacy Problem in $S_{d,r}$ is in $O \big( rdL^8 \big)$, where $L = |x|+|y|$ is the input length. \end{theorem} \begin{proof} We proceed by induction on the degree of solvability, $d$. The base case is the abelian group $F/F'$, where the Conjugacy Problem is in $O(rL)$. Now suppose there is an algorithm which solves the Conjugacy Problem in $F/F^{(d)}$ in $O\big(rd L^8 \big)$. By Corollary~\ref{cor: reduction to CP in wreath is poly}, one can reduce the Conjugacy Problem in $F/F^{(d+1)}$ to the Conjugacy Problem in $F/F' {\mathrm{wr}} F/F^{(d)}$ in time $O(rd L^3)$. Since $F/F'$ is abelian, we apply Theorem~\ref{thm: CP in wreath products is poly - modified}. In order to do this we need polynomial bounds for the Conjugacy Problems in $F/F'$ and $F/F^{(d)}$ and for the Power Problem in $F/F^{(d)}$. The Conjugacy Problem in $F/F'$ is in $O(rL)$. By the induction hypothesis, there is an algorithm which solves the Conjugacy Problem in $F/F^{(d)}$ in $O\big(rdL^8 \big)$. By Theorem~\ref{prop:power problem is O(dr w^3)}, there is an algorithm which solves the Power Problem in $F/F^{(d)}$ in $O(rdL^6)$. Then from Theorem~\ref{thm: CP in wreath products is poly - modified}, the complexity of the Conjugacy Problem in $F/F^{(d+1)}$ is \begin{equation*} O \big( rL^2 + rdL^8 + L^2 \cdot rdL^6 \big). \end{equation*} It follows that the complexity of the Conjugacy Problem in free solvable groups is \begin{equation*} O \big( rdL^8 \big). \end{equation*} \end{proof} Since all the proofs of the decidability results are constructive, one can also deduce the following theorem. \begin{theorem} The Conjugacy Search Problem in $S_{d,r}$ is solvable in time $O\big( rdL^8 \big)$, where $L = |x|+|y|$ is the input length. 
\end{theorem} \begin{proof} Again we proceed by induction on the degree of solvability $d$, this time making sure that at each step we effectively find the required conjugator. When $d=1$ the group is abelian, so the Conjugacy Search Problem there is trivial -- two words are conjugate if and only if the identity is a conjugator. Now suppose that there is an algorithm running in time $O(rd L^8)$ which, if two words $\bar{x}, \bar{y} \in F/F^{(d)}$ are conjugate, exhibits a conjugator. We proceed to describe an algorithm which does the same for two conjugate elements $x,y \in F/F^{(d+1)}$ given as products of generators of $F$. As before, by Corollary~\ref{cor: reduction to CP in wreath is poly}, we reduce the problem in $F/F^{(d+1)}$ to the corresponding problem in $F/F^{\prime} {\mathrm{wr}} F/F^{(d)}$. Hence by Theorem~\ref{thm: CSP in wreath products is poly} there is an algorithm running in time $O(rdL^8)$ which finds a conjugator for $\varphi(x)$ and $\varphi(y)$. The proof of Theorem~2 in \cite{Remeslennikov-Sokolov:1970} gives a pre-image $s \in F/F^{(d+1)}$ of this conjugator. One can see easily that computing $s$ can be done in time $O(r(d+1)L^3)$. Thus, the overall complexity of this algorithm is $$O\big( r(d+1)L^8 \big). $$ \end{proof}
Title: Walt Whitman to James Redpath, 29 June 1886 Whitman Archive ID: med.00722 Source: The current location of this manuscript is unknown. The transcription presented here is derived from Walt Whitman, The Correspondence, ed. Edwin Haviland Miller (New York: New York University Press, 1961–1977), 4:36. For a description of the editorial rationale behind our treatment of the correspondence, see our statement of editorial policy. Contributors to digital file: Stefan Schöberlein and Kyle Barton Cite this page: "Walt Whitman to James Redpath, 29 June 1886." The Walt Whitman Archive. Gen. ed. Matt Cohen, Ed Folsom, and Kenneth M. Price. Accessed 7 January 2023. <http://www.whitmanarchive.org>. Camden, I send you "How I made a Book—or tried to"—If you can use it I think it should be in the Review1—It makes 3300 words, & would take from 7 to 8 pages Rev.—The price is $80, & I should want 100 proof sets on slips. James Redpath (1833–1891), an antislavery activist, journalist, and longtime friend of Whitman, was the author of The Public Life of Capt. John Brown (Boston: Thayer and Eldridge, 1860), a correspondent for the New York Tribune during the war, and the originator of the "Lyceum" lectures. He met Whitman in Boston in 1860, and he remained an enthusiastic admirer; see Horace Traubel, With Walt Whitman in Camden, Friday, January 4, 1889. He concluded his first letter to Whitman on June 25, 1860: "I love you, Walt! A conquering Brigade will ere long march to the music of your barbaric jawp." Redpath became managing editor of The North American Review in 1886. See also Charles F. Horner, The Life of James Redpath and the Development of the Modern Lyceum, (New York: Barse & Hopkins, 1926); John R. McKivigan, Forgotten Firebrand: James Redpath and the Making of Nineteenth-Century America, (Ithaca, NY: Cornell University Press, 2008); and J.R. LeMaster, "Redpath, James [1833–1891]," Walt Whitman: An Encyclopedia, ed. J.R. LeMaster and Donald D. 
Kummings (New York: Garland Publishing, 1998). 1. Whitman sent the article to Redpath, of The North American Review, on June 29 (Whitman's Commonplace Book, Charles E. Feinberg Collection of the Papers of Walt Whitman, 1839–1919, Library of Congress, Washington, D.C.), and it evidently appeared in the Philadelphia Press and other newspapers associated with Charles Allen Thorndike Rice's syndicate on July 11. He received $80 from Rice on July 10 (Whitman's Commonplace Book). This article, with "A Backward Glance on My Own Road," "How Leaves of Grass Was Made," and "My Book and I" became "A Backward Glance O'er Travel'd Roads" in November Boughs (1888), 5–18. [back]
import json
import os
import urllib.parse
import urllib.request

import requests

TOKEN = "<your-bot-token>"
URL = "https://api.telegram.org/bot{}/".format(TOKEN)
IMAGE_URL = "https://api.telegram.org/file/bot{}/".format(TOKEN)
DOWNLOADED_IMAGE_PATH = "/Telegram/"


def get_url(url):
    """Download the content from a URL and return it as a string."""
    response = requests.get(url)
    content = response.content.decode("utf8")
    return content


def get_json_from_url(url):
    """Fetch the string response as above and parse it into a Python dictionary using json.loads()."""
    content = get_url(url)
    js = json.loads(content)
    return js


def get_updates(offset=None):
    """Call https://api.telegram.org/bot<your-bot-token>/getUpdates and retrieve a list of updates."""
    url = URL + "getUpdates?timeout=100"
    if offset:
        # If specified, tell the Telegram API not to send updates with IDs smaller than this.
        url += "&offset={}".format(offset)
    js = get_json_from_url(url)
    return js


def get_last_update_id(updates):
    """Return the highest ID among all the updates we received from getUpdates."""
    update_ids = []
    for update in updates["result"]:
        update_ids.append(int(update["update_id"]))
    return max(update_ids)


def send_message(text, chat_id, reply_markup=None):
    """Send the given text to the chat identified by chat_id."""
    text = urllib.parse.quote_plus(text)
    url = URL + "sendMessage?text={}&chat_id={}&parse_mode=Markdown".format(text, chat_id)
    if reply_markup:
        url += "&reply_markup={}".format(reply_markup)
    get_url(url)


def request_image_path(photo_id):
    """Call https://api.telegram.org/bot<your-bot-token>/getFile?file_id=<id> and return the file path of the selected image."""
    url = URL + "getFile?file_id={}".format(photo_id)
    js = get_json_from_url(url)
    file_path = js["result"]["file_path"]
    return file_path


def get_image(image_path):
    """Receive a path and download the image into a folder named Telegram."""
    filename = image_path[image_path.find("/") + 1:]
    url = IMAGE_URL + image_path
    try:
        image_on_web = urllib.request.urlopen(url)
        buf = image_on_web.read()
        path = os.getcwd() + DOWNLOADED_IMAGE_PATH
        file_path = "%s%s" % (path, filename)
        with open(file_path, "wb") as downloaded_image:
            downloaded_image.write(buf)
        image_on_web.close()
    except OSError:
        return False
    return True


def handle_updates(updates):
    """Manage the incoming updates; if the user sent an image, download it with get_image()."""
    for update in updates["result"]:
        try:
            text = update["message"]["text"]
        except KeyError:
            text = None
        chat = update["message"]["chat"]["id"]
        if text == "/start":
            send_message("Welcome to Image Bot. Send any image and I'll store it forever >:D", chat)
        elif text:
            send_message("That is not an image", chat)
        else:
            photo_list = update["message"]["photo"]
            biggest_index = len(photo_list) - 1
            photo_id = photo_list[biggest_index]["file_id"]
            photo_path = request_image_path(photo_id)
            if get_image(photo_path):
                send_message("Image Saved", chat)
            else:
                send_message("Oops :C", chat)


def main():
    last_update_id = None
    while True:
        print("getting updates")
        updates = get_updates(last_update_id)
        if len(updates["result"]) > 0:
            last_update_id = get_last_update_id(updates) + 1
            handle_updates(updates)


if __name__ == '__main__':
    main()
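Because the polling loop depends on advancing the update offset correctly, that bookkeeping can be sanity-checked offline. The sketch below re-implements the bot's get_last_update_id logic against a fabricated getUpdates-style payload (the fake_updates dict is invented for illustration; no Telegram API call is made):

```python
# Offline sketch: mirrors the bot's get_last_update_id logic against a
# fabricated getUpdates-shaped response (no network access needed).

def get_last_update_id(updates):
    """Return the highest update_id in a getUpdates-shaped response."""
    return max(int(update["update_id"]) for update in updates["result"])

fake_updates = {
    "ok": True,
    "result": [
        {"update_id": 101, "message": {"text": "/start", "chat": {"id": 1}}},
        {"update_id": 103, "message": {"text": "hello", "chat": {"id": 1}}},
        {"update_id": 102, "message": {"text": "hi", "chat": {"id": 2}}},
    ],
}

last_id = get_last_update_id(fake_updates)
next_offset = last_id + 1  # what the bot would pass to the next getUpdates call
print(last_id, next_offset)  # → 103 104
```

Passing last_id + 1 as the next offset tells Telegram to drop everything the bot has already seen, which is exactly what main() does after each batch.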
In 2016, 69.7 percent of high school graduates in the United States headed straight to college after summer vacation, according to the Bureau of Labor Statistics. But attending college directly after high school isn't for everyone. Some students are tired of school or never really enjoyed it in the first place. Others may want a break before they return to full-time education. And for some high school graduates, working instead of attending college is a monetary decision. There are several reasons why waiting on college can be a good idea.

Unless you've been offered a full scholarship, going to college will cost you or your family money. And even if your schooling is paid for, you'll still need money for food, lodging, books, trips home, entertainment and more. Taking a year or two off to work and save money, especially if you're able to keep living for free with your parents, can be a very wise idea.

Most kids who finish their 12 mandatory years of school are pretty sick of it by the time they graduate. While college is quite a different experience from high school, it still involves classrooms, books and even more studying. Taking a year or more to work can really help you value your college education. A year away from school can make the heart grow fonder of thinking and learning, especially if your job is tedious menial labor. The monotony of certain jobs tops the monotony of a classroom any day.

College is a fairly sheltered environment. Often, you end up hanging around with the same types of people you knew in high school. Even if personalities vary, you will most likely find yourself in a group of people who are the same age as you, doing the same things you do every day. Work, on the other hand, can expose you to a wider range of people.
The ages, socioeconomic backgrounds and hopes and dreams of your coworkers will be quite different from your own, and this is a good thing. It won't hurt to bring a bit more life knowledge to college when you do decide to attend.

Earning your own money is a great way to learn how to manage it. And managing your own money is one of the first paths to independence. If you live on your own while you work, you'll gain even more life skills, including paying rent and utilities, and buying your own groceries, gas and more.

So many kids arrive at college and immediately have to decide on a major field of study. Most schools don't require you to choose a major until the end of your second year, but having a major from the start will help you graduate on time and get the classes that you need. However, not many of us know what we want to study when we first arrive at college. Taking time off to work will also give you time to think. And if your job is washing dishes or busing tables, your brain will have a lot of free time to wander and consider career choices. You'll also have time to do a bit of research on different professions or maybe even secure a paid internship at a company to find out what type of work you'd like to do.

It's possible you'll like working so much more than being in school that you decide to forgo college altogether. I know the common mantra is that you can't get a good job without a college degree, but that is changing a bit. Some companies offer apprenticeships, and some professions favor work experience much more than a degree. Besides, you can always change your mind later on and attend a university. College isn't going anywhere.

Skyler, Heather. "Six Advantages of Working After High School." Work - Chron.com, http://work.chron.com/six-advantages-working-after-high-school-15907.html. 29 June 2018.
Plans for the first Montserrat Calabash Festival commenced in 2005. The idea was the brainchild of local businesswoman Florence Griffith Joseph and was brought to life with assistance from volunteers in the Montserrat Hospitality Association. As a result, the first celebration of the Montserrat Calabash Festival took place in July 2006. This year will mark our 8th Montserrat Calabash Festival, which is scheduled to occur during the week of July 14–21, 2013, under the theme: Embracing our African Heritage. Our main sponsors are the Government of Montserrat, the Montserrat Tourist Board, The Royal Bank of Canada, The Bank of Montserrat, and several local businesses in Montserrat. As an aid to our redevelopment following the volcanic destruction of our main town, we thought that we needed a new festival during the summer months. We presently have only two major festivals, one in December and the other in March. Having a festival during summer meant we would be attracting new and returning visitors to experience our unique little island. Increased tourism is one of our goals. Naming the festival was spiritual. The name evoked several emotions. The calabash is historically very meaningful in our community, dating back hundreds of years. It has been quite multifunctional and economically significant. Very much like our people, the calabash is a symbol of strength, versatility and resilience. Each year the festival is held during the 3rd week of July, which also commemorates the anniversary of our Soufriere Hills Volcano that became active on July 18th, 1995. Coupled with remembering July 18th, the summer festival provides an ideal opportunity for everyone to join in thanksgiving for our blessings, to experience enthusiastic, fun, cultural and interactive activities that showcase who we are as a people. A central point of the festival is the feature of the products of the calabash fruit as a cottage industry.
We have produced an array of beautiful calabash products, which are artistically carved and decorated for use in a variety of ways. Some such items consist of hand bags, hanging baskets, masks, decorative art objects, musical instruments, key rings, refrigerator magnets, lamp shades, bowls, jewellery and clothing accessories such as buttons and appliqués. This initiative seeks to promote our local crafts industry by producing various products made from and/or incorporating the calabash fruit. For example, we held a unique fashion show in 2009 where designers skilfully and artistically incorporated the calabash within their clothing pieces, many of which were later sold after the show. The activity was intended to promote the talents of our local and creative designers. Every year we seek to involve as many different segments of the population as possible. We aim to provide something to please everyone and so for the academics and politically minded there is the ever popular George Irish Lecture series. Dr. Irish was the first local Tutor to head the Extra Mural Department of the University of the West Indies on Montserrat. He was also involved in several community activities. He was the founder of the popular Emerald Community Singers, The Montserrat Theatre Group, the President of The Montserrat Allied Workers Union and had many other involvements. We chose to honour him by naming this lecture series in his honour. Over the years we have introduced and incorporated new ideas into our schedule of activities, which have been met with a great deal of zeal and participation by the public. In 2009 we introduced two new events, a Gospel concert and an 11/11 Cricket fun day. The Gospel Concert was specially put on the calendar for the Christian community to be able to enjoy more of the activities of the festival. The concert attracts many different groups, choirs and individuals to share their talents, to worship and give thanks to God. 
One of our volunteers, Lincoln Joseph along with Basil Morgan, Jeff Layne and other members of the community, started a new game of cricket called 11/11 with rather fun and unique rules. These rules require that each player bowls one over. Sixes are easily scored as the field is shorter and so the game moves faster as each inning is timed to 35 minutes. Spectators and participants alike all enjoy the game. We have also introduced events specifically tailored towards our young Montserratians. A Calahoopers dance group taught by Ms. Yvonne Getfield, was specially formed for the Montserrat Calabash Festival. Ms. Getfield taught young persons to dance and entertain audiences with the hula hoop. They have since performed and delighted audiences around Montserrat. Unfortunately this activity is not on the calendar this year. What is a festival without food? On the Friday of the festival we hold a grand Food Fair featuring food from around the Caribbean including vegetarian meals and our own national dish, Goat Water, which is a flavourful stew served with bread rolls. In addition, other traditional and not so frequently prepared dishes like duconoo/duckna, bakes, salt fish and ground provision, locally raised/free-range chickens called yard-fowl, ginger sticks, potato pudding, various local fruit juices and lots more may be found at the food fair. Everyone looks forward to the wide variety of foods from around the region including Guyana, Jamaica, Trinidad and Tobago, Dominica and Barbados. The Montserrat Calabash Festival showcases our rich music through concerts and guest appearances of our masquerades, string band musicians, steel pan bands and other local artistes/musicians. We started a jazz event over the years and brought in world renowned Paul Lunga, African King of Jazz for two years to entertain us. 
In 2011 we introduced a Music Fest By the Bay event, with hopes of attracting jazz and other musicians from around the region who are willing to help promote Montserrat and to make this a memorable event for all who are fortunate to witness this celebration of musical culture. Last year the crowd and vibrations were fantastic and we see this event growing ever larger this year. A dedicated committee of seven members currently organize the festival. They are Aldean Moore Williams, Eudora Osborne, Rose Willock, Merle Galloway, Vereen Thomas Woolcock, Pat Ryan and Florence Griffith Joseph. Most of these members have been organizing the festival from its inception, along with several volunteers from within the community and Veta Wade and Cupid Francis from The Department of Culture. Many visitors are expected to arrive on the ferry especially for the weekend, which operates from the new ferry terminal near Heritage Key in St. Johns, Antigua. No advance ticketing is required, as travellers can purchase their tickets upon check-in. However, to facilitate ticketing, check-in and other clearances at the point of sale at the ports, passengers are asked to check-in 90 minutes in advance before the scheduled time for departure. Each passenger is allowed 2 pieces of baggage free and any additional baggage is charged per piece. For further information on the service contact Mr. Roosevelt Jemmotte at 664 496 9912 in Montserrat or Jennifer Burke in Antigua at 268 778 9786. For any other information for the Calabash Festival please check the website www.visitmontserrat.com or contact Florence Griffith Joseph at flogriff@candw.ms, Telephone 664-492-1743.
Q: How to avoid multiple if statements when checking a regexp match

Currently I've got code like this:

var result = line.match( some regexp );
if (result !== null) {
    return callback({ a: 'aaaaa', b: bvariable });
}

var result = line.match( some other regexp );
if (result !== null) {
    return callback({ d: 'ddddd', c: bvariable });
}

I have about 10 of these, all with different RegExps and callbacks, and the list will get bigger. Is there a better/cleaner way of doing this?

A: You could refactor out the regex and the callback into an "associative array" (object) and then generalize the rest:

var regexs = {
    regex1: {
        regex: /./,
        callback: function () {
            // callback stuff here
        }
    },
    regex2: {
        regex: /[a-z]/,
        callback: function () {
            // callback stuff here
        }
    }
};

var result = line.match(regexs.regex1.regex);
if (result !== null) {
    return regexs.regex1.callback();
}

A: Create an associative array of pairs: the first element in each pair is a regexp, the second is the callback. Loop over the array and match the regexp; if there is a match, call the callback.

var assoc = [
    { r: /\d+/, f: function (m) { console.log(m[1]); } },
    { r: /\w+/, f: function (m) { console.log(m[2] + m[3]); } }
];

for (var i = 0; i < assoc.length; i++) {
    var m = line.match(assoc[i].r);
    if (m) {
        return assoc[i].f(m);
    }
}

A: Consider:

input = "foo bar baz"

regexes = [
    /aa/,
    /ba./,
    /quux/,
]

regexes.some(function(re) {
    if (re.test(input)) {
        // do stuff
        return true;
    }
    return false;
})

Note that you don't need match unless you're actually using what it returns.

A: You can get rid of the result variable:

if (line.match( some regexp )) {
    return callback({ a: 'aaaaa', b: bvariable });
}

if (line.match( some other regexp )) {
    return callback({ d: 'ddddd', c: bvariable });
}

As the match function always returns null or an array, it is safe to assume that if the match fails, the null returned will be coerced to false, and if the match succeeds, the array returned (no matter how many elements it has) will be coerced to true.

A: If your regex patterns are simple strings, and assuming you want the first pattern matched before the second, you can:

var pattern1 = 'aaa',
    pattern2 = 'bbb',
    RE = new RegExp('(' + pattern1 + '|' + pattern2 + ')'); // matches pattern1 OR pattern2

if (line.match(RE)) {
    var result = RegExp.$1; // content of first pair of capturing parens
    if (result === pattern1) {
        return callback(/* args specific to first pattern */);
    } else if (result === pattern2) {
        return callback(/* args specific to second pattern */);
    } else {
        return callback(/* args for no match */);
    }
}
Scania celebrates success and sustainability at Euro Bus Expo 2018

Scania has enjoyed its best ever Euro Bus Expo exhibition, both in terms of orders signed and sealed at the show and with regard to visitor reaction to the news that all vehicles on the company's stand this year were capable of being powered by renewable fuels. In addition to a hybrid coach chassis and a Scania Touring coach, the combustion engines of both of which can run on HVO (hydrotreated vegetable oils), the stand featured the 100th biogas double decker bus to be produced by Scania in conjunction with bodybuilder ADL.

"Innovation, sustainability, partnerships and performance are our four key messages today and all were present in abundance on our stand at this year's show," comments Martin West, New Bus and Coach Sales Director for Scania (Great Britain) Limited. "Each vehicle on display attracted a great deal of attention in its own right, but the levels of interest in alternative and renewable fuels demonstrated to us beyond doubt that the industry is now firmly focusing on operating more sustainably in the future."

Among the operators visiting the Scania stand were:

Leons Travel Group

Stafford-based Leons Travel Group was celebrating its order for two Scania Irizar i6S vehicles due for delivery next March. Featuring 53 seats, full climate control and a centre toilet, both vehicles are 13.2 metres overall length and are based on Scania K 410 EB6x2*4 rear-steer chassis.

"We have a long-standing relationship with Scania and the excellent service we receive from the company's bus and coach team and West Pennine Trucks, our local Scania dealer, is what keeps us coming back for more," comments Andy Douglas, Director of Leons Travel Group. "The vehicles themselves certainly have the 'wow factor' and are very much appreciated by our customers and drivers alike."
Leons Travel Group will be using its new Scania Irizar i6Ss on a full range of coaching operations, including tours and holidays, cruise liner work and general private hire.

JH Coaches

Sealing an order for a triaxle Scania Touring coach based on a Scania K 410 EB 6×2*4 chassis was show visitor Ian Shipley, Operations Director of Birtley, Tyne and Wear-based operator JH Coaches. The 59 + courier seat vehicle is scheduled for delivery next April, when it will start work on one of the company's key contracts, which involves taking parties of school children to visit the Normandy beaches and battlefields of northern France as part of their curricular activities.

"We have taken delivery of six new Scania coaches since January 2017 and look forward to welcoming the new Touring into the fleet next spring," says Ian Shipley. "We are currently in a process of upgrading our entire fleet to Euro 6, and for me Scania offer the best package on the market in terms of product reliability, dealer support and a competitive offer. We are a long-standing Scania operator and as Operations Director I know they give me a peaceful life – that's the appeal in a nutshell to me!"

Woodstones Coaches Limited

Richard Meredith, Director of Kidderminster operator Woodstones Coaches, signed a deal on the stand with Scania's UK Retail Sales Manager Lee Wale for his company's third Scania K 360 IB4x2 chassis with Irizar i4 bodywork.

"We took delivery of our first two Scania Irizar i4s earlier this year and their performance and quality convinced us to order a third," says Richard Meredith. "For us, backup is especially important and we rely on Scania's inclusive two year repair and maintenance contract, which we then extend to cover the vehicles' in-service life with us. With local dealer Keltruck Droitwich managing our servicing, it's a formula that works extremely well for us."
Woodstones Coaches Limited are involved in a wide range of private hire operations, including schools work, rail replacement and providing cover for National Express services.

Unicorn Travel

Wrexham-based Unicorn Travel signed an order for its first ever Scania at the Euro Bus Show 2018. Scheduled delivery of their Scania K 360 IB4x2 Touring will be in April 2019, when it will replace one of the 13-plate vehicles on the fleet. The 12.1-metre overall length vehicle has 51 seats, centre toilet, three-point seatbelts, Frenzel drinks machine, Climate Control and Alcoa Dura-Bright alloy wheels.

"We're really excited to have purchased our first Scania. We chose Scania because it's a quality product with good dealer back-up. Our local dealer is open 24 hours, five days a week, so our maintenance can be done overnight to limit downtime; we've extended the standard two year R&M package to three too. We have used Scania Finance to purchase the vehicle, it was great to have everything under one roof," says Katrina Carroll, Director of Unicorn Travel.

Paul S. Winson Coaches

Paul S. Winson Coaches signed an order at the show for a Scania K 410 EB6x2*4 13.22-metre overall length with Irizar i6S bodywork to join its 26 vehicle fleet due to increased business demand. The 53-seater will be the sixth Scania the company has purchased since 2014. Due to be delivered in April 2019, it will be used for general private hire, tour and school work.

Director, Paul Winson Jnr says: "When you buy a Scania, it's like buying into a family. With us being a family business, that's important to us. I don't know how they manage it, but they make it feel personal."

Director, Matthew Winson adds: "The passengers love the Scania Irizar product, it has serious kerb appeal and catches people's eye. Our drivers really enjoy driving them too."

While at the show, Paul S.
Winson was presented with the Lego model of one of its coaches as a thank you for working closely with Scania in a testimonial campaign throughout 2018, illustrating one of Scania's key messages, 'performance redefined'.

Masons Minibus & Coach Hire Ltd

Hertfordshire-based Masons Minibus & Coach Hire Ltd signed an order for an 80-seater Scania coach at the show. The first new Scania to be purchased by the company will be delivered in April 2019 and will be used for general private hire, excursions and school hire.

Director, James Mason says: "We are responsible for transporting 450 students to and from school every day so we've purchased the vehicle to keep up with this demand. We couldn't find another product that compared in terms of total cost of ownership and of such high quality with the unique seating capacity we wanted."

Director, Matthew Mason, adds: "We're extremely excited to see the finished product. Purchasing a new vehicle is always a big decision but we know that the Scania is a good investment."

Attain Travel

Birmingham-based Attain Travel signed an order at the show for a Scania K 360 IB4x2 Touring HD coach. The vehicle will be specified with 51 reclining seats, centre sunken toilet, Frenzel drinks unit, USB chargers at all passenger seats, DVD player with two monitors and Alcoa Dura-Bright alloy wheels.

Attain's David Costello comments, "This is our fourth Scania Touring and we have been extremely happy with the performance of the vehicle. We have an excellent working relationship with everyone in Scania's Bus and Coach team, so when we were looking for an additional vehicle we had no hesitation in ordering from them again."

keladmin | November 20th, 2018 | Coach Sales, Scania
Q: Which of the following compounds were involved in the titration?

We have a sample that weighs 0.3148 g. This sample may contain one, two or even three of the following compounds: NaOH, Na2CO3, NaHCO3. We have as a clue that the sample was titrated with 20 mL of 0.09 M HCl using phenolphthalein as an indicator. Later, the sample was titrated with 50 mL of the same HCl using methyl orange as an indicator.

The question is: how do we work out which of the compounds were involved? Should we set, for example, mass of NaOH = x, mass of Na2CO3 = y and mass of NaHCO3 = z, then write down the reactions between each of the compounds and HCl and extract equations from the equivalence points? One equation comes from the total mass.

A: Assume the variables x, y, z. You need three equations to get the values. First equation: add all three and equate to 0.3148 g.

(Source: https://chemistry.stackexchange.com/questions/76829/which-of-the-following-compounds-were-involved-in-the-titration)
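The three-equation setup suggested in the answer can be made concrete with a short Python sketch. The endpoint interpretation below is an assumption on my part, since the problem statement is ambiguous: the 20 mL phenolphthalein volume is taken to neutralize NaOH plus the first proton of Na2CO3, and the 50 mL methyl-orange volume is taken as the total acid to the second endpoint. A negative component in the solution would then signal that the assumed combination of compounds is impossible.

```python
# Hedged sketch: set up and solve the 3x3 linear system for a
# Warder-type titration. ASSUMPTION (not stated unambiguously in the
# problem): 20 mL = phenolphthalein endpoint (NaOH + first proton of
# Na2CO3); 50 mL = TOTAL volume to the methyl orange endpoint.

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]      # copy A ...
        for r in range(3):
            Ai[r][col] = b[r]           # ... and replace one column by b
        xs.append(det(Ai) / d)
    return xs

# Unknowns: moles of NaOH (a), Na2CO3 (b), NaHCO3 (c).
M = [
    [40.00, 105.99, 84.01],  # row 1: total sample mass (g)
    [1.0,   1.0,    0.0],    # row 2: mol HCl at phenolphthalein endpoint
    [1.0,   2.0,    1.0],    # row 3: mol HCl at methyl orange endpoint (total)
]
rhs = [0.3148, 0.020 * 0.09, 0.050 * 0.09]

a, b, c = solve3(M, rhs)
print(a, b, c)  # a negative value flags an impossible combination
```

Cramer's rule is used only to keep the sketch dependency-free; numpy.linalg.solve would do the same job.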
"Araz" is a story about the relationships between men and women in a world where it is not easy to be a woman. "Araz" is a story about every one of us and the boundaries we have to overcome every day.

"Absurdo" rests on the principles of Dada and the universal interplanetary law "Nothing ever happens the way we imagined it would happen." Logic? Reason? Common sense? None of the above!

The most-loved bird in the world of ballet – the tired swan – in a totally unexpected, radical interpretation, sprinkled all over with laughter. Starring Iker Gómez himself and lots of, lots of water!
"If Thine Eye Offend Thee" (Part 1 – Blind Obedience? – To Question or Not to Question? That is the Question!)

July 29, 2016

James F. Stoddard III, with L. Hannah Stoddard

Respect for Leaders

All of my life I have been accused of blindly obeying my leaders. I was generally the one or among the small minority that left the movie, wouldn't listen to the music or sounded out the unpopular teaching from an inspired leader or prophet. I have been belittled and even discredited because I turned to the inspired words of prophets for my foundation. My understanding of and personal standards in the areas of family, music, film, education, government, science and so forth were derived from time-intensive and at times exhausting study of the teachings of the scriptures and presidents of the Church. For this reason, taking up my pen to write at this time on this subject is markedly ironic. My purpose is to instill faith in Jesus Christ and the Restoration, helping to encourage my fellow members of The Church of Jesus Christ of Latter-day Saints to recommit themselves to living by "every word that proceeds from the mouth of God." This includes inspired revelation we have received from latter-day prophets, seers and revelators. The Joseph Smith Foundation has one of the largest repositories of statements by leaders now in existence. While I do not believe in blind obedience under any circumstance, I do believe in mindful and careful obedience. I have personally studied the words of latter-day prophets for hours a day on average since I was about 12 years old. I have a great respect for the inspiration that has come from inspired leaders through the years.

A Prophet's Lament

Recently, I have seen many debates on the Internet regarding following leaders in the Church. I have been shocked by some of the discussion, misunderstanding and assertions being made.
Hopefully this series of papers can address some of the confusion I have seen. My concern can be expressed in the timeless words of President Brigham Young. The Prophet lamented:

What a pity it would be if we were lead by one man to utter destruction! Are you afraid of this? I am more afraid that this people have so much confidence in their leaders that they will not inquire for themselves of God whether they are lead by him. I am fearful they settle down in a state of blind self-security, trusting their eternal destiny in the hands of their leaders with a reckless confidence that in itself would thwart the purpose of God in their salvation, and weaken that influence they could give to their leaders did they know for themselves by the revelations of Jesus that they are led in the right way. Let every man and woman know, by the whispering of the Spirit of God to themselves whether their leaders are walking in the path the lord dictates or not. This has been my exhortation continually. . . . Let all persons be fervent in prayer, until they know the things of God for themselves and become certain that they are walking in the path that leads to everlasting life . . .1

President Brigham Young was clearly frightened by the prospect that members would blindly follow their leaders. Why? Why the necessary emphasis on knowing "the things of God for themselves"? How do we know when to follow? Is it permissible to question a man or woman in authority? At what level? Who can we trust? Who should we trust? The answers to these questions hold eternal consequences for you and me. These are real questions with real answers. This is not a subject to avoid by placing it on the shelf. It is also not a subject that can be understood with soundbites from here and there or cutesy meme quotes from across the Internet.

Mountain Meadows Massacre

"Blood! Blood! Blood!"

On a hot June day in Mesquite, Nevada, an old man lay dying. Friends faithfully watched the final struggles.
Tortured by memories of events "his eyes had witnessed but his tongue had never uttered"; his dying cries were "BLOOD! BLOOD! BLOOD!"2 It wasn't long before the room fell silent. The struggle was over. Nephi Johnson had passed on into the hands of his Maker. What memories tortured Nephi Johnson until his last breath? What past left him wracked with such guilt? The answer can be found in the narrative of an infamous event that occurred six decades earlier. On September 11, 1857, over fifty men mercilessly slaughtered 120 men, women and children in what would become known as the Mountain Meadows Massacre. After cowardly luring the Baker–Fancher emigrant company from their defense with false promises of a truce, the assailants (primarily consisting of Mormons in good standing) massacred all but seventeen children deemed too young to expose the murderers. This bloody and diabolical deed commenced at dawn, September 7, 1857, and continued until the 11th, when the besieged emigrants who survived the attacks, under promise of protection were foully murdered. . . . It was a crime for which there can be no apology or excuse, a thing treacherous and damnable in the extreme.3 Thus did President Joseph Fielding Smith, Church Historian and later 10th President of the Church, describe the infamous tragedy that would come to be known as the "Mountain Meadows Massacre". The graphic accounts are stomach-wrenching. "I saw the bodies of men, women, and children, butchered in the most horrible manner," Samuel Pollock said. "Some of the children with their heads mashed in by rocks, I suppose."4 On September 5, 1857, the emigrants were ambushed by a band of Mormon militia and Paiute Indians. For two days the travelers successfully defended themselves until difficult circumstances and the offer of a truce from the Mormons, beguiled the party into a pre-calculated trap. 
The travelers surrendered; the women and children were separated from the men, and both parties obediently followed the assailants. Suddenly, Indians swept down on the helpless women and children while the Mormon men opened fire. Flight or resistance was futile; it was too late. Men, women and children were cut down mercilessly as they fled for their lives.

One blood-covered girl, perhaps ten or eleven years old, got within about sixty yards of the wagons before an Indian shot her. Another girl was fleeing for her life when an Indian "plunged his knife through her." . . . Rebecca Dunlap, six years old at the time, remembered the terror. She ran and hid in a cluster of sagebrush near the road. From her hiding place she saw two of her older sisters killed, their bodies falling nearby. She also heard her one-year-old sister, Sarah, crying. Sarah "had been shot through her right arm, below the elbow, by a large ball, breaking both bones and cutting her arm half off." Rebecca pulled Sarah free and took her back into the sagebrush to hide. She stayed there until she saw a white man and begged him for help.5

In vain, terrified women desperately struggled to reach the place where their husbands and fathers were being slaughtered. What would happen to their children? Would they be killed as well? Or would the survivors be left to a fate possibly worse than death itself?

I remember standing by my mother, holding onto her skirt, while my mother stood with my baby brother in her arms, and when the white man, not an Indian, raised his gun to take the life of my mother, she said: "God, have mercy on my children!"6

One six-year-old boy ". . . was by his mother when she was killed" and as she lay dying, "pulled the arrows from her back . . ."7 The life was crushed from a teenage boy as his murderer pounded a large rock into the boy's chest.8 Only those deemed "too young to talk" were spared.
Even at that young age, however, the last breaths of their dying fathers and mothers were etched forever in the memories of some, never to be forgotten.

Sarah Frances (Baker) Mitchell

Sarah Frances Baker remembered sitting on her wounded father George's lap in one of the wagons when the same bullet that snuffed out his life took a nick out of her left ear. Sarah wasn't quite three years old. "But even when you're that young," she maintained more than eighty years later, "you don't forget the horror of having your father gasp for breath and grow limp, while you have your arms around his neck, screaming with terror." She recalled "the blood-curdling war-whoops," "the banging of guns," and "the screaming of the other children and the agonized shrieks of women" being killed. "And you wouldn't forget it, either," she said, "if you saw your own mother topple over in the wagon beside you, with a big red splotch getting bigger and bigger on the front of her calico dress."9

Survivor Nancy Saphrona Huff remembered:

At the close of the massacre there was 18 children still alive, one girl, some ten or twelve years old they said was to big and could tell so they killed her, leaving 17. . . . I saw them shoot the girl after we were gathered up.10

Annie Elizabeth Hoag recorded John D. Lee, one of the primary leaders in the massacre, saying:

When they came to one man that had his child in his arms an infant babe, he says give up that child. No, Lee, says the man, I know you, I recognise you [even] if you are painted[,] and you know the penalty of shedding innocent blood. If you kill me you kill my child, I will part with the last drop of blood there is in my body before I give up my child. Lee asked him again, if he would give up his child and he said no; then John D. Lee said it was his turn to assist and he shot him through the heart and killed the child at the same time.
He said he didn't consider himself under the penalty of shedding innocent blood, he could not help it, because the man would not give up his child.11

Jacob Hamblin and Thales Haskell (my third great-grandfather) were prominently engaged in Indian relations at the time of the massacre. They were strongly opposed to the violence, but were unfortunately absent from home at the time of the attack. During their return, they were met with the news of the massacre. Upon arrival at Mountain Meadows, the men left their wives to survey the scene.

When they [Hamblin and Haskell] reached Mountain Meadows, the two men told the girls to stay in the wagon under the cover, which was tightly drawn and securely fastened all around, while they went out to look over the country. Unable to restrain their curiosity and impatient with waiting, the girls did not obey, but climbed out to do some exploring themselves. What they discovered sent them back to the wagon in terror, and the husbands returned to find them trembling and crying. To the end of her days, Priscilla [Hamblin] was haunted by that sight of putrefying, dismembered women's bodies.12

Priesthood Leaders Conspire

Isaac C. Haight

Who conspired and carried out this tragedy? The answer is shocking. The men who plotted and executed the massacre were not only members of the Church of Jesus Christ of Latter-day Saints, but the prominent conspirators were local priesthood leaders. Isaac C. Haight was the mayor and stake president of Cedar City.13 John M. Higbee, the man known for signaling to commence the slaughter, was one of President Haight's counselors.14 Philip Klingensmith was the bishop of Cedar City, Utah15 and therefore reportedly "exercised informal leadership"16. William Dame, who ordered out the militia, was the stake president of Parowan.17

William Dame

The Mountain Meadows Massacre was not simply a case of "Mormons" annihilating a traveling emigrant party.
The sickening fact is that many otherwise faithful LDS men were driven to commit this cowardly act of violence through blind obedience to a Stake President and/or Bishop. The drama of this story centers around priesthood leaders urging and even demanding the men and women under their stewardship to support and participate in the murders. Nineteen years following the massacre, almost to the day, Nephi Johnson stood as a witness at the trial of John D. Lee. Lee was later executed for his role in the Mountain Meadows Massacre. As the prosecution quizzed Johnson, the questions turned to one of the most sobering aspects of the massacre: the role of political and ecclesiastical pressure from leaders.

Q: Who was the highest military officer in Cedar City at that time?
A: I think it was Isaac C. Haight.
Q: You thought it would not be safe for you to refuse, had you any reasons to fear danger – had any persons ever been injured for not obeying, or anything of that kind?
A: I don't want to answer.
Q: It is necessary to the safety of the man I am defending, and I therefore insist upon an answer. Had any person ever been injured for not obeying?
A: Yes, sir; they had.
Q: And from what you had seen before that, you thought it was your duty, under the circumstances, to obey counsel, or commands given you by Haight?
A: Yes, sir.
Q: Did Haight hold any office except that of Major in the military?
A: He held the office of President of Cedar City.
Q: An ecclesiastical office – President of that Stake of Zion, I believe you call it?
A: Yes, sir.18

Nephi Johnson

Before the killings, Nephi Johnson's conscience warned against involvement. In fact, he initially refused to act as an interpreter between the white men and Indians planning the attack. "I Did Not Want Anything to Do with killing the Emigrants, for I was Determined in my Own Mind that I would Keep Away from them." But on Thursday Haight gave him no choice.
Two express riders arrived at Johnson's home with written orders, telling him Haight required him to go to Cedar City whether he "wanted to or Not."19 Nephi Johnson was again faced with internal conflict as he met with John D. Lee, John M. Higbee and a number of prominent Indian chiefs.

Through interpreter Nephi Johnson, Lee suggested to the Paiute leaders "that he would try and get the emigrants out of their camp as well as giving up their arms after which they would kill them." At first, Johnson hesitated to interpret the awful message. Lee "wanted me to talk to the Indians in a way I didn't want to," Johnson later recalled.20

Johnson ultimately chose to accommodate his leaders, leading to further compromises during the massacre. The memories would stay with him from that moment onward, undoubtedly leading to his dying cries, "Blood! Blood! Blood!" The dark days of early September 1857 became a battleground as each man was left with a decision which would prove life-altering. Should he follow authority or should he question? As some of the men listened to their own Bishop and/or Stake Presidents outline strategies and give orders, many made the deadly decision to blindly follow. The following statement from John D. Lee could be held suspect, as some, including myself, believe research has shown that Lee gave inaccurate history in some respects. However, the report may give substance to the arguments being made at the time.

Several of the dignitaries bowed in prayer – invoked the aid of the Holy Spirit to prepare their minds and guide them to do right and carry out the counsel of their leaders. Higbee said that President J. C. Haight had been to Parowan to confer with Colonel Dame, and their counsel and orders were that "This Emigrant Camp Must Be Used Up." I replied, "Men, women and children?" "All," said he, "except such as are too young to tell tales, and if the Indians cannot do it without help, we must help them."
I commenced pleading for the company, and I said though some of them have behaved badly, they have been pretty well chastised. . . . Ira Allen, High Counselor, and Robert Wiley and others spoke, reproving me sharply for trying to dictate to the priesthood; that it would set at naught all authority; . . . . Counselor C. Hopkins, a near friend of mine, came to me and said; "Brother Lee, come get up and don't draw off from the priesthood. You ought not to do so. You are only endangering your own life by standing out. You can't help it; if this is wrong – the blame won't rest on you." I said, "Charley, this is the worst move 'this people' ever made. I feel it." He said, "Come, go back and let them have their way." I went back. . . .

It was further told the men that President Haight said that if we were united in carrying out the instructions we would all receive a "celestial reward." I said I was willing to put up with a less reward, if I could be excused. "How can you do this without shedding innocent blood?" Here I got another lampooning for my stubbornness and disobedience to the priesthood. I was told that there was not a drop of innocent blood in the whole company of emigrants. Also referred to the Gentile nations who refused the children of Israel passage through their country when Moses led them up out of Egypt—that the Lord held that crime against them and when Israel waxed strong the Lord commanded Joshua to slay the whole nation, men, women and children. "Have not these people done worse than that to us? Have they not threatened to murder our leaders and Prophet, and have they not boasted of murdering our Patriarchs and Prophets, Joseph and Hyrum? Now Talk About Shedding Innocent Blood." They said that I was a good, liberal, free-hearted man, but too much of this sympathy would be always in the way; that every man now had to show his colors; that it was not safe to have a Judas in camp. Then it was proposed that every man express himself.
That if there was a man who would not keep a close mouth, they wanted to know it then. This gave me an understanding what I might expect if I continued to oppose them. Major Higbee said: Brother Lee is right. Let him take an expression of the people. I knew I dare not refuse; so I had every man speak and express himself. All said they were willing to carry out the counsel of their leaders; that the leaders had the Spirit of God, and knew better what was right than they did.21

As the men aimed their guns at defenseless men, women and children, surely many prayed for forgiveness and absolution. Although conscience was strongly dictating in many cases that this was wrong, all apparently clung to the false spirit's subtle words, "that the leaders had the Spirit of God, and knew better what was right than they did." They were simply following orders. Wouldn't God vindicate them in the end?

The Prophet Joseph's Counsel

If only these men at Mountain Meadows had known or followed the true counsel of the Prophet Joseph Smith, which stood in stark contrast to that of these other priesthood leaders.

When you are obliged to fight be sure that you do not stain your hands in the blood of women and children, and when your enemies call for quarters be sure you grant them the same and then you will gain power over the world you will be forever called the Nauvoo Legion.22

If only these men had known or followed the leadership principles of Joseph Smith wherein he taught that we are not a people who blindly follow.

Some years ago, in Nauvoo, a gentleman in my hearing, a member of the Legislature, asked Joseph Smith how it was that he was enabled to govern so many people, and to preserve such perfect order; remarking at the same time that it was impossible for them to do it anywhere else. Mr. Smith remarked that it was very easy to do that. "How?" responded the gentleman; "to us it is very difficult." Mr.
Smith replied, "I teach them correct principles, and they govern themselves."23

The Prophet Joseph consistently taught that we must never blindly follow any priesthood leader, even the President of the Church.

President Joseph Smith read the 14th chapter of Ezekiel–said the Lord had declared by the Prophet, that the people should each one stand for himself, and depend on no man or men in that state of corruption of the Jewish church–that righteous persons could only deliver their own souls–applied it to the present state of the Church of Jesus Christ of Latter-day Saints–said if the people departed from the Lord, they must fall–that they were depending on the Prophet, hence were darkened in their minds, in consequence of neglecting the duties devolving upon themselves …24

Blindly following should never be part of the program in the true Church of Jesus Christ.

A Difficult Test

To better understand the dilemma faced by these early Saints, an understanding of the times is necessary. The Baker–Fancher emigrant wagon train was by no means purely innocent or irreproachable. It is reported that the emigrants claimed to have the gun that killed "Old Jo Smith".25 There were those in the company who "boasted of having participated in the Missouri outrages and the Haun's Mill Massacre."26 Communities along their route complained of the emigrants stealing, looting and so forth. Utah citizens were also under the impression that the Baker–Fancher party had poisoned Native Americans with a dead ox27 and stirred up various Indians along their way. Finally, it must be remembered that there was an army headed toward Utah, with soldiers vowing to destroy the Mormon settlements, killing men, women and children and committing rape along the way. Such threats had formerly been carried out in Missouri and Nauvoo with unforgettable results. The extermination attempts could easily be repeated. September 1857 was clearly a time of war.
Further, the emigrants had foolishly announced intentions to raise an army in California, return to Utah and obliterate the "Mormons".28 In addition to the faulty guidance of leaders, the circumstances made it a difficult matter to clearly discern right from wrong. The need for personal revelation and not blind obedience was more crucial than ever. In the end, helpless victims were massacred and the perpetrators lived forever tormented by horrid memories because true principles as taught by the Prophet Joseph Smith and the scriptures were not understood and/or followed.

One cannot imagine the suffering this event has caused. The perpetrators were also victims. Over 20 years ago I first read accounts of militia members at Mountain Meadows. For me, personally, merely studying the journal accounts of these men was enough: the never-ending nightmares, the demonic visits of evil spirits and the unbelievable tortures of conscience have never left my memory. They cemented an unspeakable desire to never blindly follow any man or woman to destruction. While I believe this to be the darkest story in Church history, I also believe the lessons from Mountain Meadows should be repeatedly taught to every Latter-day Saint. We must know our history! Every time someone says, "I don't want to know about such things, I think we should just focus on Faith, Repentance and Baptism," I cringe. What is generally meant by this phrase is, "I am comfortable where I am, don't rock my boat, don't make me think!" The men at Mountain Meadows may have felt a bit that way before the massacre, but after the fateful events I doubt that many of them wished they had spent less time in spiritual preparation. Ideas that lead to dumbing down the people make it possible for leaders like Isaac Haight to prey upon the weak and ultimately could lead to a repeat of Mountain Meadows or much worse.
Again, to those who endlessly repeat the monotonous drum roll of "we should just focus on Faith, Repentance and Baptism" I have this reply. Is this not the very essence of faith? Isn't the lesson from all scripture and every prophet from the beginning to the end that we must trust in God and not in man? I have never seen one revelation provide even a single exception to this rule. From my experience, very few truly understand blind obedience. I have never once heard the teachings of the Lord Jesus Christ, the Prophet Joseph Smith or President Brigham Young condemning blind obedience taught in a class. How can this be? True doctrine forever changes attitudes and behavior. Please teach the lessons from Mountain Meadows.

Some unwittingly teach that we should unquestioningly follow every word that comes from the mouth of our Church leaders. Some even claim it is breaking temple covenants to question Church authority. We have recently heard many of our friends criticize men and women who raised honest concerns regarding the personal integrity and instruction given by a leader. One confidently declared, "If I read a book [by an appointed leader] that taught evolution, I would follow it and God would not punish me for believing it. We can't go astray following them. God won't punish us." Another warned, "It's breaking your temple covenants to raise concerns with the character, motives or actions of ordained leaders." Sadly, these likely well-meaning members have not carefully studied the history of the Church nor the doctrines taught by the scriptures and inspired prophets of God. Some of the men near Mountain Meadows held similar opinions and the consequences still haunt us. Brigham Young, conversely, taught:

I do hope and pray my brethren and sisters to pay attention, that the Spirit of the Lord may be in your hearts, that you may see and understand things as they are.
I would say, still further, if there be error advanced here, do not receive it, pass it by, and live so that you will know truth from error, light from darkness, the things that are of God from those not of God; and if an error should drop from the lips of one of our Elders, do not receive, believe, or practice it. Truth is what we want, and we ought to live so that we can understand and know it for ourselves. This is our privilege and duty; and we request of the Latter-day Saints, and of all people, to live so that they may know and understand the things of God, and receive and embrace them in their faith, and practice them in their lives.29 I told the people in Nauvoo . . . that if they were not Saints at that critical juncture, they ought to repent of their sins, and get the Holy Ghost, and not live another twenty-four hours without the Spirit of revelation within themselves, for who knows but what you are the elect; and you know that false prophets were to arise in the last days, and, if possible, deceive the very elect, and that many false shepherds would come and pretend to be the true shepherds. Now, be sure to get the spirit of revelation, so that you can tell when you hear the true Shepherd's voice, and know him from a false one; for if you are the elect, it would be a great pity to have you led astray to destruction.30 The Lord has given a protective promise to those who desire to not be led astray. President Brigham Young taught that this assurance is given because each Latter-day Saint has the privilege, if they are pure, to receive a personal witness. It is up to them whether they will be led astray or not. The First Presidency have of right a great influence over this people; and if we should get out of the way and lead this people to destruction, what a pity it would be! How can you know whether we lead you correctly or not? Can you know by any other power than that of the Holy Ghost? 
I have uniformly exhorted the people to obtain this living witness each for themselves; then no man on earth can lead them astray.31

The LDS men at Mountain Meadows needed, more than ever, to question and then pray. Because they evidently did not question, they did not humbly pray for guidance. This was their test, their moment to be proved by fire. Tragically, they were not adequately prepared. During the very moments that President Brigham Young was sending word to "not interfere with [the emigrants]", "not meddle", "preserve good feelings" and "let them go in peace", these men were making the most fateful decision of their lives. Thursday, September 10th, the day before the final massacre, James Haslam arrived in Salt Lake City and rushed to deliver a message from President Isaac Haight to President Brigham Young. The express reported that the company of emigrants had "behaved verry mean" and asked the simple question: what should be done? Jacob Hamblin recorded:

The spirit of the Express rather asked the privilege to chastize [the emigrants]. . . . President Young answered them rather haistily, saying No, when I want Marshal Law proclaimed, I will let you know.32

President Young realized the urgency of the situation and acted immediately. Young asked Haslam, who had spent most of the last sixty hours in the saddle, if he "could stand the trip back" to Cedar City. When Haslam replied that he could, Young told him to get a little sleep and be back in the office in an hour for his written reply.33 President Young's response was swift, decided and clear.

In regard to emigration trains passing through our settlements we must not interfere with them untill they are first notified to keep away. You must not meddle with them. The Indians we expect will do as they please but you should try and preserve good feelings with them. There are no other trains going south that I know of[.] [I]f those who are there will leave let them go in peace.
While we should be on the alert, on hand and always ready we should also possess ourselves in patience, preserving ourselves and property ever remembering that God rules. He has overruled for our deliverance this once again and he will always do so if we live our religion, be united in our faith and good works. All is well with us.34

As Haslam prepared to depart, the anxiety and concern felt by President Young was clearly manifest in the prophet's repeated instruction, "Brother Haslam, I want you to ride for dear life; ride day and night; spare no horse flesh."35 Haslam followed Brigham Young's instruction and rode "for dear life". He arrived in Cedar City on Sunday and immediately handed President Haight the missive. The horror, the grief and the heartbreak can only be imagined. Haight "sobbed 'like a child'" for half an hour "and could manage only the words, 'Too late, too late.'"36

I have repeatedly contemplated how ironic it is that at the very moment Brigham Young was dictating instruction to Haslam to let the emigrants pass, the men at Mountain Meadows were making life-changing decisions. Brigham Young's letter did not arrive in time to save the people from their deadly mistake. But the grim truth remains: every man and woman involved in the Mountain Meadows Massacre was without excuse. Each individual could have received President Young's instruction through the Holy Spirit. Each of us must personally learn to discern between true and false leaders, between true and false teachings. There is no other way. Some of the men did pray and follow inspiration in dealing with the emigrants. Silas Sanford Smith, nephew to the Prophet Joseph Smith and my ancestor, maintained friendly relations and even "took supper" with the emigrants.37 Discovering the location of Silas' home in Paragonah, the Baker–Fancher party deliberately camped nearby. Silas' brother, Jesse N. Smith, was one of William Dame's two counselors in the Parowan stake presidency. "[Jesse N.
Smith] sold flour and salt to the Arkansas company when it came to his home in Paragonah northeast of Parowan . . . Jesse's retrospective diary gave no hint that he or any of his neighbors had bad feelings for the emigrants."38

Jesse N. Smith

Jesse N. Smith's role in the events surrounding the massacre demonstrates that not every leader was in favor of violence against the Arkansas company. After word reached William Dame of the initial attacks on the emigrants beginning Monday, September 7th, Jesse Smith and Edward Dalton were sent to investigate the skirmish. They returned and "expressed much disgust over what they had seen and learned, as John D. Lee and other white men were assuming a very hostile attitude toward the emigrants in connection with the Indians."39 Later Wednesday night, a council was held. William Dame, Calvin Pendleton, Jesse N. Smith, William Barton, Isaac Haight and other leading men were present. The council decided that "a company should be sent out from Parowan . . . to call the Indians off, gather up the stock for the company, and let them continue their journey in peace."40 No sooner had the council concluded than Isaac Haight approached William Dame privately. Shortly after, Haight was riding for Cedar City feeling that Dame had given "the final order to destroy the entire company." He would later express bitter regret, saying "I would give a world if I had it, if we had abided by the deci[s]ion of the council."41

In other words, there was opposition and disagreement among the leaders. In such cases, there will be conflict. Leaders will almost never stand in perfect agreement unless they all possess the Spirit of God. It is left, then, for us to follow those who are right. I am forever grateful that my ancestors were among those who listened to the true spirit and counseled in wisdom.

Other Heroes

Not all of the men in Cedar City, Parowan and the surrounding settlements decided to blindly follow their conspiring leaders.
Some bravely refused to compromise. When forced to choose between God and man, they chose to follow conscience.

One man after another said he had gone to the Meadows because of military orders–they had been coerced. For some it was probably true; but it was also true that many men did not go, giving rise to a healthy store of folklore–proud families telling stories of how their ancestors refused to participate in the crime. "Old Joseph Walker . . . when told to go to the Meadows, put his fist in Haights face and told him to go to hell and do his own dirty work," said one account.42

If the story be true, consider the consequences Brother Joseph Walker faced for his defiance. He was not resisting an irritable neighbor. He wasn't refusing a family member. He was standing in the face of authority–for Isaac Haight was not only his mayor, but his Stake President.

Another man claimed that his stepbrothers "hid in the furrows of a potato patch until the Cedar party went on." Peter Nelson reportedly concealed himself in a bin of grain and escaped the soldiers' detection by breathing through a straw. Yet another man supposedly dodged service by claiming to be ill, first lying in a pile of hot bricks to simulate a fever. . . . All told, less than one-fifth of Cedar City's militiamen went to the Meadows. . . .43

Some of the men were unaware that they were being recruited to massacre the emigrants. Some labored under the impression they were called to "burry the dead." They did "not know that they must first make the dead," Haight said privately. Yet only a fraction of the men took shovels or spades with them, and most took guns. When private John Bradshaw, a thirty-eight-year-old English brick maker, showed up at the mustering grounds with just a spade, Haight wanted to know why he was not carrying a gun. Where was his ammunition? "I told him I didn't know that it required a gun to bury dead people," Bradshaw replied. "He . . .
called me a fool; told me I didn't know anything about it, didn't understand things." Haight dismissed Bradshaw, sending him home.44

Regardless of whether the decision to participate in the massacre occurred at home or upon their arrival, the decision was a struggle. Several of the Mormon men "shed tears at the sight of the dead lying before them, and only in obedience to what they considered legitimate military authority would they have done what they did," reported one.45 Before withdrawing from the scene of the massacre, the Stake President of Parowan and the Bishop of Cedar City, Utah were careful to impress upon the men the conspiratorial nature of this crime. The details were never to be revealed.46

Consider carefully whether you would have followed your leaders before passing judgement on the men who "dutifully" followed their leaders on that fateful day. Would you have stood against your Bishop? Would you have stood against your Stake President? Would you stand firm if your leaders asked you to do something that was wrong? The men did not feel the pressure alone. As Major Higbee left Cedar City on Thursday with the husbands, fathers and brothers, the wife of the Stake President, "Sister Haight", was quick to admonish her Relief Society sisters regarding their duty.

. . . Sister Willis and Sister Haight . . . taught them the necessity of being obedient to their husbands &c and not to be fearful in these troublesome times but to be prayerful and attend to their duties . . . said that these were squally times, and we ought to attend to secret prayer in behalf of our husbands, sons, fathers, & brothers. instructed the sisters to teach their sons & daughters the principles of righteousness, and to implant a desire in their hearts to avenge the blood of the Prophets. . . .
advised them to attend strictly to secret prayer in behalf of the brethren that are out acting in our defence.47

As the sisters listened to their leaders in the Relief Society, any whose conscience was pricked faced a similar decision: follow the Stake President blindly, or pray, receive a personal confirmation, and act accordingly. Sister Haight, well intentioned or not, was no doubt eager to settle any doubts in her sisters' minds regarding the wisdom and prudence of the attack being instigated and led by her husband. She strove to "implant a desire in their hearts to avenge the blood of the Prophets." However, it is likely she and her sisters were unaware of the Prophet's teaching that this "avenging" should be left to the Lord and not to man. The previous year, Wilford Woodruff recorded President Brigham Young as teaching:

While reading the revelation upon the patriarchal marriage & While reading that paragraph relating to the sheding of innocent Blood President Young remarked that that was a vary nice point to distinguish between innocent Blood & that which is not innocent. Were we now Commanded to go & avenge the Blood of the prophets whare it wood reach infants from the Cradle to the third & forth generation would they know what to do in such a case? They would not. But there is one thing that is a consolation to me And that is I am satisfied that the Lord will not require it of this people untill they become sanctifyed & are led by the spirit of God so as not to shed inocent Blood.48

President Young clearly taught that the Saints were not spiritually mature and prepared to take any acts of "avenging" into their own hands. Those who chose to blindly follow the leaders counseling them to the contrary were haunted by the consequences for the rest of their lives. In our own day, the dilemma remains. Men and women struggle with the question of "blind obedience". When a leader asks you to do something, what do you do?
Many think a leader could never lead you amiss. Is it inappropriate to question? Could religious leaders exist who are not men "of God walking in his ways and keeping his commandments"?49 The Saints living near Mountain Meadows were forced to answer these questions. Sadly, they were not the last. Helmuth Hübener Over eighty years later, Germany found itself in the iron grip of Hitler and his Nazi rule of terror. In the midst of world war, one Latter-day Saint teenage boy was excommunicated by the Branch President of the St. Georg Branch in Hamburg, Germany. His crime? Opposing the Nazi regime and secretly distributing leaflets exposing Hitler's lies and propaganda. A motorcyclist reads a sign stating "Jews are not welcomed here." Germany, ca. 1935. — US Holocaust Memorial Museum, courtesy of Margaret Chelnick Helmuth Hübener's LDS Branch President, Arthur Zander, was "enthusiastic in his support of the Nazi regime".50 Arthur Zander led the St. Georg Branch in Hamburg in 1938. He wore the gold swastika lapel pin given to those who joined the Nazi Party before March 1933. "He was quite enthused about it," recalled Karl-Heinz Schnibbe, who as a youth was involved in the Helmuth Hubener gang of resistors. "He saw good in [the Nazi Party]. No more unemployment, the autobahn was constructed, and everyone had work under Adolf." Franz Jacobi, Zander's first counselor and also a Party devotee was, in Schnibbe's words, a "super Nazi." The two congregational leaders wanted to begin Sunday services with the Hitler salute, but found their enthusiasm overruled by the district president. . . . when the Fuhrer would speak over the airwaves, Zander would provide a radio and lock the doors so that nobody in the congregation could leave during the broadcast. . . . Zander erected a sign outside that announced Jews were not allowed to enter.51 As sixteen-year-old Helmuth Hübener secretly listened to forbidden BBC broadcasts, he was stirred to action.
He could not stand idly by while tyranny stifled freedom. However, the young boy stood in a difficult position. For Helmuth, defending freedom, defying Nazi rule and following his conscience meant contradicting his local Priesthood authority. Who was right? Helmuth refused to compromise. For months, he and two friends bravely composed and distributed anti-fascist pamphlets revealing the lies and propaganda of the Third Reich. Their "resistance" was brought to an immediate end when Helmuth was arrested by the Gestapo on February 5th, 1942. While enduring interrogation and torture in Gestapo chambers, he suffered yet another blow. He was cut off from the Church he loved. On February 15th, without any consultation with the district president, President Zander . . . wrote "Excommunicated" on Hubener's membership record. He did this with the apparent consent of Interim West German Mission President, Anton Huck, although there is no evidence that a Church court was convened. (Manuscript History of the West German Mission, LDS Historical Archives, Salt Lake City) . . . On the Sunday after the arrest, Karl, Rudi, Hubener's mother, and grandmother all attended the St. Georg branch, where they heard Brother Friedrich Jakobi say: "I'm glad they caught him. If I'd known what he was doing, I'd have shot him myself." (Interviews with Wobbe, Schnibbe, Berndt, and Hans Kunkel)52 As for District President Otto Berndt: Otto Berndt said Zander might have performed the excommunication because he believed it would placate the Gestapo. But, added Berndt, "He did it behind my back."53 Notice of Helmuth Hübener's execution Helmuth Hubener was finally beheaded by guillotine on October 27, 1942, after months of imprisonment, isolation, torture and interrogation.
Four years would pass before Hubener's membership record would be marked "excommunication done by mistake," and it was not until January 24, 1948 that President George Albert Smith ordered the entry, "Decision of excommunication reversed by the First Presidency of the Church of Jesus Christ of Latter-day Saints, who ordered this notation placed upon the record of excommunication." Was Helmuth's sacrifice, standing against Nazi terror, wise? Should he instead have followed without question? Helmuth defied his priesthood leader to stand for truth. Was he right? Two different stories, two different places, two different times, two different enemies, but the same question. If your priesthood leader instructs you to do something which is wrong, do you blindly follow? During my life, I have often heard individuals express belief in following leaders regardless of whether the instruction was right or wrong. If a leader's guidance contradicted scripture, for example, these unwary Latter-day Saints were convinced that no negative consequences would follow their dutiful obedience. If a leader could ever be in error, surely the leader alone would be held responsible. How could an individual be punished for obedience? This reckless attitude likely stems from ignorance of scripture and the revelations of the Lord. Perhaps few are familiar with the words of the Lord Jesus Christ on the subject. Jesus Christ on Following Religious Leaders (Mark 9, JST) We have often heard Christ's instruction to the Twelve, "if thy hand offend thee, cut it off: it is better for thee to enter into life maimed, than having two hands to go into hell." I have participated in many Gospel Doctrine lessons where speculation was shared and possible interpretations imagined by well-meaning members. Have you ever wondered what the Lord was teaching in these verses? What specifically was the Son of God, while dwelling in mortality, trying to convey?
The clear meaning of the passage was corrupted and lost until the Prophet Joseph Smith was inspired to restore a portion. Turning now to the Joseph Smith Translation of Mark 9, we may discover an inspired commentary different from traditional understanding. 40 Therefore, if thy hand offend thee, cut it off; or if thy brother offend thee and confess not and forsake not, he shall be cut off. It is better for thee to enter into life maimed, than having two hands, to go into hell. 41 For it is better for thee to enter into life without thy brother, than for thee and thy brother to be cast into hell; into the fire that never shall be quenched, where their worm dieth not, and the fire is not quenched. (Mark 9:40-41, JST) So far, nothing too controversial here! If a neighbor, co-member or friend falls away or abandons the truth, we must not endanger our own salvation and allow ourselves to be led by them. We should exercise caution and prudence to ensure that we are not deceived. It would be better to "cut them off" than to be corrupted. This is not a revolutionary concept; we have heard this often. However, there is more. What is the context of these verses? The Savior continues by warning against transgression in individuals of position, leaders we generally do not suspect. 42 And again, if thy foot offend thee, cut it off; for he that is thy standard, by whom thou walkest, if he become a transgressor, he shall be cut off. 43 It is better for thee, to enter halt into life, than having two feet to be cast into hell; into the fire that never shall be quenched. (Mark 9:42-43, JST) Mark 9 becomes a bit more controversial here. We are warned that if the one who is "thy standard . . . become a transgressor, he shall be cut off." Who could this religious "standard" be? Whose "walk" do you follow? Who shows forth the religious path? 
In scripture, our "walk" is synonymous with obedience to the commandments (Mosiah 4:15, D&C 25:2, 95:12, 2 John 1:6) and righteous participation in ordinances (D&C 136:4). When speaking of men "by whom thou walkest", the Son of God is undoubtedly warning against appointed exemplars and teachers who can fall. Could the religious leader we feel admiration for and confidence in "become a transgressor"? The Savior cautions us, for He knows that if we are not on our guard, we will easily be deceived and follow them "into hell". 44 Therefore, let every man stand or fall, by himself, and not for another; or not trusting another. (Mark 9:44, JST) Verse 44 clearly contradicts the notion that any individual, including ecclesiastical leaders, should be blindly followed. Each one of us is left to stand on our own two feet. At the judgement seat of God, we will stand accountable for our own actions. Again, remember the Prophet Joseph's witness that blind obedience in all cases leads to corruption and "darkening". The Lord Jesus Christ and the Prophet Joseph Smith have unmistakably spoken on blind obedience, blind obedience to any leader at any level in the Kingdom of God. The Lord has declared that this dispensation shall have His word through the Prophet Joseph Smith (D&C 5:10). If we believe in the revelations, we can safely lay aside every statement made by any authority that contradicts this certain word of God. President George Q. Cannon taught, It is the design of the Lord to develop within every man and woman the principle of knowledge, that all may know for themselves. He has poured out His holy spirit upon all of us, and not upon President Young nor upon bro. Joseph alone.
The Lord designs that the principle of knowledge shall be developed in every heart, that all may stand before Him in the dignity of their manhood, doing understandingly what He requires of them, not depending upon nor being blindly led by their priests or leaders, as is the universal custom, and one of the most fruitful sources of evil to the people on the face of the earth. George Q. Cannon God intends to break down this order of things, and to develop in the bosom of every human being who will be obedient to the gospel and the principles of truth and righteousness, that knowledge which will enable them to perform understandingly all the labors and duties he requires of them. Note President Cannon's conviction that "blind obedience" is "one of the most fruitful sources of evil to the people on the face of the earth." Our own history bears testimony to the truth of this statement. The victims of the Mountain Meadows Massacre cry from the ground, "Let every man stand or fall, by himself, and not for another!" The story of Helmuth Hubener pleads "We ought to obey God rather than men."55 President Cannon continued: If we, in our experience, have not yet proved the truth of the words of the prophet—"Cursed is he that trusteth in man, or maketh flesh his arm"—probably we will do if we live long enough. There is a curse attending every man and woman who does this. . . . We must all learn to depend upon God and upon Him alone. Why, the very man upon whom we think we can rely with unbounded confidence, and trust with all we possess, may disappoint us sometimes, but trust in God and He never fails.56 The testimony of scripture, latter-day prophets and history is clear. If your trust is in man, any man, you will be disappointed. If you blindly follow the "arm of flesh", you will be cursed. Build your foundation on the Rock that will never fail. Place your trust firmly in the power that will never disappoint. 
45 Seek unto my Father, and it shall be done in that very moment what ye shall ask, if ye ask in faith, believing that ye shall receive. (Mark 9:45, JST) Throughout the Dark Ages, apostate clergy and tyrannical leaders indoctrinated their people to blindly follow. Instead of directing all attention to the Father and the Son, priests became the light. Priests were infallible. To question the authorities was heretical. God became an essence, a power, a force that could not be approached directly. On the contrary, the Prophet Joseph Smith restored the understanding that we can each receive a personal knowledge of truth and also of error. Revelation is not only possible, it is essential! God desires each of us to become ". . . agents unto [our]selves." We are grateful for and recognize the necessity of ordained leaders with priesthood keys. They are called of God, and obedience to their inspired counsel has blessed my life and the lives of my children and posterity forever. Nevertheless, I stand and fall by my own actions. You will stand and fall by yours. Our responsibility is to bow the knee, humbly pray and receive our own witness, our own knowledge of every principle, doctrine and question. To do this we must ask questions. Those who don't ask questions don't receive answers. 46 And if thine eye which seeth for thee, him that is appointed to watch over thee to show thee light, become a transgressor and offend thee, pluck him out. The references to "eye", "watch over thee", "appointed" and "show thee light" have clear religious connotations. The leader described is appointed to reveal the path. He is responsible to watch over God's people and guide them to the "light". Again, the Lord is careful to specify that this man has been "appointed". From the text, we can deduce that the "transgressor" described is not an "imposter". He is not a non-member or an excommunicated apostate. Being "appointed" clearly implies priesthood authority. He has not called himself.
He has not been selected by democratic vote. He has been called by God to lead, to assist the weak, to watch over the flock and bring the light of God to a darkened world. His high position may lead us to conclude that he could not fall. Would the Lord appoint a man who later becomes a "transgressor"? The Lord has indicated that He could. What should our reaction be? According to God, we must "pluck him out" as our stewardship permits. 47 It is better for thee to enter into the kingdom of God, with one eye, than having two eyes to be cast into hell fire. 48 For it is better that thyself should be saved, than to be cast into hell with thy brother, where their worm dieth not, and where the fire is not quenched. In the Lord's eyes, we are accountable for our own salvation. It would be better to disregard the false counsel of a mortal man, than be "cast into hell with thy brother". The admonition of Jesus Christ in JST Mark 9 may be uncomfortable. It may contradict traditions fostered since our youth. It may cause us to fall to our knees. It may lead us to search the word of God in scripture with more diligence. It may lead us to correct errors in our lives that we previously laid at the feet of others, perhaps leaders. Those of us who are leaders may think and pray more carefully before we speak or act in the name of God. The implications of truly understanding this fundamental doctrine may alter our eternal destinies as individuals and as a people. "Cursed is he that putteth his trust in man . . ." It has been over 150 years since the Mountain Meadows Massacre, and the controversy still swirls. Who is to blame for the violent end to the Baker-Fancher party? Did Brigham Young order the massacre? Is he in any way responsible? The evidence available clearly testifies that President Young had nothing to do with the murders and would have stopped them had he known.
In addition to the rider-delivered message relayed to Isaac Haight, the president resolutely declared two decades later: My disposition is such that had I known anything about it I would have gone to that camp and fought the Indians and white men who took part in the perpetration of the massacre to the death, rather than such a deed should have been committed.57 In our day, there is a ruthless "smear campaign" underway (both from within and without the Church) to slander the integrity, character and credibility of President Brigham Young. The Mountain Meadows Massacre is only one arrow in this battle to defame a noble prophet of God. Some argue that responsibility should be laid on President Young because he compelled "absolute obedience to church authorities"58 and took "quick and decisive action to instill loyalty and obedience in those followers who dared to question him."59 The "Mormon" people, allegedly, lived in a "system of bondage"60 and tyranny; grown men and women crippled in servitude. In other words, in the eyes of many, regardless of President Young's opposition to the Mountain Meadows Massacre specifically, he remains guilty. The men in Cedar City and Parowan had been groomed under a leader who demanded unswerving blind obedience. My cousin, historian Ron Walker, who recently passed away, co-authored his own treatment of the massacre titled Massacre at Mountain Meadows. We know from conversations with Brother Walker that he in no way desired to disparage President Young. Throughout the work he attempts to portray accurate history, demonstrating President Young's opposition to the massacre. However, one criticism I have of the work is that it places some blame on President Young for contributing to "a culture that encouraged blind obedience".61 Nothing could be further from the truth! Brigham Young's fervent ambition was to foster independence in every individual. He wanted the people to stand on their own two feet and never lean.
Brigham Young wisely counseled: I have said to the Latter-day Saints, many and many a time, and I say to them now, live your religion, that the Spirit of God may be within you like a well of water springing up to everlasting life. Suppose I were to give way to the spirit of the enemy and leave the spirit of the Gospel, then, if you were not prepared to judge between the voice of the Good Shepherd and the voice of the stranger, I could lead you to ruin. Be prepared that you may know the voice when it comes through the servants of God, then you can declare for yourselves. 'This is the word of the Lord.' My caution and counsel to the Latter-day Saints, and to all the inhabitants of the earth is—Live so that you will know truth from error. I say to you . . . brethren and sisters, be faithful, live so that the Spirit of the Lord will abide within you, then you can judge for yourselves. I have often said to the Latter-day Saints—"Live so that you will know whether I teach you truth or not." Suppose you are careless and unconcerned, and give way to the spirit of the world, and I am led, likewise, to preach the things of this world and to accept things that are not of God, how easy it would be for me to lead you astray! But I say to you, live so that you will know for yourselves whether I tell the truth or not.62 The contemporary attitude we have seen among many mainstream LDS members to "blindly follow" did not begin with President Young, nor with his predecessor and mentor the Prophet Joseph Smith. Note again the Prophet's counsel and urgent warning against blind obedience. ". . . if the people departed from the Lord, they must fall–that they were depending on the Prophet, hence were darkened in their minds, in consequence of neglecting the duties devolving upon themselves …"63 The Prophet taught that the Saints were "depending on the Prophet" and thus their minds were "darkened". 
Specifically, many have a false understanding that we can blindly follow the president of the Church. Sometimes they say it in other ways, such as "implicitly trust" or "never worry about teaching anything untrue", etc. Here, the Prophet Joseph Smith attempts to silence that argument forever. Any individual who gives blind obedience, even to the Prophet, becomes "darkened" and loses the Spirit of God. When the Prophet Joseph Smith, Brigham Young and others implored the Saints to receive a witness for themselves and never blindly follow, they were merely echoing the words of Jesus Christ. 43 It is better for thee, to enter halt into life, than having two feet to be cast into hell; into the fire that never shall be quenched. 44 Therefore, let every man stand or fall, by himself, and not for another; or not trusting another. . . . 46 And if thine eye which seeth for thee, him that is appointed to watch over thee to show thee light, become a transgressor and offend thee, pluck him out. (Mark 9:42-46, JST) The Lord clarified that we never place blind faith or implicit, unthinking obedience in any leader. It is spelled out in the Lord's words. Even the leader who "seeth for thee", "is appointed to watch over thee", "is thy standard" and is the man "by whom thou walkest" should not be blindly followed. Some have taught that JST Mark 9 involves a lesson by the Lord to his disciples about excommunication. The issue with this is the context. The context is clearly cutting off the one who "seeth for thee", the one "by whom thou walkest", the one who "is appointed to watch over thee", the one who "is thy standard". Unmistakably, this is speaking of someone looking up at a leader, not a leader looking down at a lay member. The context is how members should view leaders who are in authority over them, NOT vice versa. How should members with wicked leaders respond to the situation? Members do not excommunicate leaders.
Again, these are not our words, these are the words of Jesus Christ. We must stand loyally with the revelations. What do the revelations teach regarding trust in man and blind obedience? . . . I will not put my trust in the arm of flesh; for I know that cursed is he that putteth his trust in the arm of flesh. Yea, cursed is he that putteth his trust in man or maketh flesh his arm. (2 Nephi 4:34) Cursed is he that putteth his trust in man, or maketh flesh his arm, or shall hearken unto the precepts of men, save their precepts shall be given by the power of the Holy Ghost. (2 Nephi 28:31) Thus saith the Lord; Cursed be the man that trusteth in man, and maketh flesh his arm, and whose heart departeth from the Lord. (Jeremiah 17:5) Then Peter and the other apostles answered and said, We ought to obey God rather than men. (Acts 5:29) The weak things of the world shall come forth and break down the mighty and strong ones, that man should not counsel his fellow man, neither trust in the arm of flesh—(D&C 1:19) Put not your trust in princes, nor in the son of man, in whom there is no help. (Psalms 146:3) It is better to trust in the Lord than to put confidence in man. (Psalms 118:8) The fear of man bringeth a snare: but whoso putteth his trust in the Lord shall be safe. (Proverbs 29:25) The foundation of the Restoration rests on the firm bedrock that trust should be placed solely in God. The message was unmistakably clear and concise. Any change we may witness today is nothing less than a departure from that foundation. In Part 2, we will address arguments some advance for following a leader blindly: "he is a member of the First Presidency" "he is a member of the Twelve" "he is a General Authority" "he/she was appointed by the President" "his/her instruction comes from a 'church publication' or 'program'" "his/her instruction is believed to have come from the President of the Church" Young, Brigham. Journal of Discourses. Vol. 9. 150.
January 12, 1862 Brooks, Juanita. The Mountain Meadows Massacre. Norman: U of Oklahoma, 1963. vii. Smith, Joseph Fielding. Essentials in Church History. Salt Lake City: Deseret Book Co., 1950. 420. United States v. John D. Lee, First Trial, Jacob S. Boreman Transcript, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, 4:67, as quoted in Walker, Ronald W., Richard E. Turley, and Glen M. Leonard. Massacre at Mountain Meadows. New York: Oxford UP, 2008. 205. Print. Walker, Ronald W., Richard E. Turley, and Glen M. Leonard. Massacre at Mountain Meadows. New York: Oxford UP, 2008. 201-02. Print. Congressional Record Containing the Proceedings and Debates of the Fifty-Ninth Congress, Second Session (Washington, D.C.: GPO, 1907), 91:2687, Feb. 11, 1907 Special Report of the Mountain Meadows Massacre by Brevet Major J.H. Carleton, U.S.A, 25 May 1859. William Young, United States v. John D. Lee, First Trial, Jacob S. Boreman Transcript, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, 4:53-54, 5:211, United States v. John D. Lee, First Trial, Adam Patterson Shorthand Notes, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, transcription by LaJean Carruth located in CHL. 5:39 Walker, Ronald W., Richard E. Turley, and Glen M. Leonard. Massacre at Mountain Meadows. New York: Oxford UP, 2008. 203. Print. "The Mountain Meadow Mas[s]acre: Statement of Mrs. G. D. Cates, One of the Children Spared At the Time," Dardanelle Arkansas Independent, Aug. 27, 1875. Reprinted in "The Mountain Meadow Massacre: Statement of One of the Few Survivors," Daily Arkansas Gazette, Sep. 1, 1875. Annie Elizabeth Hoag, United States v. John D. Lee, First Trial, Jacob S. Boreman Transcript, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, 4:28-29; United States v. John D. Lee, First Trial, Adam Patterson Shorthand Notes, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, transcription by LaJean Carruth located in CHL.
5:19; United States vs. John D. Lee, First Trial, Josiah Rogerson Shorthand Notes, Josiah Rogerson, Transcripts and Notes of John D. Lee trials, 1875085, CHL, transcription by LaJean Carruth located in CHL. 4:13-14. Brooks, Juanita. The Mountain Meadows Massacre. Norman: U of Oklahoma, 1963. 43. Turley, Richard E., Jr. "The Mountain Meadows Massacre – Ensign Sept. 2007." LDS.org. N.p., n.d. Web. 22 July 2016. also Walker, Ronald W., Richard E. Turley, and Glen M. Leonard. Massacre at Mountain Meadows. New York: Oxford UP, 2008. 59, 144. Print. Brooks, Juanita. The Mountain Meadows Massacre. Norman: U of Oklahoma, 1963. 80. Print. Ibid., 56. see also Parowan Stake, Historical Record, 1855-60, CHL., Jan. 20, 1856, 2nd sec., II. and Turley, Richard E., Jr. "The Mountain Meadows Massacre – Ensign Sept. 2007." LDS.org. N.p., n.d. Web. 22 July 2016. Testimony of Nephi Johnson, witness for the prosecution at 2nd trial of John D. Lee, September 14 to 20, 1876 as recorded in Lee, John D., and William W. Bishop. Mormonism Unveiled; Or, The Life and Confessions of the Late Mormon Bishop, John D. Lee. St. Louis, MO: Bryan, Brand, 1877. 349. Print. "Lee's Confession," Sacramento Daily Record-Union, Mar 24, 1877; "Lee's Last Confession," San Francisco Daily Bulletin Supplement, Mar 24, 1877; http://cdnc.ucr.edu/cgi-bin/cdnc?a=d&d=SDU18770324.2.14 Smith, Joseph. as reported by William B. Pace, Autobiography of William Bryan Pace, similar statement found in The Latter-day Saints' Millennial Star 38:751 John Taylor, "The Organization of the Church," Millennial Star, Nov. 15, 1851, p. 339. Smith, Joseph. History of the Church. 5:19; address of the Prophet to the Relief Society, reported by Eliza R. Snow Smith Ibid., 143 (also pg. 113 and 133). Young, Brigham. Journal of Discourses. Vol. 13. 345. May 5, 1870. Young, Brigham. Journal of Discourses. Vol. 6. 45. November 15, 1857. Young, Brigham. Journal of Discourses. Vol. 6. 100. November 29, 1857. 
Extracts from Hamblin's journal, in Hamblin to Young, Nov. 13, 1871, as quoted in Walker, Ronald W., Richard E. Turley, and Glen M. Leonard. Massacre at Mountain Meadows. New York: Oxford UP, 2008. 358. Print. Brigham Young to Isaac C. Haight, Sept. 10, 1857, Letterpress Copybook 3:827–28, Brigham Young Office Files, Church Archives. Hamilton G. Park, statement, Oct. 1907, typescript, Collected Material concerning the Mountain Meadows Massacre, Church History Library, The Church of Jesus Christ of Latter-day Saints, Salt Lake City, UT.; Hamilton G. Park, statement, ca. 1910, Church History Library, The Church of Jesus Christ of Latter-day Saints, Salt Lake City, UT.; Haslam, 9-12; James Haslam, United States v. John D. Lee, Second Trial, Jacob S. Boreman Transcript, Jacob S. Boreman Collection, Huntington Library, San Marino, CA. 1:12 Silas Smith, United States v. John D. Lee, First Trial, Jacob S. Boreman Transcript, Jacob S. Boreman Collection, Huntington Library, San Marino, CA, 4:90-92 Andrew Jenson, notes of discussion with William Barton, Jan. 1892, Mountain Meadows file, Jenson Collection, Church Archives Ibid., 180-1. Nephi Johnson Testimony, Witness for the Prosecution at Second Trial of John D. Lee, September 14 to 20, 1876: "Q: At the Meadows, before you left there, was it not told you in a speech then made to you, that it must be kept secret; that it would be best to keep silent? Were you not so advised by your leaders? A: Yes, sir. Q: Who gave that advice? Who ordered you to keep silent? A: Klingensmith and Haight gave the advice." "Cedar City Ward Relief Society, Minutes, September 10 and December 10, 1857, and March 11, 1858." LDS.org. N.p., n.d. Web. 22 July 2016. <https://www.churchhistorianspress.org/the-first-fifty-years-of-relief-society/part-2/2-6>. April 15th, 1856, Wilford Woodruff's Journal Mosiah 23:14 Minert, Roger P. Under the Gun: West German and Austrian Latter-day Saints in World War II. BYU, Religious Studies Center, 2011. 128.
Nelson, David Conley. Moroni and the Swastika: Mormons in Nazi Germany. N.p.: n.p., n.d. 282-84. Print. "The Fuhrer's New Clothes: Helmuth Huebner and the Mormons in the Third Reich." Sunstone, vol. 5, no. 6, p. 23 Nelson, David Conley. Moroni and the Swastika: Mormons in Nazi Germany. N.p.: n.p., n.d. 309. Print. Cannon, George Q. Journal of Discourses. Vol. 12. 45. April 21, 1867 "Interview with Brigham Young." The Deseret News [Salt Lake City, Utah] 23 May 1877 Turner, John G. Brigham Young: Pioneer Prophet. Cambridge, MA: Belknap Press of Harvard University Press, 2012. 254. Young, Ann Eliza. Wife No. 19. Hartford, Conn.: Dustin, Gilman, 1876. 32. Bergera, Gary James. "Nailing down the Nightmare of Mountain Meadows Massacre." The Salt Lake Tribune. N.p., 21 Nov. 2008. Web. <http://archive.sltrib.com/story.php?ref=/opinion/ci_11042948>. Young, Brigham. Journal of Discourses. Vol. 18. 248. June 23, 1874 10 thoughts to ""If Thine Eye Offend Thee" (Part 1 – Blind Obedience? – To Question or Not to Question? That is the Question!)" Alayna July 31, 2016 at 7:07 pm As dark and heart-aching as these stories are, they point out some very valid points. Points that shouldn't be taken lightly. In a world full of pop idols, sports idols, and fantasy idols, are we in danger of making our leaders idols? Though their counsel is often inspired and wise, we should never forget to look to the one True King, the one in the end who will decide if we are worthy to live like Him. I recently read something by Paul, something that goes along well with this. "And I, brethren, when I came to you, came not with excellency of speech or of wisdom, declaring unto you the testimony of God. For I determined not to know any thing among you, save Jesus Christ, and him crucified. And I was with you in weakness, and in fear, and in much trembling.
And my speech and my preaching was not with enticing words of man's wisdom, but in demonstration of the Spirit and of power: That your faith should not stand in the wisdom of men, but in the power of God." (1 Corinthians 2:1-5) Because in the end, we all answer to the same God. Respect your leaders who walk in flesh and blood now, but ask yourself this, as flesh and blood only last for a short while. In the end who would you look to more? The flesh and blood of man, or the Spirit of our Lord. We are all going to answer to the same judge, so make sure you know who you're answering to now. Zachariah Thorne July 31, 2016 at 7:22 pm "I do hope and pray my brethren and sisters to pay attention, that the Spirit of the Lord may be in your hearts, that you may see and understand things as they are." I think that this is a very powerful and thought-provoking quote. Because understanding things as they are, and not how things are perceived by others, is a huge underlying problem in today's world. Especially in the youth; kids more and more nowadays are following "leaders" blindly and trusting in media or other sorts of leaders or figures they idolize. Inquiring of the Lord and trusting your heart in such matters should always come first. I enjoyed this article a lot and appreciate the time and resources that went into it. Aaron August 1, 2016 at 3:39 am The historical examples of the Mountain Meadows Massacre and the resistance of Helmuth Hübener and his young troop against the Third Reich, and concurrently their local congregation, are piercing accounts of when men appointed to Priesthood leadership have stepped out of their bounds and aligned against God's commandments. Accounts with tragic but alternate endings. The grim state of those men, prodded on by leaders who appeared secure in their own judgment concerning the will of the Lord, men who acknowledged their duties to man before their duty to God, is a somber warning.
In contrast, Helmuth Hübener's story, though a tragic one, as he stood before the Blood Tribunal severed from country and Church but justified before God, is a mighty witness. It's easy to look back and say we would never have participated in such cowardly genocide or never supported a tyrannical Nazi regime! While hindsight may be 20/20, the lines are more blurred when they are experienced in reality. Would we, for example, be willing to speak out so boldly against the "flattery of prominent men in the world? Or false educational ideas?" Will we be willing to stand by the revealed word of God when it doesn't coincide with modern cultural norms or political correctness? Would we be so readily alert to red flags if those ideals were coming from our Priesthood authority? I, for one, plan to think carefully before answering. President Young was a man through whom revelation poured. He once stated: "When I tell the truth, that is enough, and I care not whether those who hear it believe it or not, for that is their business… If we do not speak to you by the Spirit of revelation and the power of God, we do not magnify our calling. I think that I tell you the words of the Lord Almighty every time I rise here to speak to you… If I do not speak here by the power of God, if it is not revelation to you every time I speak to you here, I do not magnify my calling." As you have masterfully illuminated, Brigham Young was in no way the type who said "take my word for it". He continued: "Here are my brethren and sisters pouring out their souls to God, and their prayers and faith are like one solid cloud ascending to the heavens. They want to be led right; they want the truth; they want to know how to serve God and prepare for a celestial kingdom. Do you think the Lord will allow you to be fooled and led astray? No." (JD 9:140-141). 
All this comes down to one main point: the Lord reveals doctrines and principles to the saints according to their spiritual capacity and willingness to accept and understand. In essence, we will be given exactly what we as a people want. So can we be misled in our own Church? Even by the upper echelons? Elder Bednar gave this counsel to anyone called to authority: "We also must be careful to remember in our service that we are conduits and channels; we are not the light." (Elder David A. Bednar, Act in Doctrine, p. 130) It stands that moral character and worthiness are a prerequisite to this magnification (Doctrine and Covenants 18:31), and it would be naive for us to suggest that the dictates and revelations of God change as society "evolves". If anything, the more we reject the more we will lose (Alma 12:10-11), and if we choose to be led astray, our revelation will be replaced with a counterfeit (Isaiah 30:8-14). So the answer is verily yes: as Judas of old, there are men called and ordained with the authority of God who have not magnified their callings. I'm content to leave that to the Lord. As the article has perfectly illustrated, this isn't about pointing fingers but standing on our own two feet and taking responsibility for ourselves. It is about opening our eyes and facing the music. We cannot subsist upon borrowed light, or we will risk making mistakes with tragic temporal and spiritual consequences. It's critical that we teach and understand these principles and stories, as they not only dispel the many rumors concerning the history but encourage more accountability among those in authoritative positions and the "lay" membership. Even if they are not the "warm fuzzies" we're used to, even if the concept scares us, even if we feel like our security blanket is ripped from us until we've been left to "fend for ourselves." We need to understand. 
The scriptures are replete with the examples of those who were asking, seeking and knocking, the opposite of blind obedience. The story of Nephi, almost prefacing the entire Book of Mormon, has always struck a chord with me and was an oft-cited example growing up. He is well known for his faithful obedience. Led into a vast, impregnable wilderness upon the word of his father and witnessing the conflict among his family, the young man decided to find out for himself the inspiration behind the pursuit. Do we realize what a daunting idea that may have been? Nephi's father was not only the Prophet but his Patriarch; though the important role of the father has sadly diminished in our society, for the young Jewish boy being respectful and submissive to such a figure was the pinnacle of honoring his Priesthood. While his brothers were apt enough for rebellion, Nephi undoubtedly had a reservoir of respect for his father, and to question may have been a frightening proposition. However, his great desire to know of the mysteries of God for himself trumped any uncertainties there may have been upon such a course. Even while his brethren were disputing and reasoning among themselves concerning the matter (1 Nephi 15:2-3), he inquired for knowledge himself (1 Nephi 2:16) and, due to his diligence in keeping the commandments, received his answer (1 Nephi 5:11). The Lord has not and never will ask for servile compliance, but for every member of Zion to develop the discernment to know and govern themselves. The history and admonition of the Prophets cited in this article are worth reflecting on. Do I have the courage to ask good and honest questions, seeking to know with real intent for myself? Could I follow the Savior's admonition if "thine eye offend thee"? Am I basing that understanding upon the revealed word of God without contradiction or influence of my own notions? I hope to answer in the affirmative. 
I've learned that the quest for further light and knowledge is an individual journey. Men are personally responsible to and for the knowledge granted unto them, and they stand alone before God, not man, for their actions. Thank you so much for writing this paper! Adam August 1, 2016 at 4:23 am I am grateful to be better informed on the Mountain Meadows Massacre. This event is used by many anti-Mormons. It is important that the perpetrators were clarified, as well as their intentions, using the actual account concerning Brigham's views on the matter. Concerning following blindly, I cannot help but remember an account with Moses and the Israelites. This account is found in Numbers 11. Moses was commanded by God to choose seventy elders to assist him. Moses had written down the names of the men that were appointed to be the seventy. The gathering place was at the tabernacle so that they could receive a blessing from God. All but two men (Eldad and Medad) had come to the tabernacle. So Moses decided to go ahead with the ceremony without them. Eldad and Medad were absent because they had been moved upon by the Holy Spirit to prophesy to the people. After hearing the two men, Joshua informed Moses, saying "My lord Moses, forbid them." Moses then replied, "Enviest thou for my sake? Would God that all the Lord's people were prophets, and that the Lord would put his spirit upon them!" Heavenly Father wants His children to grow and become more like Him because they choose to. Lucifer and the sons of perdition are crafty and have on occasion even imitated angels. However, I believe that our loving Savior Jesus Christ and His Father have given us a powerful foundation through Their commandments and by the Gift of the Holy Ghost. "For he is the same yesterday, today, and forever; and the way is prepared for all men from the foundation of the world, if it so be that they repent and come unto Him." 
(1 Nephi 10:18) Are our lamps filled with oil so that the Bridegroom can recognize us and allow us into the wedding? Though we honor His anointed, our salvation is not attained by borrowed light… Norman Svelund August 1, 2016 at 10:16 pm A great article that I want to save. However, I was unable to print it out. Can you resend it to me where I will be able to print this article? Joseph Smith Foundation August 10, 2016 at 4:58 pm One of our team members just emailed a PDF version of the article to your address. We also recently added a "Print PDF" button to the top of each page on the Joseph Smith Forum Papers site. Thank you for your interest! Jerry Gunsalus August 2, 2016 at 7:28 pm Always good to know the truth! John Robertson February 23, 2017 at 5:27 am I have only gotten as far as your Brigham Young quote at the beginning, so I can't comment on the rest, but I want to note that the thing about thwarting the designs of God is quite significant. We are not saved except by being changed, and we are not changed except by keeping the great commandment given at the gate: "Receive the Holy Ghost". If we are living blindly then we are not learning to discern truth from error. There is no point in giving men the gift of the Holy Ghost for them to spend their whole lives tinkering about with whether or not it has confirmed the truthfulness of the church to them. That is a bare starting point. James G April 4, 2017 at 4:38 pm A full four years after Mountain Meadows, this is what President Brigham Young had to say about blind obedience and the church's risk of being led astray. I tried to edit it down for brevity but hate doing it; in fact, it is better to read the whole address for context and better understanding. President Brigham Young, prophet of the Lord, really let it spill out here. 
"When I tell the truth, that is enough, and I care not whether those who hear it believe it or not, for that is their business. If you had lived in the days of Jesus, Peter, John, etc., and had seen men called to be Apostles of the Lord Jesus, every time they taught the people, … if they did not do it by the Spirit of revelation and by the power of God, they did not magnify their calling. There are not many who know this. If we do not speak to you by the Spirit of revelation and the power of God, we do not magnify our calling. I think that I tell you the words of the Lord Almighty every time I rise here to speak to you. I may blunder in the use of the English language; but suppose I should use language that would grate on the ears of some of the learned, what of that? God can understand it, and so could you, if you had the Spirit of the Lord. … The Spirit of revelation is the best grammar you ever studied. … What do we care about words? Chiefly to speak and to hear others speak so as to be understood. We have our language; but if a man speaks by the power of God, it is little matter to me what his words are, or the language he uses. …I understood the spirit of their preaching; I received that spirit; it was light, intelligence, power, and truth, and it bore witness to my spirit, and that was enough for me. … If I do not speak here by the power of God, if it is not revelation to you every time I speak to you here, I do not magnify my calling. What do you think about it? I neither know nor care. If I do not magnify my calling, I shall be removed from the place I occupy. God does not suffer you to be deceived. Here are my brethren and sisters pouring out their souls to God, and their prayers and faith are like one solid cloud ascending to the heavens. They want to be led right; they want the truth; they want to know how to serve God and prepare for a celestial kingdom. Do you think the Lord will allow you to be fooled and led astray? No. 
Brother Kimball said, today, when he was speaking, if you suffer yourselves to find fault with your Bishop, you condescend to the spirit of apostasy. Do any of you do this? If you do, you do not realize that you expose yourself to the power of the Enemy. What should your faith and position be before God? Such that, if a Bishop does not do right, the Lord will remove him out of your Ward. You are not to find fault. As brother Wells has said, speak not lightly of the anointed of the Lord. But you say they are out of the way. Who has made any of my brethren a judge over their Bishop? You read in the Book of Doctrine and Covenants, in a revelation to Joseph Smith (brother Kimball and myself were present), that it takes twelve High Priests to sit in council upon the head of a Bishop. D&C 68:22-24 Can they judge him? No; for they must then have the Presidency of the High Priesthood to sit at their head and preside over them. Yet many rise up and condemn their Bishop. Perhaps that Bishop has been appointed expressly to try those persons and cause them to apostatize. A great many will not apostatize until they arrive here; and who knows but what the Lord has prompted a Bishop to do so-and-so to cause somebody to apostatize. One of the first steps to apostasy is to find fault with your Bishop; and when that is done, unless repented of, a second step is soon taken, and by-and-by the person is cut off from the Church, and that is the end of it. Will you allow yourselves to find fault with your Bishop? No; but come to me, go to the High Council, or to the President of the Stake, and ascertain whether your Bishop is doing wrong, before you find fault and suffer yourselves to speak against a presiding officer. I want you to have faith enough concerning myself and my Counselors for the Lord to remove us out of the way, if we do not magnify our calling, and put men in our places that will do right. 
I had the promise, years ago, that I never should apostatize and bring an evil upon this people. God revealed that through Joseph, long before he died; and if I am not doing right, you may calculate that the Lord is going to take me home. He will not send me to hell, but he will take me home to himself. "I will take you up here, Brigham, and give you a few lessons." I am going where He is, for I have that promise, and so have many others. I am telling you these things for your comfort. In all this there are no new principles and doctrines, though it is new to many of you. You must have faith in God that he will lead his people right, in a way to preserve them from every evil." Brigham Young, JD 9:137 Michelle April 20, 2017 at 5:10 am Wonderful article! This was so inspiring. Thank you!
Culture & Jobs at Duo Security Duo Security, now part of Cisco, is the leading multi-factor authentication (MFA) and secure access provider. Working at Duo Security Hiring in: Computer & Network Security Employees: 1,000 duo.com Duo + Cisco = Disco With the Most Loved Company in Security and the global leader in network technology joining forces, there are more exciting opportunities than ever to be at the forefront of securing the cloud. Our mission is simple: democratize security by making it easy and effective for everyone. We're transforming security from the ground up by solving the world's most pressing geopolitical challenge — safe, secure information access. We engineer our business to enable our customers to easily address their ever-evolving security challenges. We believe that impactful work is rewarding work and that our team is at its best when everyone feels empowered to bring their whole self to work. We learn together by hiring for cultural contribution, not cultural fit, and recognize that diversity in background and thought are essential to building high-impact teams. We invest in growth and learning opportunities and encourage our people to never stop learning. We foster collaboration and believe in being recognized (and rewarded!) for hard work. We champion a healthy work-life balance. We're kinder than necessary. Together we build for the future by designing simple solutions for complex problems. And that's why we're the most loved and trusted name in security. Read more about Duo's impact on the cybersecurity industry here. Letter to future employee Dear future team member, We're on the frontlines of the security industry, solving what we consider to be the world's most pressing geopolitical challenge - changing security as we know it. They say you're as good as the company you keep, so we're excited to fill our bench with a star lineup that guides our growth and innovation. 
We're looking for enthusiastic, proactive people who are driven to help others, make the world a better place through technology, and cultivate their career path along the way. If you get a kick out of collaborating with inspiring teammates, creating and supporting products that really make a difference, we want you. We're a people-first business focused on solutions you actually want to use - and we're looking for like-minded folks to help achieve that mission. #WeAreDuo A Day in the Life at Duo Security Duo Security office in Ann Arbor Jobs at Duo Security No open jobs. Check back later! Duo Security In The News The 2023 Purpose Awards This year has certainly had its share of ups and downs. It's true that the Midwest isn't totally immune to the widespread tech layoffs that are happening on the coasts, but overall the Midwest tech community is still going strong. Times might be tough. Our community is tougher. 10 Midwest Startups Focused on Community This time of year we like to take a moment to reflect on giving back to the community. One of the biggest themes of giving back in the startup world is returning the love to the people and the planet that provide support when you're getting started. As part of this reflection, we'd like to bring to light a few companies that keep the community and our environment top of mind. Best Places to Work in Ann Arbor in 2023 Ann Arbor is a growing hub for startup and tech companies. With a top university growing and supporting entrepreneurs, major tech companies like Google with hubs in the city, and a culture for innovation, Ann Arbor has some amazing places to work in the startup and tech ecosystem. 25 Top Midwest Cybersecurity Companies to Know The Midwest is a growing hub for cybersecurity companies. With major startup successes like Duo Security and well-established companies like Battelle located in the region, it's been fertile ground for a variety of cybersecurity startups. 
36 Ann Arbor Tech Companies & Startups Hiring in 2022 Ann Arbor continues to be one of the top growing tech ecosystems around the country with some great hiring tech companies. A combination of university talent pipelines, top-rated quality of life, a growing startup ecosystem, and growing VC funding has Tree Town topping lists as one of the best places to start a tech company or find a top tech job. Two southeast Michigan cybersecurity startups raise new funds Two cybersecurity startups in Ann Arbor and Detroit today announced new funding rounds. Best Places to Work 2022: Small Startups Americans are leaving their jobs in record numbers to search for something better. So when it comes to the best places to work in 2022, what does that look like? Best Places to Work 2022: Large Tech Companies 5 Reasons Why the Detroit Region is America's Next Tech Hub When people think of Detroit, their first thought typically isn't technology. They think of Henry Ford's Model T, our history of automotive innovation, and maybe even our reputation for advanced manufacturing—but rarely our tech ecosystem. Michigan's Startup & Tech Leaders on Emerging Tech Trends for 2022 The last two years changed a lot about business, particularly in the tech and startup space. In the Midwest, things are changing even more rapidly. We caught up with just a few of many Midwest tech leaders based in Michigan to ask them for their insights on the emerging trends in Midwest tech. How The Midwest Startup & Tech Scene Boomed in 2021 As the old saying goes, Rome wasn't built in a day, and neither was Silicon Valley, nor the Silicon Prairie. The list is out for 2023 — check out the Top Ann Arbor Tech Companies & Startups to Watch in 2023 Ann Arbor continues to be one of the top growing tech ecosystems around the country with some of the best startups to watch. 
A combination of university talent pipelines, top-rated quality of life, growing startup ecosystem, and growing VC funding has Tree Town topping lists as one of the best places to start a tech company or find a top tech job. The list is out for 2023 — check out the Top Detroit Startups to Watch in 2023 Michigan is now the state with the highest growth in VC investment. Now many Detroit startups are on the fast track to growth. Whether it's new funding, expansions or IPOs, it's been an eventful year in Detroit startups. Next year is looking even better. 2022 Purpose Awards: Meet the Midwest Startup Community Builders When you think about tech hubs, do you think of the Midwest? Ten years ago, most people didn't. And right now, many people still think of Silicon Valley, New York City, Boston, maybe even Austin now as a place to go for tech. 2022 Ann Arbor Companies with the Best Benefits Company Benefits are evolving like never before, and in-demand tech and startup workers are looking for new types of employee benefits, culture, and purpose. Remote workers are choosing to live in the Midwest. Here's why. During the pandemic, many people who moved back with family moved to the Midwest, and if you did that's not bad news for your career. New Ann Arbor Ecosystem Report Reveals Record-Breaking Year of Growth Ann Arbor as a tech ecosystem has grown leaps and bounds over the last few years with major acquisitions of Duo Security and Llamasoft and the rise of new, cutting-edge startups. An entrepreneurial ripple effect is starting to highlight the power of the startup community. And it's not slowing down. 17 Midwest Startups Hiring for Marketing Jobs From content marketing and SEO jobs to growth marketing, Midwest tech startups have a growing number of job opportunities for marketing professionals. And the Midwest is also home to a growing number of marketing and media startups. 
15 Midwest Startups & Tech Companies Hiring for Product Roles Are you interested in a product job at a tech startup? You might only think of the coasts when it comes to product, because historically the Midwest tech scene was focused on IT and engineering. How To Keep Your Diversity & Inclusion Plan Going It's been over a year since the murder of George Floyd launched the country into a time of racial reckoning. For many, it was more than a moment (to quote Hamilton, "this is not a moment, it's the movement.") But for many, momentum has fallen. The Rise and Future of the Columbus Startup Ecosystem A place that was once known as Test City, USA, due to its close resemblance to the overall population and preferences of our country as a whole, is now the 14th largest city in the US and growing. Many people know Columbus, Ohio, as the home of the Ohio State Buckeyes or as the capital of a state that produced 8 US presidents. However, some things that the general population may not know is that Columbus is within 500 miles of 50% of the US population, was home to the first woman to fly around the world, The Entrepreneurial Ripple Effect is Happening in Detroit It's no secret that one successful startup can birth many more. In Detroit, this phenomenon is starting to take off with multiple startups beginning out of a growing number of successful home-grown tech companies. The Best Coworking Spaces in Detroit for your Hybrid Work Model Many companies are emerging from the COVID-19 pandemic with a hybrid work model, meaning some employees will return to the office at least part-time, but others will collaborate remotely. Navigating what this working plan actually looks like can be tricky. When — and where — can your team get together? 19 Great Tech Companies Hiring Remote Startup Jobs Companies everywhere are hiring for remote jobs. 
47 Top Startups & Tech Companies Hiring at the Virtual Career Fair On April 13th, we hosted the Midwest Startup & Tech Virtual Career Fair: Best Places To Work. We had tons of companies join and hundreds of jobs seekers. But if you missed the event and are looking for a job in tech—whether that's in software engineering, marketing, people, product, sales, or other—don't worry. You can still apply to jobs at these amazing companies. The profound ROI on having fun Entrepreneurs are artists that produce at scale. Candidates Should Ask These 6 Questions About Diversity and Inclusion Diversity, inclusion, equity and belonging. It's something many startups and tech companies say they value and are actively working on. But the truth is, not all of them are walking the walk. Top Ann Arbor Tech Companies to Watch in 2021 The list is out for 2022 — check out the Top Ann Arbor Tech Companies & Startups to Watch in 2022 Ann Arbor is perfectly positioned to be one of the best cities for tech jobs. Not just in the Midwest, but in the country. Meet The Winners of the 2020 Purpose Awards There's no doubt that 2020 will go down in the history books. A once-in-a-lifetime pandemic, a national reckoning with racial injustice, record unemployment, a divisive election, and a lot of other news that seems so small compared to everything else. AaDya Security raises $2.7 million, continues upward trend in security funding AaDya Security just announced that they've closed a $2.7 million funding round, led by Firebrand Ventures (Kansas City, Mo.), 645 Ventures (NYC), and Next Coast Ventures (Austin, Texas). Top Midwest Tech Companies Hiring Software Engineers Right Now Yes. You read that title right. Top tech companies are hiring—right now. Woot woot, cue the happy dance. How to Write a Job Description that Brings in Top Tech Talent "It's just a job description. It's not a big deal." 
8 Reasons You'll Love Working and Living in the Midwest With the rise of remote work becoming more and more common, even at coastal tech giants like Facebook and Twitter, people are looking at the Midwest in a whole new light. The Duo Blueprint: How Ann Arbor's Duo Security is Changing the Midwest Startup Scene "Our small city of freaks and geeks, fast learners, and team players has been a crucible for startups," said Dug Song, Co-founder & General Manager of Duo Security at Cisco. Culture contribution: why we no longer say 'culture fit' We'll be the first to say that we are always learning. That's what we do at startups—we listen, learn, think, embrace new ideas, and make changes. Keep That Same Energy: What We Learned at the Diversity and Inclusion Panel On June 23rd, purpose.jobs and DVP hosted a virtual panel discussion on diversity and inclusion. We had nearly 200 people join us on Zoom from various parts of the country and even around the world. Our Response to COVID-19: We're on Team You These are unbelievably tough times for everyone, everywhere. Ann Arbor's Shine & Rise Offers Resources, Community to Women in Tech When Kristina Oberly left her job for an Ann Arbor tech startup, she was only the 4th hire at the company—and the only woman. The Top Ann Arbor Startups To Watch In 2020 Looking for the most recent startups to watch? Check out these lists: Top Ann Arbor Tech Companies to Watch in 2021 Top Ann Arbor Tech Companies & Startups to Watch in 2022 How One Software Engineer Found Her Dream Job What are the best jobs for software engineers? And how do you really know what it's like to work at a company before you actually work there? Behind the Scenes: Duo Security & Ashley Vartyak Many startup companies launch with the intent of creating the perfect mix of culture and financial autonomy. Although inspiration and pure grit are necessary tools for startup success, they can't be the sole basis for a solid business plan. 
Authors often describe writing a book as going on a journey. You set out with a clear direction, visit some specific resting places on the way and arrive at an obvious, final destination. So it was with writing and researching about organizational reward and recognition. Some of the accepted concepts and practices that I have witnessed in the last 30 years came under very close scrutiny. So much so that I began to collect a number of so-called 'facts' about managing reward and recognition that revealed themselves as shady imposters. Here are some of them. It may be counter-intuitive to CEOs and financial directors to suggest that more money only ever produces a marginal effect on performance and in many cases actually impairs good performance, but it's true. In 1990, a study by Jensen and Murphy looking at the relationship between executive pay and corporate performance, which analysed over 2,000 executives across 1,200 organizations in the USA, found that there was no correlation between PRP (performance-related pay) and quoted company performance as measured by the share price. They even found that 'executives tended to be overpaid for bad performance and underpaid for good performance'. The Mazda Motor Corporation ran an incentive for their B-Series trucks across 900 dealers to 'move the metal'. There was heated debate amongst the management team about whether the incentive should be cash or merchandise. As neither faction would give way, they decided to reward half the dealers with cash at $75 per unit sold and half the dealers with an awards catalogue with the same reward value for each sale. The cash dealers improved their sales performance by just 2.28 per cent. The non-cash dealers went well beyond their targets to a 15.65 per cent increase. In 2004 Scott Jeffrey, a junior professor at the University of Waterloo, Ontario, tested some volunteer students with a word game. One group was rewarded with money; the second group was offered nothing. 
The third group was offered a therapeutic massage of a length determined by their prowess at the word game. The cash group performed 14.6 per cent better than the group with no reward, as you might expect. But the massage group outstripped the cash group with an improvement of 38.6 per cent. What conclusions would you draw about the effectiveness of cash versus non-cash as an efficient reward mechanism for above-average performance? If we are paying 30 per cent of our remuneration budget for benefits, why the hell would we need recognition as well? This is one of those issues where the senior team has forgotten to ask themselves what benefits are actually for. I went back to the original two-factor theory of Herzberg. In this study of just 200 employees in Pittsburgh, USA, he was attempting to isolate what factors made people more satisfied/motivated at work and therefore perform better for the business… and hopefully stay longer. He discovered in broad terms that achievement opportunities and recognition can increase job satisfaction by 40 per cent or more. But he also learned that factors such as salary/perks, work conditions and status push satisfaction levels by just 5 per cent at most and are often of neutral use for performance improvement. It is clear from other studies that benefits are just a loyalty device which may prevent an employee leaving earlier than they otherwise would. Recognition of achievements in the workplace fosters higher performance, promotes loyalty and ultimately makes the organization more profitable or effective. Is employee engagement just another word for doing internal comms better… and does it make a difference anyway? LV, the car insurance people, introduced a new employee engagement scheme based on values research with staff and an extensive internal comms campaign that went on for six years. Employee engagement was benchmarked at the outset at 64 per cent. 
By the end of the six years engagement levels had risen to 83 per cent, and profits rose over the same period by 327 per cent. Towers Watson, the global outsourcing provider, produced a study of 85,000 employees across 16 countries in 2005 measuring the correlation between highly engaged staff and stock market performance. It concluded that those with highly engaged employees had improved their EPS (earnings-per-share) growth rate by 28 per cent, compared with just 11.2 per cent for organizations with averagely engaged employees. Sears, the department store group, began an extensive employee engagement initiative in the early 1990s when many retailers were cutting costs and losing staff. They discovered that for every 5 per cent of improved engagement from employees they could add a predictable half of one per cent in sales revenue growth. Between 1990 and 1993, Sears went from a $3 billion loss to a net profit of $752 million. On the journey I came across many unexpected things… there are now more mobile phones than people in the world, travel/leisure concepts are the most effective reward, better engagement needs to come from the CEO, not HR, most millennials use up to five social media channels a day in addition to those provided by their employer, and very few people go to work setting out to perform badly. John Fisher is Managing Director of FMI, the brand engagement specialists, and author of Strategic Reward and Recognition (2015, Kogan Page).
John Newton Cooper CBE (17 July 1923 – 24 December 2000) was the co-founder, with his father Charles Cooper, of the Cooper Car Company. Born in Surbiton, Surrey, England, UK, he became a motorsport legend with his rear-engined chassis design, which eventually changed the face of the sport at its highest level, from Formula 1 to the Indianapolis 500.
\section{Introduction} Consider the wave equation on a Riemannian manifold $X:$ $$ \Box u=0\text{ on } \RR\times X $$ where $\Box=D_t^2-\Lap_g,$ $$ \Lap_g=\sum \frac{1}{\sqrt{g}} D_j g^{jk} \sqrt{g} D_k $$ and $D_j\equiv i^{-1} \pa_{x_j}$. If $X$ happens to be an odd dimensional Euclidean space, then \emph{Huygens' Principle} applies, i.e., the solution $$ \cos t\sqrt{\Lap} \delta_q $$ which has initial data a delta-function (and initial derivative zero) is supported exactly on the sphere of radius $\abs{t}.$ In even space dimensions, or on a general odd dimensional manifold, this principle is well known to fail, but quite a nice proxy for it persists: we in general have $$ \singsupp u(t) \subset \big\{p:\ \text{there exists a geodesic of length } \abs{t} \text{ with endpoints }p,q\big\}. $$ (Recall that the singular support of a distribution is the set of points near which it is not locally a smooth function.) A more precise result yet is the refinement of this statement to deal with the \emph{wavefront set} of the distribution $u;$ $\WF u$ is a conic closed subset of $T^*X$ such that $\pi \WF u=\singsupp u.$ H\"ormander's rather general theorem \cite{Hormander9} on propagation of singularities tells us in this special case that for a solution $u$ of the wave equation, $\WF u$ is invariant under the (forward and backward) geodesic flow on $T^*X.$ Thus the initial wavefront given by (the lift to the light cone of) $N^* \{q\}$ then spreads into the conormal bundle of expanding distance spheres. Generalizing this result to manifolds with boundary (with Dirichlet or Neumann boundary conditions) turns out to be a rather complicated story. Chazarain \cite{Ch:73} showed that singularities striking the boundary transversely simply reflect according to the usual law of geometric optics (conservation of energy and tangential momentum, hence ``angle of incidence equals angle of reflection'') for the reflection of bicharacteristics.
The difficulties arise, however, in the treatment of geodesics tangent to the boundary: in \cite{Melrose-Sjostrand1} and \cite{Melrose-Sjostrand2} Melrose--Sj\"ostrand showed that, at these ``glancing points,'' singularities may only propagate along certain generalized bicharacteristics. By parametrix constructions of Melrose \cite{Melrose14} and Taylor \cite{Taylor1}, these $\CI$ singularities do \emph{not} propagate along concave boundaries (e.g.\ they do not ``stick'' to the exterior of a convex obstacle). Note that this last result ceases to be true in the analytic, rather than smooth, category. A simple summary of some of the fundamental results in the subject is provided by Figure~\ref{figure:fundamental}. \begin{figure}[bth] \includegraphics[scale=1.3,angle=-90]{obstacle3.pdf}\caption{Singularities of the fundamental solution of the wave equation exterior to a convex obstacle.}\label{figure:fundamental}\end{figure} This figure shows the singularities of the fundamental solution of the wave equation in the exterior of a convex obstacle in the plane. There is (part of) a circular front of directly propagated singularities as well as a curved front of singularities reflected off the obstacle in accordance with Snell's law. Most crucially, \emph{there are no singularities behind the obstacle} in the ``shadow region,'' as a consequence of the parametrix construction of Melrose and Taylor. By contrast, it has been known since the late 19th century (starting with work of Sommerfeld \cite{Sommerfeld1}) that if the obstacle has a sharp corner, singularities \emph{do} propagate, i.e., \emph{diffract,} into the shadow region behind the obstacle. Figure~\ref{figure:wedge} shows the fundamental solution of the wave equation in the exterior of a wedge; we can easily see a circular wave of singularities emanating from the tip of the wedge and giving rise to singularities in the shadow region.
\begin{figure} \includegraphics[scale=0.7]{wedge-edited.pdf}\caption{Singularities of the fundamental solution of the wave equation exterior to a wedge. \label{figure:wedge}}\end{figure} As alluded to above, general boundaries present special difficulties of their own, so in order to study the diffraction phenomenon in a simple setting, we now mostly set aside this class of manifolds, and focus on \emph{manifolds with conic singularities} where wave equation solutions will exhibit diffraction, but the geometry of geodesics is relatively manageable. \section{Conic geometry} We define a \emph{conic manifold} to be a manifold $X$ (of dimension $n$) with boundary $Y=\pa X,$ and a Riemannian metric on $X^\circ$ such that in terms of some boundary defining function $x$ we have in a collar neighborhood of $Y,$ $$ g=dx^2 +x^2 h $$ where $h$ is a smooth symmetric 2-cotensor such that $h|_{Y}$ is a metric on $Y.$ Note in particular that $g$ degenerates at $\pa X$ so as not to be a metric uniformly up to the boundary. The upshot is that while $X$ looks like a manifold with boundary from the point of view of $\CI$ structure, it is metrically a manifold with conic singularities: from the point of view of metric geometry, if we write the connected components of the boundary as $$ Y=\bigsqcup Y_i $$ then each boundary component $Y_i$ should be viewed as a \emph{cone point}. (See Figure~\ref{figure:conicgeometry}.) \begin{figure}[bth] \includegraphics[scale=.2,angle=-90]{conicmfld.pdf} \includegraphics[scale=.2,angle=-90]{conicmfld2.pdf} \caption{Smooth structure, and Riemannian picture of $X$ \label{figure:conicgeometry}} \end{figure} The conic manifold as defined here should thus be viewed as a manifold with conic singularities \emph{already equipped} with the blow-up that has desingularized it to a smooth manifold with boundary. Here the cost of having a smooth manifold is of course having a degenerate metric. 
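To fix ideas, the simplest (entirely standard) example is the flat cone; the total angle $2\pi\beta$ below is an illustrative parameter, not notation used elsewhere in this paper:

```latex
% Flat cone of total angle 2\pi\beta, in the conic normal form above:
%   X = [0,\infty)_x \times (\RR/2\pi)_\theta,
\[
g = dx^2 + x^2 h, \qquad h = \beta^2\, d\theta^2, \qquad \theta \in [0,2\pi),
\]
% so the link (Y, h|_Y) is a circle of circumference 2\pi\beta.
```

For $\beta=1$ this is just the Euclidean plane in polar coordinates, with a ``fictitious'' cone point at the origin; for $\beta\neq 1$ the origin is a genuine metric singularity.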
A very special case of a conic manifold is that of a surface obtained by gluing together two copies of the interior (or exterior) of a polygonal planar domain along their common edges. This gives a flat surface with cone points where the polygon had vertices. The study of the wave equation on the original domains with Dirichlet/Neumann conditions is equivalent to the study of odd/even solutions of the wave equation on the doubled manifold---see Hillairet \cite{Hillairet:2005}. The behavior of geodesics on conic manifolds is of considerable interest near the cone point. The crucial observation is that it is in fact quite hard to aim a geodesic so as to hit the cone point: most will pass nearby and miss. Indeed, starting out near the cone point, there is a unique direction to aim in, in order to reach a nearby cone point. \begin{proposition}[Melrose--Wunsch \cite{Melrose-Wunsch1}] Every $y \in Y=\pa X$ is the endpoint of a unique geodesic; these geodesics foliate a collar neighborhood of $Y$. \end{proposition} This is equivalent to a normal-form statement for the metric: we can find coordinates so that $h=h(x,y,dy)$ has no $dx$ components, and thus the curves $x=x_0 \pm t, y=y_0$ are unit-speed geodesics. A crucial point in trying to make sense of propagation of singularities is to make a reasonable definition of the continuation of a geodesic that reaches a cone point.
There are two reasonable candidates for this definition, one more restrictive than the other, and both play a role here: \begin{definition} We define geodesics passing through $\bigsqcup Y_j \equiv \pa X$ as follows: \begin{itemize} \item A \emph{diffractive geodesic} is a geodesic which, upon reaching the boundary component $Y_i$ along a geodesic ending at a point $y\in Y_i,$ immediately then leaves the boundary from some point $y' \in Y_i.$ \item A \emph{geometric geodesic} is a geodesic which, upon reaching the boundary component $Y_i$ along a geodesic ending at a point $y\in Y_i,$ immediately then leaves the boundary from some point $y' \in Y_i$ such that $y,y'$ are endpoints of a geodesic \emph{in $Y_i$} (w.r.t.\ the metric $h|_{Y_i})$ of length $\pi.$ \item A \emph{strictly diffractive} geodesic is one which is diffractive but not geometric. \end{itemize} \end{definition} A more intuitive definition of geometric geodesics is as follows: they are the geodesics that are \emph{locally approximable} by families of geodesics in $X^\circ$. We refer the reader to \cite{Melrose-Wunsch1} for more detail on these definitions. \section{Propagation of singularities on conic manifolds} Consider now solutions to the wave equation on a manifold with conic singularities. We always employ the \emph{Friedrichs extension} of the Laplacian acting on $\mathcal{C}_c^\infty(X^\circ).$ (This stipulation is important only in dimension two, where $\Lap$ is not essentially self-adjoint.) We now can (roughly) state the following: \begin{theorem}[Melrose--Wunsch \cite{Melrose-Wunsch1}] Singularities for solutions to the wave equation propagate along diffractive geodesics; strictly diffractive geodesics generically propagate \emph{weaker} singularities than geometric geodesics. 
\end{theorem} The genericity condition is that the incident singularities not be precisely \emph{focused} on the cone tip and applies, e.g., to Cauchy data that are conormal with respect to a manifold that is at most simply tangent to the hypersurfaces at constant distance from a cone tip. In this case---and in particular for the fundamental solution---we find that \emph{the diffracted wave for the fundamental solution is $(n-1)/2-\epsilon$ derivatives smoother than the main wavefront,} where $n$ is the dimension of $X.$ We remark that this result has been subsequently generalized to cover the cases of manifolds with incomplete edge singularities \cite{MVW1}, as well as manifolds with corners \cite{Va:04}, \cite{MeVaWu:13}. The rest of this paper is essentially applications and extensions of this result in various contexts. \section{Local energy decay on conic manifolds with Euclidean ends}\label{section:BW} Consider now a noncompact $n$-manifold $X$ with ends that are Euclidean. We will consider solutions to the wave equation $$ \Box u=0 $$ on $X$ with compactly supported Cauchy data in the energy space. If $X$ is a smooth manifold, it has long been known that the decay of local energy can be obstructed by the trapping of geodesics; recall that a geodesic is said to be forward- or backward-\emph{trapped} if it remains in a compact set as $t \to \pm \infty.$ Classic work of Lax--Phillips \cite{Lax-Phillips1} and Morawetz \cite{Morawetz:Decay} shows that, for odd $n,$ absence of trapping implies exponential local energy decay; on the other hand, results starting with Ralston \cite{Ralston:Localized} show that trapping of rays implies that exponential local energy decay cannot hold. The usual route to such estimates involves obtaining bounds on the \emph{cutoff resolvent} $$ \chi (\Lap-\lambda^2)^{-1} \chi,\quad \chi \in \mathcal{C}_c^\infty.
$$ It is well known that in odd dimensions this operator can be meromorphically continued from $\Im \lambda>0$ to $\CC,$ and its poles are known as \emph{resonances}. Exponential local energy decay is then obtained by showing that no resonances lie in some \emph{strip} $\Im \lambda>-\nu,$ $\nu>0$ (and that the resolvent has an upper bound with polynomial growth in this strip). The situation with conic manifolds is thus interesting for the following reason: as soon as we have more than one cone point (or, indeed,\footnote{The author is grateful to Yves Colin de Verdi\`ere for pointing out this possibility. In practice, it seems hard to create an interesting example of a non-simply connected manifold where the \emph{only} trapping is a strictly diffractive geodesic of this form. On the other hand one may probably add a complex absorbing potential to the problem to destroy other trapping and create non-simply connected examples.} at least one cone point if the manifold is non-simply connected) there must be trapped diffractive geodesics: we can simply continue traversing geodesics connecting the various cone points. An example of particular interest is (the double of) a domain exterior to one or more polygons in $\RR^2$: diffractive geodesics can move along edges of one polygon and also along lines connecting vertices of two different polygons. To what degree, one wonders, does this obstruct energy decay? The following theorem (which answers affirmatively a conjecture of Chandler-Wilde--Graham--Langdon--Spence~\cite{CWGLS:2012} for polygonal exterior domains) shows that the obstruction is very minor: \begin{theorem}[Baskin--Wunsch \cite{BaWu:13}]\label{theorem:BaWu} Assume that no three cone points in $X$ are collinear and no two are conjugate. Assume that geodesics missing the cone points escape to infinity at a uniform rate. 
For $\chi \in \CcI(X),$ there exists $\rho>0$ such that the cut-off resolvent $$ \chi (\Lap-\lambda^2)^{-1}\chi $$ can be analytically continued from $\Im \lambda >0$ to the region $$ \Im \lambda >-\rho \log \smallabs{\Re \lambda},\ \smallabs{\Re \lambda} >\rho^{-1} $$ and for some $C,T>0$ enjoys the estimate $$ \norm{\chi (\Lap-\lambda^2)^{-1}\chi}_{L^{2}\to L^{2}} \leq C \smallabs{\lambda}^{-1} e^{T\smallabs{\Im \lambda}} $$ in this region. \end{theorem} We contrast this with the standard result for smooth non-trapping perturbations of Euclidean space. In that case the methods of Vainberg \cite{Vainberg:Asymptotic} and Lax--Phillips \cite{Lax-Phillips1} yield precisely the \emph{same} resolvent estimate on $\RR$ and a slightly stronger result on resonance-free regions: \emph{any} region of the form $\Im \lambda>-\rho \log \smallabs{\Re \lambda}$ is free of resonances outside a large disc. Thus the effect of diffractive trapping by cone points is extremely weak. Previous results in this direction include energy decay results of \cite{Cheeger-Taylor2}, Section 6, in certain special cases of conic singularities; analogous results for multiple inverse square potentials were previously proved by Duyckaerts \cite{Duyckaerts1}. Burq \cite{Burq:Coin} gave a precise description of the resonances in the closely related case of two convex analytic domains in the plane, one of which has a corner facing the other. The diffractive trajectory here bounces back and forth between the corner and the other obstacle, and Burq showed the associated resonances lie along a family of logarithmic curves. We now briefly describe some results on evolution equations that follow from Theorem~\ref{theorem:BaWu}. We let $\mathcal{D}_s$ denote the domain of $\Lap^{s/2}$ (hence locally just $H^s$ away from cone points) and let $\sin t\sqrt\Lap/\sqrt\Lap$ be the wave propagator.
Let $\chi$ equal $1$ on the set where $X$ is not isometric to $\RR^n.$ In odd dimensions, the resolvent is a meromorphic function of $\lambda\in \CC$ (with no difficulties at $\lambda=0$) so in this case Theorem~\ref{theorem:BaWu} shows that there are only finitely many resonances in any horizontal strip in $\CC.$ This enables us to show the following by a contour deformation argument: \begin{corollary}\label{corollary:resexp} Let $n$ be odd. Under the assumptions of Theorem~\ref{theorem:BaWu}, for all $A>0,$ small $\ep>0,$ $t>0$ sufficiently large, and $f \in \mathcal{D}_1,$ $$ \chi \frac{\sin t\sqrt{\Lap}}{\sqrt{\Lap}} \chi f= \sum_{\substack{\lambda_j \in \Res(\Lap) \\ \Im \lambda_j >-A}}\sum_{m=0}^{M_j} e^{-it \lambda_j} t^m w_{j,m} + E_A(t) f $$ where the sum is over the resonances of $\Lap,$ i.e.\ over the poles of the meromorphic continuation of the resolvent, and the $w_{j,m}$ are the associated resonant states corresponding to $\lambda_j.$ The error satisfies $$ \norm{E_A(t)}_{\mathcal{D}_{1}\to L^{2}} \leq C_\ep e^{-(A-\ep)t}. $$ In particular, since the resonances have imaginary part bounded above by a negative constant, $\chi \frac{\sin t\sqrt{\Lap}}{\sqrt{\Lap}} \chi f$ is exponentially decaying in this case. \end{corollary} Another corollary is a local smoothing estimate for the Schr\"odinger equation. As it comes from the resolvent estimate on $\RR,$ this is again lossless as compared to the situation on free $\RR^n$: \begin{corollary} \label{corollary:local-smoothing} Suppose $u$ satisfies the Schr{\"o}dinger equation on $X$: \begin{align*} i^{-1}\pa_t u(t,z) + \Lap u(t,z) &= 0 \\ u(0,z) &= u_{0}(z)\in L^{2}(X) \end{align*} Under the assumptions of Theorem~\ref{theorem:BaWu}, for all $\chi \in C^{\infty}_{c}(X)$, $u$ satisfies the local smoothing estimate without loss: \begin{equation*} \int_{0}^{T}\norm{\chi u(t) }_{\mathcal{D}_{1/2}}^{2}\, dt \leq C_{T} \norm{u_{0}}_{L^2}^{2}.
\end{equation*} \end{corollary} The elements of the proof of Theorem~\ref{theorem:BaWu} are twofold. The first step is to show that a \emph{very weak Huygens principle} holds. We recall that in nontrapping manifolds, a solution to the wave equation with compactly supported initial data is eventually \emph{smooth}---this is the usual ``weak Huygens principle.'' Here we show instead that the solution eventually gets \emph{as smooth as we like}: \begin{proposition}\label{proposition:Huygens} Let $\chi \in \CcI(X).$ For any $s \in \RR,$ there exists $T_s\gg 0$ such that whenever $t>T_s,$ $$ \chi U(t) \chi: H^r \to H^{r+s} $$ for all $r.$ \end{proposition} The second part of the theorem is a modification of the celebrated pa\-ra\-met\-rix construction of Vainberg \cite{Vainberg:Asymptotic} (see also \cite{Tang-Zworski1}). This argument in its original form builds a parametrix for the resolvent out of the fundamental solution to the wave equation, assuming that the latter satisfies the weak Huygens principle; the new variant, by contrast, makes the weaker assumption of the output of Proposition~\ref{proposition:Huygens} and produces a very slightly weaker result (smaller resonance-free region). Among the further applications of this line of reasoning is the following theorem on Strichartz estimates for exterior polygonal domains (joint work with Baskin and Marzuola \cite{BaskinMarzuolaWunsch:2014}): for an exterior polygonal domain where the only trapped geodesics are strictly diffractive (and where no three vertices are collinear) we find that the same Strichartz estimates for the Schr\"odinger equation hold as on Euclidean space (locally in time for Neumann conditions, and globally for Dirichlet).
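The mechanism behind the exponential decay in Corollary~\ref{corollary:resexp} can be illustrated by a toy computation: a finite resonance expansion with all poles in a strip $\Im\lambda \leq -\nu < 0$ is controlled by the envelope $e^{-\nu t}$. The resonances and weights below are invented for illustration and are not computed from any actual conic manifold.

```python
import numpy as np

# Toy resonance expansion u(t) = sum_j w_j e^{-i lambda_j t} with made-up
# poles lambda_j, all satisfying Im(lambda_j) <= -nu < 0.  Each term has
# modulus w_j * e^{Im(lambda_j) t}, so the sum decays at least like e^{-nu t}.
resonances = np.array([1.0 - 0.3j, 2.5 - 0.5j, 4.0 - 0.7j])  # hypothetical poles
weights = np.array([1.0, 0.5, 0.25])                          # hypothetical states

def u(t):
    return np.abs(np.sum(weights * np.exp(-1j * resonances * t)))

nu = 0.3  # slowest decay rate: -max_j Im(lambda_j)
ts = np.linspace(5.0, 40.0, 200)
vals = np.array([u(t) for t in ts])

# The triangle inequality gives u(t) <= (sum_j w_j) e^{-nu t} <= 2 e^{-nu t}:
print(np.all(vals <= 2.0 * np.exp(-nu * ts)))
```

The same triangle-inequality bound is what the contour deformation produces once the strip $\Im\lambda > -A$ is known to contain only finitely many resonances.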
\section{The wave trace} If $X$ is a compact Riemannian manifold without boundary let $$ (\phi_j, \lambda_j^2) $$ denote the eigenfunctions and eigenvalues of $\Lap.$ One might like to study the ``inverse spectral problem'' of using the $\lambda_j$ to characterize $X$ by forming a useful generating function out of the $\lambda_j.$ An obvious but not directly useful one might be $$ \sum_j \delta(\lambda-\lambda_j), $$ but a much more tractable one is the Fourier transform of this quantity, $$ \sum_j e^{-it\lambda_j}. $$ The utility of this generating function stems from its identification as $$ \Tr U(t), $$ where $$U(t)\equiv e^{-it\sqrt{\Lap}}$$ is the ``half-wave'' evolution operator, mapping functions on $X$ to (certain) solutions to the wave equation. If we can say something about the trace of $U(t)$ in terms of the geometry of $X,$ we can thus hope to learn something about spectral geometry. In the setting of smooth boundaryless manifolds, we have the following classical results on the wave trace. Let $$ \LSpec (X) =\{0\} \bigcup \big\{\pm \text{lengths of periodic geodesics on }X\big\}. $$ \begin{theorem}[Chazarain \cite{Chazarain1}, Duistermaat--Guillemin \cite{Duistermaat-Guillemin1}; cf.\ also Colin de Verdi\`ere \cite{Co:73a}, \cite{Co:73b}]\label{theorem:smoothpoisson} $$\singsupp \Tr U(t) \subset \LSpec (X).$$ \end{theorem} This allows one to dream of ``hearing'' lengths of closed geodesics, but does not rule out the possibility that the allowable singularities do not, in fact, arise. The presence of honest singularities is, however, guaranteed by: \begin{theorem}[Duistermaat--Guillemin \cite{Duistermaat-Guillemin1}] Let $L$ be the length of a nondegenerate closed geodesic $\gamma$ on $X$ that is isolated in the length spectrum.
Then near $t=L$ we have $$ \Tr U(t) \sim \frac{L_0}{2\pi} i^{\sigma} \abs{I-P}^{-1/2}(t-L)^{-1}, $$ where \begin{itemize} \item $L_0$ is the length of the primitive closed geodesic if $\gamma$ is an iterate of a shorter one. \item $\sigma$ is the Morse index of the variational problem for a periodic geodesic, evaluated at $\gamma.$ \item $P$ is the linearized Poincar\'e map, obtained as the linearization at $\gamma$ of the first return map to a hypersurface of the phase space, transverse to $\gamma.$ \end{itemize} \end{theorem} Note that the nondegeneracy condition in the hypotheses is simply the condition that $I-P$ be nonsingular. The generalization of Theorem~\ref{theorem:smoothpoisson} to compact conic manifolds is straightforward: let $$\DLSpec(X)= \{0\} \bigcup \big\{\pm \text{lengths of periodic diffractive geodesics on }X\big\}.$$ \begin{theorem}[Wunsch \cite{Wunsch2}] On a conic manifold $X,$ $$ \singsupp \Tr U(t) \subset \DLSpec(X). $$ \end{theorem} The singularities at lengths of geodesics in $X^\circ$ are easily seen to be described by the same formula given by Duistermaat--Guillemin, but the geodesics interacting through conic points are not so simple. We consider $\gamma$ a closed, \emph{strictly diffractive} geodesic undergoing $k$ diffractions and traversing geodesic segments $\gamma_1,\dots, \gamma_k$ connecting cone points $Y_{i_1}, \dots, Y_{i_k}.$ Recall that the hypothesis that the geodesic be strictly diffractive means that it interacts with each cone point by entering and leaving on a pair of geodesics that cannot be uniformly locally approximated by geodesics in $X^\circ.$ This is generically the case for all closed geodesics. Assume further that the length $L$ of $\gamma$ is isolated in the length spectrum, and make the additional nondegeneracy hypothesis that no two cone points along the geodesic are conjugate to one another.
Note that the following was previously known by work of Hillairet \cite{Hillairet:2005} in the important special case of flat surfaces with conic singularities (hence in particular for doubles of polygons). \begin{theorem}[Ford--Wunsch \cite{1411.6913}]\label{theorem:FoWu} Near $t=L,$ $$ \Tr U(t) \sim \int e^{i(t-L)\xi} a(\xi) \, d\xi $$ where \begin{equation}\label{symbol} a(\xi) \sim L_0 \cdot (2\pi)^{\frac{kn}{2}} \, e^{\frac{ik(n-3)\pi}{4}} \, \chi(\xi) \, \xi^{-\frac{k(n-1)}{2}} \prod_{j=1}^k i^{-m_{\gamma_j}} \, \mathcal{D}_j \, \mathcal{W}_j \ \text{as $|\xi| \to \infty$}. \end{equation} \end{theorem} Here, $\chi\in \CI(\RR)$ is $1$ for $\xi>1$ and $0$ for $\xi<0.$ Note that the power of $\xi$ is such that we obtain greater smoothness as the number of diffractions increases. The leading order singularity as a function of $t$ is proportional to $(t-L+i0)^{-1+k(n-1)/2}$ (but is multiplied by $\log(t-L+i0)$ if the power is an integer). As before $L_0$ denotes the length of the ``primitive'' geodesic if $\gamma$ is an iterate of a shorter one. The integers $m_{\gamma_j}$ are simply the Morse indices of the variational problems associated to traveling from one cone point to the next, evaluated at $\gamma_j.$ We will now explain the factors $\mathcal{D}_j$ and $\mathcal{W}_j.$ The terms $\mathcal{D}_j$ are associated to the diffractions through each successive cone point $Y_{i_j}.$ They are constructed as follows. Each cone point $Y_{i_j}$ is equipped with a metric $h_{i_j}\equiv h\rvert_{Y_{i_j}}.$ It thus has a Laplace-Beltrami operator $\Delta_{Y_{i_j}}$ and we may use the functional calculus to take functions of this operator. In particular, let $$\nu_{i_j} \equiv \sqrt{ \Delta_{Y_{i_j}} + \left( \frac{2-n}{2} \right)^2 }.$$ We then form the operator family $$ e^{-it\nu_{i_j}}: L^2(Y_{i_j})\to L^2(Y_{i_j}). $$ This is essentially a ``half Klein--Gordon propagator'' on the link of the cone point (i.e., a boundary component).
Now let $\kappa(\bullet)$ denote the Schwartz kernel of an operator. Supposing that the diffractive geodesic $\gamma$ enters $Y_{i_j}$ at the point $y$ and leaves from point $y',$ we set $$ \mathcal{D}_j \equiv \kappa(e^{-i\pi \nu_{i_j}})[y,y']. $$ The propagator kernel is of course not continuous in general; note, however, that the strictly diffractive nature of the geodesic ensures that $y$ and $y'$ are not connected by a geodesic of length $\pi$ in the link, which in turn precisely ensures, by propagation of singularities, that the Schwartz kernel of the time-$\pi$ Klein--Gordon propagator is smooth near $(y,y'),$ hence the evaluation of this distribution makes sense. Now we turn to $\mathcal{W}_j$. These quantities are associated to the geodesic segments $\gamma_j$ connecting successive cone points. They are best described in terms of Jacobi fields, but can also be viewed as a proxy for a quantity involving the derivative of the exponential map, hence a substitute for the term involving the Poincar\'e map in the Duistermaat--Guillemin formula. Note that the exponential map from one cone point to the next does not make sense, since any small perturbation of the geodesic $\gamma_j$ will miss the next cone point entirely rather than simply hitting it at a different point. Correspondingly, if we let $\mathbf{J}$ be a set of Jacobi fields that are orthonormal to $\gamma_j$ and at $\gamma_j(0)$ give an orthonormal basis of $TY_{i_j}$ then $\mathbf{J}$ becomes \emph{singular} as we approach the end of $\gamma_j$ at $Y_{i_{j+1}}.$ On the other hand, the metric is also singular at cone points, in the sense that it vanishes on $TY,$ so we can nonetheless make sense of the determinant $$ \det_g \mathbf{J}\rvert_{Y_{i_{j+1}}}. $$ Then we have $$ \mathcal{W}_j \equiv\big\lvert\det_g \mathbf{J}\rvert_{Y_{i_{j+1}}}\big\rvert^{-1/2}.
$$ This quantity can be made to look more like the derivative of an exponential map as follows: we set \begin{equation}\label{thetaj} \Theta_j=(\length(\gamma_j)^{-(n-1)})\big\lvert\det_g \mathbf{J}\rvert_{Y_{i_{j+1}}}\big\rvert. \end{equation} Consider the case in which $Y_j$ is a ``fictitious'' cone point obtained by blowing up a smooth point $p_0$ on a manifold. Then Jacobi vector fields tangent to $Y_j$ are obtained as lifts under the blow-down map of Jacobi fields vanishing at $p_0,$ and $\Theta_j$ becomes a standard expression for $\det D\exp_{Y_j} (\bullet)$ in terms of Jacobi fields, at least when evaluated in $X^\circ$ (cf.\ \cite{Be:77}): in that case we simply have $$ \Theta_j=\big\lvert\det_g D \exp_{Y_j}(\bullet)\big\rvert. $$ Since $\mathcal{W}_j=\length(\gamma_j)^{-(n-1)/2} \Theta_j^{-1/2}$ we recover the relationship with the exponential map in the case of a trivial cone point. In rough outline, the proof of Theorem~\ref{theorem:FoWu} goes as follows. We know explicitly what the wave propagator looks like on a model \emph{product cone} $\RR_+\times Y_j$ endowed with the scale invariant metric $dx^2 +x^2 h_0(y,dy)$---this is a computation of Cheeger--Taylor \cite{Cheeger-Taylor1}, \cite{Cheeger-Taylor2} involving bravura use of the Hankel transform. In particular, we can evaluate the symbol of the diffracted wavefront explicitly in that case. More generally, in \cite{Melrose-Wunsch1} the author and Melrose prove that near a cone point, the diffracted front of the wave propagator is guaranteed to be a conormal distribution.
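In the model case of a two-dimensional flat cone, the diffraction coefficient is completely explicit and can be sketched numerically. The following is a model computation under stated assumptions (not code from any of the cited works): $n=2$, so $\nu=\sqrt{\Delta_Y}$; the link is a circle of circumference $2\pi\beta$ with eigenvalues $(k/\beta)^2$, $k\in\mathbb{Z}$; the non-convergent mode sum for $\kappa(e^{-i\pi\nu})[y,y']$ is Abel-regularized; and the values of $\beta$, the angular separation, and the regularization are arbitrary choices.

```python
import numpy as np

# Mode sum for the kernel of e^{-i pi nu} on a circle link of circumference
# 2*pi*beta: kappa(y, y') = (2*pi*beta)^{-1} sum_k e^{-i pi |k|/beta} e^{i k phi},
# phi = (y - y')/beta.  The sum is only Abel-summable, so we damp by e^{-eps|k|}
# and compare the truncated sum against its geometric-series closed form.
beta = 0.75   # cone of total angle 2*pi*beta (hypothetical example)
phi = 1.3     # angular separation in the link (hypothetical example)
eps = 1e-3    # Abel regularization parameter

K = 20000
k = np.arange(1, K + 1)
a = np.exp(-(eps + 1j * np.pi / beta) * k)
mode_sum = 1.0 + np.sum(a * np.exp(1j * k * phi)) + np.sum(a * np.exp(-1j * k * phi))
mode_sum /= 2 * np.pi * beta   # normalize by the link volume

# Closed form: two geometric series with ratio z e^{\pm i phi}, z = e^{-eps - i pi/beta}.
z = np.exp(-eps - 1j * np.pi / beta)
closed = (1.0 + z * np.exp(1j * phi) / (1 - z * np.exp(1j * phi))
              + z * np.exp(-1j * phi) / (1 - z * np.exp(-1j * phi))) / (2 * np.pi * beta)
print(abs(mode_sum - closed))   # truncation error of the damped series
```

The regularized kernel converges, as $\epsilon\to 0^+$, precisely at pairs $(y,y')$ not joined by a link geodesic of length $\pi$, matching the smoothness discussion above.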
The first new step is therefore to show that in the non-product case, the principal symbol of the diffracted front is still, modulo adjustments involving comparing half-densities on the two spaces, given by the same expression as in the product case where we use the model metric $dx^2+x^2 h\rvert_{x=0} (y,dy).$ This involves comparing the two propagators and showing that the difference between model and exact propagators can be estimated by a \emph{Morawetz inequality} near the cone tip. Having understood the effect of a single diffraction, we then proceed as follows. We take a microlocal partition of unity $A_j$ on $X,$ where for technical reasons the $A_j$ are restricted to be simply cutoff functions near each boundary component $Y_i$ but are otherwise fully localized in phase space. We then decompose the wave trace as follows: fix small times $t_j$ with $\sum t_j=T.$ Then by cyclicity of the trace $$ \Tr U(t) = \sum_{i_0,\dots, i_{N}}\Tr \sqrt{A_{i_0}} U(t-T)A_{i_1} U(t_1) A_{i_2}\dots A_{i_{N}} U(t_N) \sqrt{A_{i_0}}. $$ By propagation of singularities, this term is guaranteed to be trivial unless there is a diffractive geodesic successively passing through the microsupports of the $A_{i_j}$'s, hence we may throw away most of this sum. The remaining terms are then computed by a stationary phase computation, gluing together the propagators for ``free'' propagation through $X^\circ$ with those for the diffractive interaction with cone points (this was the same strategy previously used by Hillairet in \cite{Hillairet:2005} as well as by the author in \cite{Wunsch2}).
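The Poisson relation underlying this section can be checked directly in the simplest smooth example, the unit circle, where $\sqrt{\Lap}$ has eigenvalues $|j|$, $j\in\mathbb{Z}$, and the closed geodesics have lengths in $2\pi\mathbb{Z}$. The following sketch (with an arbitrary choice of mollifier and frequency cutoff) verifies that a smoothed version of $\Tr U(t)=\sum_j e^{-it|j|}$ peaks at lengths of closed geodesics and is small elsewhere.

```python
import numpy as np

# Gaussian-mollified wave trace on the unit circle: the divergent sum
# Tr U(t) = sum_j e^{-i t |j|} is tamed by a frequency cutoff e^{-(j/width)^2}.
# The Poisson summation formula predicts large peaks at t in 2*pi*Z and
# rapid decay away from the length spectrum.
def smoothed_trace(t, width=200.0):
    j = np.arange(-2000, 2001)
    weights = np.exp(-(j / width) ** 2)   # smooth frequency cutoff (mollifier)
    return np.sum(weights * np.exp(-1j * t * np.abs(j)))

on_length = abs(smoothed_trace(2 * np.pi))   # t = length of the closed geodesic
off_length = abs(smoothed_trace(3.0))        # t away from the length spectrum
print(on_length, off_length)   # large peak at t = 2*pi, small away from it
```

Sharpening the mollifier (increasing `width`) sharpens the peaks, which in the limit become the singularities of $\Tr U(t)$ described by the trace formulas above.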
\section{Lower bounds for resonances} While $\Tr U(t)$ only makes sense (even distributionally) on a \emph{compact} manifold, if we return to the setting of Section~\ref{section:BW} where we have a \emph{noncompact} manifold with Euclidean ends, we may still make sense of an appropriately \emph{renormalized} wave trace, and use the diffractive trace formula (Theorem~\ref{theorem:FoWu}) to obtain lower bounds on resonances. In odd dimensions, we let $\mathcal{A}$ denote the generator of the wave group, and hence $e^{t \mathcal{A}}$ the wave group itself; likewise we let $\mathcal{A}_0$ be the generator of the wave group on Euclidean space. We then have the trace formula \begin{equation}\label{traceres} \Tr (e^{t \mathcal{A}} -e^{t \mathcal{A}_0}) = \sum_{\lambda_j \in \Res} e^{-i \lambda_j t},\ t > 0 \end{equation} where the sum is over the resonances, counted with multiplicity (see e.g. \cite{Sjostrand-Zworski5} for the details of how to make sense of this difference of operators in a wide variety of contexts). This result in various settings was first proved by Bardos-Guillot-Ralston \cite{Bardos-Guillot-Ralston1}, Melrose \cite{MR83j:35128}, and Sj\"ostrand-Zworski \cite{Sjostrand-Zworski5}; an analogous result in even dimensions can be found in \cite{Zw:99}. Now if we can actually guarantee the existence of singularities in the (renormalized) wave trace, a Tauberian theorem of Sj\"ostrand-Zworski \cite{Sjostrand-Zworski3} allows us to deduce from \eqref{traceres} a lower bound on the number of resonances in logarithmic regions in $\CC.$ Fortunately, Theorem~\ref{theorem:FoWu} applies equally well in this context, and we obtain a lower bound on the number of resonances as follows. Let $$ N_\rho(r) =\# \{\text{Resonances in } \smallabs{\lambda}<r,\ \Im \lambda \geq -\rho \log \smallabs{\Re{\lambda}}\}.
$$ Then we have: \begin{theorem}[Hillairet--Wunsch]\label{theorem:HiWu} Under the geometric assumptions of Theorem~\ref{theorem:BaWu}, let $L$ be the length of a closed, strictly diffractive geodesic $\gamma$ undergoing $k$ diffractions. Assume, in the notation of Theorem~\ref{theorem:FoWu}, that all the diffraction coefficients $\mathcal{D}_j$ are nonzero along $\gamma;$ assume also that there are no closed diffractive geodesics besides iterates of this one having length in $L\NN.$ Then for all $\ep>0,$ $$ N_\rho(r)\geq C_{\rho,\epsilon} r^{1-\epsilon} $$ provided $$ \rho > \frac{(n-1)k}{2L}. $$ \end{theorem} A detailed proof, which simply consists of using the trace formula (Theorem~\ref{theorem:FoWu}) in \eqref{traceres} together with the Tauberian theorem of \cite{Sjostrand-Zworski3}, can be found in \cite{Ga:15}. Note that the bound on $\rho$ written here is that which we obtain by considering the whole sequence of singularities of the wave trace arising from arbitrary \emph{iterates} of the geodesic $\gamma.$ We remark that the distinction between the trace of the full wave group and $\Tr U(t)$ is immaterial for this purpose since the former is twice the real part of the latter, and it is not difficult to verify from examination of \eqref{symbol} that the singularities arising from iterates of a given geodesic cannot all be purely imaginary. The optimal $\rho$ here is generally obtained by choosing $\gamma$ to be the geodesic that traverses the longest geodesic segment connecting a pair of distinct cone points, back and forth (assuming the diffraction coefficients are nonvanishing). If $D_{\text{max}}$ denotes the greatest distance between a pair of cone points, then we have a closed geodesic of length $2 D_{\text{max}}$ with $k=2,$ and we obtain the bound $$ \rho > \frac{(n-1)}{2D_{\text{max}}}. 
$$ Remarkably, this theorem is essentially sharp, as was shown by Galkowski, who has produced an effective version of the Vainberg argument previously employed in \cite{BaWu:13}: \begin{theorem}[Galkowski \cite{Ga:15}] Let $D_{\text{max}}$ be the greatest distance between two cone points. For any $\epsilon>0$ the constant $\rho$ in Theorem~\ref{theorem:BaWu} can be taken to be $(n-1)/(2D_{\text{max}})-\epsilon,$ i.e.\ $N_\rho(r)$ is \emph{bounded} for all $\rho<(n-1)/2D_{\text{max}}.$ \end{theorem} Since $N_\rho$ is bounded for $\rho<(n-1)/2D_{\text{max}}$ and (subject to the nondegeneracy hypotheses of Theorem~\ref{theorem:HiWu}) almost linearly growing for $\rho>(n-1)/2D_{\text{max}},$ we find that in any set near the critical curve $\Im\lambda=-((n-1)/2D_{\text{max}}) \log \smallabs{\Re \lambda}$ of the form $$ \big(-\frac{n-1}{2D_{\text{max}}}-\ep\big) \log \smallabs{\Re \lambda}<\Im \lambda<\big(-\frac{n-1}{2D_{\text{max}}}+\ep\big) \log \smallabs{\Re \lambda},\quad \abs{\lambda}>\ep^{-1} $$ there are infinitely many resonances. The intuition behind the importance of the longest geodesic connecting two cone points is that repeatedly traversing this segment back and forth is the way in which a trapped singularity can diffract \emph{least frequently}. Since each diffraction loses considerable energy owing to the smoothing effect of diffraction, a resonant state propagating back and forth along this geodesic is the one that loses energy to infinity at the slowest rate. \bibliographystyle{plain}
UWM Report UWM Research A better understanding of blood flow and brain aneurysms By Silke Schmidt December 13, 2018 A weak blood vessel in the brain is a dangerous thing. Six million Americans have one that bulges out like a bubble and fills with blood, creating a brain aneurysm. If it ruptures – as it does in about 30,000 people annually – it can result in a coma, permanent brain damage, paralysis or death. To better diagnose and treat brain aneurysms, Roshan D'Souza, a UWM associate professor of mechanical engineering, is developing new methods of analysis for a special kind of magnetic resonance imaging technology: 4D Flow MRI. Unlike traditional organ scans, 4D Flow MRI provides spatial 3D measurements of blood flow velocity and tracks its variation over time – the fourth dimension. D'Souza wants to use the shear stress that the flowing blood imposes upon a vessel's wall as a biomarker for the potential growth of an aneurysm. This would indicate if and when treatment is needed to prevent rupture. Because the resolution and raw data from the MRI scans aren't of a high enough quality to be clinically useful, D'Souza is developing a new approach that merges two pieces of complementary information: flow physics simulations and actual data from the scan. "It's similar to how meteorologists produce storm warnings," D'Souza says. "A high-resolution storm simulation based on a physics model receives updates from sensors that track an ongoing storm to generate improved predictions." In this case, the MRI data update the parameters of the physics model to generate a higher-resolution image. D'Souza and his collaborators at the Medical College of Wisconsin hope to eventually design clinical trials to quantify the new method's benefits. And the brain is only the beginning. "We can expand our research to 4D Flow MRI studies of the liver, heart and kidneys," D'Souza says. "I think this method has the potential to make a real difference for many patients." 
Lady Day at Emerson's Bar & Grill comes to the West End After the news of Audra McDonald's pregnancy last year, producers of the postponed London run of Lady Day at Emerson's Bar & Grill are delighted to announce that this summer, McDonald will be making her long awaited West End debut portraying jazz legend Billie Holiday in Lady Day at Emerson's Bar & Grill in a performance that won her a record-setting sixth Tony Award. Written by Lanie Robertson and directed by Lonny Price, Lady Day at Emerson's Bar & Grill will run for a limited engagement at Wyndham's Theatre with previews from 17 June to 9 September, and opening night for press on 27 June 2017. Lady Day at Emerson's Bar & Grill won two Tony awards in 2014, including 'Best Performance by a Leading Actress in a Play' for Audra McDonald, making her Broadway's most decorated performer, winner of six Tony Awards and the first and only person to receive awards in all four acting categories. 1959, in a small, intimate bar in Philadelphia, Billie Holiday puts on a show that unbeknownst to the audience, will leave them witnesses to one of the last performances of her lifetime. Through her poignant voice and moving songs, one of the greatest jazz singers of all-time shares her loves and her losses. Billie 'Lady Day' Holiday had what is widely considered one of the greatest jazz voices of all-time. Born Eleanora Fagan in April 1915, she rose to popularity in the 1930s and 1940s with her pioneering vocal style strongly inspired by jazz instrumentalists. After a turbulent personal life and struggle with addiction, she died at the untimely age of 44. In 2000, Holiday was inducted into the Rock and Roll Hall of Fame. Tickets are available www.ladydaywestend.com
With the Reporter for CRD IV we have made it easy and simple to create XBRL files for the European Banking Authority (EBA). The reports are delivered to your receiving authority, who then consolidates and forwards the reports collected nationally to the EBA. Fill in the amounts you need to report in the EBA Excel template. Take a look at our 3-minute video demonstration of our Reporter solution, and learn how to fill, convert and submit your CRD IV report to your local business authority in XBRL. In this example a COREP report. ParsePort CRD IV Reporter supports: COREP (common reporting), FINREP (financial reporting), AE (asset encumbrance), LCR (liquidity coverage ratio), NSFR (net stable funding ratio) and LR (leverage ratio). The price structure for our Reporter for CRD IV is simple. You pay a fixed annual subscription fee for access to our conversion tool. Please contact us here for more information.
// This code is auto-generated, do not modify using Ds3.Models; using System; using System.Net; namespace Ds3.Calls { public class ModifyNodeSpectraS3Request : Ds3Request { public string Node { get; private set; } private string _dnsName; public string DnsName { get { return _dnsName; } set { WithDnsName(value); } } private string _name; public string Name { get { return _name; } set { WithName(value); } } public ModifyNodeSpectraS3Request WithDnsName(string dnsName) { this._dnsName = dnsName; if (dnsName != null) { this.QueryParams.Add("dns_name", dnsName); } else { this.QueryParams.Remove("dns_name"); } return this; } public ModifyNodeSpectraS3Request WithName(string name) { this._name = name; if (name != null) { this.QueryParams.Add("name", name); } else { this.QueryParams.Remove("name"); } return this; } public ModifyNodeSpectraS3Request(string node) { this.Node = node; } internal override HttpVerb Verb { get { return HttpVerb.PUT; } } internal override string Path { get { return "/_rest_/node/" + Node; } } } }
Q: Assigning variable with getByText not works as expected Here is my testing code with testing-library with angular. Here I am using title variable for regexp. import { render, screen } from '@testing-library/angular'; import '@testing-library/jest-dom'; import { AppComponent } from './app.component'; describe('App component', () => { it('app should render', async () => { const title = "Hello there " + new Date() + 'app is running!'; const app = await render(AppComponent, { componentProperties: { title: title } }); expect(app).toBeTruthy(); expect(screen.getByText(new RegExp(title, "i"))).toBeInTheDocument(); screen.debug(); }); }) But getting an error as : TestingLibraryElementError: Unable to find an element with the text: /Hello there Fri Mar 11 2022 20:07:42 GMT+0530 (India Standard Time)app is running!/i. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible. Ignored nodes: comments, <script />, <style /> <body> <div id="root0" ng-version="13.1.3" > <h1> Hello there Fri Mar 11 2022 20:07:42 GMT+0530 (India Standard Time)app is running! app is running! </h1> <router-outlet /> </div> </body> 17 | }); 18 | expect(app).toBeTruthy(); > 19 | expect(screen.getByText(new RegExp(title, "i"))).toBeInTheDocument(); | ^ 20 | screen.debug(); 21 | }); 22 | }) Any one help me to understand the issue here? how to handle this scenario? A: You have to escape the title before creating the RegExp because it contains regexp specific chars (+, (, )). const title = "Hello there " + new Date() + 'app is running!'; new RegExp(title, 'i').test(title) // false
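As the answer notes, the Date string contains regex metacharacters such as `+`, `(` and `)`. One fix is to escape the string before building the `RegExp`. The `escapeRegExp` helper below is the common MDN-style pattern, not part of testing-library, so you would define it yourself in the test file:

```javascript
// Escape regex metacharacters so a string is matched literally.
// Standard MDN-style helper; not part of testing-library.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// "+" and "(...)" are regex syntax, so the raw string does not match itself:
new RegExp('GMT+0530 (IST)').test('GMT+0530 (IST)');               // false
// After escaping, every character is matched literally:
new RegExp(escapeRegExp('GMT+0530 (IST)')).test('GMT+0530 (IST)'); // true

const title = 'Hello there ' + new Date() + 'app is running!';
const pattern = new RegExp(escapeRegExp(title), 'i');
pattern.test(title); // true
```

With this, `expect(screen.getByText(new RegExp(escapeRegExp(title), 'i'))).toBeInTheDocument()` should match even though the rendered text contains the raw Date string.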
Q: Using a shared variable in a function Hi I'm following a neural net tutorial where the author seems to be using shared variables everywhere. From my understanding, a shared variable in Theano is simply a space in memory that can be shared by the GPU and CPU heap. Anyway, I have two matrices which I declare as shared variables and I want to perform some operation on them using function. (Question 1) I'd love it if someone could explain why function is useful vs a regular def function. Anyway, I'm setting up my definition like such: import theano import theano.tensor as T from theano import function import numpy as np class Transform: def __init__(self, dimg): dimg = dimg.astype(theano.config.floatX) self.in_t = theano.shared(dimg, name='dimg', borrow=True) def rotate(self, ox, oy, radians): value = np.zeros((2 * self.in_t.get_value().shape[0], 2 * self.in_t.get_value().shape[1])) out_t = theano.shared(value.astype(theano.config.floatX), name='b', borrow=True) din = theano.tensor.dmatrix('a') dout = theano.tensor.dmatrix('b') def atest(): y = x + y return y f = function(inputs=[], givens={x: self.in_t, y: self.out_t}, outputs=atest) return f() The problem is that I have no idea how to use the shared variables in a regular function-output call. I understand that I can do updates via function([], ..., updates=(shared_var_1, update_function)). But how do I access them in my regular function? A: Theano beginner here, so I'm not sure that my answer will cover all the technical aspects. Answering your first question: you need to declare a theano function instead of a def function, because theano is like a "language" inside python and by invoking theano.function you're compiling some ad-hoc C code performing your task under the hood. This is what makes Theano fast. From the documentation: It is good to think of theano.function as the interface to a compiler which builds a callable object from a purely symbolic graph. 
One of Theano's most important features is that theano.function can optimize a graph and even compile some or all of it into native machine instructions. About your second question, in order to access what's stored in your shared variable you should use shared_var.get_value() Check these examples: The value can be accessed and modified by the .get_value() and .set_value() methods. This code: a = np.array([[1,2],[3,4]], dtype=theano.config.floatX) x = theano.shared(a) print(x) Will output <CudaNdarrayType(float32, matrix)> But using get_value(): print(x.get_value()) It outputs [[ 1. 2.] [ 3. 4.]] Edit: to use shared variables in functions import theano import numpy a = numpy.int64(2) y = theano.tensor.scalar('y',dtype='int64') z = theano.tensor.scalar('z',dtype='int64') x = theano.shared(a) plus = y + z theano_sum = theano.function([y,z],plus) # Using shared variable in a function print(theano_sum(x.get_value(),3)) # Changing shared variable value using a function x.set_value(theano_sum(2,2)) print(x.get_value()) # Update shared variable value x.set_value(x.get_value(borrow=True)+1) print(x.get_value()) Will output: 5 4 5
package com.omottec.demoapp.activity; import android.content.Intent; import android.os.Bundle; import android.support.annotation.Nullable; import android.support.v4.app.FragmentActivity; import android.view.View; import android.widget.TextView; import com.omottec.demoapp.R; /** * Created by qinbingbing on 9/21/16. */ public class AnimActivity extends FragmentActivity { TextView mTv; @Override protected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.full_screen_text); mTv = (TextView) findViewById(R.id.tv); mTv.setText("AnimActivity"); mTv.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent(AnimActivity.this, AnimActivity1.class); startActivity(intent); overridePendingTransition(R.anim.down_2_up, R.anim.up_2_down); } }); } @Override public void finish() { super.finish(); } }
Byron The Aquarius Name: Byron Blaylock Origin: United Kingdom Based in: United States Genres: deephouse, house Site: byrontheaquarius.com Bookings: onehouseartists.com When Byron The Aquarius talks about live frequencies, his enthusiasm is infectious. A keys player by trade, Byron's love for melody and harmony has consistently drawn the attention of electronic music's leading labels and producers. Hailing from Birmingham, Alabama, Byron launched his production career in 2007, collaborating with well-known Parisian producer Onra on an EP entitled "The Big Payback." That collaboration led to numerous releases on independent labels like Rush Hour, Warp (with Flying Lotus), HHV, Circulations, Reebok Classics A Journey into Future, Real Soon, and Giant Step Records. In 2010, Byron signed a production deal with Denaun Porter from D12 (Eminem producer). It was at Denaun's studio in Detroit that Byron realized he wanted to make his own music and bravely returned to Alabama to hone his craft through jazz studies and jazz composition programs in piano at Morehouse University and Jacksonville State University. In 2015, Byron released a ten-track solo album on BBE entitled "Planets of Love." When he wasn't working on his own music, he was playing keys in the studio with Kai Alce. Their creations got into the hands of Theo Parrish, who loved the sound of the sublime jazz-infected house and picked two tracks to release on his legendary Sound Signature label in 2016. The resulting "Highlife EP" has been hailed as "stunning", "intoxicating" and "essential." In April, Byron dropped a project entitled "Gone Today Here TMRW" on Kyle Hall's esteemed Detroit label, Wild Oats. He'll also be heading out on tour around the world for the summer and performing at Dimensions Festival in September. Wherever Byron is, in the studio working on music, performing live, or DJing, you can be sure 2016 will be filled with the sound of live frequencies. 
Page last modified on Monday 20 May 2019 at 14:45. Last performance: Sunday 30 June 2019, Nomads Festival, Rhônepark, Amsterdam.
#include "ksp_plugin/vessel.hpp" #include "gmock/gmock.h" #include "gtest/gtest.h" #include "physics/ephemeris.hpp" #include "physics/solar_system.hpp" namespace principia { using physics::Ephemeris; using physics::SolarSystem; using quantities::si::Kilo; using quantities::si::Kilogram; using quantities::si::Metre; using quantities::si::Second; namespace ksp_plugin { class VesselTest : public testing::Test { protected: VesselTest() : adaptive_parameters_( DormandElMikkawyPrince1986RKN434FM<Position<Barycentric>>(), /*max_steps=*/1000, /*length_integration_tolerance=*/1 * Metre, /*speed_integration_tolerance=*/1 * Metre / Second), ephemeris_fixed_parameters_( McLachlanAtela1992Order5Optimal<Position<Barycentric>>(), /*step=*/10 * Second), history_fixed_parameters_( McLachlanAtela1992Order5Optimal<Position<Barycentric>>(), /*step=*/1 * Second) { solar_system_.Initialize( SOLUTION_DIR / "astronomy" / "gravity_model_two_bodies_test.proto.txt", SOLUTION_DIR / "astronomy" / "initial_state_two_bodies_test.proto.txt"); ephemeris_ = solar_system_.MakeEphemeris( /*fitting_tolerance=*/1 * Metre, ephemeris_fixed_parameters_); earth_ = std::make_unique<Celestial>( solar_system_.massive_body(*ephemeris_, "Earth")); vessel_ = std::make_unique<Vessel>(earth_.get(), ephemeris_.get(), history_fixed_parameters_, adaptive_parameters_, adaptive_parameters_); t0_ = solar_system_.epoch(); t1_ = t0_ + 11.1 * Second; t2_ = t1_ + 22.2 * Second; t3_ = t2_ + 33.3 * Second; } SolarSystem<Barycentric> solar_system_; std::unique_ptr<Ephemeris<Barycentric>> ephemeris_; std::unique_ptr<Celestial> earth_; Ephemeris<Barycentric>::AdaptiveStepParameters const adaptive_parameters_; Ephemeris<Barycentric>::FixedStepParameters const ephemeris_fixed_parameters_; Ephemeris<Barycentric>::FixedStepParameters const history_fixed_parameters_; std::unique_ptr<Vessel> vessel_; DegreesOfFreedom<Barycentric> d1_ = { Barycentric::origin + Displacement<Barycentric>( {1 * Kilo(Metre), 2 * Kilo(Metre), 3 * 
Kilo(Metre)}), Velocity<Barycentric>({4 * Kilo(Metre) / Second, 5 * Kilo(Metre) / Second, 6 * Kilo(Metre) / Second})}; DegreesOfFreedom<Barycentric> d2_ = { Barycentric::origin + Displacement<Barycentric>( {11 * Kilo(Metre), 12 * Kilo(Metre), 13 * Kilo(Metre)}), Velocity<Barycentric>({14 * Kilo(Metre) / Second, 15 * Kilo(Metre) / Second, 16 * Kilo(Metre) / Second})}; DegreesOfFreedom<Barycentric> d3_ = { Barycentric::origin + Displacement<Barycentric>( {21 * Kilo(Metre), 22 * Kilo(Metre), 23 * Kilo(Metre)}), Velocity<Barycentric>({24 * Kilo(Metre) / Second, 25 * Kilo(Metre) / Second, 26 * Kilo(Metre) / Second})}; Instant t0_; Instant t1_; Instant t2_; Instant t3_; }; using VesselDeathTest = VesselTest; TEST_F(VesselDeathTest, Uninitialized) { EXPECT_DEATH({ vessel_->history(); }, "is_initialized"); EXPECT_DEATH({ vessel_->prolongation(); }, "is_initialized"); } TEST_F(VesselTest, Initialization) { EXPECT_FALSE(vessel_->is_initialized()); vessel_->CreateHistoryAndForkProlongation(t2_, d2_); EXPECT_TRUE(vessel_->is_initialized()); auto const& history = vessel_->history(); EXPECT_EQ(t2_, history.last().time()); auto const& prolongation = vessel_->prolongation(); EXPECT_EQ(t2_, prolongation.last().time()); auto const& prediction = vessel_->prediction(); EXPECT_EQ(t2_, prediction.last().time()); EXPECT_FALSE(vessel_->has_flight_plan()); } TEST_F(VesselTest, Dirty) { EXPECT_FALSE(vessel_->is_dirty()); vessel_->set_dirty(); EXPECT_TRUE(vessel_->is_dirty()); } TEST_F(VesselTest, Parent) { Celestial celestial(earth_->body()); EXPECT_EQ(earth_.get(), vessel_->parent()); vessel_->set_parent(&celestial); EXPECT_EQ(&celestial, vessel_->parent()); } TEST_F(VesselTest, AdvanceTimeInBubble) { vessel_->CreateHistoryAndForkProlongation(t1_, d1_); vessel_->AdvanceTimeInBubble(t2_, d2_); EXPECT_EQ(t2_ - 0.2 * Second, vessel_->history().last().time()); EXPECT_EQ(t2_, vessel_->prolongation().last().time()); EXPECT_EQ(d2_, vessel_->prolongation().last().degrees_of_freedom()); 
EXPECT_TRUE(vessel_->is_dirty()); vessel_->AdvanceTimeInBubble(t3_, d3_); EXPECT_TRUE(vessel_->is_dirty()); } TEST_F(VesselTest, AdvanceTimeNotInBubble) { vessel_->CreateHistoryAndForkProlongation(t1_, d1_); vessel_->AdvanceTimeNotInBubble(t2_); EXPECT_EQ(t2_ - 0.2 * Second, vessel_->history().last().time()); EXPECT_EQ(t2_, vessel_->prolongation().last().time()); EXPECT_NE(d2_, vessel_->prolongation().last().degrees_of_freedom()); EXPECT_FALSE(vessel_->is_dirty()); } TEST_F(VesselTest, Prediction) { vessel_->CreateHistoryAndForkProlongation(t1_, d1_); vessel_->AdvanceTimeNotInBubble(t2_); vessel_->UpdatePrediction(t3_); EXPECT_LE(t3_, vessel_->prediction().last().time()); } TEST_F(VesselTest, FlightPlan) { vessel_->CreateHistoryAndForkProlongation(t1_, d1_); vessel_->AdvanceTimeNotInBubble(t2_); EXPECT_FALSE(vessel_->has_flight_plan()); vessel_->CreateFlightPlan(t3_, 10 * Kilogram, adaptive_parameters_); EXPECT_TRUE(vessel_->has_flight_plan()); EXPECT_EQ(0, vessel_->flight_plan().number_of_manœuvres()); EXPECT_EQ(1, vessel_->flight_plan().number_of_segments()); vessel_->DeleteFlightPlan(); EXPECT_FALSE(vessel_->has_flight_plan()); } TEST_F(VesselDeathTest, SerializationError) { EXPECT_DEATH({ serialization::Vessel message; vessel_->WriteToMessage(&message); }, "is_initialized"); EXPECT_DEATH({ serialization::Vessel message; Vessel::ReadFromMessage(message, ephemeris_.get(), earth_.get()); }, "message.has_history"); } TEST_F(VesselTest, SerializationSuccess) { serialization::Vessel message; vessel_->CreateHistoryAndForkProlongation(t2_, d2_); vessel_->AdvanceTimeNotInBubble(t2_); vessel_->UpdatePrediction(t3_); vessel_->CreateFlightPlan(t3_, 10 * Kilogram, adaptive_parameters_); vessel_->WriteToMessage(&message); EXPECT_TRUE(message.has_history()); EXPECT_TRUE(message.has_prediction_fork_time()); EXPECT_TRUE(message.has_prediction_last_time()); EXPECT_TRUE(message.has_flight_plan()); vessel_ = Vessel::ReadFromMessage(message, ephemeris_.get(), earth_.get()); 
EXPECT_TRUE(vessel_->is_initialized()); EXPECT_TRUE(vessel_->has_flight_plan()); } } // namespace ksp_plugin } // namespace principia
{"url":"https:\/\/ccssmathanswers.com\/into-math-grade-5-module-4-lesson-1-answer-key\/","text":"# Into Math Grade 5 Module 4 Lesson 1 Answer Key Write Numerical Expressions\n\nWe included HMH Into Math Grade 5 Answer Key PDF Module 4 Lesson 1 Write Numerical Expressions\u00a0to make students experts in learning maths.\n\n## HMH Into Math Grade 5 Module 4 Lesson 1 Answer Key Write Numerical Expressions\n\nI Can write a numerical expression to model a real-world situation, and I can interpret a numerical expression.\n\nA drum line is made up of 14 fourth-grade drummers and 12 fifth-grade drummers. The fourth-grade drummers stand in a line, and the fifth-grade drummers stand in a line behind them.\n\nDraw a visual model of the situation. Describe how you can represent how many more drummers are in fourth grade than in fifth grade.\n\nExplanation:\nand the fifth-grade drummers stand in a line behind them.\nDrawn a visual model of the situation above,\nRepresented by drawing 2 more drummers are in fourth grade than in fifth grade as 14 \u2013 12 = 2.\n\nTurn and Talk How do you know which operation to use for a situation?\n\nExplanation:\nThe best way to determine what operations I will need to introduce to the values that are presented in the problem is to read the problem\ncarefully and look for words that indicate what is being asked. There are many different types of words and phrases that will indicate a certain operation.\n\nBuild Understanding\n\n1. Every day Thora practices each page of this music 6 times.\n\nDraw a visual model to show the number of pages Thora practices each day. Explain your visual model.\n\nA. Which operation describes the situation? 
How do you know?\n\nExplanation:\nGiven every day Thora practices each page of this music 6 times.\nDrawn a visual model to show the number of pages Thora practices each day so same page is drawn 6 times, addition operation describes the situation as it is 6 times\nwe add same page 6 times suppose if it is page 1 we take as page 1 + page 1 + page 1 + page 1 + page 1 + page 1 = 6 times same page 1.\n\nConnect to Vocabulary\nYou can model a context mathematically using a numerical expression. A numerical expression is a mathematical phrase that uses only numbers and operation signs.\n\nB. How can you model the number of pages express of music Thora practices each day using a numerical expression? ___\n__________________________\n__________________________\nB. Numerical expression 1 + 1 + 1 + 1 + 1 + 1 = 6,\nC. number 1 and +,\n\nExplanation:\nB. The number of pages express of music Thora practices each day using a numerical expression is\n1 + 1 + 1 + 1 + 1 + 1 = 6,\nC. The numbers and operation sign in my numerical expression is 1 means same page 1 and sign is + means we add same page 6 times.\n\nTurn and Talk How would the expression change if Thora practices the same number of pages for 5 days?\n6 pages X 5,\n\nExplanation:\nGiven if the expression change if Thora practices the same number of pages for 5 days it will change by multiplication sign as 6 pages X 5.\n\n2. Mr. Lopez bought a snack 2 times this week while watching the drum line practice. Both days he started with the same amount of money and had $3 left after he bought his snack. A. How can you model the amount of money Mr. Lopez spent in one day? Write a numerical expression. __________________________ B. How can you model the amount of money he spent on snacks? Write a numerical expression for each day. __________________________ C. How can you model the amount of money he spent in 2 days using a single numerical expression? Use two operation signs and parentheses in your numerical expression. 
__________________________ D. How do you know where to place the parentheses in your numerical expression?

Answer:
A. x – y = $3
B. y
C. 2(x – y) = 2 × $3
D. Where an expression is repeated

Explanation:
Mr. Lopez bought a snack 2 times this week while watching the drum line practice. Both days he started with the same amount of money and had $3 left after he bought his snack.
A. Let x be the amount of money Mr. Lopez had each day and y the cost of the snack, so the numerical expression for one day is x – y = $3.
B. The amount he spent on snacks each day is y.
C. The amount he spent over the 2 days, written as a single numerical expression, is 2(x – y) = 2 × $3.
D. Parentheses go around the part of the expression that is repeated.

Turn and Talk: Suppose there were no parentheses in your numerical expression from Part C. Would the answer be the same? Why or why not?

Explanation:
No. Without parentheses, 2(x – y) = 2 × $3 would become 2x – y = 2 × $3, which subtracts the snack cost y only once instead of twice, so the answer would change.

Check Understanding Math Board

Write a numerical expression to model the situation.

Question 1.
Beverly has 2 pens. She buys 1 more pen.
x = 2 + 1

Explanation:
Beverly has 2 pens and buys 1 more. Letting x be the total number of pens she has in all, the expression is x = 2 + 1.

Question 2.
Two students share 8 markers equally.
x = 8 ÷ 2

Explanation:
Two students share 8 markers equally. Letting x be each student's share, x = 8 ÷ 2.

Question 3.
Model with Mathematics: Fifteen ensembles, or groups of musicians, perform in the summer parade. Each group has the same number of performers as shown. Write a numerical expression that models the total number of performers in the parade.
15x

Explanation:
There are fifteen groups, each with the same number of performers. Letting x be the number of performers in each group, the total number of performers in the parade is 15x.

Write a numerical expression to model the words.

Question 4.
Add 28 to 15.
43

Explanation:
28 + 15 = 43.

Question 5.
Subtract 1 from 12 and then multiply the difference by 4.
44

Explanation:
(12 – 1) × 4 = 11 × 4 = 44.

Question 6.
Reason: Clarence deletes 1 of 17 folders of photos from a computer. He puts an equal number of the remaining folders onto two computer drives. Write a numerical expression to model the situation. How do you know your answer is correct?
(17 – 1) ÷ 2

Explanation:
Clarence deletes 1 of the 17 folders and splits the remaining folders equally between two drives, so the expression is (17 – 1) ÷ 2. The answer is correct because we first subtract the 1 deleted folder and then divide the remaining 16 folders by 2, which gives the exact number per drive.

I'm in a Learning Mindset!

What is still unclear about numerical expressions?
____________________________
____________________________
____________________________
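The arithmetic in these answers can be verified with a short script (an illustrative check added here, not part of the original lesson):

```python
# Quick checks of the worksheet's numerical expressions.
pens = 2 + 1                       # Question 1: Beverly's pens in all
markers_each = 8 // 2              # Question 2: each student's share of 8 markers
q4 = 28 + 15                       # Question 4: add 28 to 15
q5 = (12 - 1) * 4                  # Question 5: subtract, then multiply the difference by 4
folders_per_drive = (17 - 1) // 2  # Question 6: delete 1 of 17, split the rest over 2 drives
print(pens, markers_each, q4, q5, folders_per_drive)  # 3 4 43 44 8
```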
\section{Introduction} \label{sec:intro} A performance is a sonic rendition of a written musical score (in the case of Western classical music). The characteristics of a music performance play a major role in how listeners perceive music, even if performances are based on the same underlying score \cite{lerch19mpa}. To perform a musical piece, the performer must first parse the score, interpret or modify the musical information, and utilize complex motor skills to render the piece on their instrument \cite{palmer1997music}. From the perspective of the performer, mastery over the art of music performance is often a journey spanning several years of instruction and practice. A major factor in learning and improving one's skill as a performer is to analyze and obtain feedback regarding the performance. Due to the complex nature of music performance, students require regular feedback from trained professionals. Teachers are expected to grade or rate students based on various performance criteria such as note accuracy or musicality. These criteria are often ill-defined and subject to interpretation, thus making objective and consistent music performance assessment (MPA) rather difficult \cite{wesolowski2016examining,thompson2003evaluating}. Regardless, this subjective manner of MPA is still used, e.g., in school systems where ensemble members are selected based on instructors' assessments of student auditions. Wu et al.\ discussed the notion of objective descriptors (features) which are potentially useful for automatic MPA \cite{wu16towards}. Such features are computed by applying signal processing methods to recorded performances and are used to model teachers' assessments of the performances using machine learning. With the rise of deep learning, neural networks were found to outperform the classical pipeline of feature extraction followed by regression \cite{pati2018assessment}. 
However, one issue with these approaches is that they ignore the score that the students are meant to play. We will refer to such approaches as \textit{score-independent}. The idea of incorporating score-based features utilizing audio-to-score alignment was explored, e.g., by Vidwans et al.\ \cite{vidwans17objective}. Further analysis of hand-crafted features for MPA showed the relative importance of score-based features over score-independent ones \cite{gururani18analysis}. Therefore, the design of deep architectures that incorporate score information is an obvious and overdue extension of previous approaches. The goal of this paper is to explore different methods to incorporate this score information. Our hypothesis is that including score information will lead to improved performance of deep networks in the objective MPA task. To this end, we present three architectures which combine score and audio features to make a \textit{score-informed} assessment of a music performance. First, we concatenate aligned pitch contours and scores into a 2-dimensional time-series feature representation that is fed to a convolutional neural network (CNN). Second, we propose a joint embedding model for aligned score and pitch contours. The assessment ratings are predicted using the cosine similarity between the score and performance embeddings. Third, we utilize the distance matrix, a mid-level representation combining both the score and pitch contour, as the input to a deep CNN trained to predict the teachers' assessments. Finally, using a fairly large-scale dataset of middle school and high school student auditions, we perform an in-depth evaluation comparing these proposed architectures against each other and with a score-independent baseline approach for MPA. \section{Related Work} \label{sec:relwork} MPA deals with the task of assessing music performances based on audio recordings.
Progress in MPA is roughly categorized into feature design-based approaches \cite{knight2011potential,nakano2006automatic,wu16towards,romani2015real,gururani18analysis} and feature learning-based approaches \cite{wu2018learned,pati2018assessment,han2014hierarchical}. Feature design-based methods rely on signal processing techniques to either extract standard spectral and temporal features \cite{knight2011potential}, or use expert knowledge to extract perceptually motivated features capable of characterizing music performances \cite{nakano2006automatic,romani2015real}. The extracted features are then fed into simple machine learning classifiers to train models which predict different performance assessment ratings. Feature learning-based approaches, on the other hand, stem from the argument that important features for modeling performance assessments are not trivial and cannot be easily described. Hence, they rely on using mid-level representations (such as pitch contours or mel-spectrograms) as input to sophisticated machine learning models such as sparse coding \cite{han2014hierarchical,wu2018learned} and neural networks \cite{pati2018assessment}. Most performances of Western music, however, are based on written musical scores. Hence, performances are also assessed based on their perceived deviations from the underlying score. There has been some prior research on incorporating the score information into the assessment modeling process. Most of these approaches rely on computing descriptive features using some notion of \textit{distance} between the score representation and the performance representation \cite{mayor2009performance,devaney_study_2012,molina2013fundamental,falcao_dataset_2019,huang_automatic_2019}.
The most common approach has been to first use an alignment algorithm, e.g., Dynamic Time Warping (DTW) \cite{sakoe_dynamic_1978}, to temporally align the performance recording with the score and then compute descriptive features which characterize the deviations of the performance from the score \cite{vidwans17objective,bozkurt2017voice}. However, to the best of our knowledge, incorporating score information directly into neural network-based models for MPA has not been investigated before. Score-informed approaches have helped improve results for both related performance analysis tasks and other music information retrieval tasks. Most of these methods have also relied on an alignment between the audio recording and the score as the primary tool for incorporating score information. Aligning audio recordings with scores has been useful for several tasks such as detecting expressive features in music performances \cite{li2015analysis}, identifying missing notes and errors in piano performances \cite{ewert2016score}, and segmenting syllables in vocal performances \cite{pons2017score}. Scores have also been used to generate soft labels and/or artificial training data for tasks such as source separation \cite{miron2017monaural,ewert2017structured}. \section{Methods} \label{sec:method} We propose and compare three different approaches to incorporate the score information with audio features for MPA.\footnote{The code is available at: https://github.com/biboamy/FBA-Fall19} The score information is represented as the MIDI pitch sequence (in ticks) obtained from the sheet music of the score to be performed. Henceforth, the MIDI pitch sequence will be referred to as the \textit{score}. The student's \textit{performance} is represented by the pitch contour of the audio. We use pitch contour since it captures both pitch and rhythmic information.
Musical dynamics and timbre are ignored in this study; while dynamics are an important expressive tool for the performer \cite{lerch19mpa}, the score usually lacks specificity in dynamics instructions and cannot serve as the same absolute reference as for pitch and rhythm. \subsection{Score-Informed Network (SIConvNet)} \label{sec:si_network} \begin{figure}[t] \centering \includegraphics[width =1\columnwidth]{figs/SI_model.pdf} \caption{Schematic for the SIConvNet. The aligned \textit{score} and \textit{pitch contour} are stacked together and fed into a 4-layer CNN to directly predict the assessment ratings.} \label{fig:si_model} \end{figure} The first approach that we use is probably the most straightforward way of incorporating score information into the assessment model. A simple CNN is used that relies on both the score and performance as the input and directly predicts the assessment ratings. \subsubsection{Input Representation} \label{sec:input_rep} The input representation for this approach is constructed by simply stacking an aligned pitch contour and score pair to create an $N\times 2$ matrix, where $N$ is the sequence length of the pitch contour. \ashis{The two channels correspond to the pitch contour and score, respectively.} In order to obtain this representation, we first consider a pitch contour snippet of length $N$ \amy{(sequence of logarithmic frequencies)}. Then, we find the corresponding part of the score using a DTW-based alignment process. The obtained score snippet \amy{(sequence of MIDI note numbers)} is then resampled to have the same length $N$ as the pitch contour. \subsubsection{Model Architecture} A schematic of the model architecture is shown in \figref{fig:si_model}. We use a simple 4-layer CNN based on the architecture proposed by Pati et al.\ \cite{pati2018assessment} and append a single linear layer which predicts the assessment.
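A rough sketch of the input construction described above (the helper names are hypothetical, and linear-interpolation resampling is an assumption; the paper only states that the score snippet is resampled to length $N$):

```python
def resample(seq, n):
    """Linearly interpolate a sequence to length n (hypothetical helper)."""
    if len(seq) == 1:
        return [float(seq[0])] * n
    out = []
    for i in range(n):
        pos = i * (len(seq) - 1) / (n - 1)   # fractional index into seq
        lo = int(pos)
        hi = min(lo + 1, len(seq) - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out

def stack_input(pitch_contour, score_snippet):
    """Stack an aligned pitch-contour snippet and its score snippet
    into an N x 2 input (one channel per source)."""
    n = len(pitch_contour)
    score = resample(score_snippet, n)
    return [[p, s] for p, s in zip(pitch_contour, score)]
```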
Each convolutional stack consists of a 1-D convolution followed by a 1-D batch normalization layer \cite{ioffe2015batch} and ReLU non-linearity. The linear layer at the end comprises a dense layer followed by Leaky ReLU non-linearity. \subsection{Joint Embedding Network (JointEmbedNet)} \label{sec:je_network} \begin{figure}[t] \centering \includegraphics[width =1\columnwidth]{figs/JE_model.pdf} \caption{Schematic of the JointEmbedNet architecture.} \label{fig: je_model} \end{figure} \ashis{The second approach is based on the assumption that performances are rated based on some sort of perceived distance between the performance and the underlying score being performed.} Consequently, we use two separate encoder networks to project the score and the pitch contour to a joint latent space and then use the similarity between the embeddings to predict the assessment ratings. \subsubsection{Input Representation} This approach uses the same input representation as SIConvNet (see \secref{sec:input_rep}). However, instead of stacking the aligned pitch contour and the score, the individual $N \times 1$ sequences are fed separately to the two encoders. \subsubsection{Model Architecture} This network (see \figref{fig: je_model}) uses two 1-D convolutional encoders having the same architecture as SIConvNet. Each encoder has 4 convolutional blocks to extract high-level feature embeddings. The performance encoder is expected to extract relevant features pertaining to the performance from the pitch contour. On the other hand, the score encoder is expected to extract the important features from the score. Assuming that the assessment rating for the performance is high if these two embeddings are similar, we use the cosine similarity $\mathrm{cos}(E_\mathrm{score}, E_\mathrm{performance})$ between the two embeddings to obtain the predicted assessment rating.
$E_\mathrm{score}$ and $E_\mathrm{performance}$ are the embeddings obtained from the score and performance encoders, respectively. \ashis{If the two embeddings are similar}, the cosine similarity is close to one, and the model \jiawen{will} predict a higher rating. \subsection{Distance Matrix Network (DistMatNet)} \label{sec:dm_network} The final approach uses a distance matrix between the pitch contour and the score as the input to the network. Given the information from both the \ashis{pitch contour and the score}, the task of performance assessment might be interpreted as finding a \textit{performance distance} between them. Thus, the choice of the distance matrix as the input representation allows the model to learn from the pitch differences. A Residual CNN \cite{He2016DeepRL} architecture is chosen for the network. \begin{figure}[t] \centering \includegraphics[width =1\columnwidth]{figs/DM_model.pdf} \caption{Schematic of the DistMatNet architecture.} \label{fig: dm_model} \end{figure} \subsubsection{Input Representation} The distance matrix elements are the pair-wise wrapped distances between the pitch contour and the MIDI pitch sequence. The octave-independent wrapped-distance is used to compensate for possible octave errors made by the pitch tracker. To ensure a uniform input size to the network, the matrix is resampled to a square shape of a fixed size. Thus, a performance with constant tempo would result in an aligned path located on the diagonal. Unlike the previous two methods where the input pitch contour and the score are aligned using DTW, the distance matrix input avoids any error propagation caused by alignment errors. The choice of this input representation stems from the success of distance matrices (or self-similarity matrices) in other areas of MIR such as structural segmentation \cite{grill2015music,cohen2017music} and music generation \cite{wei2019generating}. 
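A minimal sketch of this input representation follows. The mod-12 wrapping is our assumption of how the octave-independent wrapped distance is computed, and nearest-neighbour index selection stands in for the unspecified resampling step:

```python
import numpy as np

def octave_wrapped_distance_matrix(pitch_contour, score, size=600):
    """Pairwise octave-independent pitch distance between a performance
    pitch contour and a score MIDI sequence, resampled to a fixed
    size x size matrix. Mod-12 wrapping compensates for octave errors
    made by the pitch tracker (an assumed implementation detail)."""
    p = np.asarray(pitch_contour, dtype=float)
    s = np.asarray(score, dtype=float)
    # Absolute semitone difference, wrapped into the octave: range [0, 6].
    diff = np.abs(p[:, None] - s[None, :]) % 12.0
    d = np.minimum(diff, 12.0 - diff)
    # Resample to a fixed square size by nearest-neighbour index selection.
    rows = np.linspace(0, len(p) - 1, size).round().astype(int)
    cols = np.linspace(0, len(s) - 1, size).round().astype(int)
    return d[np.ix_(rows, cols)]
```

With a constant-tempo performance, the low-distance path of such a matrix lies on the diagonal, as noted above.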
\subsubsection{Model Architecture} The model architecture is shown in \figref{fig: dm_model}. It is composed of 3 residual blocks. Each residual block has 2 convolutional layers. Dropout and max-pooling are added between each residual block. A classifier with two linear layers (128 features) with one ReLU and dropout layer in between is used after the residual network to perform the regression prediction. \amy{We use a (3,3) kernel size and 4 feature maps for all convolutional layers, a 0.2 dropout rate, and a (3,3) kernel size for all max-pooling layers.} \section{Experiments} \label{sec:exp} \subsection{Dataset} The dataset we use to evaluate our methods is a subset of a large dataset of middle school and high school student performances. These are recorded for the Florida All State Auditions, which are separated into three bands: \begin{inparaenum}[(i)] \item middle school band, \item concert band, and \item symphonic band. \end{inparaenum} The recordings contain auditions spanning 6 years (from 2013 to 2018), and feature several \ashis{monophonic} pitched and percussion instruments. Each student performs rehearsed scores, scales, and a sight-reading exercise. For the purpose of this study, we limit our experiments to \ashis{the \textit{technical etude}} for middle school and symphonic band auditions. We choose \textit{Alto Saxophone}, \textit{Bb Clarinet} and \textit{Flute} performances due to these being the most popular across all pitched instruments. \tabref{tab: data_dist} shows the distribution of data across different instruments. \ashis{The average duration of each performance is \unit[30]{s} for middle school and \unit[50]{s} for symphonic band students.} \ashis{The dataset also includes the musical scores that the students are supposed to perform for each exercise}.
\jiawen{The average length (in notes) of the musical scores is 136 for middle school and 292 for symphonic band.} Note that \ashis{these scores} are the same across all students performing the same instrument in the same year but vary across years and instruments. The dataset also contains expert assessments for each exercise of a student performance. \ashis{Each performance is rated by one expert along} $4$ criteria \ashis{defined by the Florida Bandmasters' Association (FBA)}: \begin{inparaenum}[(i)] \item musicality, \item note accuracy, \item rhythmic accuracy, and \item tone quality. \end{inparaenum} \ashis{All ratings are on a point-based scale and are normalized to range between $0$ and $1$ by dividing by the maximum.} Since we focus on pitch contours as the primary audio feature, tone quality is excluded from this study. \begin{table} \begin{tabular}{l|c|c} & \multicolumn{1}{l|}{Middle School} & \multicolumn{1}{l}{Symphonic Band} \\ \hline Alto Saxophone & {\color[HTML]{000000} 696} & {\color[HTML]{000000} 641} \\ \hline Bb Clarinet & {\color[HTML]{000000} 925} & {\color[HTML]{000000} 1156} \\ \hline Flute & {\color[HTML]{000000} 989} & {\color[HTML]{000000} 1196} \end{tabular} \caption{Number of performances for the different instruments per band.} \label{tab: data_dist} \end{table} \subsubsection{Data pre-processing} The pitch contours are extracted using the pYIN algorithm \cite{mauch2014pyin} with a block size and hop size of $1024$ and $256$ samples, respectively. The audio sampling rate is \unit[44100]{Hz}. The extracted frequencies are converted from Hz to MIDI pitch \jiawen{(unlike the MIDI pitches from the musical score, these can be floating point numbers)}. Both the resulting pitch contour and musical score are normalized by dividing by $127$. Finally, for the purpose of model training and evaluation, we divide our dataset into three randomly sampled subsets: training, validation, and testing.
We use a ratio of $8\colon1\colon1$ for splitting the dataset. We use \textit{random-chunking} as a data augmentation tool when training SIConvNet and JointEmbedNet since it has been shown to be useful in improving model performance \cite{pati2018assessment}. First, the pitch contour is chunked into snippets of length $N$ by randomly selecting the starting position. The corresponding aligned and length-adjusted score snippet is obtained using the method described in \secref{sec:input_rep}. We assume the chunked segment has the same assessment score as the whole recording. We do not perform chunking on our distance matrix since the matrix has already been resampled into a smaller resolution. Instead, we discuss how varying the resampling size could affect the performance in one of the experiments. \subsection{Experimental Setup} We present three experiments to evaluate our proposed methods. First, we compare the overall performance of the proposed architectures against a score-independent baseline system PCConvNet \cite{pati2018assessment} which uses only the randomly-chunked pitch contour as input. This experiment also gives us an indication of the effectiveness of each of the proposed methods. Second, we look at the sensitivity of the SIConvNet and JointEmbedNet to the chunk size $N$. Finally, we investigate the effect of varying the resolution of the input distance matrix for the DistMatNet model. The latter two experiments were aimed at understanding the effects of the different hyper-parameters used while constructing the input data for each model. These helped us arrive at the best parameters for each approach.
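The pre-processing and random-chunking steps described above can be sketched as follows (the Hz-to-MIDI formula is the standard conversion; the function names and the fixed seed are illustrative, not the authors'):

```python
import math
import random

def hz_to_midi(freq_hz):
    """Standard Hz-to-MIDI conversion; fractional values are allowed
    for a tracked pitch contour (A4 = 440 Hz = MIDI 69)."""
    return 69.0 + 12.0 * math.log2(freq_hz / 440.0)

def preprocess_and_chunk(contour_hz, chunk_len, seed=0):
    """Convert a pitch contour to MIDI, normalize by 127, and draw one
    random fixed-length chunk (random-chunking data augmentation).
    The chunk inherits the whole recording's assessment rating."""
    midi = [hz_to_midi(f) / 127.0 for f in contour_hz]
    rng = random.Random(seed)
    start = rng.randrange(0, len(midi) - chunk_len + 1)
    return midi[start:start + chunk_len]
```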
\begin{table}[] \centering \begin{tabular}{c|c|c} \phantom{aa}SIConvNet\phantom{aa} & JointEmbedNet & \phantom{aa}DistMatNet\phantom{aa}\\ \hline 3,089&6,144&63,417 \end{tabular} \caption{\amy{Number of parameters for each model.}} \label{tab:parameters} \end{table} \ashis{The number of trainable parameters for each method is shown in Table~\ref{tab:parameters}. DistMatNet has a higher number of parameters because it uses a higher-dimensional input with a deeper architecture to capture high-level information \cite{He2016DeepRL}.} For each method, we trained separate models to predict each assessment criterion. Moreover, to measure the variation of each model, we trained each model on 10 different random seeds. We use $M_i$ to denote the model trained with the $i$-th random seed, where $i = 0, \ldots, 9$. A boxplot with the median and variation of each $M_i$ is shown to demonstrate the results. \amy{All the models are trained based on the mean squared error between estimated and ground truth ratings.} All the models are trained with a stochastic gradient descent optimizer and a learning rate of 0.05. We apply early stopping if the validation loss does not improve for 100 epochs. The performance of all models is measured using the coefficient of determination ($R^2$ score): \begin{equation} R^2 = 1-\frac{\sum_{i}{} (y_i - \hat{y_i})^2 }{\sum_{i}{} (y_i - \bar{y_i})^2}\,, \label{eq: loss} \end{equation} where $y_i$ is the ground truth rating, $\hat{y_i}$ is the estimated rating, and $\bar{y_i}$ is the average of the ground truth ratings. $R^2$ is a common metric to evaluate the fit of a regression prediction to the ground truth value.
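The evaluation metric can be computed directly from its definition, for instance:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot,
    as defined in the evaluation equation above."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred))
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives $R^2 = 1$, while always predicting the mean of the ground truth gives $R^2 = 0$.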
\section{Results \& Discussion} \begin{figure*}[tb] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/results_middle.pdf} \\[\abovecaptionskip] \small (a) Middle School \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/results_symphonic.pdf} \\[\abovecaptionskip] \small (b) Symphonic Band \end{tabular} \caption{Box plots showing comparative performance (higher is better) across different models and assessment criteria.} \label{fig:results_combined} \end{figure*} \subsection{Overall Performance} \figref{fig:results_combined} shows the comparative performance for all models for middle school and symphonic band. We can make the following observations (\jiawen{with independent t-test results reported}): \begin{compactenum}[(i)] \item We compare the performance of various models trained on different band performances. All systems perform better (higher $R^2$ value) on the middle school recordings than on the symphonic band recordings \jiawen{($p<0.01$ except JointEmbedNet for musicality)}. One possible explanation for this is that symphonic band scores are usually more complicated and longer. For example, symphonic band scores tend to be performed at high tempo with high note density. The chunking into smaller lengths (and the downsampling of the distance matrix) compared to the score length might lead to a less accurate mapping to the assessment rating. An additional factor is that most performers in the symphonic band auditions exhibit greater skill level than middle school performers thus making it potentially more difficult to model the differences in proficiency levels. \item All score-informed models generally outperform the baseline, implying that score information is indeed helpful for MPA \jiawen{($p<0.01$ except SIConvNet for musicality, DistMatNet for musicality and note accuracy on middle school)}. 
We notice, however, that the difference between the score-independent baseline and the score-informed models is smaller for the middle school than for symphonic band. Given the significant improvement over the baseline for symphonic band performances (which have complicated scores), we speculate that the score-informed models benefit more from access to score information. In other words, a score reference becomes more impactful with increasing proficiency level while the pitch contour alone contains most relevant information for medium proficiency levels. \item While the two models SIConvNet and JointEmbedNet both use the same input features, JointEmbedNet either outperforms or matches SIConvNet in all experiments. The main difference between these two architectures is that SIConvNet simply performs a regression to estimate the assessments while JointEmbedNet learns a similarity in the embedding space to model the assessments. Therefore, we can assume that JointEmbedNet is able to explicitly model the differences between the input pitch contour and score especially in the case of symphonic band where the scores are more complicated. \item We observe that while DistMatNet and JointEmbedNet both utilize the similarity between the score and pitch contour, albeit at different stages of the network, JointEmbedNet typically performs better across categories and bands, and the gap \jiawen{is larger for musicality than for the other two categories}. It is possible that the absolute pitch at the input may be important for the final assessment (octave jumps, for example, would not be properly modeled in the distance matrix). More likely, however, is that the significantly larger input dimensionality of the matrix (compared to the aligned sequences for JointEmbedNet) negatively impacts performance. Most of the relevant information for MPA centers around the diagonal of the distance matrix with relatively small deviations depending on the students' tempo variation. 
Most of the distance matrix elements far from the diagonal contain redundant or irrelevant information, thus complicating the task. \jiawen{Another advantage that JointEmbedNet might have over DistMatNet in terms of overall assessment is that the distance is computed on the whole performance while DistMatNet computes a frame-level pitch distance, potentially complicating the task for overall quality measures like musicality.} \end{compactenum} \begin{figure}[tb] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/chunk_size_results_middle.pdf} \\[\abovecaptionskip] \small (a) Middle School \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/chunk_size_results_symphonic.pdf} \\[\abovecaptionskip] \small (b) Symphonic Band \end{tabular} \caption{Box plots showing comparative performance (higher is better) across different chunk sizes for SIConvNet and JointEmbedNet.} \label{fig:chunk_size_results} \end{figure} \subsection{Chunk Size} In this experiment, we look at the impact of two different chunk sizes for the first two methods. \figref{fig:chunk_size_results} shows the results on middle school (a) and symphonic band (b). For both SIConvNet and JointEmbedNet, a chunk size of \unit[10]{s} outperforms that of \unit[5]{s} across all the bands. Chunking with random sampling is a form of data augmentation. By using the ground truth rating of the whole performance, the chunks are assumed to reflect the quality of the whole performance. The results show that \unit[5]{s} chunks might be too short to evaluate the whole performance while \unit[10]{s} chunks are much better suited regardless of category and score complexity. Chunk lengths greater than \unit[10]{s} were not tested because we restricted ourselves to the length of the shortest performance in the dataset. Consequently, we used a \unit[10]{s} chunk size for the experiment in \figref{fig:results_combined}. 
\begin{figure}[tb] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/dist_mat_results_middle.pdf} \\[\abovecaptionskip] \small (a) Middle School \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=0.45\textwidth]{figs/dist_mat_results_symphonic.pdf} \\[\abovecaptionskip] \small (b) Symphonic Band \end{tabular} \caption{Box plots showing comparative performance (higher is better) across different matrix sizes for the distance matrix network (DistMatNet).} \label{fig:dist_mat_results} \end{figure} \subsection{Distance Matrix Resolution} In this experiment, we study the impact of the different input matrix resolutions $400\times 400$, $600\times 600$, and $900\times 900$, for the DistMatNet model. The results for both middle school and symphonic band are shown in \figref{fig:dist_mat_results}. First, the performance on the rhythmic accuracy criterion tends to decrease with increasing distance matrix resolution. It might be more difficult for the same model structure to capture the complexity inside a larger matrix. This can also explain the result for middle school: although increasing the input resolution from $400\times 400$ to $600\times 600$ captures more details, the performance decreases when the matrix resolution is further increased. Second, an input matrix size of $600\times 600$ leads to a slightly higher average score (0.46) on both symphonic and middle school than the other two resolutions (0.45 for $400\times 400$ and 0.43 for $900\times 900$). We ended up using the $600\times 600$ resolution for the experiment in \figref{fig:results_combined}. \section{Conclusion} This paper presents three novel neural network-based methods that combine score information with a pitch representation of an audio recording to assess a music performance.
The methods include: \begin{inparaenum}[(i)] \item a CNN with aligned pitch contour and score as the input, \item a joint embedding model that learns the assessment as the cosine similarity of the embeddings of both the aligned pitch contour and the score, and \item a distance-matrix based CNN, using a differential representation of pitch contour and score at the input. \end{inparaenum} The results show that all the methods outperform the score-independent baseline model. The joint embedding model achieves the highest average performance. \ashis{Beyond the obvious applications in software-based music tutoring systems, score-informed performance assessment models (and objective MPA in general) can benefit the broader area of music performance analysis. Models capable of rating performances along different criteria could serve as useful tools for objective evaluation of generative systems of music performance. In addition, such models could also be explored for objective analysis of inter-annotator differences in rating music performances.} In the future, we plan to incorporate timbre and dynamics information into the models as it has been shown to improve accuracy \cite{pati2018assessment}. This will also enable the model to assess performances in terms of tone quality, the criterion ignored in this study. We also plan to investigate other instruments and to examine cross-instrument relationships by training instrument-specific models. \jiawen{Furthermore, the musical score reference could be replaced with other representations such as the pitch contour of a highly-rated performance.} \section{Acknowledgment}\label{sec:acknowledgement} We would like to thank the Florida Bandmasters Association for providing the dataset used in this study. We also gratefully acknowledge Microsoft Azure, which supported this research by providing computing resources via the Microsoft Azure Sponsorship.
\section{Introduction} \label{sec:introduction} Consider a vector-valued \ac{awgn} system specified by \begin{align} {\boldsymbol{y}}=\mathbf{A} {\boldsymbol{x}} + {\boldsymbol{z}} \label{eq:sys-1} \end{align} where the \ac{iid} source vector ${\boldsymbol{x}}_{n \times 1}$, with components in a support set $\mathbbmss{X} \subset \mathbbmss{R}$, is measured by the random system matrix $\mathbf{A}_{k \times n} \in \mathbbmss{A}^{k \times n}$, with $\mathbbmss{A}\subset \mathbbmss{R}$, and corrupted by the \ac{iid} zero-mean Gaussian noise vector ${\boldsymbol{z}}_{k \times 1}$, with variance $\lambda_0$, i.e., ${\boldsymbol{z}} \sim \mathcal{N}(\boldsymbol{0},\lambda_0 \mathbf{I})$. The source vector can be estimated from the observation vector ${\boldsymbol{y}}_{k \times 1}$ using a \ac{map} estimator. For a given system matrix $\mathbf{A}$, the estimator maps the observation vector to the estimated vector ${\boldsymbol{\hat{x}}}_{n \times 1} \in \mathbbmss{X}^n$ via the estimation function~${\mathbf{g}}(\cdot | \mathbf{A})$~defined~as \begin{align} {\mathbf{g}}({\boldsymbol{y}} | \mathbf{A})= \arg \min_{{\boldsymbol{v}} \in \mathbbmss{X}^n} \ \left[ \frac{1}{2\lambda} \norm{{\boldsymbol{y}}-\mathbf{A} {\boldsymbol{v}}}^2 + u({\boldsymbol{v}}) \right] \label{eq:int-2} \end{align} for some ``utility function'' $u(\cdot): \mathbbmss{R}^n \mapsto \mathbbmss{R}^+$ and estimation parameter $\lambda \in \mathbbmss{R}^+$. In \eqref{eq:int-2}, $\norm{\cdot}^2$ denotes the Euclidean norm, and it is assumed that the minimum is not degenerate so that ${\mathbf{g}}(\cdot | \mathbf{A})$ is well-defined, at least for almost all ${\boldsymbol{y}}$ and $\mathbf{A}$. In order to analyze the performance of the system in the large-system limit, i.e., $k,n \uparrow \infty$, one considers a general distortion function $\mathsf{d}(\cdot;\cdot): \mathbbmss{X} \times \mathbbmss{X} \mapsto \mathbbmss{R}$.
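For a finite support $\mathbbmss{X}$ and small $n$, the estimation function ${\mathbf{g}}(\cdot|\mathbf{A})$ in \eqref{eq:int-2} can be evaluated by exhaustive search (a toy numerical sketch; the utility function passed in is a placeholder, not a choice made in this paper):

```python
import itertools
import math

def map_estimate(y, A, support, lam, utility):
    """Brute-force MAP estimate: minimize ||y - A v||^2 / (2 lam) + u(v)
    over all v in support^n. Only feasible for very small n, since the
    search space grows as |support|^n."""
    n = len(A[0])
    best, best_cost = None, math.inf
    for v in itertools.product(support, repeat=n):
        Av = [sum(a * x for a, x in zip(row, v)) for row in A]
        cost = sum((yi - ai) ** 2 for yi, ai in zip(y, Av)) / (2 * lam) + utility(v)
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

With a trivial utility ($u \equiv 0$), this reduces to nearest-point decoding on the discrete support, which illustrates why the analysis becomes intractable as $n$ grows.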
For some choices of $\mathsf{d}(\cdot;\cdot)$, the distortion function measures the distance between the source and the estimated vector, e.g., $\mathsf{d}({\hat{x}};x)=\abs{{\hat{x}}-x}^2$; in general, however, it may take other forms. The asymptotic distortion \begin{align} \mathsf{D} = \lim_{n \uparrow \infty} \frac{1}{n} \sum_{i=1}^n \mathsf{d}({\hat{x}}_i;x_i), \label{eq:int-2.1} \end{align} then expresses the large-system performance with respect to the distortion function $\mathsf{d}(\cdot;\cdot)$. The performance analysis of the estimator requires \eqref{eq:int-2} to be computed explicitly, and the result ${\boldsymbol{\hat{x}}}={\mathbf{g}}({\boldsymbol{y}}|\mathbf{A})$ to be substituted into the distortion function. This task, however, is not trivial for many choices of the utility function and the source support $\mathbbmss{X}$, and becomes infeasible as $n$ grows large. As basic analytic tools fail, we take a statistical mechanical approach and investigate the large-system performance by studying the macroscopic parameters of a corresponding spin glass. This approach enables us to use the replica method, which has been developed in the context of statistical mechanics. \subsection{Corresponding Spin Glass} \label{sec:spin_glasses} Consider a thermodynamic system which consists of $n$ particles, each having a microscopic parameter $v_i \in \mathbbmss{V} $. The vector ${\boldsymbol{v}}_{n \times 1}= \left[ v_1,\ldots,v_n\right]^\mathsf{T}$, collecting the microscopic parameters, then represents the microscopic state of the system and is called the ``microstate''. The main goal of statistical mechanics is to extract the ``macroscopic parameters'' of the system, such as energy and entropy, through analysis of the microstate in the thermodynamic limit, i.e., $n \uparrow \infty$.
Due to the large dimension of the system, statistical mechanics proposes a stochastic approach in which the microstate is supposed to be randomly distributed over the support $\mathbbmss{V}^n$ according to some distribution $\mathrm{p}_{{\boldsymbol{v}}}$. For this system, the Hamiltonian $\mathcal{E}(\cdot): \mathbbmss{R}^n \mapsto \mathbbmss{R}^+$ assigns to each realization of the microstate a non-negative energy level, and $\mathrm{H} \coloneqq -\mathsf{E}\hspace{.5mm}_{\mathrm{p}_{{\boldsymbol{v}}}} \log \mathrm{p}_{{\boldsymbol{v}}}$ denotes the system's entropy. The ``free energy'' of the thermodynamic system at the inverse temperature $\upbeta$ is then defined as \begin{align} \mathsf{F}(\upbeta) \coloneqq \mathsf{E}\hspace{.5mm}_{\mathrm{p}_{{\boldsymbol{v}}}} \mathcal{E}({\boldsymbol{v}})-\upbeta^{-1} \mathrm{H}. \label{eq:int-3} \end{align} The second law of thermodynamics states that at thermal equilibrium the microstate takes the distribution which minimizes the free energy. Thus, the microstate's distribution at thermal equilibrium reads \begin{align} \mathrm{p}^{\upbeta}_{{\boldsymbol{v}}}({\boldsymbol{v}})= \left[ \mathcal{Z}(\upbeta)\right]^{-1} e^{-\upbeta \mathcal{E}({\boldsymbol{v}})} \label{eq:int-4} \end{align} where $\mathcal{Z}(\upbeta)$ is a normalization factor referred to as the ``partition function'', and the superscript $\upbeta$ indicates the distribution's dependence on the inverse temperature. The distribution in \eqref{eq:int-4} is known as the ``Boltzmann-Gibbs distribution'' and covers many distributions on $\mathbbmss{V}^n$ by specifying $\mathcal{E}(\cdot)$ and $\upbeta$ correspondingly. Substituting the Boltzmann-Gibbs distribution in \eqref{eq:int-3}, the free energy at thermal equilibrium and inverse temperature $\upbeta$ reads \begin{align} \mathsf{F}(\upbeta) = -\upbeta^{-1} \log \mathcal{Z}(\upbeta).
\label{eq:int-5} \end{align} The average energy and entropy of the system at thermal equilibrium are then determined by taking expectation over the distribution in \eqref{eq:int-4}, i.e., \begin{subequations} \begin{align} \mathrm{E}(\upbeta) &\coloneqq \mathsf{E}\hspace{.5mm}_{\mathrm{p}_{{\boldsymbol{v}}}^\upbeta} \mathcal{E}({\boldsymbol{v}})\\ \mathrm{H}(\upbeta) &\coloneqq -\mathsf{E}\hspace{.5mm}_{\mathrm{p}_{{\boldsymbol{v}}}^\upbeta} \log \mathrm{p}^{\upbeta}_{{\boldsymbol{v}}}({\boldsymbol{v}}), \end{align} \end{subequations} which can be calculated in terms of the free energy via \begin{subequations} \begin{align} \mathrm{E}(\upbeta) &= \frac{\mathrm{d}}{\mathrm{d} \upbeta} \left[ \upbeta \mathsf{F}(\upbeta) \right] \label{eq:int-5.1a} \\ \mathrm{H}(\upbeta) &= \upbeta^2 \frac{\mathrm{d}}{\mathrm{d} \upbeta} \left[ \mathsf{F}(\upbeta) \right]. \label{eq:int-5.1b} \end{align} \end{subequations} In spin glasses \cite{edwards1975theory}, the Hamiltonian assigns the energy levels randomly using some randomizer $\mathsf{Q}$ resulting from random interaction coefficients. In fact, each realization of $\mathsf{Q}$ specifies a thermodynamic system represented by the deterministic Hamiltonian $\mathcal{E}(\cdot|\mathsf{Q})$. In statistical mechanics, $\mathsf{Q}$ is known to have ``quenched'' randomness while the microstate is an ``annealed'' random variable. The analysis of spin glasses takes similar steps as above considering a given realization of the randomizer, and therefore, as the system converges to its thermal equilibrium at the inverse temperature $\upbeta$, the microstate's conditional distribution given $\mathsf{Q}$, i.e., $\mathrm{p}^\upbeta_{{\boldsymbol{v}}|\mathsf{Q}}$, is a Boltzmann-Gibbs distribution specified by $\mathcal{E}(\cdot|\mathsf{Q})$. Consequently, the free energy reads \begin{align} \mathsf{F}(\upbeta|\mathsf{Q}) = -\upbeta^{-1} \log \mathcal{Z}(\upbeta|\mathsf{Q}) \label{eq:int-6}. 
\end{align} where $\mathcal{Z}(\upbeta|\mathsf{Q})$ is the partition function with respect to the Hamiltonian $\mathcal{E}(\cdot|\mathsf{Q})$. Here, the free energy, as well as other macroscopic parameters of the system, is random; however, the physical intuition behind the analyses suggests that these random macroscopic parameters converge to deterministic values in the thermodynamic limit. This property is known as the ``self averaging property'' and has been rigorously justified for some particular classes of Hamiltonians, e.g., \cite{pastur1991absence,guerra2002thermodynamic,guerra2002infinite,korada2010tight}. Nevertheless, in cases where a mathematical proof is still lacking, the property is supposed to hold during the analysis. According to the self averaging property, the free energy of spin glasses converges to its expectation in the thermodynamic limit. As mentioned earlier, the \ac{map} estimator in \eqref{eq:int-2} can be investigated using a corresponding spin glass. To see that, consider a spin glass whose microstate is taken from $\mathbbmss{X}^n$, and whose Hamiltonian is defined as \begin{align} \mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})=\frac{1}{2\lambda} \norm{{\boldsymbol{y}}-\mathbf{A} {\boldsymbol{v}}}^2 + u({\boldsymbol{v}}). \label{eq:int-7} \end{align} Here, the system matrix $\mathbf{A}$ and the observation vector ${\boldsymbol{y}}$ are considered to be the randomizers of the spin glass. In this case, given $\mathbf{A}$ and ${\boldsymbol{y}}$, the conditional distribution of the microstate is given by \begin{align} \mathrm{p}^{\upbeta}_{{\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})= \left[ \mathcal{Z}(\upbeta|{\boldsymbol{y}},\mathbf{A})\right]^{-1} e^{-\upbeta \mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})}. 
\label{eq:int-8} \end{align} Taking the limit $\upbeta \uparrow \infty$ and using the Laplace method of integration \cite{merhav2010statistical}, the zero-temperature distribution, under the assumption that the minimizer is unique, reduces to \begin{subequations} \begin{align} \mathrm{p}^{\infty}_{{\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})&= \mathbf{1} \{ {\boldsymbol{v}} = \arg \min_{{\boldsymbol{v}} \in \mathbbmss{X}^n} \mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})\}\label{eq:int-9a} \\ &=\mathbf{1} \{ {\boldsymbol{v}} = {\mathbf{g}}({\boldsymbol{y}} | \mathbf{A}) \}, \label{eq:int-9b} \end{align} \end{subequations} where $\mathbf{1}\{\cdot\}$ denotes the indicator function, and ${\mathbf{g}}(\cdot |\mathbf{A})$ is defined as in \eqref{eq:int-2}. Equation \eqref{eq:int-9b} indicates that the microstate of the spin glass converges to the estimated vector of the \ac{map} estimator, i.e., ${\boldsymbol{\hat{x}}}={\mathbf{g}}({\boldsymbol{y}}|\mathbf{A})$, in the zero-temperature limit. Invoking this connection, we study the corresponding spin glass instead of the \ac{map} estimator. We represent the input-output distortion of the system with respect to a general distortion function $\mathsf{d}(\cdot;\cdot)$ as a macroscopic parameter of the spin glass. Consequently, the replica method developed in statistical mechanics is employed to determine this macroscopic parameter of the corresponding spin glass. The replica method is a generally nonrigorous but effective method developed in the physics literature to study spin glasses. Although the method lacks rigorous mathematical proof in some particular parts, it has been widely accepted as an analysis tool and utilized to investigate a variety of problems in applied mathematics, information processing, and coding \cite{mezard1986replica,fu1986application,nishimori2001statistical,montanari2000turbo}.
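The zero-temperature concentration in \eqref{eq:int-9b} is easy to verify numerically on a toy microstate space. The following Python sketch uses an arbitrary, hand-picked energy function (not the Hamiltonian \eqref{eq:int-7}) and checks that the Boltzmann-Gibbs distribution \eqref{eq:int-4} satisfies the free energy identity $\mathsf{F}(\upbeta)=\mathrm{E}(\upbeta)-\upbeta^{-1}\mathrm{H}(\upbeta)$ and concentrates on the energy minimizer as $\upbeta$ grows.

```python
import numpy as np

# Toy microstate space: all 2^3 binary microstates with an arbitrary,
# hand-picked energy function (not the Hamiltonian of the main text).
states = np.array([[(s >> i) & 1 for i in range(3)] for s in range(8)], float)
target = np.array([1.0, 0.0, 1.0])
energies = np.array([np.sum((v - target) ** 2) + 0.3 * v.sum() for v in states])

def gibbs(beta):
    """Boltzmann-Gibbs distribution plus free energy, average energy, entropy."""
    w = np.exp(-beta * energies)
    Z = w.sum()                          # partition function
    p = w / Z
    F = -np.log(Z) / beta                # free energy F = -log(Z)/beta
    E = np.sum(p * energies)             # average energy
    H = -np.sum(p * np.log(p))           # entropy
    return p, F, E, H

for beta in (1.0, 5.0, 20.0):
    p, F, E, H = gibbs(beta)
    assert np.isclose(F, E - H / beta)   # free energy identity holds exactly

# Zero-temperature limit: the mass piles onto the energy minimizer.
p, *_ = gibbs(20.0)
assert np.argmax(p) == np.argmin(energies)
assert p.max() > 0.99
```

Since the energy gap between the minimizer and the runner-up is bounded away from zero, the Boltzmann weight of every other microstate decays exponentially in $\upbeta$, which is exactly the mechanism behind the indicator in \eqref{eq:int-9a}.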
The use of the replica method for studying multiuser estimators goes back to \cite{tanaka2002statistical} where Tanaka determined the asymptotic spectral efficiency of \ac{mpm} estimators by employing the replica method. The study demonstrated interesting large-system properties of multiuser estimators, and consequently, the statistical mechanical approach received more attention in the context of multiuser systems. This approach was then employed in the literature to study multiple estimation problems in large vector-valued linear systems, e.g. \cite{tanaka2001average,guo2003multiuser,guo2005randomly}. The method was also utilized to analyze the asymptotic properties of \ac{mimo} systems in \cite{muller2003channel} considering an approach similar to \cite{tanaka2002statistical}. Regarding multiuser estimators, the earlier studies mainly considered the cases in which the entries of the source vector are binary or Gaussian random variables. The results were later extended to a general source distribution in \cite{guo2005randomly}. The statistical mechanical approach was further employed to address mathematically similar problems in vector precoding, compressive sensing and analysis of superposition codes \cite{muller2008vector,guo2009single,barbier2014replica}, to name just a few examples. Despite the fact that the replica method lacks mathematical rigor, a body of work, such as \cite{montanari2006analysis,huleihel2017asymptotic,korada2010tight,reeves2016replica, barbier2016mutual,barbier2017mutual,barbier2016threshold,barbier2017universal}, has shown the validity of several replica-based results in the literature, e.g., Tanaka's formula in \cite{tanaka2002statistical}, using some alternative rigorous approaches. 
We discuss these rigorous results in more detail later, invoking the literature of compressive sensing. \subsection{Decoupling Principle} \label{subsec:decoupling} Considering the \ac{map} estimator defined in \eqref{eq:int-2}, the entries of the estimated vector ${\boldsymbol{\hat{x}}}$ are correlated in general, since the system matrix couples the entries of ${\boldsymbol{x}}$ linearly, and ${\mathbf{g}}(\cdot|\mathbf{A})$ performs several nonlinear operations on ${\boldsymbol{y}}$. In the large-system performance analysis, the marginal joint distribution of two corresponding input-output entries $x_j$ and ${\hat{x}}_j$, $1\leq j\leq n$, is of interest. To clarify our point, consider the case in which a linear estimator is employed instead of \eqref{eq:int-2}, i.e., ${\boldsymbol{\hat{x}}}=\mathbf{G}^{\mathsf{T}} {\boldsymbol{y}}$. Denote the matrices $\mathbf{A}$ and $\mathbf{G}$ as $\mathbf{A}=[{\mathbf{a}}_1 \cdots {\mathbf{a}}_n]$ and $\mathbf{G}=[{\mathbf{g}}_1 \cdots {\mathbf{g}}_n]$, respectively, with ${\mathbf{a}}_i$ and ${\mathbf{g}}_i$ being $k \times 1$ vectors for $1\leq i\leq n$. Therefore, ${\hat{x}}_j$ is written as \begin{subequations} \begin{align} {\hat{x}}_j &= {\mathbf{g}}_j^\mathsf{T} {\boldsymbol{y}} \label{eq:int-10a} \\ &={\mathbf{g}}_j^\mathsf{T} \left[ \sum_{i=1}^n x_i {\mathbf{a}}_i + {\boldsymbol{z}} \right] \label{eq:int-10b}\\ &=({\mathbf{g}}_j^\mathsf{T} {\mathbf{a}}_j) \ x_j + \sum_{i=1, i\neq j}^n ({\mathbf{g}}_j^\mathsf{T} {\mathbf{a}}_i) \ x_i + {\mathbf{g}}_j^\mathsf{T} {\boldsymbol{z}}. \label{eq:int-10c} \end{align} \end{subequations} Here, the right-hand side of \eqref{eq:int-10c} can be interpreted as the linear estimate in a single-user system indexed by $j$, in which the symbol $x_j$ is corrupted by an additive impairment given by the last two summands on the right-hand side of \eqref{eq:int-10c}. The impairment term is, in general, neither independent of the signal term nor Gaussian.
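The three-term split in \eqref{eq:int-10c} is a purely algebraic identity and can be checked numerically. In the following Python sketch, the linear estimator is chosen, only for illustration, as a regularized least-squares matrix, and the dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, lam0 = 8, 16, 0.1
A = rng.normal(size=(k, n)) / np.sqrt(k)
x = rng.normal(size=n)
z = rng.normal(scale=np.sqrt(lam0), size=k)
y = A @ x + z

# Example linear estimator xhat = G^T y (regularized least squares).
G = A @ np.linalg.inv(A.T @ A + lam0 * np.eye(n))

j = 0
g_j = G[:, j]
signal = (g_j @ A[:, j]) * x[j]                             # desired term
interference = sum((g_j @ A[:, i]) * x[i] for i in range(n) if i != j)
noise = g_j @ z                                             # filtered noise
# The estimate equals signal + multiuser interference + filtered noise.
assert np.isclose(g_j @ y, signal + interference + noise)
```

The decoupling results discussed next concern the distribution of the interference-plus-noise sum in the large-system limit, not this finite-dimensional identity itself.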
For some classes of matrix ensembles, and under a set of assumptions, it is shown that the dependency of the derived single-user systems on the index $j$ vanishes, and the distribution of the impairment terms converges to a Gaussian distribution in the large-system limit \cite{guo2002asymptotic}. As a result, one can regard the vector-valued system described by \eqref{eq:sys-1}, followed by the linear estimator $\mathbf{G}$, as a set of $n$ additive scalar systems with Gaussian noise employed in parallel. In other words, the vector system can be considered to ``decouple'' into a set of similar scalar systems, each relating an input entry $x_j$ to its corresponding estimated entry ${\hat{x}}_j$. This asymptotic property of the estimator is referred to as the ``decoupling property'' and can be investigated through the large-system performance analysis. The decoupling property was first studied for linear estimators. Tse and Hanly noticed this property while determining the multiuser efficiency of several linear multiuser estimators in the large-system limit \cite{tse1999linear}. They showed that for an \ac{iid} system matrix, the effect of the impairment is similar to that of some modified Gaussian noise as the dimension tends to infinity. This asymptotic property was then investigated further by studying the asymptotics of different linear receivers and their large-system distributions \cite{tse2000linear, eldar2003asymptotic}. In an independent work, Verd\'u and Shamai also studied the linear \ac{mmse} estimator and showed that the conditional output distribution is asymptotically Gaussian \cite{verdu1999spectral}. In \cite{zhang2001output}, the authors studied the asymptotics of the impairment term when a family of linear estimators is employed and proved that it converges in distribution to a Gaussian random variable. The latter result was further extended to a larger class of linear estimators in \cite{guo2002asymptotic}.
Regarding linear estimators, the main analytical tool is random matrix theory \cite{tulino2004random,muller2013applications}. In fact, invoking properties of large random matrices and the central limit theorem, the decoupling property is rigorously proved, e.g., in \cite{guo1999linear,shamai2001impact}. These tools, however, fail for nonlinear estimators, as the source symbol and the impairment term do not decouple linearly due to the nonlinear operations at the estimator. In \cite{muller2004capacity}, M\"uller and Gerstacker employed the replica method and studied the capacity loss due to the separation of detection and decoding. The authors showed that the additive decoupling of the spectral efficiency, reported in \cite{shamai2001impact} for Gaussian inputs, also holds for binary inputs. As a result, it was conjectured that regardless of input distribution and linearity, the spectral efficiency always decouples in an additive form \cite{muller2002channel}. In \cite{guo2005randomly}, Guo and Verd\'u justified this conjecture for a family of nonlinear \ac{mmse} estimators, and showed that for an \ac{iid} system matrix, the estimator decouples into a bank of single-user \ac{mmse} estimators under the \ac{rs} ansatz. In \cite{rangan2012asymptotic}, Rangan et al. studied the asymptotic performance of a class of \ac{map} estimators. Using standard large deviation arguments, the authors represented the \ac{map} estimator as the limit of an indexed sequence of \ac{mmse} estimators. Consequently, they determined the estimator's asymptotics employing the results from \cite{guo2005randomly} and justified the decoupling property of \ac{map} estimators under the \ac{rs} ansatz for an \ac{iid} $\mathbf{A}$. Regarding the decoupling property of \ac{map} estimators, there are still two main issues which need further investigation: \begin{inparaenum} \item cases in which the system matrix $\mathbf{A}$ is not \ac{iid}, and \item the analysis of the estimator under the \ac{rsb} ans\"atze.
\end{inparaenum} The first issue was partially addressed in \cite{tulino2013support} where, under the \ac{rs} assumption, the authors studied the asymptotics of a \ac{map} estimator employed to recover the support of a source vector from observations received through noisy sparse random measurements. They considered a model in which a sparse Gaussian source vector\footnote{It means that the entries of the source vector are of the form $x_i b_i$ where $x_i$ and $b_i$ are Gaussian and Bernoulli random variables, respectively.} is first randomly measured by a square matrix $\mathbf{V}$, and then, the measurements are sparsely sampled by a diagonal matrix $\mathbf{B}$ whose non-zero entries are \ac{iid} Bernoulli random variables. For this setup, the input-output information rate and support recovery error rate were investigated by considering the measuring matrix $\mathbf{V}$ to belong to a larger set of matrix ensembles. These results, moreover, could address the decoupling property of the considered setting. Although the class of system matrices is broadened in \cite{tulino2013support}, it cannot be considered as a complete generalization of the property presented in \cite{guo2005randomly} and \cite{rangan2012asymptotic}, since it is restricted to cases with a sparse Gaussian source and loading factors less than one, i.e., $kn^{-1}<1$ in \eqref{eq:sys-1}. Vehkaper\"a et al. also tried to investigate the first issue for a similar formulation in compressive sensing \cite{vehkapera2014analysis}. In fact, the authors considered a linear sensing model as in \eqref{eq:sys-1} for the class of rotationally invariant random matrices\footnote{The class of rotationally invariant random matrices is precisely defined later throughout the problem formulation.} and under the \ac{rs} ansatz determined the asymptotic \ac{mse} for the least-square recovery schemes which can be equivalently represented by the formulation in \eqref{eq:int-2}. 
The large-system results in \cite{vehkapera2014analysis}, however, did not address the asymptotic marginal joint input-output distribution, as the emphasis was on the \ac{mse}. Regarding the second issue, the \ac{map} estimator has not yet been investigated under \ac{rsb} ans\"atze in the literature. Nevertheless, the necessity of such investigations was pointed out for various similar settings; see for example \cite{yoshida2007statistical,kabashima2009typical,zaidel2012vector}. In \cite{yoshida2007statistical}, the performances of \ac{cdma} detectors were investigated by studying both the \ac{rs} and one-step \ac{rsb} ans\"atze, and the impact of symmetry breaking on the results in low noise scenarios was discussed. The authors in \cite{zaidel2012vector} further studied the performance of vector precoding under both \ac{rs} and \ac{rsb} and showed that the analysis under \ac{rs} yields a significantly loose bound on the true performance. The replica ansatz with one step of \ac{rsb}, however, was shown to lead to a tighter bound consistent with the rigorous performance bounds available in the literature. A similar observation was recently reported for the problem of least-square-error precoding in \cite{sedaghat2016lse,bereyhi2017asymptotics}. The replica analyses of compressive sensing in \cite{kabashima2009typical,takeda2011statistical}, moreover, discussed the necessity of investigating the performance of $\ell_p$ minimization recovery schemes under \ac{rsb} ans\"atze for some choices of $p$. \subsection{Compressive Sensing} \label{subsec:compressive_sensing} The \ac{map} estimation of a source vector from a set of noisy linear observations arises in several applications, such as \ac{mimo} and sampling systems. To address one of these applications, we consider large compressive sensing systems and employ our asymptotic results to analyze the large-system performance \cite{donoho2006compressed,candes2006robust,candes2006near}.
In the context of compressive sensing, \eqref{eq:sys-1} represents a noisy sampling system in which the source vector ${\boldsymbol{x}}$ is sampled linearly via $\mathbf{A}$ and corrupted by the additive noise ${\boldsymbol{z}}$. In the ``noise-free'' case, i.e., $\lambda_0 = 0$, the source vector ${\boldsymbol{x}}$ is exactly recovered from the observation vector ${\boldsymbol{y}}$ if the number of observations $k$ is as large as the source length $n$ and the sampling matrix $\mathbf{A}$ is full rank. As the number of observations decreases, the set of possible solutions to the exact reconstruction problem may grow, depending on the source support $\mathbbmss{X}$, and therefore, the source vector recovered from the observation vector is not necessarily unique. In this case, one needs to enforce extra constraints on the properties of the source vector in order to recover it uniquely among all possible solutions. In compressive sensing, the source vector is supposed to be sparse, i.e., a certain fraction of its entries are zero. This property of the source imposes an extra constraint on the solution which allows for exact recovery in cases with $k < n$. In fact, in this case, one should find a solution to ${\boldsymbol{y}}=\mathbf{A} {\boldsymbol{v}}$ over \begin{align} \mathbbmss{S}=\left\lbrace {\boldsymbol{v}}_{n\times 1} \in \mathbbmss{X}^n: \ \norm{{\boldsymbol{v}}}_0 <\alpha n \right\rbrace \label{eq:cs-zero} \end{align} where $\norm{\cdot}_0$ denotes the ``$\ell_0$ norm'', defined as $\norm{{\boldsymbol{v}}}_0 \coloneqq \sum_{i=1}^n \mathbf{1}\{v_i \neq 0\}$, and $\alpha \leq 1$ is the source's sparsity factor, defined as the fraction of non-zero entries. Depending on $\mathbf{A}$ and $\mathbbmss{X}$, the latter problem can have a unique solution even for $k < n$ \cite{donoho2001uncertainty,elad2002generalized,donoho2003optimally}. Searching for this solution optimally over $\mathbbmss{S}$, however, is an NP-hard problem and therefore intractable.
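The combinatorial nature of the search over $\mathbbmss{S}$ can be seen directly: an exact $\ell_0$ search enumerates every candidate support, which scales exponentially in $n$. The following Python sketch (noise-free, with hypothetical toy dimensions) recovers a $2$-sparse real source from $k<n$ observations by exhausting all supports.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, k, s = 8, 5, 2                    # k < n observations, s-sparse real source
A = rng.normal(size=(k, n)) / np.sqrt(k)
x = np.zeros(n)
x[[1, 6]] = rng.normal(size=2)       # arbitrary 2-sparse source
y = A @ x                            # noise-free samples

# Exhaustive l0 search: for every support of size <= s, solve least squares
# restricted to that support and keep the candidate with smallest residual.
best, best_res = None, np.inf
for r in range(s + 1):
    for supp in itertools.combinations(range(n), r):
        v = np.zeros(n)
        if r:
            coef, *_ = np.linalg.lstsq(A[:, list(supp)], y, rcond=None)
            v[list(supp)] = coef
        res = np.linalg.norm(y - A @ v)
        if res < best_res:
            best, best_res = v, res

assert np.allclose(best, x, atol=1e-8)   # exact recovery despite k < n
```

Uniqueness here relies on every $2s$ columns of $\mathbf{A}$ being linearly independent, which holds almost surely for a Gaussian draw with $k \geq 2s$; the number of enumerated supports already grows as $\binom{n}{s}$, which is what makes the optimal search intractable at scale.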
The main goal in noise-free compressive sensing is to study feasible reconstruction schemes and derive tight bounds on the sufficient compression rate, i.e., $k/n$, for exact source recovery via these schemes. In noisy sampling systems, exact recovery is only possible for some particular choices of $\mathbbmss{X}$. Nevertheless, whether exact recovery is impossible or the choice of $\mathbbmss{X}$ allows the source vector to be exactly recovered from noisy observations, the recovery approaches in these sensing systems need to take the impact of noise into account. The classical strategy in this case is to find a vector in $\mathbbmss{S}$ such that the recovery distortion is small. Consequently, a recovery scheme for noisy sensing systems based on the $\ell_0$ norm is given by \begin{align} {\boldsymbol{\hat{x}}}=\arg \min_{{\boldsymbol{v}} \in \mathbbmss{X}^n} \ \left[ \frac{1}{2\lambda} \norm{{\boldsymbol{y}}-\mathbf{A} {\boldsymbol{v}}}^2 + \norm{{\boldsymbol{v}}}_0 \right] \label{eq:int-12} \end{align} which is the \ac{map} estimator defined in \eqref{eq:int-2} with $u({\boldsymbol{v}})=\norm{{\boldsymbol{v}}}_0$. It is straightforward to show that for $\lambda_0=0$, i.e., zero noise variance, \eqref{eq:int-12} reduces to the optimal noise-free recovery scheme as $\lambda \downarrow 0$. Similar to the noise-free case, the scheme in \eqref{eq:int-12} results in a non-convex optimization problem, and therefore, is computationally infeasible. Alternatively, a computationally feasible scheme is obtained by replacing the $\ell_0$ norm in the cost function with the $\ell_1$ norm. The resulting recovery scheme is known as LASSO \cite{tibshirani1996regression} or basis pursuit denoising \cite{chen2001atomic}.
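The $\ell_1$ relaxation yields a convex problem that proximal-gradient methods solve efficiently. The following Python sketch implements ISTA (iterative soft thresholding), one standard such solver; the dimensions, regularization weight and noise level are hypothetical, chosen only to illustrate the scheme.

```python
import numpy as np

def ista(y, A, lam, iters=500):
    """Minimize 0.5*||y - A v||^2 + lam*||v||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        g = v + A.T @ (y - A @ v) / L      # gradient step on the quadratic term
        v = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return v

rng = np.random.default_rng(4)
n, k = 40, 25
A = rng.normal(size=(k, n)) / np.sqrt(k)
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = rng.choice([-2.0, 2.0], size=4)
y = A @ x + rng.normal(scale=0.05, size=k)

xhat = ista(y, A, lam=0.02)
rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
```

At this sparsity level and noise variance the relative error is small; the soft-threshold step is exactly the proximal operator of the $\ell_1$ penalty, which is why the relaxed problem is tractable while \eqref{eq:int-12} is not.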
Based on these formulations, several iterative and greedy algorithms have been introduced for recovery, taking into account the sparsity pattern and the properties of the sampling matrix \cite{needell2009cosamp,dai2009subspace,cai2011orthogonal}. The main body of work in noisy compressive sensing then investigates the trade-off between the compression rate and the recovery distortion. For large compressive sensing systems, it is common to consider a random sensing matrix, since for these matrices, properties such as the restricted isometry property are shown to hold with high probability \cite{foucart2013mathematical}. In this case, the performance of a reconstruction scheme is analyzed by determining the considered performance metric, e.g., the \ac{mse} and the probability of exact recovery in the noisy and noise-free cases, respectively, for a given realization of the sensing matrix. The average performance is then calculated by taking the expectation over the matrix distribution. Comparing \eqref{eq:int-12} with \eqref{eq:int-2}, one can utilize the \ac{map} formulation, illustrated at the beginning of this section, to study the large-system performance of several reconstruction schemes. This similarity was considered in a series of papers, e.g., \cite{guo2009single,rangan2012asymptotic}, and therefore, earlier replica results were employed to study compressive sensing systems. Extending analyses from the context of multiuser estimation had the disadvantage that the assumed sampling settings were limited to those setups consistent with the estimation problems in the literature. Compressive sensing systems, however, might require a wider set of assumptions, and thus, a large class of settings could not be addressed by earlier investigations.
As a result, a body of work deviated from this approach and applied the replica method directly to the compressive sensing problem; see for example \cite{tulino2013support,vehkapera2014analysis,kabashima2010statistical,kabashima2009typical,wen2016sparse,zheng2017does}. Although in general the replica method is considered to be mathematically non-rigorous, several recent studies have justified the validity of the replica results in the context of compressive sensing by using alternative tools for analysis. A widely investigated approach is based on the asymptotic analysis of \ac{amp} algorithms. In the context of compressive sensing, the \ac{amp} algorithms were initially introduced to iteratively address the convex reconstruction schemes based on $\ell_1$ norm minimization, such as LASSO and basis pursuit, with low computational complexity \cite{donoho2009message,donoho2010message}. The proposed approach was later employed to extend the algorithm to a large variety of estimation problems including \ac{map} and \ac{mmse} estimation; see for example \cite{rangan2011generalized,montanari2012graphical}. Early numerical investigations of \ac{amp} demonstrated that for large sensing systems, the sparsity-compression rate tradeoff of these iterative algorithms, as well as the compression rate-distortion tradeoff in noisy cases, is described by the fixed points of the ``state evolution'' and recovers the asymptotics of convex reconstruction schemes \cite{donoho2010message}. This observation was then rigorously justified for \ac{iid} sub-Gaussian sensing matrices in \cite{bayati2011dynamics} by using the conditioning technique developed in \cite{bolthausen2009high}. The study was recently extended to cases with rotationally invariant system matrices in \cite{rangan2017vector,takeuchi2017rigorous}.
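For intuition, the basic \ac{amp} iteration for soft-threshold sparse recovery differs from plain iterative thresholding only by the Onsager correction term in the residual update. The following Python sketch, with hypothetical dimensions, a simple residual-based threshold rule, and an \ac{iid} Gaussian matrix (the setting in which \ac{amp} is known to behave well), is a minimal illustration rather than the algorithms of the cited works.

```python
import numpy as np

def amp(y, A, iters=30, tau=2.0):
    """Approximate message passing with soft thresholding. The Onsager term
    (last summand in the residual update) is what distinguishes AMP from
    plain iterative soft thresholding."""
    k, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(k)            # empirical noise level
        r = x + A.T @ z                                   # pseudo-data
        x_new = np.sign(r) * np.maximum(np.abs(r) - tau * sigma, 0.0)
        z = y - A @ x_new + z * np.count_nonzero(x_new) / k   # Onsager correction
        x = x_new
    return x

rng = np.random.default_rng(5)
n, k = 500, 250
A = rng.normal(size=(k, n)) / np.sqrt(k)                  # iid entries, variance 1/k
x = np.zeros(n)
x[rng.choice(n, 25, replace=False)] = rng.choice([-1.0, 1.0], size=25)
y = A @ x + rng.normal(scale=0.01, size=k)

xhat = amp(y, A)
rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
```

Tracking `sigma` across iterations mirrors, in a heuristic way, the state evolution recursion: the pseudo-data $r$ behaves entrywise like the source plus Gaussian noise of variance $\sigma^2$, which is precisely the decoupled picture discussed above.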
The investigations in \cite{donoho2013information,krzakala2012statistical} moreover showed that, using \ac{amp} algorithms for spatially coupled measurements, the fundamental limits on the required compression rate \cite{wu2012optimal,wu2011mmse} can be achieved in the asymptotic regime. The methodology proposed by \ac{amp} algorithms and their state evolution also provided a justification for the validity of several earlier studies based on the replica method. In fact, the results given by the replica method were recovered through the state evolution of the corresponding \ac{amp} algorithms. Invoking this approach along with other analytical tools, the recent study in \cite{barbier2017mutual} further confirmed the validity of the replica prediction for the asymptotic \ac{mmse} and mutual information of the linear system in \eqref{eq:sys-1} with \ac{iid} Gaussian measurements. Similar results were demonstrated in \cite{reeves2016replica} using a different approach. \subsection{Contribution and Outline} \label{subsec:contribution} In this paper, we determine the asymptotic distortion for a general distortion function in cases where the \ac{map} estimator is employed to estimate the source vector from the observation given in \eqref{eq:sys-1}. We represent the asymptotic distortion in \eqref{eq:int-2.1} as a macroscopic parameter of a corresponding spin glass and study the spin glass via the replica method. The general replica ansatz is then given for an arbitrary replica correlation matrix, and its special cases are studied under the \ac{rs} and \ac{rsb} assumptions. The asymptotic distortion is determined for rotationally invariant random system matrices invoking results on the asymptotics of spherical integrals \cite{guionnet2002large,guionnet2005fourier}.
Using our asymptotic results, we derive a more general form of the decoupling principle by restricting the distortion function to a special form and employing the moment method \cite{akhiezer1965classical,simon1998classical}. We show that the vector-valued system in \eqref{eq:sys-1}, estimated by \eqref{eq:int-2}, decouples into a bank of similar noisy single-user systems followed by single-user \ac{map} estimators. This result holds for any replica correlation matrix; the structure of the decoupled single-user system, however, depends on the supposed structure of the correlation matrix. Under the \ac{rsb} assumption with $b$ steps of breaking ($b$\ac{rsb}), the noisy single-user system takes the form of an input term plus an impairment term. The impairment term, moreover, is expressed as the sum of an independent Gaussian random variable and $b$ correlated interference terms. When the assumption reduces to \ac{rs}, the result recovers the formerly studied \ac{rs} decoupling principle of \ac{map} estimators \cite{rangan2012asymptotic} for rotationally invariant system matrix ensembles. In fact, our investigations bring the previous results on the decoupling principle, together with a new set of setups, under the single umbrella of a more general decoupling principle. More precisely, we extend the scope of the decoupling principle to \begin{itemize} \item systems whose measuring matrix belongs to the class of rotationally invariant random matrices, and \item the replica ansatz with general replica correlations, which includes the \ac{rs} and \ac{rsb} ans\"atze. \end{itemize} To address a particular application, we study the large-system performance of a compressive sensing system under $\ell_p$ minimization recovery schemes. We address linear reconstruction, as well as the LASSO and the $\ell_0$ norm schemes, considering both sparse Gaussian and finite alphabet sources.
Our general setting allows us to investigate the asymptotic performance with respect to different metrics and for multiple sensing matrix ensembles, such as random \ac{iid} and random projector matrices. The numerical investigations show that the \ac{rs} ansatz becomes unstable in some regimes of system parameters and predicts the performance of $\ell_0$ minimization recovery loosely within a large range of compression rates. This observation agrees with the earlier discussions on the necessity of \ac{rsb} investigations reported in \cite{kabashima2009typical,takeda2011statistical}. We therefore study the performance under \ac{rsb} and discuss the impact of the symmetry breaking. Throughout the numerical investigations, it is demonstrated that the performance enhancement obtained via random orthogonal measurements, reported in \cite{vehkapera2014analysis}, also holds for sparse finite alphabet sources, for which sensing via random projector matrices results in phase transitions at higher rates. The rest of the manuscript is organized as follows. In Section \ref{sec:problem_formulation}, the problem is formulated. We illustrate our statistical mechanical approach in Section \ref{sec:statisc_app} and briefly explain the replica method. The general replica ansatz, as well as the general decoupling principle, is given in Section \ref{sec:results}. The ans\"atze under the \ac{rs} and \ac{rsb} assumptions are expressed in Sections \ref{sec:rs} and \ref{sec:rsb}. Based on the $b$\ac{rsb} decoupled system, we propose the idea of a replica simulator in Section \ref{sec:rep_sim} and describe the given ans\"atze in terms of the corresponding decoupled systems. To address an application of our study, we consider large compressive sensing systems in Section \ref{sec:cs} and discuss several examples. The numerical investigations of these examples are then given in Section \ref{sec:numerics}. Finally, we conclude the manuscript in Section \ref{sec:conclusion}.
\subsection{Notations} Throughout the manuscript, we represent scalars, vectors and matrices with lower case, bold lower case, and bold upper case letters, respectively. $\mathbf{A}^{\mathsf{T}}$ and $\mathbf{A}^{\mathsf{H}}$ indicate the transpose and Hermitian transpose of $\mathbf{A}$. $\mathbf{I}_m$ is the $m\times m$ identity matrix and $\mathbf{1}_m$ is an $m \times m$ matrix with all entries equal to $1$. For a random variable $x$, $\mathrm{p}_x$ represents either the \ac{pmf} or \ac{pdf}, and $\mathrm{F}_x$ represents the \ac{cdf}. We denote the expectation over $x$ by $\mathsf{E}_x$, and an expectation over all random variables involved in a given expression by $\mathsf{E}$. $\mathbbmss{Z}$ and $\mathbbmss{R}$ represent the sets of integer and real numbers, respectively, and the superscript $+$, e.g. $\mathbbmss{R}^+$, indicates the corresponding subset of all non-negative numbers. For the sake of compactness, the set of integers $\{1, \ldots,n \}$ is abbreviated as $[1:n]$, the zero-mean and unit-variance Gaussian \ac{pdf} is denoted by $\phi(\cdot)$, and the Gaussian averaging is expressed as \begin{align} \int \left( \cdot \right) \mathrm{D} t \coloneqq \int_{-\infty}^{+\infty} \left(\cdot\right) \phi(t) \mathrm{d} t . \end{align} Moreover, in many cases, we drop the set over which a sum, minimization or integral is taken. Whenever needed, we consider the entries of ${\boldsymbol{x}}$ to be discrete random variables, namely the support $\mathbbmss{X}$ to be discrete. The results of this paper, however, hold in full generality and extend also to continuous distributions. \subsection{Announcements} Some of the results of this manuscript were presented at the IEEE Information Theory Workshop \cite{bereyhi2016itw} and the Information Theory and Applications Workshop \cite{bereyhi2017replica}.
Even though the results have a mathematical flavor, the stress is not on investigating the rigor of the available tools, such as the replica method, but rather on employing them to derive formulas which can be used in different problems. \section{Problem Formulation} \label{sec:problem_formulation} Consider the vector-valued linear system described by \eqref{eq:sys-1}. Let the system satisfy the following properties. \begin{enumerate}[label=(\alph*)] \item ${\boldsymbol{x}}_{n \times 1}$ is an \ac{iid} random vector with each entry distributed with $\mathrm{p}_x$ over $\mathbbmss{X} \subseteq \mathbbmss{R}$. \item $\mathbf{A}_{k \times n}$ is randomly generated over $\mathbbmss{A}^{k \times n}$ with $\mathbbmss{A} \subseteq \mathbbmss{R}$ from a rotationally invariant random ensemble. The random matrix $\mathbf{A}$ is said to be rotationally invariant when its Gramian, i.e., $\mathbf{J}=\mathbf{A}^{\mathsf{T}} \mathbf{A}$, has the eigendecomposition \begin{align} \mathbf{J}= \mathbf{U} \mathbf{D} \mathbf{U}^{\mathsf{T}} \label{eq:sys-1.2} \end{align} with $\mathbf{U}_{n \times n}$ being an orthogonal Haar distributed matrix and $\mathbf{D}_{n \times n}$ being a diagonal matrix. For a given $n$, we denote the empirical \ac{cdf} of $\mathbf{J}$'s eigenvalues (cumulative density of states) by $\mathrm{F}_{\mathbf{J}}^n$ and define it as \begin{align} \mathrm{F}_{\mathbf{J}}^n (\lambda) =\frac{1}{n} \sum_{i=1}^n \mathbf{1} \{ \lambda^{\mathbf{J}}_i < \lambda \} \end{align} where $\lambda^{\mathbf{J}}_i$ for $i \in [1:n]$ denotes the $i$th diagonal entry of $\mathbf{D}$. We assume that $\mathrm{F}_{\mathbf{J}}^n$ converges, as $n\uparrow\infty$, to a deterministic \ac{cdf} $\mathrm{F}_{\mathbf{J}}$.
\item ${\boldsymbol{z}}_{k \times 1}$ is a real \ac{iid} zero-mean Gaussian random vector in which the variance of each entry is $\lambda_0$. \item The number of observations $k$ is a deterministic function of the transmission length $n$, such that \begin{align} \lim_{n \uparrow \infty} \frac{k(n)}{n}=\frac{1}{\mathsf{r}} < \infty . \label{eq:eq:sys-1.3} \end{align} For the sake of compactness, we drop the explicit dependence of $k$ on $n$. \item ${\boldsymbol{x}}$, $\mathbf{A}$ and ${\boldsymbol{z}}$ are independent. \end{enumerate} The source vector ${\boldsymbol{x}}$ is reconstructed from the observation vector ${\boldsymbol{y}}$ with a system matrix $\mathbf{A}$ that is known at the estimator. Thus, for a given $\mathbf{A}$, the source vector is recovered by ${\boldsymbol{\hat{x}}}={\mathbf{g}}({\boldsymbol{y}}|\mathbf{A})$ where ${\mathbf{g}}(\cdot|\mathbf{A})$ is given in \eqref{eq:int-2}. % Here, the non-negative scalar $\lambda$ is the estimation parameter, and the non-negative cost function $u(\cdot)$ is referred to as the ``utility function''. The utility function $u(\cdot)$ is assumed to decouple, meaning that it acts entry-wise: with a slight abuse of notation, it takes arguments of different lengths, i.e., $u(\cdot): \mathbbmss{R}^\ell \to \mathbbmss{R}^+$ for any positive integer $\ell$, and \begin{align} u({\boldsymbol{x}})=\sum_{i=1}^n u(x_i). \end{align} In order to use the estimator in \eqref{eq:int-2}, one needs to guarantee the uniqueness of the estimation output. Therefore, we impose the following constraint on our problem. \begin{enumerate}[label=(\alph*)]\addtocounter{enumi}{5} \item For a given observation vector ${\boldsymbol{y}}$, the objective function in \eqref{eq:int-2} has a unique minimizer over the support $\mathbbmss{X}^n$.
\end{enumerate} \subsection{\ac{map} Estimator} The \ac{map} estimator in \eqref{eq:int-2} can be considered optimal in the sense that it minimizes the reconstruction's error probability postulating a source prior distribution proportional to $e^{-u({\boldsymbol{x}})}$ and a noise variance~$\lambda$. To clarify the argument, assume $\mathbbmss{X}$ is a finite set\footnote{i.e., the entries of ${\boldsymbol{x}}$ are discrete random variables.}. In this case, we can define the reconstruction's error probability~as \begin{align} \mathsf{P_e} = \Pr \{ {\boldsymbol{x}} \neq \tilde{{\mathbf{g}}}({\boldsymbol{y}}| \mathbf{A}) \} \label{eq:sys-3} \end{align} for some estimator $\tilde{{\mathbf{g}}}(\cdot| \mathbf{A})$. In order to minimize $\mathsf{P_e}$, $\tilde{{\mathbf{g}}}(\cdot| \mathbf{A})$ is chosen such that the posterior distribution over the input support $\mathbbmss{X}^n$ is maximized, i.e., \begin{subequations} \begin{align} \tilde{{\mathbf{g}}}({\boldsymbol{y}}| \mathbf{A}) &= \arg \max_{{\boldsymbol{v}}} \mathrm{p}_{{\boldsymbol{x}}|{\boldsymbol{y}},\mathbf{A}}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}) \label{eq:sys-4a} \\ &= \arg \max_{{\boldsymbol{v}}} \mathrm{p}_{{\boldsymbol{y}}|{\boldsymbol{x}},\mathbf{A}}({\boldsymbol{y}}|{\boldsymbol{v}},\mathbf{A}) \mathrm{p}_{{\boldsymbol{x}}|\mathbf{A}}({\boldsymbol{v}}|\mathbf{A}) \label{eq:sys-4b} \\ &\stackrel{\star}{=} \arg \min_{{\boldsymbol{v}}} \left[ - \log \mathrm{p}_{{\boldsymbol{y}}|{\boldsymbol{x}},\mathbf{A}}({\boldsymbol{y}}|{\boldsymbol{v}},\mathbf{A}) - \log \mathrm{p}_{{\boldsymbol{x}}}({\boldsymbol{v}}) \right] \label{eq:sys-4c} \end{align} \end{subequations} where $\star$ follows from the independence of ${\boldsymbol{x}}$ and $\mathbf{A}$.
Here, $\mathrm{p}_{{\boldsymbol{y}}|{\boldsymbol{x}},\mathbf{A}}({{\boldsymbol{y}}|{\boldsymbol{v}},\mathbf{A}})=\mathrm{p}_{{\boldsymbol{z}}}({{\boldsymbol{y}}-\mathbf{A}{\boldsymbol{v}}})$, and $\mathrm{p}_{{\boldsymbol{x}}}$ is the prior distribution of the source vector. Now, let the estimator postulate the noise variance to be $\lambda$ and the prior to be \begin{align} \mathrm{p}_{{\boldsymbol{x}}}({\boldsymbol{v}})=\frac{e^{-u({\boldsymbol{v}})}}{\sum_{{\boldsymbol{v}}}e^{-u({\boldsymbol{v}})}} \label{eq:sys-5} \end{align} for some non-negative function $u(\cdot)$. Substituting into \eqref{eq:sys-4c}, the estimator $\tilde{{\mathbf{g}}}(\cdot| \mathbf{A})$ reduces to ${{\mathbf{g}}}(\cdot| \mathbf{A})$ defined in \eqref{eq:int-2}. The estimator in \eqref{eq:int-2} models several particular reconstruction schemes in compressive sensing. We address some of these schemes later on in Section~\ref{sec:cs}. \begin{remark} \normalfont In the case that the entries of ${\boldsymbol{x}}$ are continuous random variables, the above argument needs some modifications. In fact, in this case the reconstruction's error probability as defined in \eqref{eq:sys-3} is always one, and therefore, it cannot be taken as the measure of error. In this case, the \ac{map} estimator is considered to maximize the posterior \ac{pdf} postulating $\mathrm{p}_{{\boldsymbol{x}}}({\boldsymbol{v}})\varpropto e^{-u({\boldsymbol{v}})}$ and noise variance $\lambda$. \end{remark} \subsection{Asymptotic Distortion and Conditional Distribution} \label{subsec:distortion_measure} In many applications, the distortion is given in terms of the average \ac{mse}, while in some others the average symbol error rate is considered. In fact, the former takes the $\ell_2$ norm as the distortion function, and the latter considers the $\ell_0$ norm. The distortion function, however, can be of some other form in general. 
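As a small numerical illustration (not part of the analysis), the two common choices above can be written as entry-wise distortion functions and averaged over a set of entries; a minimal sketch in Python, with the toy vectors chosen arbitrarily:

```python
import numpy as np

def average_distortion(xhat, x, W, d):
    """Distortion d averaged over the index set W (uniform weights on W)."""
    return np.mean([d(xhat[w], x[w]) for w in W])

l2 = lambda a, b: (a - b) ** 2                  # MSE-type distortion
l0 = lambda a, b: float(a != b)                 # symbol-error-type distortion

x    = np.array([1.0, 0.0, 1.0, 1.0])
xhat = np.array([1.0, 0.0, 0.0, 1.0])
W = range(len(x))                               # uniform averaging over all entries
print(average_distortion(xhat, x, W, l2))       # 0.25
print(average_distortion(xhat, x, W, l0))       # 0.25
```

Restricting `W` to a strict subset of the indices corresponds to averaging with non-uniform (binary) weights, as formalized next.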
Here, we study the asymptotic performance by considering a general distortion function which determines the imperfection level of the estimation. Thus, we consider a distortion function $\mathsf{d}(\cdot;\cdot)$ of the form \begin{align} \mathsf{d}(\cdot;\cdot): \mathbbmss{X} \times \mathbbmss{X} \to \mathbbmss{R}. \label{eq:sys-6} \end{align} The term ``average distortion'' usually refers to the case when the averaging weights are uniform. This means that each tuple of source and estimated entries $(x_i,{\hat{x}}_i)$ is weighted equally when the distortion is averaged over all the entries' tuples. It is, however, possible to average the distortion with a non-uniform set of weights. In the following, we define the average distortion for a class of binary weights which includes the case of uniform averaging as well. \begin{definition}[Asymptotic distortion] \label{def:average_dist} Consider the vectors ${\boldsymbol{x}}_{n \times 1}$ and ${\boldsymbol{\hat{x}}}_{n \times 1}$ defined over $\mathbbmss{X}^n$, and let $\mathsf{d}(\cdot;\cdot)$ be a distortion function as defined in \eqref{eq:sys-6}. Define the index set $\mathbbmss{W}(n)$ as a subset of $[1:n]$, and let $\abs{\mathbbmss{W}(n)}$ grow with $n$ such that \begin{align} \lim_{n \uparrow \infty} \frac{\abs{\mathbbmss{W}(n)}}{n}= \eta \label{eq:sys-7} \end{align} for some $\eta \in [0,1]$. Then, the average distortion of ${\boldsymbol{\hat{x}}}$ and ${\boldsymbol{x}}$ over the index set $\mathbbmss{W}(n)$ regarding the distortion function $\mathsf{d}(\cdot;\cdot)$ is defined as \begin{align} \mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{\hat{x}}};{\boldsymbol{x}}) \coloneqq \frac{1}{\abs{\mathbbmss{W}(n)}} \sum_{w \in \mathbbmss{W}(n)} \mathsf{d}({\hat{x}}_w;x_w).
\label{eq:sys-8} \end{align} Assuming that the limit of \eqref{eq:sys-8} as $n \uparrow \infty$ exists, we denote \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}}) \coloneqq \lim_{n\uparrow\infty} \mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{\hat{x}}};{\boldsymbol{x}}) \label{eq:sys-9} \end{align} and refer to it as the ``asymptotic distortion'' over the limit of the index set $\mathbbmss{W}(n)$. \end{definition} Definition \ref{def:average_dist} is moreover utilized to investigate the asymptotic conditional distribution of the estimator, which plays a key role in studying the decoupling principle. For further convenience, we define the asymptotic conditional distribution of the \ac{map} estimator as follows. \begin{definition}[Asymptotic conditional distribution] \label{def:asymp_distribution} Consider the source vector ${\boldsymbol{x}}_{n \times 1}$ passed through the linear system in \eqref{eq:sys-1}, and let ${\boldsymbol{\hat{x}}}_{n \times 1}$ be its \ac{map} estimate as given in \eqref{eq:int-2}. For a given $n$, we take an index $j \in [1:n]$ and denote the conditional distribution of ${\hat{x}}_j$ given $x_j$ by $\mathrm{p}_{{\hat{x}}|x}^{j(n)}$, which at the mass point $({\hat{v}},v) \in \mathbbmss{X}\times\mathbbmss{X}$ reads \begin{align} \mathrm{p}_{{\hat{x}}|x}^{j(n)}({\hat{v}}|v) \coloneqq \left[\mathrm{p}_{x_j}(v)\right]^{-1} \mathrm{p}_{{\hat{x}}_j,x_j}({\hat{v}},v) \label{eq:sys-10} \end{align} with $\mathrm{p}_{{\hat{x}}_j,x_j}({\hat{v}},v)$ being the marginal joint distribution of $x_j$ and ${\hat{x}}_j$ at the mass point $({\hat{v}},v)$. Then, in the large-system limit, we define the asymptotic conditional distribution of ${\hat{x}}_j$ given $x_j$ at $({\hat{v}},v)$ as \begin{align} \mathrm{p}_{{\hat{x}}|x}^{j} ({\hat{v}}|v) \coloneqq \lim_{n \uparrow \infty} \mathrm{p}_{{\hat{x}}|x}^{j(n)} ({\hat{v}}|v).
\label{eq:sys-11} \end{align} \end{definition} We study the asymptotic distortion over the limit of a desired index set $\mathbbmss{W}(n)$ and distortion function $\mathsf{d}(\cdot;\cdot)$ by defining it as a macroscopic parameter of the corresponding spin glass and employing the replica method to evaluate it. Using the result for the asymptotic distortion, we then determine the asymptotic conditional distribution and investigate the decoupling property of the estimator. \section{Statistical Mechanical Approach} \label{sec:statisc_app} The Hamiltonian in \eqref{eq:int-7} introduces a spin glass which corresponds to the \ac{map} estimator. The spin glass at zero temperature describes the asymptotics of the \ac{map} estimator. For further convenience, we formally define the ``corresponding spin glass'' as follows. \begin{definition}[Corresponding spin glass] \label{def:cors_spin} \normalfont Consider the integer $n\in \mathbbmss{Z}^+$. The corresponding spin glass with the microstate ${\boldsymbol{v}}_{n \times 1} \in \mathbbmss{X}^n$ and the quenched randomizers ${\boldsymbol{y}}_{k \times 1}$ and $\mathbf{A}_{k\times n}$ is defined as follows. \begin{itemize} \item $\mathbf{A}$ is a rotationally invariant random matrix, and ${\boldsymbol{y}}$ is constructed as in \eqref{eq:sys-1} from the source vector ${\boldsymbol{x}}$. \item For any realization of $\mathbf{A}$ and ${\boldsymbol{y}}$, the Hamiltonian reads \begin{align} \mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})=\frac{1}{2\lambda} \norm{{\boldsymbol{y}}-\mathbf{A} {\boldsymbol{v}}}^2 + u({\boldsymbol{v}}) \label{eq:sm-0.1} \end{align} for ${\boldsymbol{v}} \in \mathbbmss{X}^n$. \end{itemize} \end{definition} For the corresponding spin glass at the inverse temperature $\upbeta$, the following properties follow directly.
\begin{itemize} \item The conditional distribution of the microstate reads \begin{align} \mathrm{p}^{\upbeta}_{{\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})=\frac{e^{-\upbeta\mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}) }}{\mathcal{Z}(\upbeta|{\boldsymbol{y}},\mathbf{A})} \label{eq:sm-0.2} \end{align} with $\mathcal{Z}(\upbeta|{\boldsymbol{y}},\mathbf{A})$ being the partition function \begin{align} \mathcal{Z}(\upbeta|{\boldsymbol{y}},\mathbf{A}) = \sum_{{\boldsymbol{v}}} e^{-\upbeta\mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})}. \label{eq:sm-0.3} \end{align} \item The normalized free energy is given by \begin{align} \mathsf{F}(\upbeta)=-\frac{1}{\upbeta n} \mathsf{E} \log \mathcal{Z}(\upbeta|{\boldsymbol{y}},\mathbf{A}), \label{eq:sm-0.4} \end{align} where the expectation is taken over the quenched randomizers. \item The entropy of the spin glass is determined as \begin{align} \mathrm{H}(\upbeta) &= \upbeta^2 \frac{\mathrm{d}}{\mathrm{d} \upbeta} \left[ \mathsf{F}(\upbeta) \right]. \label{eq:sm-0.5} \end{align} \end{itemize} Regarding the \ac{map} estimator, one can represent the asymptotic distortion as a macroscopic parameter of the corresponding spin glass. More precisely, using Definition \ref{def:asymp_distribution}, the asymptotic distortion reads \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\lim_{\upbeta\uparrow\infty}\lim_{n\uparrow\infty}\mathsf{E}_{{\boldsymbol{v}}}^\upbeta\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}};{\boldsymbol{x}})\label{eq:sm-1} \end{align} where $\mathsf{E}_{{\boldsymbol{v}}}^\upbeta$ indicates the expectation over ${\boldsymbol{v}} \in \mathbbmss{X}^n$ with respect to the conditional Boltzmann-Gibbs distribution $\mathrm{p}^{\upbeta}_{{\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}}$ defined in \eqref{eq:sm-0.2}. 
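For intuition, the Boltzmann-Gibbs expectation in \eqref{eq:sm-1} can be evaluated by brute-force enumeration for a toy instance with binary support $\mathbbmss{X}=\{0,1\}$ and small $n$; as $\upbeta$ grows, the distribution concentrates on the Hamiltonian's minimizer, i.e., the \ac{map} solution. A minimal sketch (the dimensions, utility weight, and noise level are illustrative assumptions, not values from the paper):

```python
import itertools
import numpy as np

def gibbs_average_distortion(y, A, x, lam, u, beta):
    """E_v^beta D(v; x) under the Boltzmann-Gibbs distribution, by enumeration."""
    n = A.shape[1]
    states = [np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=n)]
    energies = np.array([np.sum((y - A @ v) ** 2) / (2 * lam) + u(v) for v in states])
    weights = np.exp(-beta * (energies - energies.min()))   # shift for stability
    weights /= weights.sum()
    dists = np.array([np.mean(v != x) for v in states])     # l0-type distortion
    return float(weights @ dists)

rng = np.random.default_rng(2)
n, k = 6, 8
x = rng.integers(0, 2, n).astype(float)                     # binary source
A = rng.standard_normal((k, n)) / np.sqrt(n)
y = A @ x + 0.05 * rng.standard_normal(k)
u = lambda v: 0.1 * np.sum(v)                               # postulated prior weight
for beta in [1.0, 10.0, 100.0]:
    print(beta, gibbs_average_distortion(y, A, x, lam=0.1, u=u, beta=beta))
```

At low noise the average distortion shrinks as $\upbeta$ increases, illustrating why the zero-temperature limit recovers the \ac{map} performance.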
In fact, by introducing the macroscopic parameter $\mathsf{D}^{\mathbbmss{W}}(\upbeta)$\footnote{In general, $\mathsf{D}^{\mathbbmss{W}}(\upbeta)$ is a function of $\upbeta$, ${\boldsymbol{y}}$, ${\boldsymbol{x}}$ and $\mathbf{A}$. However, we drop the other arguments for the sake of compactness.} at the temperature $\upbeta^{-1}$ as \begin{align} \mathsf{D}^{\mathbbmss{W}}(\upbeta)\coloneqq\lim_{n\uparrow\infty}\mathsf{E}_{{\boldsymbol{v}}}^\upbeta\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}};{\boldsymbol{x}}), \label{eq:sm-2} \end{align} the asymptotic distortion can be interpreted as this macroscopic parameter at zero temperature. Here, we adopt a well-known strategy from statistical mechanics and modify the partition function to \begin{align} \mathcal{Z}(\upbeta,h) = \sum_{{\boldsymbol{v}}} e^{-\upbeta\mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})+h n\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}};{\boldsymbol{x}})}. \label{eq:sm-3} \end{align} In this case, the expectation in \eqref{eq:sm-2} is given by \begin{align} \mathsf{D}^{\mathbbmss{W}}(\upbeta)=\lim_{n \uparrow \infty} \lim_{h \downarrow 0} \frac{1}{n}\frac{\partial}{\partial h} \log \mathcal{Z}(\upbeta,h). \label{eq:sm-4} \end{align} The macroscopic parameter defined in \eqref{eq:sm-2} is random, i.e., it depends on the quenched randomizers $\{ {\boldsymbol{y}} , \mathbf{A} \}$. As discussed in Section \ref{sec:spin_glasses}, under the self-averaging property, the macroscopic parameter is supposed to converge in the large-system limit to its expected value over the quenched random variables. For the corresponding spin glass defined in Definition \ref{def:cors_spin}, the self-averaging property has not been rigorously justified, and a proof would require further mathematical investigations as in \cite{guerra2002infinite}. However, as is widely accepted in the literature, we assume that the property holds at least for the setting specified here. Therefore, we state the following assumption.
\begin{assumption}[Self-Averaging Property] \label{asp:1} \normalfont Consider the corresponding spin glass defined in Definition \ref{def:cors_spin}. For almost all realizations of the quenched randomizers $\mathbf{A}$ and ${\boldsymbol{y}}$, \begin{align} \mathsf{D}^{\mathbbmss{W}}(\upbeta) = \mathsf{E}_{{\boldsymbol{y}},\mathbf{A}} \mathsf{D}^{\mathbbmss{W}}(\upbeta). \label{eq:sm-5} \end{align} \end{assumption} Using the self-averaging property of the system, the asymptotic distortion is written as \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\lim_{\upbeta\uparrow\infty}\lim_{n\uparrow\infty}\lim_{h \downarrow 0} \frac{1}{n} \frac{\partial}{\partial h} \mathsf{E} \log \mathcal{Z}(\upbeta,h). \label{eq:sm-6} \end{align} Evaluating \eqref{eq:sm-6}, as well as the normalized free energy defined in \eqref{eq:sm-0.4}, confronts the nontrivial problem of determining a logarithmic expectation. The task can be bypassed by using the Riesz equality \cite{riesz1930valeurs}, which for a given random variable $t$ states that \begin{align} \mathsf{E}\hspace{.5mm} \log t = \lim_{m \downarrow 0} \frac{1}{m}\log \mathsf{E}\hspace{.5mm} t^m. \label{eq:sm-7} \end{align} Using the Riesz equality, the asymptotic distortion can finally be written as \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\lim_{\upbeta\uparrow\infty}\lim_{n\uparrow\infty}\lim_{h \downarrow 0} \lim_{m \downarrow 0} \frac{1}{m} \frac{1}{n} \frac{\partial}{\partial h} \log \mathsf{E} [\mathcal{Z}(\upbeta,h)]^m. \label{eq:sm-8} \end{align} Equation \eqref{eq:sm-8} expresses the asymptotic distortion in terms of the moments of the modified partition function; however, it does not yet simplify the problem.
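The Riesz equality \eqref{eq:sm-7} is easy to verify numerically for a simple positive random variable: the quantity $\frac{1}{m}\log \mathsf{E}\hspace{.5mm} t^m$ approaches $\mathsf{E}\hspace{.5mm}\log t$ as $m \downarrow 0$. A minimal Monte Carlo sketch (the exponential distribution and sample size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.exponential(scale=2.0, size=1_000_000)   # a positive random variable

lhs = np.mean(np.log(t))                          # E log t
for m in [1.0, 0.1, 0.01]:
    rhs = np.log(np.mean(t ** m)) / m             # (1/m) log E t^m
    print(m, rhs, lhs)                            # rhs approaches lhs as m -> 0
```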
In fact, one faces two main difficulties when calculating the right hand side of \eqref{eq:sm-8}: \begin{inparaenum} \item the moment needs to be evaluated for real values of $m$ (at least within a right neighborhood of $0$), and \item the limits need to be taken in the order stated. \end{inparaenum} This is where the replica method comes into play. The replica method suggests determining the moment for an arbitrary non-negative integer $m$ as an analytic function of $m$ and then making the following two assumptions: \begin{enumerate} \item The moment function analytically continues from the set of integers onto the real axis (at least for some $m$ in a right neighborhood of $0$), which means that an analytic expression found for integer moments directly extends to all (or some) real moments. Under this assumption, the expression determined for integer moments can be substituted in \eqref{eq:sm-8}, and the limit with respect to $m$ taken as $m \downarrow 0$. This assumption is the main point at which the replica method lacks rigor and is known as ``replica continuity''. \item In \eqref{eq:sm-8}, the limits with respect to $m$ and $n$ exchange. We refer to this assumption as the ``limit exchange''. \end{enumerate} In order to employ the replica method, we need to suppose the validity of the above two statements; therefore, we state the following assumption. \begin{assumption}[Replica Continuity and Limit Exchange] \label{asp:2} \normalfont For the spin glass defined in Definition \ref{def:cors_spin}, the replica continuity and limit exchange assumptions hold.
\end{assumption} By means of Assumption \ref{asp:2}, the calculation of the asymptotic distortion reduces to the evaluation of integer moments of the modified partition function, which is written as \begin{subequations} \begin{align} \mathsf{Z}(m) &\coloneqq \mathsf{E} [\mathcal{Z}(\upbeta,h)]^m \label{eq:sm-9a} \\ &=\mathsf{E} \prod_{a=1}^m \ \sum_{{\boldsymbol{v}}_a} e^{-\upbeta\mathcal{E}({\boldsymbol{v}}_a|{\boldsymbol{y}},\mathbf{A})+h n\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:sm-9b}\\ &=\mathsf{E}_{{\boldsymbol{x}}} \mathsf{E}_{\mathbf{A}} \mathsf{E}_{{\boldsymbol{z}}} \sum_{\{{\boldsymbol{v}}_a\}} e^{-\upbeta\sum_{a=1}^m\mathcal{E}({\boldsymbol{v}}_a|{\boldsymbol{y}},\mathbf{A})+h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:sm-9c}. \end{align} \end{subequations} Here, we refer to ${\boldsymbol{v}}_a\in\mathbbmss{X}^n$ for $a \in [1:m]$ as the replicas, and define $\{{\boldsymbol{v}}_a\} \coloneqq \{{\boldsymbol{v}}_1, \ldots, {\boldsymbol{v}}_m\} \in \mathbbmss{X}^n \times \cdots \times \mathbbmss{X}^n$ as the set of replicas. After taking the expectation with respect to ${\boldsymbol{z}}$ and $\mathbf{A}$, it is further observed that, in the large-system limit, the expectation with respect to ${\boldsymbol{x}}$ can be dropped due to the law of large numbers. By inserting the final expression for $\mathsf{Z}(m)$ in \eqref{eq:sm-8} and taking the limits, the asymptotic distortion is determined as~in~Proposition~\ref{proposition:1}~given~below. \section{Main Results} \label{sec:results} Proposition \ref{proposition:1} states the general replica ansatz. The term ``general'' is emphasized here in order to indicate that no further assumption needs to be considered for its derivation. Using Proposition \ref{proposition:1} along with results on the classical moment problem \cite{simon1998classical}, a general form of the decoupling principle is justified for the \ac{map} estimator.
Before stating the general replica ansatz, let us define the $\mathrm{R}$-transform of a probability distribution. Considering a random variable $t$, the corresponding Stieltjes transform over the upper complex half plane is defined as \begin{align} \mathrm{G}_t(s)= \mathsf{E}\hspace{.5mm} (t-s)^{-1}. \label{eq:rep-1} \end{align} Denoting the inverse with respect to composition by $\mathrm{G}_t^{-1}(\cdot)$, the $\mathrm{R}$-transform is given by \begin{align} \mathrm{R}_t(\omega)= \mathrm{G}_t^{-1}(-\omega) - \omega^{-1} \label{eq:rep-2} \end{align} such that $\lim_{\omega \downarrow 0} \mathrm{R}_{t}(\omega) = \mathsf{E}\hspace{.5mm} t$. The definition can also be extended to matrix arguments. Assume the matrix $\mathbf{M}_{n \times n}$ has the decomposition $\mathbf{M}=\mathbf{U} \mathbf{\Lambda} \mathbf{U}^{-1}$, where $\mathbf{\Lambda}_{n \times n}$ is the diagonal matrix of eigenvalues of $\mathbf{M}$, i.e., $\mathbf{\Lambda}=\mathrm{diag}[\lambda_1, \ldots, \lambda_n]$, and $\mathbf{U}_{n \times n}$ is the matrix of eigenvectors. $\mathrm{R}_t(\mathbf{M})$ is then an $n \times n$ matrix defined as \begin{align} \mathrm{R}_t(\mathbf{M})=\mathbf{U} \ \mathrm{diag}[\mathrm{R}_t(\lambda_1), \ldots, \mathrm{R}_t(\lambda_n)] \ \mathbf{U}^{-1}. \label{eq:rep-3} \end{align} \subsection{General Replica Ansatz} Proposition \ref{proposition:1} expresses the macroscopic parameters of the corresponding spin glass, including the asymptotic distortion, in terms of the parameters of a new spin glass of finite dimension. It is important to note that the new spin glass, referred to as the ``spin glass of replicas'', is different from the corresponding spin glass defined in Definition \ref{def:cors_spin}. In fact, the spin glass of replicas is the projection of the corresponding spin glass onto the reduced support $\mathbbmss{X}^m$, with $m$ indicating the number of replicas.
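As an aside from the main derivation, the $\mathrm{R}$-transform defined in \eqref{eq:rep-1} and \eqref{eq:rep-2} can be evaluated numerically for an empirical spectrum by inverting the Stieltjes transform with a bisection search; the property $\lim_{\omega \downarrow 0} \mathrm{R}_t(\omega) = \mathsf{E}\hspace{.5mm} t$ then serves as a sanity check. A minimal sketch (the bracketing interval and the toy spectra are illustrative assumptions):

```python
import numpy as np

def stieltjes(eigs, s):
    """G_t(s) = E (t - s)^{-1} for an empirical spectrum `eigs`."""
    return np.mean(1.0 / (eigs - s))

def r_transform(eigs, w):
    """R_t(w) = G_t^{-1}(-w) - 1/w, inverting G by bisection on the real axis
    to the right of the support, where G increases from -inf toward 0."""
    lo = eigs.max() + 1e-9          # just above the support: G very negative
    hi = eigs.max() + 2.0 / w       # far to the right: G close to 0 (assumes w < 1)
    for _ in range(200):            # bisection for G(s) = -w
        mid = 0.5 * (lo + hi)
        if stieltjes(eigs, mid) < -w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) - 1.0 / w

rng = np.random.default_rng(4)
eigs = rng.uniform(1.0, 2.0, 1000)               # a toy empirical spectrum
for w in [0.5, 0.1, 0.001]:
    print(w, r_transform(eigs, w))               # approaches eigs.mean() as w -> 0
```

For a point-mass spectrum at $c$ one recovers $\mathrm{R}_t(\omega)=c$ for all $\omega$, matching the definition directly.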
The macroscopic parameters of the spin glass of replicas can therefore be determined readily. \begin{definition}[Spin glass of replicas] \label{def:replica_spin} \normalfont For a finite integer $m$, the spin glass of replicas with the microstate $\mathbf{v}_{m \times 1} \in \mathbbmss{X}^m$ and quenched randomizer $\mathbf{x}_{m \times 1}$ is defined as follows. \begin{itemize} \item All entries of $\mathbf{x}$ are equal to $x$, where $x$ is a random variable distributed with the source distribution $\mathrm{p}_x$. \item For a given realization of $\mathbf{x}$, the Hamiltonian reads \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= (\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q}) (\mathbf{x}-\mathbf{v}) + u(\mathbf{v}) \label{eq:rep-4} \end{align} where $\mathrm{R}_{\mathbf{J}}(\cdot)$ is the $\mathrm{R}$-transform corresponding to $\mathrm{F}_{\mathbf{J}}$, $\mathbf{T}_{m \times m}$ is defined as \begin{align} \mathbf{T} \coloneqq \frac{1}{2 \lambda}\left[ \mathbf{I}_m - \frac{\upbeta\lambda_0}{\lambda+m \upbeta \lambda_0} \mathbf{1}_m \right], \label{eq:rep-5} \end{align} and $\mathbf{Q}_{m\times m}$ is the so-called replica correlation matrix defined as \begin{align} \mathbf{Q}= \mathsf{E}\hspace{.5mm} (\mathbf{x} - \mathbf{v})(\mathbf{x} - \mathbf{v})^{\mathsf{T}} \label{eq:rep-6} \end{align} with the expectation taken over $\mathbf{x}$ and $\mathbf{v}$ at thermal equilibrium.
\item At thermal equilibrium, the microstate is distributed according to the Boltzmann-Gibbs distribution $\mathrm{p}^{\upbeta}_{\mathbf{v}|\mathbf{x}}$ \begin{align} \mathrm{p}^{\upbeta}_{\mathbf{v}|\mathbf{x}}(\mathbf{v}|\mathbf{x})=\frac{e^{-\upbeta\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x}) }}{\mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x})} \label{eq:rep-7} \end{align} where $\mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x})$ denotes the partition function of the system defined as \begin{align} \mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x}) \coloneqq \sum_{\mathbf{v}} e^{-\upbeta\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})}. \label{eq:rep-8} \end{align} \item The normalized free energy of the system at the inverse temperature $\upbeta$ is given by \begin{align} \mathsf{F}^\mathsf{R}(\upbeta,m)=-\frac{1}{\upbeta m} \mathsf{E} \log \mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x}), \label{eq:rep-9} \end{align} where the expectation is taken over $\mathbf{x}$ with respect to $\mathrm{p}_x$. The average energy and entropy at the inverse temperature $\upbeta$ are further found using \eqref{eq:int-5.1a} and \eqref{eq:int-5.1b}. \item For the system at thermal equilibrium, the replicas' average distortion regarding the distortion function $\mathsf{d}(\cdot;\cdot)$ at the inverse temperature $\upbeta$ is \begin{align} \mathsf{D}^\mathsf{R}(\upbeta,m)=\frac{1}{m} \mathsf{E} \sum_{a=1}^m \mathsf{d}(\mathrm{v}_a;x), \label{eq:rep-10} \end{align} with the expectation taken over $\mathbf{x}$ and $\mathbf{v}$ with respect to $\mathrm{p}_\mathbf{x}\mathrm{p}_{\mathbf{v}|\mathbf{x}}$. \end{itemize} \end{definition} Considering Definition \ref{def:replica_spin}, the evaluation of system parameters such as the replicas' average distortion $\mathsf{D}^\mathsf{R}(\upbeta,m)$ or the normalized free energy $\mathsf{F}^\mathsf{R}(\upbeta,m)$ requires the replica correlation matrix $\mathbf{Q}$ to be calculated explicitly first.
In fact, \eqref{eq:rep-6} describes a fixed point equation in terms of $\mathbf{Q}$ when one writes out the expectation using the conditional distribution in \eqref{eq:rep-7}. The solution can then be substituted in the distribution and the parameters of the system can be calculated via \eqref{eq:rep-8}-\eqref{eq:rep-10}. The fixed point equation, however, may have several solutions and thus result in multiple outputs for the system. Nevertheless, we express the asymptotic distortion of the \ac{map} estimator in terms of a single output of the spin glass of replicas for which the limits exist and the free energy is minimized. \begin{proposition} \label{proposition:1} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation}. Suppose Assumptions \ref{asp:1} and \ref{asp:2} hold, and consider the spin glass of replicas as defined in Definition \ref{def:replica_spin} for a finite integer $m$. Then, the free energy of the corresponding spin glass defined in Definition \ref{def:cors_spin} is given by \begin{align} \mathsf{F}(\upbeta) = \lim_{m\downarrow 0} \frac{1}{m} \left[ \int_0^1 \tr{\mathbf{T} \mathbf{Q} \mathrm{R}_{\mathbf{J}}(-2\upbeta\omega\mathbf{T}\mathbf{Q})} \mathrm{d} \omega - \tr{\mathbf{Q}^\mathsf{T} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \mathbf{Q})} \right] +\mathsf{F}^\mathsf{R} (\upbeta,m) \label{eq:rep-11} \end{align} where $\mathbf{Q}$ is the replica correlation matrix satisfying \eqref{eq:rep-6}. The asymptotic distortion of the \ac{map} estimator regarding the distortion function $\mathsf{d}(\cdot;\cdot)$ is then determined as \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})= \lim_{\upbeta\uparrow\infty}\lim_{m\downarrow 0} \mathsf{D}^\mathsf{R}(\upbeta,m). 
\label{eq:rep-12} \end{align} In case multiple solutions are available for the replica correlation matrix, the replicas' average distortion in \eqref{eq:rep-12} is evaluated via a correlation matrix which minimizes the free energy at zero temperature, i.e., $\upbeta \uparrow \infty$. \end{proposition} \begin{proof} The proof is given in Appendix \ref{app:a}; in the following, we briefly outline the strategy.\\ Starting from \eqref{eq:sm-9c}, the expectation with respect to the noise term is straightforwardly taken. Using the results from \cite{guionnet2005fourier}, the expectation with respect to the system matrix is further taken as discussed in Appendix \ref{app:d}. Then, by considering the change of variables \begin{align} [\mathbf{Q}]_{ab} = \frac{1}{n} ({\boldsymbol{x}}-{\boldsymbol{v}}_a)^\mathsf{T} ({\boldsymbol{x}}-{\boldsymbol{v}}_b), \label{eq:rep-13} \end{align} $\mathsf{Z}(m)$ is determined in terms of the replica correlation matrix $\mathbf{Q}$. Finally, by employing the law of large numbers, the $m$th moment of the partition function is given as \begin{align} \mathsf{Z}(m)=\mathsf{E}_{{\boldsymbol{x}}} \int e^{-n \left[ \mathcal{G}(\mathbf{T} \mathbf{Q}) - \mathcal{I}(\mathbf{Q})\right] +\epsilon_n} \mathrm{d} \mathbf{Q} \label{eq:rep-14} \end{align} where $\mathrm{d} \mathbf{Q} \coloneqq \prod_{a,b=1}^{m} \mathrm{d} [\mathbf{Q}]_{ab}$ with the integral being taken over $\mathbbmss{R}^{m\times m}$, $\epsilon_n$ tends to zero as $n \uparrow \infty$, and $\mathbf{T}$ is given by \eqref{eq:rep-5}.
Moreover, $e^{n \mathcal{I}(\mathbf{Q})}$ denotes the non-normalized probability weight of the vectors of replicas with the same correlation matrix and is explicitly determined in \eqref{eq:a-20b} in Appendix \ref{app:a}, and $\mathcal{G}(\cdot)$ reads \begin{align} \mathcal{G}(\mathbf{M}) = \int_{0}^{{\upbeta}} \tr{\mathbf{M} \mathrm{R}_{\mathbf{J}}(-2\omega\mathbf{M})} \mathrm{d} \omega \label{eq:rep-16} \end{align} for some diagonal matrix $\mathbf{M}$ with $\tr{\cdot}$ denoting the trace operator, and $\mathrm{R}_{\mathbf{J}}(\cdot)$ being the $\mathrm{R}$-transform with respect to $\mathrm{F}_{\mathbf{J}}$. In \eqref{eq:rep-14}, the term $e^{n \mathcal{I}(\mathbf{Q})} \mathrm{d} \mathbf{Q}$ is a probability measure which satisfies the large deviations property. Using results from large deviations theory \cite{dembo2009large}, the integral in \eqref{eq:rep-14} for large values of $n$ can be written as the integrand at the saddle point $\tilde{\mathbf{Q}}$ multiplied by some bounded coefficient $\mathsf{K}_n$, which results in \begin{align} \mathsf{Z}(m) \doteq \mathsf{K}_n e^{-n \left[ \mathcal{G} (\mathbf{T}\tilde{\mathbf{Q}})-\mathcal{I} (\tilde{\mathbf{Q}}) \right]} \label{eq:rep-17} \end{align} with $\doteq$ denoting asymptotic equivalence in exponential scale\footnote{$a(\cdot)$ and $b(\cdot)$ are asymptotically equivalent in exponential scale over a non-bounded set $\mathbbmss{X}$, if for $x \in \mathbbmss{X}$ we have $\displaystyle \lim_{x \uparrow \infty} \frac{1}{x} \log |\frac{a(x)}{b(x)}| =0$.}. Consequently, by substituting $\mathsf{Z}(m)$ in \eqref{eq:sm-8}, and exchanging the limits with respect to $n$ and $m$, as suggested in Assumption \ref{asp:2}, the asymptotic distortion is found as in Proposition \ref{proposition:1} where \eqref{eq:rep-6} determines the saddle point of the integrand function in \eqref{eq:rep-14}. The free energy is further determined as in \eqref{eq:rep-11} by substituting \eqref{eq:rep-17} in \eqref{eq:sm-0.4}.
Finally, by noting that the free energy is minimized at equilibrium, the proof is concluded. \end{proof} Proposition \ref{proposition:1} introduces a feasible way to determine the asymptotics of the \ac{map} estimator; its validity relies only on Assumptions \ref{asp:1} and \ref{asp:2} and imposes no further restriction. To pursue the analysis, one needs to solve the fixed point equation \eqref{eq:rep-6} for the replica correlation matrix $\mathbf{Q}$ and calculate the parameters of the spin glass of replicas explicitly. The direct approach to finding $\mathbf{Q}$, however, raises both complexity and analyticity issues. In fact, finding the saddle point by searching over the set of all possible choices of $\mathbf{Q}$ is intractable; moreover, several solutions may not be of use, since they do not lead to $\mathsf{F}^\mathsf{R}(\upbeta,m)$ and $\mathsf{D}^\mathsf{R}(\upbeta,m)$ which are analytic in $m$, and thus cannot be continued analytically to the real axis via Assumption \ref{asp:2}. To overcome both issues, the approach is to restrict the search to a parameterized set of replica correlation matrices and find the solution within this set. Clearly, the asymptotics found via this approach may fail, as solutions outside the set are missed by the restriction. The result, in this case, becomes more trustworthy as the restricted set of replica correlation matrices is extended. Several restriction procedures can be considered. The procedures introduced in the literature are roughly divided into \ac{rs} and \ac{rsb} schemes. The former assumes the $m$ replicas interact symmetrically, while the latter recursively breaks this symmetry in a systematic manner. In fact, the \ac{rs} scheme was first introduced due to some symmetric properties observed in the analysis of spin glasses \cite{mezard2009information}.
The properties, however, do not force the correlation matrix to have a symmetric structure, and later several examples were found showing that \ac{rs} leads to wrong conclusions. For these examples, the \ac{rsb} scheme was further considered as an extension of the symmetric structure of the correlation matrix to a larger set. We consider both the \ac{rs} and \ac{rsb} schemes in this manuscript; however, before pursuing our study, let us first investigate the general decoupling property of the estimator which can be concluded from Proposition \ref{proposition:1}. \subsection{General Decoupling Property of \ac{map} Estimator} Regardless of any restriction on $\mathbf{Q}$, the general ansatz leads to the decoupling property of the \ac{map} estimator. In fact by using Proposition \ref{proposition:1}, it can be shown that for almost any tuple of input-output entries, the marginal joint distribution converges to a deterministic joint distribution which does not depend on the entries' index. The explicit term for the joint distribution, however, depends on the assumptions imposed on the correlation matrix. \begin{proposition}[General Decoupling Principle] \label{proposition:2} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation}. Suppose Assumptions \ref{asp:1} and \ref{asp:2} hold, and consider the spin glass of replicas as defined in Definition \ref{def:replica_spin} with the replica correlation matrix $\mathbf{Q}$. 
Then, for $j \in [1:n]$, the asymptotic conditional distribution of the \ac{map} estimator $\mathrm{p}_{{\hat{x}}|x}^{j}$ defined in Definition \ref{def:asymp_distribution} is, almost surely in $\mathbf{A}$, independent of $j$, namely \begin{align} \mathrm{p}_{{\hat{x}}|x}^{j} ({\hat{v}}|v) = \mathrm{p}_{\hat{\mathrm{x}}|\mathrm{x}} ({\hat{v}}|v) \label{eq:rep-18} \end{align} for some conditional distribution $\mathrm{p}_{\hat{\mathrm{x}}|\mathrm{x}} $ at the mass point $({\hat{v}},v) \in \mathbbmss{X} \times \mathbbmss{X}$. Consequently, the marginal joint distribution of the entries $x_j$ and ${\hat{x}}_j$ is described by the input-output joint distribution of the single-user system specified by the conditional distribution $\mathrm{p}_{\hat{\mathrm{x}}|\mathrm{x}}$ and the input $\mathrm{x} \sim \mathrm{p}_x$. The explicit form of $\mathrm{p}_{\hat{\mathrm{x}}|\mathrm{x}}$ depends on $\mathbf{Q}$. \end{proposition} \begin{proof} The proof follows from Proposition \ref{proposition:1} and the moment method \cite{akhiezer1965classical,simon1998classical}. From the classical moment problem, we know that the joint probability distribution of a tuple of random variables $(t_1,t_2)$ is uniquely specified by the sequence of integer joint moments, if the joint moments are uniformly bounded. More precisely, by defining the $(k,\ell)$-joint moment of the tuple $(t_1,t_2)$ as \begin{align} \mathsf{M}_{k,\ell}\coloneqq \mathsf{E}\hspace{.5mm} t_1^k t_2^\ell, \label{eq:rep-19} \end{align} the sequence $\left\lbrace \mathsf{M}_{k,\ell} \right\rbrace$ for $(k,\ell ) \in \mathbbmss{Z}^+ \times \mathbbmss{Z}^+$ is uniquely mapped to the probability distribution $\mathrm{p}_{t_1,t_2}$, if $\mathsf{M}_{k,\ell}$ is uniformly bounded for all integers $k$ and $\ell$. Consequently, one can infer that any two tuples of random variables $(t_1,t_2)$ and $({\hat{t}}_1,{\hat{t}}_2)$ with the same sequences of joint moments are identical in distribution.
To determine the joint moment of input and output entries, consider the distortion function \begin{align} \mathsf{d}({\hat{x}},x)= {\hat{x}}^k x^\ell \label{eq:rep-20} \end{align} in Proposition \ref{proposition:1}, and evaluate the asymptotic distortion over the index set $\mathbbmss{W}(n)=[j:j+\eta n]$ for some $\eta$ in a right neighborhood of zero. The $(k,\ell)$-joint moment of $({\hat{x}}_j,x_j)$ is then determined by taking the limit $\eta\downarrow 0$. Substituting the distortion function and the index set in Proposition \ref{proposition:1}, it is clear that the asymptotic distortion is independent of $\eta$ and $j$, and therefore, the limit with respect to $\eta$ exists and is independent of $j$ as well. Noting that the evaluated moments are uniformly bounded, it is inferred that the asymptotic joint distribution of $({\hat{x}}_j,x_j)$ is uniquely specified and does not depend on the index $j$. Finally, by using the fact that the source vector is \ac{iid} and the distribution of the entry $j$ is independent of the index, we conclude that the asymptotic conditional distribution $\mathrm{p}_{{\hat{x}}|x}^{j}$ is independent of $j$. The exact expression for $\mathrm{p}_{{\hat{x}}|x}^{j}$ is then found by solving the fixed point equation for $\mathbf{Q}$ and computing the joint moments. \end{proof} Proposition \ref{proposition:2} is a generalized form of the \ac{rs} decoupling principle for the \ac{map} estimators studied in \cite{rangan2012asymptotic}. In fact, Proposition \ref{proposition:2} indicates that a vector system followed by a \ac{map} estimator always decouples into a bank of identical single-user systems, regardless of any restriction on the replica correlation matrix. \subsection{Consistency Test} \label{subsec:cons_test} If one restricts the replica correlation matrix $\mathbf{Q}$ to be of a special form, the asymptotics determined under the assumed structure do not necessarily approximate the true asymptotics accurately.
Several methods were introduced in the literature to check the consistency of the solution. A primary method is based on calculating the entropy of the corresponding spin glass at zero temperature. As the temperature tends to zero, the distribution of the microstate tends to an indicator function at the point of the estimated vector, and consequently, the entropy of the corresponding spin glass converges to zero\footnote{Note that the entropy of the spin glass at temperature $\upbeta^{-1}$ denotes either the conditional entropy (for discrete supports) or the conditional differential entropy (for continuous supports) of a random vector ${\boldsymbol{v}}$ conditioned on ${\boldsymbol{y}}$ and $\mathbf{A}$ with~Boltzmann-Gibbs~distribution~$\mathrm{p}^{\upbeta}_{{\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A}}$.}. One consistency check is therefore the zero temperature entropy of a given solution. Several works invoked this consistency test and showed that, for settings in which the \ac{rs} ansatz fails to give a tight bound on the exact solution, the zero temperature entropy determined from the \ac{rs} ansatz does not converge to zero; see for example \cite{zaidel2012vector}. This observation illustrates the invalidity of the \ac{rs} assumption and hints at \ac{rsb} ans\"atze giving better bounds on the true solution. Inspired by the aforementioned results, we evaluate the zero temperature entropy of the corresponding spin glass as a measure of consistency. In order to determine the zero temperature entropy, we invoke \eqref{eq:sm-0.5} which determines the entropy in terms of the free energy at inverse temperature $\upbeta$.
Considering the free energy of the corresponding spin glass as given in Proposition \ref{proposition:1}, the entropy $\mathrm{H}(\upbeta)$ reads \begin{align} \mathrm{H}(\upbeta)= \lim_{m\downarrow 0} \frac{\upbeta^2}{m} \frac{\partial}{\partial \upbeta} \left[ \int_0^1 \tr{\mathbf{T} \mathbf{Q} \mathrm{R}_{\mathbf{J}}(-2\upbeta\omega\mathbf{T}\mathbf{Q})} \mathrm{d} \omega - \tr{\mathbf{Q}^\mathsf{T} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \mathbf{Q})} \right] +\mathrm{H}^\mathsf{R} (\upbeta,m) \label{eq:rep-21} \end{align} where $\mathrm{H}^\mathsf{R} (\upbeta,m)$ denotes the normalized entropy of the spin glass of replicas. As $\mathrm{H}^\mathsf{R} (\upbeta,m)$ determines the entropy of a thermodynamic system, for any $m \in \mathbbmss{R}^+$ we have \begin{align} \lim_{\upbeta \uparrow \infty} \mathrm{H}^\mathsf{R} (\upbeta,m) = 0, \label{eq:rep-22} \end{align} and therefore, the zero temperature entropy is given by \begin{align} \mathrm{H}^0= \lim_{\upbeta \uparrow \infty} \lim_{m\downarrow 0} \frac{\upbeta^2}{m} \frac{\partial}{\partial \upbeta} \left[ \int_0^1 \tr{\mathbf{T} \mathbf{Q} \mathrm{R}_{\mathbf{J}}(-2\upbeta\omega\mathbf{T}\mathbf{Q})} \mathrm{d} \omega - \tr{\mathbf{Q}^\mathsf{T} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \mathbf{Q})} \right], \label{eq:rep-23} \end{align} which obviously depends on the structure of the replica correlation matrix. In \cite{zaidel2012vector}, the authors determined the zero temperature entropy for the spin glass which corresponds to the vector precoding problem considering the \ac{rs} and 1\ac{rsb} assumptions, and observed that it takes the same form under both assumptions. 
They then conjectured that the zero temperature entropy is of a similar form for the general \ac{rsb} structure regardless of the number of breaking steps\footnote{In fact, in this case, the dependence of the zero temperature entropy on the correlation matrix is completely described via the scalar $\chi$ which corresponds to the diagonal entries of the correlation matrix. See Assumptions \ref{asp:3}-\ref{asp:5} for more illustration.}. Using \eqref{eq:rep-23}, we later show that the conjecture in \cite{zaidel2012vector} is true. \section{RS Ansatz and RS Decoupling Principle} \label{sec:rs} The most elementary structure which can be imposed on the replica correlation matrix is \ac{rs}. Here, one assumes the correlation matrix to be of a symmetric form, which means that the replicas of the spin glass defined in Definition~\ref{def:replica_spin} are invariant under any permutation of indices. Using the definition of the replica correlation matrix as given in \eqref{eq:rep-6}, it then follows that \begin{equation} (x-\mathrm{v}_a)(x-\mathrm{v}_b)= \begin{cases} q_0 & \text{if}\ a \neq b \\ q_1 & \text{if}\ a=b. \end{cases} \label{eq:rs-1} \end{equation} \begin{assumption}[\ac{rs} Structure] \label{asp:3} \normalfont Considering the spin glass of replicas as defined in Definition \ref{def:replica_spin}, the replica correlation matrix is of the form \begin{align} \mathbf{Q}=\frac{\chi}{\upbeta} \mathbf{I}_m + q \mathbf{1}_m \label{eq:rs-2} \end{align} where $\chi$ and $q$ are some non-negative real scalars. \end{assumption} Considering \eqref{eq:rs-1}, Assumption \ref{asp:3} supposes $q_0=q$ and $q_1={\chi}{\upbeta}^{-1}+q$. Substituting \eqref{eq:rs-2} in Definition \ref{def:replica_spin}, the spin glass of replicas is then specified by the scalars $\chi$ and $q$. The scalars are moreover related via a set of saddle point equations which are obtained from \eqref{eq:rep-6}. Finally, using Proposition \ref{proposition:1}, the asymptotics~of~the~system~are~found.
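As a minimal numerical illustration of the \ac{rs} structure in Assumption \ref{asp:3} (the parameter values below are arbitrary, chosen only for the sketch), the matrix of \eqref{eq:rs-2} can be constructed and its permutation invariance and spectrum verified:

```python
import numpy as np

# Hedged sketch: the RS correlation matrix of Assumption 3,
# Q = (chi/beta) * I_m + q * 1_m, with 1_m the all-ones m x m matrix.
# The values of chi, q, beta, m are arbitrary illustrations.
def rs_matrix(m, chi, q, beta):
    return (chi / beta) * np.eye(m) + q * np.ones((m, m))

m, chi, q, beta = 4, 0.8, 0.3, 2.0
Q = rs_matrix(m, chi, q, beta)

# Off-diagonal entries read q0 = q; diagonal entries read q1 = chi/beta + q.
assert np.isclose(Q[0, 1], q)
assert np.isclose(Q[0, 0], chi / beta + q)

# Replica symmetry: Q is invariant under any simultaneous row/column permutation.
perm = np.array([2, 0, 3, 1])
assert np.allclose(Q[np.ix_(perm, perm)], Q)

# Spectrum: eigenvalue chi/beta + m*q (once) and chi/beta (m - 1 times).
eig = np.sort(np.linalg.eigvalsh(Q))
assert np.allclose(eig[:-1], chi / beta)
assert np.isclose(eig[-1], chi / beta + m * q)
```

The permutation check makes concrete what "the replicas interact symmetrically" means: any relabeling of the $m$ replica indices leaves $\mathbf{Q}$ unchanged.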
\begin{proposition}[RS Ansatz] \label{proposition:3} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation}. Suppose Assumptions \ref{asp:1} and \ref{asp:2}, as well as Assumption \ref{asp:3} hold. Let $x\sim\mathrm{p}_x$, and \begin{align} \mathrm{g} = \arg \min_{v} \left[ \frac{1}{2\lambda^{\mathsf{s}}}(x+ \sqrt{\lambda^{\mathsf{s}}_0} z -v)^2 + u(v) \right] \label{eq:rs-3} \end{align} with $v \in \mathbbmss{X}$, and $\lambda^{\mathsf{s}}_0$ and $\lambda^{\mathsf{s}}$ being defined as \begin{subequations} \begin{align} \lambda^{\mathsf{s}}_0 &\coloneqq \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda})\right]^{-2} \frac{\partial}{\partial \chi} \left\lbrace \left[\lambda_0 \chi - \lambda q\right] \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right\rbrace \label{eq:rs-4b}\\ \lambda^{\mathsf{s}} &\coloneqq \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right]^{-1} \lambda \label{eq:rs-4a} \end{align} \end{subequations} for some non-negative real $\chi$ and $q$ and the real variable $z$. 
Then, the asymptotic distortion defined in \eqref{eq:sys-9} reads \begin{align} \mathsf{D}^{\mathbbmss{W}}= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g},x) \mathrm{D} z, \label{eq:rs-5} \end{align} for $\chi$ and $q$ which satisfy \begin{subequations} \begin{align} \sqrt{\lambda^{\mathsf{s}}_0} \ \chi&= \lambda^{\mathsf{s}} \ \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z \ \mathrm{D} z \label{eq:rs-6a} \\ q &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \ \mathrm{D} z \label{eq:rs-6b} \end{align} \end{subequations} and minimize the zero temperature free energy $\mathsf{F}^0_{\mathsf{rs}}$ which is given by \begin{align} \mathsf{F}^0_{\mathsf{rs}}=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}(\omega) \mathrm{d} \omega - \mathrm{F} (1) \right] + \mathsf{E}\hspace{.5mm} \int \frac{1}{2\lambda^{\mathsf{s}}} \left[ (x+\sqrt{\lambda^{\mathsf{s}}_0}z-\mathrm{g})^2 - \lambda^{\mathsf{s}}_0 z^2 \right] + u(\mathrm{g}) \ \mathrm{D} z \label{eq:rs-7} \end{align} with $\mathrm{F}(\cdot)$ being defined as \begin{align} \mathrm{F}(\omega)=\left[q-\frac{\lambda_0}{\lambda} \chi \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) \right]. \label{eq:rs-8} \end{align} \end{proposition} \begin{proof} See Appendix \ref{app:b}. \end{proof} The asymptotic distortion under the \ac{rs} ansatz is equivalent to the average distortion of a scalar \ac{awgn} channel followed by a single-user estimator as shown in Fig. \ref{fig:1}. In this block diagram, the single-user estimator $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ maximizes the posterior probability over a postulated scalar \ac{awgn} channel. We refer to this estimator as the ``decoupled \ac{map} estimator'' and define it as follows. 
\begin{figure}[t] \begin{center} \begin{tikzpicture}[auto, node distance=2.5cm,>=latex'] \node [input, name=input] {}; \node [sum, right of=input,fill=blue!20] (sum) {$+$}; \node [sum, above of=sum, node distance=.8cm,fill=blue!20] (times) {$\times$}; \node [input, above of=times, node distance=.6cm] (noise) {}; \node [input, right of=times, node distance=.8cm] (cof) {}; \node [block, right of=sum,fill=red!20] (estimator) { $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ }; \node [output, right of=estimator, node distance=1.7cm] (output) {}; \draw [draw,->] (input) -- node[pos=.1] {$\mathrm{x}$} (sum) ; \draw [->] (sum) -- node[pos=.8] {$y$} (estimator); \draw [->] (estimator) -- node[pos=.9] [name=x_h] {$\hat{\mathrm{x}}$}(output); \draw [draw,->] (noise) -- node[pos=.1] {$z$}(times); \draw [draw,->] (times) -- node[pos=.1] {}(sum); \draw [draw,->] (cof) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_0}$}(times); \end{tikzpicture} \end{center} \caption{The decoupled single-user system under the \ac{rs} ansatz.} \label{fig:1} \end{figure} \begin{definition}[Decoupled \ac{map} estimator] \label{def:single_user_estimator} The decoupled \ac{map} estimator $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ with the estimation parameter $\lambda^{\mathsf{s}}$ and the utility function $u(\cdot)$ is defined as \begin{subequations} \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u] &\coloneqq \arg \max_{v} \ \poster{v}{y} \label{eq:rs-10.1} \\ &=\arg \min_{v} \left[ \frac{1}{2\lambda^{\mathsf{s}}} (y -v)^2+u(v) \right], \label{eq:rs-10} \end{align} \end{subequations} where $\poster{v}{y}$ denotes the ``decoupled posterior distribution'' postulated by the estimator which reads \begin{align} \poster{v}{y} =\mathsf{K} \ e^{-\left[ \tfrac{1}{2\lambda^{\mathsf{s}}} (y -v)^2+u(v) \right]} \label{eq:rs-10.2} \end{align} for some real constant $\mathsf{K}$. 
\end{definition} Using the definition of the decoupled \ac{map} estimator, the \ac{rs} decoupled system is defined next. \begin{definition}[\ac{rs} decoupled system] \label{def:rs_single_user} Define the \ac{rs} decoupled system to be consistent with the block diagram in Fig. \ref{fig:1} in which \begin{itemize} \item the source symbol $\mathrm{x}$ is distributed with $\mathrm{p}_{\mathrm{x}}$ over the support $\mathbbmss{X}$, \item $z$ is a zero-mean and unit-variance Gaussian random variable, \item $\mathrm{x}$ and $z$ are independent, \item $\hat{\mathrm{x}}$ is estimated from the observation $y \coloneqq \mathrm{x}+ \sqrt{\lambda^{\mathsf{s}}_0} z$ as \begin{align} \hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]. \label{eq:rs-9} \end{align} \item $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ is the decoupled \ac{map} estimator with the estimation parameter $\lambda^{\mathsf{s}}$ and the utility function $u(\cdot)$ as defined in Definition \ref{def:single_user_estimator}. \item $\lambda^{\mathsf{s}}_0$ and $\lambda^{\mathsf{s}}$ are defined as in Proposition \ref{proposition:3}. \end{itemize} \end{definition} Using Proposition \ref{proposition:2}, the equivalence in the asymptotic distortion can be extended to the asymptotic conditional distribution as well. In fact, by considering the decoupling principle, Definition \ref{def:rs_single_user} describes the structure of the decoupled single-user system under the \ac{rs} assumption. \begin{proposition}[\ac{rs} Decoupling Principle] \label{proposition:4} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation} and be estimated via the \ac{map} estimator in \eqref{eq:int-2}. Consider the \ac{rs} decoupled system as defined in Definition \ref{def:rs_single_user}, and suppose Assumptions \ref{asp:1}, \ref{asp:2} and \ref{asp:3} hold.
Then, for any $j\in [1:n]$, the tuple $({\hat{x}}_j,x_j)$ converges in distribution to $(\hat{\mathrm{x}},\mathrm{x})$ if $\mathrm{p}_x=\mathrm{p}_\mathrm{x}$, almost surely in $\mathbf{A}$. \end{proposition} \begin{proof} Using Proposition \ref{proposition:2}, for any two different indices $j,q \in [1:n]$ we have \begin{align} \mathrm{p}^j_{{\hat{x}}|x}({\hat{v}}|v)=\mathrm{p}^q_{{\hat{x}}|x}({\hat{v}}|v) \label{eq:rs-11} \end{align} at the mass point $({\hat{v}},v)$. Therefore, for any index $j$, we have \begin{align} \mathsf{E}\hspace{.5mm} {\hat{x}}_j^k x_j^\ell=\lim_{n\uparrow\infty} \frac{1}{n} \mathsf{E}\hspace{.5mm} \sum_{i=1}^n {\hat{x}}_i^k x_i^\ell. \label{eq:rs-12} \end{align} Consequently, the asymptotic $(k,\ell)$-joint moment of $({\hat{x}}_j,x_j)$ under the \ac{rs} assumption is determined by setting $\mathbbmss{W}(n)=[1:n]$, choosing the distortion function \begin{align} \mathsf{d}({\hat{x}};x)= {\hat{x}}^k x^\ell \label{eq:rs-13} \end{align} in the \ac{rs} ansatz, and evaluating the asymptotic distortion. Substituting in Proposition \ref{proposition:3}, the asymptotic joint moment reads \begin{align} \mathsf{M}_{k,\ell}^j= \mathsf{E}\hspace{.5mm} \int \mathrm{g}^k x^\ell \mathrm{D} z \label{eq:rs-14} \end{align} where $\mathrm{g}$ is defined in \eqref{eq:rs-3}. Considering Definition \ref{def:rs_single_user} and assuming $\mathrm{p}_\mathrm{x}=\mathrm{p}_x$, \eqref{eq:rs-14} describes the $(k,\ell)$-joint moment of $(\hat{\mathrm{x}},\mathrm{x})$ as well. Noting that $\mathsf{M}_{k,\ell}^j$ is uniformly bounded for any pair of integers $k$ and $\ell$, it is concluded that the asymptotic joint distribution of $({\hat{x}}_j,x_j)$ and the joint distribution of $(\hat{\mathrm{x}},\mathrm{x})$ are equivalent. \end{proof} Proposition \ref{proposition:4} gives a more general form of the \ac{rs} decoupling principles investigated in \cite{rangan2012asymptotic} and \cite{tulino2013support}.
In fact, by restricting the system matrix and source distribution as in \cite{rangan2012asymptotic} and \cite{tulino2013support}, one can recover the formerly studied \ac{rs} decoupling principles. \subsection*{RS Zero Temperature} To have a basic measure of the \ac{rs} ansatz's consistency, we evaluate the zero temperature entropy under the \ac{rs} assumption following the discussion in Section \ref{subsec:cons_test}. Substituting \eqref{eq:rs-2} in \eqref{eq:rep-23} and taking the same steps as in Appendix \ref{app:b}, the \ac{rs} zero temperature entropy is determined as \begin{align} \mathrm{H}^0_{\mathsf{rs}}=\lim_{\upbeta\uparrow\infty} \frac{\upbeta^2}{2\lambda} \frac{\partial}{\partial \upbeta} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1) \right] \label{eq:rs-15} \end{align} where the function $\mathrm{F}^{\upbeta}(\cdot)$ is defined as \begin{align} \mathrm{F}^{\upbeta}(\omega) = \frac{\chi}{\upbeta} \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) + \left[q-\frac{\lambda_0}{\lambda} \chi \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) \right]. \label{eq:rs-16} \end{align} Taking the derivative first and then the limit, it finally reads \begin{align} \mathrm{H}^0_{\mathsf{rs}}= \frac{\chi}{2\lambda}\left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) - \int_0^1 \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) \mathrm{d} \omega \right]. \label{eq:rs-17} \end{align} We will later see that the zero temperature entropy takes the same form under the \ac{rsb} assumptions. \section{RSB Ans\"atze and RSB Decoupling Principle} \label{sec:rsb} In \cite{parisi1980sequence}, Parisi introduced a breaking scheme to broaden the restricted set of correlation matrices. The scheme recursively extends the set of matrices to larger sets.
The breaking scheme was then employed to broaden the \ac{rs} structure of the correlation matrices, and therefore, the obtained structure was identified as the broken \ac{rs}, or \ac{rsb}, structure. The key feature of Parisi's breaking scheme is that, by starting from the \ac{rs} structure, the new structure after breaking can be reduced to the structure before breaking. Thus, the set of fixed point solutions found by assuming the broken structure includes the solutions of the previous structure as well. \begin{definition}[Parisi's breaking scheme] \label{def:breaking_scheme} Let $m$ be a multiple of the integer $\xi$, and $\mathbf{Q}^{[\ell]}$ represent an $\frac{m}{\xi}\times\frac{m}{\xi}$ correlation matrix. Parisi's breaking scheme then constructs a new $m \times m$ correlation matrix $\mathbf{Q}^{[\ell+1]}$ as \begin{align} \mathbf{Q}^{[\ell+1]}= \mathbf{I}_\xi \otimes \mathbf{Q}^{[\ell]} + \kappa \mathbf{1}_m \label{eq:rsb-1} \end{align} for some real scalar $\kappa$. \end{definition} By choosing $\mathbf{Q}^{[0]}$ to be an \ac{rs} correlation matrix in Definition \ref{def:breaking_scheme}, the matrix $\mathbf{Q}^{[1]}$ finds the \ac{rsb} structure with one step of breaking (1\ac{rsb}). The steps of breaking can be further increased recursively by inserting $\mathbf{Q}^{[1]}$ in the next breaking scheme, determining the new correlation matrix $\mathbf{Q}^{[2]}$, and repeating the procedure. We start with the 1\ac{rsb} correlation matrix, and then extend the result to higher \ac{rsb} ans\"atze with more steps of breaking.
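A minimal numerical sketch of Parisi's breaking step (with illustrative parameter values, not taken from this paper) constructs one level of breaking on top of an \ac{rs} block and checks the reduction property noted above:

```python
import numpy as np

# Hedged sketch of Parisi's breaking step: given an (m/xi) x (m/xi) matrix Q_l,
# build Q_{l+1} = I_xi (kron) Q_l + kappa * 1_m. Parameters are illustrative.
def parisi_step(Q_l, xi, kappa):
    m = xi * Q_l.shape[0]
    return np.kron(np.eye(xi), Q_l) + kappa * np.ones((m, m))

# Start from an RS inner block and break once (1RSB).
chi_over_beta, p, q, xi = 0.4, 0.2, 0.1, 3
Q0 = chi_over_beta * np.eye(2) + p * np.ones((2, 2))   # RS inner block
Q1 = parisi_step(Q0, xi, q)                             # 1RSB, m = 6

# Three correlation levels: diagonal, same inner block, different blocks.
assert np.isclose(Q1[0, 0], chi_over_beta + p + q)
assert np.isclose(Q1[0, 1], p + q)     # a != b, same block
assert np.isclose(Q1[0, 2], q)         # different blocks

# With p = 0 the broken structure collapses back to RS, as the text notes.
Q1_rs = parisi_step(chi_over_beta * np.eye(2), xi, q)
assert np.allclose(Q1_rs, chi_over_beta * np.eye(6) + q * np.ones((6, 6)))
```

The three distinct correlation values exhibited by $\mathbf{Q}^{[1]}$ are exactly the diagonal, intra-block, and inter-block levels that reappear in Assumption \ref{asp:4}.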
\begin{assumption}[1\ac{rsb} Structure] \label{asp:4} \normalfont Considering the spin glass of replicas as defined in Definition \ref{def:replica_spin}, the replica correlation matrix is of the form \begin{align} \mathbf{Q}=\frac{\chi}{\upbeta} \mathbf{I}_m + p \mathbf{I}_{\frac{m\upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}} + q \mathbf{1}_m \label{eq:rsb-2} \end{align} where $\chi$, $p$, $q$ and $\mu$ are some non-negative real scalars. \end{assumption} In terms of Parisi's breaking scheme, Assumption \ref{asp:4} sets $\mathbf{Q}=\mathbf{Q}^{[1]}$ by letting $\mathbf{Q}^{[0]}$ have the \ac{rs} structure with parameters ${\chi}{\upbeta}^{-1}$ and $p$, and by choosing $\xi={\mu}{\upbeta}^{-1}$ and $\kappa=q$. Here, the 1\ac{rsb} structure reduces to \ac{rs} by setting $p=0$. Therefore, the set of 1\ac{rsb} correlation matrices contains the one considered in Assumption \ref{asp:3}. \begin{proposition}[1RSB Ansatz] \label{proposition:5} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation}. Suppose Assumptions \ref{asp:1} and \ref{asp:2}, as well as Assumption \ref{asp:4} hold.
Let $x\sim\mathrm{p}_x$ and \begin{align} \mathrm{g} = \arg \min_{v} \left[ \frac{1}{2\lambda^{\mathsf{s}}}(x+ \sqrt{\lambda^{\mathsf{s}}_0} z_0 + \sqrt{\lambda^{\mathsf{s}}_1} z_1 -v)^2 + u(v) \right] \label{eq:rsb-3} \end{align} with $v \in \mathbbmss{X}$ and $\lambda^{\mathsf{s}}_0$, $\lambda^{\mathsf{s}}_1$ and $\lambda^{\mathsf{s}}$ being defined by \begin{subequations} \begin{align} \lambda^{\mathsf{s}}_0 &= \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right]^{-2} \frac{\partial}{\partial \chi} \left\lbrace \left[\lambda_0(\chi + \mu p)-\lambda q \right] \mathrm{R}_{\mathbf{J}}(- \frac{\chi+\mu p}{\lambda})\right\rbrace, \label{eq:rsb-3a} \\ \lambda^{\mathsf{s}}_1 &=\left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right]^{-2} \left[ \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}) - \mathrm{R}_{\mathbf{J}}(- \frac{\chi+\mu p}{\lambda}) \right] \lambda \mu^{-1}, \label{eq:rsb-3b} \\ \lambda^{\mathsf{s}}&= \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right]^{-1} \lambda \label{eq:rsb-3c} \end{align} \end{subequations} for some non-negative real $\chi$, $p$, $q$ and $\mu$ and real variables $z_0$ and $z_1$. Then, the asymptotic distortion~in~\eqref{eq:sys-9}~reads \begin{align} \mathsf{D}^{\mathbbmss{W}}= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g},x) \ \tilde{\Lambda}{(z_1|z_0,x)} \mathrm{D} z_1 \mathrm{D} z_0, \label{eq:rsb-4} \end{align} with $\tilde{\Lambda}{(z_1|z_0,x)} \coloneqq \left[ \int \Lambda{(z_1,z_0,x)} \mathrm{D} z_1 \right]^{-1} \Lambda{(z_1,z_0,x)}$ and $\Lambda {(z_1,z_0,x)}$ being defined by \begin{align} \Lambda {(z_1,z_0,x)} = e^{- \mu \left[ \tfrac{1}{2 \lambda^{\mathsf{s}}}(x+ \sqrt{\lambda^{\mathsf{s}}_0} z_0 + \sqrt{\lambda^{\mathsf{s}}_1} z_1 -\mathrm{g})^2 - \tfrac{1}{2\lambda^{\mathsf{s}}} (\sqrt{\lambda^{\mathsf{s}}_0} z_0 + \sqrt{\lambda^{\mathsf{s}}_1} z_1 )^2 + u(\mathrm{g})\right]}. 
\label{eq:rsb-5} \end{align} The scalars $\chi$, $p$ and $q$ satisfy \begin{subequations} \begin{align} \sqrt{\lambda^{\mathsf{s}}_0}\left[ \chi+\mu p \right] &= \lambda^{\mathsf{s}} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z_0 \ \tilde{\Lambda}{(z_1|z_0,x)} \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:rsb-6a}\\ \sqrt{\lambda^{\mathsf{s}}_1}\left[ \chi+\mu q + \mu p \right] &= \lambda^{\mathsf{s}} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z_1 \ \tilde{\Lambda}{(z_1|z_0,x)} \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:rsb-6b}\\ q+p &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \ \tilde{\Lambda}{(z_1|z_0,x)} \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:rsb-6c} \end{align} \end{subequations} and $\mu$ is a solution of the fixed point equation \begin{align} \frac{\mu}{2\lambda^{\mathsf{s}}}\left[\mu\frac{\lambda^{\mathsf{s}}_1}{\lambda^{\mathsf{s}}}q+p\right]-\frac{1}{2\lambda}\int_0^{\mu p} \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\omega}{\lambda}) \mathrm{d} \omega = \mathsf{E}\hspace{.5mm} \int \log\tilde{\Lambda}{(z_1|z_0,x)} \ \tilde{\Lambda}{(z_1|z_0,x)} \mathrm{D} z_1 \mathrm{D} z_0. \label{eq:rsb-7} \end{align} Moreover, $\chi$, $p$, $q$ and $\mu$ minimize the zero temperature free energy $\mathsf{F}^{0[1]}_{\mathsf{rsb}}$ which reads \begin{align} \mathsf{F}^{0[1]}_{\mathsf{rsb}}=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}(\omega) \mathrm{d} \omega - \mathrm{F} (1) \right] - \frac{1}{\mu} \mathsf{E}\hspace{.5mm} \int \log \left[ \int \Lambda{(z_1,z_0,x)} \mathrm{D} z_1 \right] \ \mathrm{D} z_0 \label{eq:rsb-8} \end{align} with $\mathrm{F}(\cdot)$ being defined as \begin{align} \mathrm{F}(\omega)=\frac{1}{\mu} \frac{\mathrm{d}}{\mathrm{d} \omega} \left[\int_{\chi \omega}^{\left[\chi+\mu p\right] \omega} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda} ) \mathrm{d} t \right] + \left[q-\lambda_0 \frac{\chi+\mu p}{\lambda} \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\mu p}{\lambda} \omega) \right]. 
\label{eq:rsb-9} \end{align} \end{proposition} \begin{proof} See Appendix \ref{app:c}. \end{proof} Similar to our approach under the \ac{rs} ansatz, we employ Proposition \ref{proposition:5} to introduce the decoupled 1\ac{rsb} single-user system which describes the statistical behavior of the \ac{map} estimator's input-output entries under the 1\ac{rsb} assumption. The decoupled system under 1\ac{rsb} differs from its \ac{rs} counterpart by a new impairment term which is correlated with the source and noise symbols through a joint distribution. The impairment term intuitively plays the role of a correction factor which compensates for the possible inaccuracy of the \ac{rs} ansatz. The decoupled \ac{map} estimator, however, follows the same structure as for \ac{rs}. \begin{figure}[t] \begin{center} \begin{tikzpicture}[auto, node distance=2.5cm,>=latex'] \node [input, name=input] {}; \node [margin, right of=input, node distance=1.6cm,minimum height=2.2em,minimum width=3.5em,fill=yellow!20] (margin) {}; \node [sum, right of=input,node distance=1.6cm, fill=blue!20] (sum0) {$+$}; \node [sum, above of=sum0, node distance=.8cm,fill=blue!20] (times0) {$\times$}; \node [input, right of=times0, node distance=.9cm] (cof0) {}; \node [sum, right of=sum0, node distance=1.8cm,fill=blue!20] (sum1) {$+$}; \node [sum, above of=sum1, node distance=.8cm,fill=blue!20] (times1) {$\times$}; \node [input, right of=times1, node distance=.7cm] (cof1) {}; \node [block, above of=times0, node distance=1.2cm,minimum height=1.7em, minimum width=5em,fill=red!20] (cond) { $\mathrm{p}_{z_1|z_0,\mathrm{x}}$ }; \node [input, above of=times1, node distance=.8cm] (noise1) {}; \node [block, right of=sum1,fill=red!20] (estimator) { $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ }; \node [output, right of=estimator, node distance=1.7cm] (output) {}; \draw [draw,->] (input) -- node[pos=.1] {$\mathrm{x}$} (sum0) ; \draw [->] (sum0) -- node { } (sum1); \draw [->] (sum1) -- node[pos=.8] {$y$} (estimator); \draw
[->] (estimator) -- node[pos=.9] [name=x_h] {$\hat{\mathrm{x}}$}(output); \draw [draw,->] (times0) -- node[pos=.1] {}(sum0); \draw [draw,->] (times1) -- node[pos=.1] {}(sum1); \draw [draw,->] (cof0) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_1}$}(times0); \draw [draw,->] (cof1) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_0}$}(times1); \draw [draw,->] (cond) -- node[pos=.3] {$z_1$}(times0); \draw [draw,->] (noise1) -- node[pos=.1] {$z_0$}(times1); \end{tikzpicture} \end{center} \caption{The decoupled scalar system under the 1\ac{rsb} ansatz.} \label{fig:2} \end{figure} \begin{definition}[1\ac{rsb} decoupled system] \label{def:1rsb_single_user} Fig. \ref{fig:2} defines the 1\ac{rsb} decoupled system in which \begin{itemize} \item the source symbol $\mathrm{x}$ is distributed with $\mathrm{p}_{\mathrm{x}}$ over the support $\mathbbmss{X}$, \item $z_0$ is a zero-mean and unit-variance Gaussian random variable, \item $z_1$ is a random variable correlated with $\mathrm{x}$ and $z_0$, \item $\mathrm{x}$ and $z_0$ are independent, and \begin{align} \mathrm{p}_{z_1|\mathrm{x},z_0}=\tilde{\Lambda} {(z_1|z_0,x)} \phi(z_1) \label{eq:rsb-10} \end{align} with $\tilde{\Lambda}$ defined in Proposition \ref{proposition:5}. \item $\hat{\mathrm{x}}$ is estimated from the observation $y \coloneqq \mathrm{x}+ \sqrt{\lambda^{\mathsf{s}}_0} z_0+ \sqrt{\lambda^{\mathsf{s}}_1} z_1$ as \begin{align} \hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]. \label{eq:rsb-11} \end{align} \item $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ is the decoupled \ac{map} estimator with the estimation parameter $\lambda^{\mathsf{s}}$ and the utility function $u(\cdot)$ as defined in Definition \ref{def:single_user_estimator}. \item $\lambda^{\mathsf{s}}_0$, $\lambda^{\mathsf{s}}_1$ and $\lambda^{\mathsf{s}}$ are defined as in Proposition \ref{proposition:5}. 
\end{itemize} \end{definition} \begin{proposition}[1\ac{rsb} Decoupling Principle] \label{proposition:6} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation} and be estimated via the \ac{map} estimator in \eqref{eq:int-2}. Consider the 1\ac{rsb} decoupled system as defined in Definition \ref{def:1rsb_single_user}, and suppose Assumptions \ref{asp:1}, \ref{asp:2} and \ref{asp:4} hold. Then, for all $j\in [1:n]$, the tuple $({\hat{x}}_j,x_j)$ converges in distribution to $(\hat{\mathrm{x}},\mathrm{x})$ if $\mathrm{p}_x=\mathrm{p}_\mathrm{x}$. \end{proposition} \begin{proof} The proof takes exactly the same steps as for the \ac{rs} decoupling principle, using Proposition \ref{proposition:5}. \end{proof} The 1\ac{rsb} decoupled system, in general, provides a more accurate approximation of the estimator's asymptotics by searching over a larger set of solutions which includes the \ac{rs} ansatz. To investigate the latter statement, consider the case of $p=0$. In this case, the 1\ac{rsb} structure reduces to \ac{rs}. Setting $p=0$ in Proposition \ref{proposition:5}, $\lambda^{\mathsf{s}}_1$ becomes zero, and consequently $\tilde{\Lambda}(z_1|z_0,x)=1$. Moreover, the fixed point equations in \eqref{eq:rsb-7} hold for any choice of $\mu$, and the scalars $\chi$ and $q$ couple through the same set of equations as in the \ac{rs} ansatz. The zero temperature free energy of the system, furthermore, reduces to its \ac{rs} form under the assumption of $p=0$. Denoting the parameters of the \ac{rs} ansatz by $\left[ \chi_{\mathsf{rs}}, q_{\mathsf{rs}} \right]$, it is then concluded that $\left[ \chi,p, q, \mu \right]=\left[ \chi_{\mathsf{rs}},0, q_{\mathsf{rs}},0 \right]$ is a solution to the 1\ac{rsb} fixed point equations whenever a stable solution to the \ac{rs} fixed point equations exists.
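The first step of this reduction can be written out explicitly: substituting $p=0$ into \eqref{eq:rsb-3b} makes the two R-transform evaluations coincide, i.e.,
\begin{align*}
\lambda^{\mathsf{s}}_1 = \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) \right]^{-2} \left[ \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}) - \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}) \right] \lambda \mu^{-1} = 0.
\end{align*}
With $\lambda^{\mathsf{s}}_1=0$, the exponent in \eqref{eq:rsb-5} does not depend on $z_1$; hence, $\Lambda(z_1,z_0,x)$ is constant in $z_1$, and since $\int \mathrm{D} z_1 = 1$, it follows that $\tilde{\Lambda}(z_1|z_0,x)=1$.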
This solution, however, does not necessarily constitute the 1\ac{rsb} ansatz: the stable solution to the 1\ac{rsb} fixed point equations with minimum free energy may occur at some other point. We investigate the impact of replica symmetry breaking for some examples later through numerical results. Parisi's breaking scheme can be employed to extend the structure of the correlation matrix to an \ac{rsb} structure with more steps of breaking by recursively repeating the scheme. In fact, one can start from an \ac{rs} structured $\mathbf{Q}^{[0]}$ and employ the breaking scheme for $b$ steps to determine the correlation matrix $\mathbf{Q}^{[b]}$. In this case, the replica correlation matrix is referred to as the $b$\ac{rsb} correlation matrix. \begin{assumption}[$b$\ac{rsb} Structure] \label{asp:5} \normalfont Considering the spin glass of replicas as defined in Definition \ref{def:replica_spin}, the replica correlation matrix is of the form \begin{align} \mathbf{Q}=\frac{\chi}{\upbeta} \mathbf{I}_m + \sum_{\kappa=1}^b p_\kappa \mathbf{I}_{\frac{m\upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}} + q \mathbf{1}_m \label{eq:rsb-12} \end{align} where the scalars $\chi$ and $q$, as well as the sequences $\{p_\kappa\}$ and $\{\mu_\kappa\}$ with $\kappa\in[1:b]$, are non-negative reals, and $\{\mu_\kappa\}$ satisfies the following constraint \begin{align} \frac{\mu_{\kappa+1}}{\mu_{\kappa}} \in \mathbbmss{Z}^+ \label{eq:rsb-13} \end{align} for $\kappa\in[1:b-1]$. \end{assumption} Considering the correlation matrix in Proposition \ref{proposition:1} to be of the form indicated in Assumption \ref{asp:5}, the previous ans\"atze are extended to a more general ansatz which can reduce to the 1\ac{rsb} as well as the \ac{rs} ansatz. Proposition~\ref{proposition:7} expresses the replica ansatz under the $b$\ac{rsb} assumption. \begin{proposition}[$b$RSB Ansatz] \label{proposition:7} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation}.
Suppose Assumptions \ref{asp:1} and \ref{asp:2}, as well as Assumption \ref{asp:5} hold. For $\kappa \in [0:b]$, define the sequence $\{ \tilde{\chi}_\kappa \}$, such that $\tilde{\chi}_0=\chi$ and \begin{align} \tilde{\chi}_\kappa \coloneqq \chi+\sum_{\varsigma=1}^{\kappa} \mu_\varsigma p_\varsigma \label{eq:rsb-14} \end{align} for $\kappa \in [1:b]$. Let $x\sim\mathrm{p}_x$, and \begin{align} \mathrm{g} = \arg \min_{v} \left[ \frac{1}{2\lambda^{\mathsf{s}}}(x+ \sum_{\kappa=0}^b \sqrt{\lambda^{\mathsf{s}}_\kappa} z_\kappa-v)^2 + u(v) \right] \label{eq:rsb-15} \end{align} with $v \in \mathbbmss{X}$ and $\lambda^{\mathsf{s}}_0$, $\{ \lambda^{\mathsf{s}}_\kappa \}$ and $\lambda^{\mathsf{s}}$ being defined as \begin{subequations} \begin{align} \lambda^{\mathsf{s}}_0 &= \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_0}{\lambda}) \right]^{-2} \frac{\partial}{\partial \tilde{\chi}_{b}} \left\lbrace \left[ \lambda_0 \tilde{\chi}_{b} - \lambda q \right] \mathrm{R}_{\mathbf{J}}(- \frac{\tilde{\chi}_{b}}{\lambda}) \right\rbrace, \label{eq:rsb-16a} \\ \lambda^{\mathsf{s}}_\kappa &=\left[ \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_0}{\lambda}) \right]^{-2} \left[ \mathrm{R}_{\mathbf{J}}(- \frac{\tilde{\chi}_{\kappa-1}}{\lambda}) - \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_{\kappa}}{\lambda}) \right] \lambda \mu_\kappa^{-1}, \label{eq:rsb-16b} \\ \lambda^{\mathsf{s}}&= \left[ \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_0}{\lambda}) \right]^{-1} \lambda \label{eq:rsb-16c} \end{align} \end{subequations} for some non-negative real scalar $q$, sequences $\{ \tilde{\chi}_\kappa \}$ and $\{\mu_\kappa\}$ and sequence of real variables $\{z_\kappa\}$. 
Then, the asymptotic distortion defined in \eqref{eq:sys-9} reads \begin{align} \mathsf{D}^{\mathbbmss{W}}&= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g};x) \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa{(z_\kappa| \{z_\varsigma\}_{\kappa+1}^b, z_0,x)} \ \mathrm{D} z_\kappa \mathrm{D} z_0, \label{eq:rsb-17} \end{align} where $ {\{z_\varsigma\}_{\kappa}^b\coloneqq \{z_\kappa,\ldots,z_b\}}$ and $\tilde{\Lambda}_\kappa {(z_\kappa| \{z_\varsigma\}_{\kappa+1}^b, z_0,x)} = \left[ \int \Lambda_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,x)} \mathrm{D} z_\kappa\right]^{-1} \Lambda_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,x)}$ with \begin{align} \Lambda_{1} {(\{z_\varsigma\}_{1}^b, z_0,x)} \coloneqq e^{-\mu_1 \left[ \tfrac{1}{2 \lambda^{\mathsf{s}}}(x+ \sum\limits_{\kappa=0}^b \sqrt{\lambda^{\mathsf{s}}_\kappa} z_\kappa -\mathrm{g})^2 - \tfrac{1}{2\lambda^{\mathsf{s}}} (\sum\limits_{\kappa=0}^b \sqrt{\lambda^{\mathsf{s}}_\kappa} z_\kappa )^2 + u(\mathrm{g}) \right]} \label{eq:rsb-18} \end{align} and $\{\Lambda_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,x)}\}$ for $\kappa\in[2:b]$ being recursively determined by \begin{align} \Lambda_{\kappa} {(\{z_\varsigma\}_{\kappa}^b, z_0,x)} \coloneqq \left[ \int \Lambda_{\kappa-1} {(\{z_\varsigma\}_{\kappa-1}^b, z_0,x)} \ \mathrm{D} z_{\kappa-1} \right]^{\tfrac{\mu_{\kappa}}{\mu_{\kappa-1}}}. 
\label{eq:rsb-19} \end{align} The scalar $q$ and sequences $\{ \tilde{\chi}_\kappa \}$ and $\{p_\kappa\}$ satisfy \begin{subequations} \begin{align} \sum_{\kappa=1}^b p_\kappa + q &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa {(z_\kappa| \{z_\varsigma\}_{\kappa+1}^b, z_0,x)} \ \mathrm{D} z_\kappa \mathrm{D} z_0,\label{eq:rsb-20a} \\ \tilde{\chi}_{\kappa-1}+\mu_\kappa \left(\sum_{\varsigma=\kappa}^b p_\varsigma + q \right) &= \frac{\lambda^{\mathsf{s}}}{\sqrt{\lambda^{\mathsf{s}}_\kappa}} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z_\kappa \prod_{\tau=1}^b \tilde{\Lambda}_\tau {(z_\tau| \{z_\varsigma\}_{\tau+1}^b, z_0,x)} \ \mathrm{D} z_\tau \mathrm{D} z_0 , \label{eq:rsb-20b} \\ \tilde{\chi}_{b} &= \frac{\lambda^{\mathsf{s}}}{\sqrt{\lambda^{\mathsf{s}}_0}} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z_0 \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa {(z_\kappa| \{z_\varsigma\}_{\kappa+1}^b, z_0,x)} \ \mathrm{D} z_\kappa \mathrm{D} z_0, \label{eq:rsb-20c} \end{align} \end{subequations} for $\kappa\in[1:b]$, and the sequence $\{ \mu_\kappa\}$ is given by \begin{align} \{ \mu_\kappa \} =\arg\min_{\{ \tilde{\mu}_\kappa \} } \left[ \frac{1}{2\lambda} \int_0^1 \mathrm{F}(\omega;\{ \tilde{\mu}_\kappa \}) \mathrm{d} \omega - \frac{1}{\tilde{\mu}_b} \mathsf{E}\hspace{.5mm} \int \log \left[ \int \Lambda_b {(z_b, z_0,x)} \mathrm{D} z_b \right] \ \mathrm{D} z_0 - \frac{1}{2\lambda^{\mathsf{s}}} \Delta(\{ \tilde{\mu}_\kappa \}) \right] \label{eq:rsb-20.1} \end{align} where the function $\mathrm{F}(\cdot;\{ \mu_\kappa \})$ is defined as \begin{align} \mathrm{F}(\omega;\{ \mu_\kappa \}) \coloneqq \sum\limits_{\kappa=1}^b \frac{1}{\mu_\kappa} \frac{\mathrm{d}}{\mathrm{d} \omega} \int_{\tilde{\chi}_{\kappa-1} \omega}^{\tilde{\chi}_{\kappa} \omega} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda} ) \mathrm{d} t + \left[q- \frac{\lambda_0}{\lambda} \tilde{\chi}_b \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega
\mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_b}{\lambda} \omega) \right], \end{align} $\Delta(\cdot)$ is defined as \begin{align} \Delta(\{ \mu_\kappa \})=\sum_{\kappa=1}^b \frac{1}{\mu_\kappa} \left[\zeta_\kappa\tilde{\chi}_\kappa -\zeta_{\kappa-1} \tilde{\chi}_{\kappa-1} \right]+\zeta_b q -\frac{\lambda^{\mathsf{s}}_0}{\lambda^{\mathsf{s}}} \tilde{\chi}_b \label{eq:rsb-21} \end{align} with $\zeta_0=1$, and $\zeta_\kappa$ for $\kappa \in [1:b]$ denoting \begin{align} \zeta_\kappa \coloneqq 1 - \sum_{\varsigma=1}^{\kappa} \mu_\varsigma \frac{\lambda^{\mathsf{s}}_\varsigma}{\lambda^{\mathsf{s}}}, \label{eq:rsb-22} \end{align} and $\{ \tilde{\mu}_\kappa \} \in \mathbbmss{S}_{\boldsymbol{\mu}}$ in which \begin{align} \mathbbmss{S}_{\boldsymbol{\mu}} \coloneqq \left\lbrace \{ \mu_1, \ldots, \mu_b \} \ni \mu_\kappa\in\mathbbmss{R}^+ \ \wedge \ \frac{\mu_{\kappa+1}}{\mu_{\kappa}} \in \mathbbmss{Z}^+ \ \forall \kappa \in [1:b-1] \right\rbrace. \label{eq:rsb-23} \end{align} In the case of multiple solutions for $\chi$, $q$, $\{ p_\kappa \}$ and $\{ \mu_\kappa \}$, the ansatz is chosen such that the free energy at zero temperature \begin{align} \mathsf{F}^{0[b]}_{\mathsf{rsb}}=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}(\omega;\{\mu_\kappa\}) \mathrm{d} \omega - \mathrm{F} (1;\{\mu_\kappa\}) \right] - \frac{1}{\mu_b} \mathsf{E}\hspace{.5mm} \int \log \left[ \int \Lambda_b {(z_b, z_0,x)} \mathrm{D} z_b \right] \ \mathrm{D} z_0 \label{eq:rsb-24} \end{align} is minimized. \end{proposition} \begin{proof} See Appendix \ref{app:d}. \end{proof} One can readily observe that Proposition \ref{proposition:7} reduces to Propositions \ref{proposition:5} and \ref{proposition:3} by letting $b=1$ and $p_\kappa=0$ for $\kappa\in[1:b]$, respectively. The ansatz, moreover, extends the corresponding decoupled single-user system of the estimator, in accordance with the general decoupling principle investigated in Proposition \ref{proposition:2}.
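To make the role of these quantities concrete, the effective noise levels in \eqref{eq:rsb-16a}-\eqref{eq:rsb-16c} can be evaluated numerically. The sketch below does so under purely illustrative assumptions: the R-transform is taken as $\mathrm{R}_{\mathbf{J}}(\omega)=1/(1-\omega)$, the derivative in \eqref{eq:rsb-16a} is approximated by central differences, and all function and parameter names are hypothetical.

```python
import numpy as np

# Assumed R-transform, chosen only to make the example concrete;
# replace it with the R-transform of the actual matrix J.
def R_J(w):
    return 1.0 / (1.0 - w)

def noise_levels(chi, p, q, mu, lam, lam0):
    """Effective noise levels (lam_s0, {lam_s_kappa}, lam_s) of the
    decoupled channel, following (eq:rsb-16a)-(eq:rsb-16c)."""
    b = len(p)
    # chi_tilde_kappa = chi + sum of mu_s * p_s up to kappa, cf. (eq:rsb-14)
    chi_t = chi + np.concatenate(([0.0], np.cumsum(np.asarray(mu) * np.asarray(p))))
    R0 = R_J(-chi_t[0] / lam)
    # (eq:rsb-16a): derivative w.r.t. chi_tilde_b, by central differences
    f = lambda c: (lam0 * c - lam * q) * R_J(-c / lam)
    eps = 1e-6
    lam_s0 = R0 ** -2 * (f(chi_t[b] + eps) - f(chi_t[b] - eps)) / (2 * eps)
    # (eq:rsb-16b): one level per breaking step kappa = 1, ..., b
    lam_sk = [R0 ** -2 * (R_J(-chi_t[k - 1] / lam) - R_J(-chi_t[k] / lam)) * lam / mu[k - 1]
              for k in range(1, b + 1)]
    # (eq:rsb-16c)
    lam_s = lam / R0
    return lam_s0, lam_sk, lam_s

# one breaking step (b = 1) with illustrative parameter values
lam_s0, lam_sk, lam_s = noise_levels(chi=0.5, p=[0.1], q=0.2, mu=[2.0], lam=1.0, lam0=0.1)
```

Setting `p=[0.0]` in this sketch reproduces the reduction to \ac{rs} discussed earlier: the two R-transform evaluations in \eqref{eq:rsb-16b} coincide and the returned $\lambda^{\mathsf{s}}_1$ vanishes.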
By taking the same steps as in Proposition \ref{proposition:6}, the decoupled $b$\ac{rsb} single-user system is found which represents the extended version of the 1\ac{rsb} system with $b$ additive impairment taps. In fact, considering the impairment terms to intuitively play the role of correction factors, the $b$\ac{rsb} ansatz takes more steps of correction into account. The decoupled \ac{map} estimator, moreover, remains unchanged. \begin{figure}[t] \begin{center} \begin{tikzpicture}[auto, node distance=2.5cm,>=latex'] \node [input, name=input] {}; \node [margin, right of=input, node distance=3.6cm,minimum width=15em, fill=yellow!20] (margin) {}; \node [sum, right of=input,node distance=1.9cm, fill=blue!20] (sum0) {$+$}; \node [sum, above of=sum0, node distance=.8cm,fill=blue!20] (times0) {$\times$}; \node [input, right of=times0, node distance=.9cm] (cof0) {}; \node [input, right of=sum0,node distance=1.2cm] (inter1) {}; \node [input, right of=inter1,node distance=1cm] (inter2) {}; \node [sum, right of=inter2,node distance=1.2cm,fill=blue!20] (sumb) {$+$}; \node [sum, above of=sumb, node distance=.8cm,fill=blue!20] (timesb) {$\times$}; \node [input, right of=timesb, node distance=.9cm] (cofb) {}; \node [sum, right of=sumb, node distance=1.8cm, fill=blue!20] (sum1) {$+$}; \node [sum, above of=sum1, node distance=.8cm,fill=blue!20] (times1) {$\times$}; \node [input, right of=times1, node distance=.7cm,fill=red!20] (cof1) {}; \node [block, above of=times0, node distance=1.2cm,minimum height=1.7em, minimum width=8em,fill=red!20] (cond) { $\mathrm{p}_{z_1|z_2,\ldots,z_b,z_0,\mathrm{x}}$ }; \node [block, above of=timesb, node distance=1.2cm,minimum height=1.7em, minimum width=5em,fill=red!20] (condb) { $\mathrm{p}_{z_b|z_0,\mathrm{x}}$ }; \node [input, above of=times1, node distance=.8cm] (noise1) {}; \node [block, right of=sum1,fill=red!20] (estimator) { $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ }; \node [output, right of=estimator, node 
distance=1.7cm] (output) {}; \draw [draw,->] (input) -- node[pos=.1] {$\mathrm{x}$} (sum0) ; \draw [->] (sumb) -- node { } (sum1); \draw [->] (sum1) -- node[pos=.8] {$y$} (estimator); \draw [->] (estimator) -- node[pos=.9] [name=x_h] {$\hat{\mathrm{x}}$}(output); \draw [draw,->] (times0) -- node[pos=.1] {}(sum0); \draw [draw,->] (times1) -- node[pos=.1] {}(sum1); \draw [draw,->] (timesb) -- node[pos=.1] {}(sumb); \draw [-,dotted] (inter1) -- node { } (inter2); \draw [draw,->] (cof0) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_1}$}(times0); \draw [draw,->] (cofb) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_b}$}(timesb); \draw [draw,->] (cof1) -- node[pos=.01,above] {\small $\sqrt{\lambda^{\mathsf{s}}_0}$}(times1); \draw [draw,->] (cond) -- node[pos=.3] {$z_1$}(times0); \draw [draw,->] (condb) -- node[pos=.3] {$z_b$}(timesb); \draw [draw,->] (noise1) -- node[pos=.1] {$z_0$}(times1); \end{tikzpicture} \end{center} \caption{The decoupled scalar system under the \ac{brsb} ansatz.} \label{fig:3} \end{figure} \begin{definition}[$b$\ac{rsb} decoupled system] \label{def:brsb_single_user} Define the $b$\ac{rsb} decoupled system as a single-user system illustrated in Fig. \ref{fig:3} in which \begin{itemize} \item the source symbol $\mathrm{x}$ is distributed with $\mathrm{p}_{\mathrm{x}}$ over the support $\mathbbmss{X}$, \item $z_0$ is a zero-mean and unit-variance Gaussian random variable, \item $z_\kappa$ is a random variable correlated with $\mathrm{x}$, $z_0$ and $\{ z_{\kappa+1}, \ldots, z_b \}$ \item $\mathrm{x}$ and $z_0$ are independent, and \begin{align} \mathrm{p}_{z_\kappa|z_{\kappa+1}, \ldots, z_b,z_0, \mathrm{x}}=\tilde{\Lambda}_\kappa {(z_\kappa| \{z_\varsigma\}_{\kappa+1}^b, z_0,x)} \phi(z_\kappa) \label{eq:rsb-25} \end{align} with $\tilde{\Lambda}$ defined in Proposition \ref{proposition:7}. 
\item $\hat{\mathrm{x}}$ is estimated from the observation $y \coloneqq \mathrm{x}+ \sum\limits_{\kappa=0}^b \sqrt{\lambda^{\mathsf{s}}_\kappa} z_\kappa$ as \begin{align} \hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]. \label{eq:rsb-26} \end{align} \item $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ is the decoupled \ac{map} estimator with the estimation parameter $\lambda^{\mathsf{s}}$ and the utility function $u(\cdot)$ as defined in Definition \ref{def:single_user_estimator}, and \item $\lambda^{\mathsf{s}}_0$, $\{\lambda^{\mathsf{s}}_\kappa\}$ and $\lambda^{\mathsf{s}}$ for $\kappa\in[1:b]$ are as in Proposition \ref{proposition:7}. \end{itemize} \end{definition} \begin{proposition}[$b$\ac{rsb} Decoupling Principle] \label{proposition:8} Let the linear system \eqref{eq:sys-1} fulfill the constraints of Section \ref{sec:problem_formulation} and be estimated via the \ac{map} estimator in \eqref{eq:int-2}. Consider the $b$\ac{rsb} decoupled system as defined in Definition \ref{def:brsb_single_user}, and suppose Assumptions \ref{asp:1}, \ref{asp:2} and \ref{asp:5} hold. Then, for all $j\in [1:n]$, the tuple $({\hat{x}}_j,x_j)$ converges in distribution to $(\hat{\mathrm{x}},\mathrm{x})$ if $\mathrm{p}_x=\mathrm{p}_\mathrm{x}$. \end{proposition} \begin{proof} Using Proposition \ref{proposition:7}, the proof takes the same steps as that of Proposition \ref{proposition:4}. \end{proof} \subsection*{RSB Zero Temperature} In Appendix \ref{app:d}, it is shown that under the $b$\ac{rsb} assumption on the replica correlation matrix, the free energy of the corresponding spin glass at the inverse temperature $\upbeta$ reads \begin{align} \mathsf{F}(\upbeta)=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1) \right] +\mathsf{F}^{\mathsf{R}}(\upbeta).
\label{eq:rsb-27} \end{align} Here, $\mathsf{F}^{\mathsf{R}}(\upbeta)$ denotes the normalized free energy of the spin glass of replicas defined in \eqref{eq:rep-9} in the limit $m\downarrow0$, and the function $\mathrm{F}^{\upbeta}(\cdot)$ is defined as \begin{align} \mathrm{F}^{\upbeta}(\omega) = \sum\limits_{\kappa=1}^b \frac{1}{\mu_\kappa} \frac{\mathrm{d}}{\mathrm{d} \omega} \int_{\tilde{\chi}_{\kappa-1} \omega}^{\tilde{\chi}_{\kappa} \omega} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda} ) \mathrm{d} t + \frac{\chi}{\upbeta} \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) + \left[q- \frac{\lambda_0}{\lambda} \tilde{\chi}_b \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_b}{\lambda} \omega) \right]. \label{eq:rsb-28} \end{align} Following the discussion in Section \ref{subsec:cons_test}, the entropy at zero temperature reads \begin{align} \mathrm{H}^{0[b]}_{\mathsf{rsb}}=\lim_{\upbeta\uparrow\infty} \frac{\upbeta^2}{2\lambda} \frac{\partial}{\partial \upbeta} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1) \right] \label{eq:rs-29} \end{align} which reduces to \begin{align} \mathrm{H}^{0[b]}_{\mathsf{rsb}}= \frac{\chi}{2\lambda}\left[ \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) - \int_0^1 \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) \mathrm{d} \omega \right]. \label{eq:rsb-30} \end{align} Equation \eqref{eq:rsb-30} justifies the conjecture in \cite{zaidel2012vector} and states that the zero temperature entropy under any number of breaking steps, including the \ac{rs} case, takes the same form and depends only on the scalar $\chi$. In fact, the Hamiltonian in \eqref{eq:int-7} reduces to the one considered in vector precoding by considering ${\boldsymbol{x}}$ to be the deterministic vector of zeros, $\lambda_0=0$, $\lambda=1$ and $u({\boldsymbol{v}})=0$.
Substituting in \eqref{eq:rsb-30}, the zero temperature entropy reduces to the one determined in \cite{zaidel2012vector} up to a factor of $2$. The factor comes from the difference in the prior assumption on the support of the microstates\footnote{In \cite{zaidel2012vector}, the authors considered ${\boldsymbol{v}}$ to be a complex vector.}. \section{Replica Simulator: Characterization via the Single-User Representation} \label{sec:rep_sim} The general $b$\ac{rsb} decoupling principle determines an equivalent single-user system which describes the input-output statistics of the \ac{map} estimator under the $b$\ac{rsb} ansatz. In order to specify the exact parameters of the decoupled single-user system, the set of fixed point equations needs to be solved explicitly. In this section, we propose an alternative approach which describes an ansatz in terms of the corresponding decoupled system's input-output statistics. We define the exact form of the decoupled system as the ``steady state'' of a transition system named ``replica simulator''. The proposed approach enables us to investigate the properties of the \ac{rs} and \ac{rsb} ans\"atze by studying the replica simulator. In order to clarify the idea of the replica simulator, let us define a set of input-output statistics regarding the $b$\ac{rsb} decoupled system. \begin{definition} \label{def:single_user_parameters} Consider the single-user system consistent with the block diagram in Fig. \ref{fig:3}. Denote the joint distribution of the source and impairment terms with $\mathrm{p}_{\mathrm{x},z_0,\ldots,z_b}$. For this system, \begin{itemize} \item the $\kappa$th noise-error correlation is defined as \begin{align} \mathsf{C}_\kappa = \frac{1}{\sqrt{\lambda^{\mathsf{s}}_\kappa}} \mathsf{E}\hspace{.5mm} (\hat{\mathrm{x}}-\mathrm{x}) z_\kappa \label{eq:sim-1} \end{align} for $\kappa\in[0:b]$, and \item the \ac{mse} is denoted by \begin{align} \mathsf{MSE} = \mathsf{E}\hspace{.5mm} (\hat{\mathrm{x}}-\mathrm{x})^2.
\label{eq:sim-2} \end{align} \end{itemize} \end{definition} Invoking Definition \ref{def:single_user_parameters}, the $b$\ac{rsb} ansatz can be completely represented in terms of the input-output statistics of the decoupled system. In fact, by means of Definition \ref{def:single_user_parameters}, the fixed point equations in \eqref{eq:rsb-20a}-\eqref{eq:rsb-20c} can be expressed as \begin{subequations} \begin{align} \sum_{\kappa=1}^b p_\kappa + q &= \mathsf{MSE},\label{eq:sim-3} \\ \tilde{\chi}_{\kappa-1}+\mu_\kappa \left(\sum_{\varsigma=\kappa}^b p_\varsigma + q \right) &=\lambda^{\mathsf{s}} \mathsf{C}_\kappa, \label{eq:sim-4} \\ \tilde{\chi}_{b} &= \lambda^{\mathsf{s}} \mathsf{C}_0, \label{eq:sim-5} \end{align} \end{subequations} for $\kappa\in[1:b]$; moreover, the factor $\Lambda_1$ is given as \begin{align} \Lambda_{1} {(\{z_\varsigma\}_{1}^b, z_0,\mathrm{x})} = e^{-\mu_1 \left[ \tfrac{1}{2 \lambda^{\mathsf{s}}}(y -\hat{\mathrm{x}})^2 - \tfrac{1}{2\lambda^{\mathsf{s}}} (y-\mathrm{x})^2 + u(\hat{\mathrm{x}}) \right]}, \label{eq:sim-6} \end{align} which reduces to \begin{align} \Lambda_{1} {(\{z_\varsigma\}_{1}^b, z_0,\mathrm{x})} = \mathrm{p}_\mathrm{x}(\mathrm{x})^{\mu_1} \left[ \frac{\poster{\hat{\mathrm{x}}}{y}}{\poster{\mathrm{x}}{y}} \right]^{\mu_1} \label{eq:sim-6.1} \end{align} with $\poster{\cdot}{y}$ indicating the decoupled posterior distribution defined in Definition \ref{def:single_user_estimator}. The second term on the right hand side of \eqref{eq:sim-6.1} is an extended form of the likelihood ratio. 
By defining \begin{align} \Gamma_{1} {(\{z_\varsigma\}_{1}^b, z_0,\mathrm{x})} =\left[ \frac{\poster{\hat{\mathrm{x}}}{y}}{\poster{\mathrm{x}}{y}} \right]^{\mu_1}, \label{eq:sim-6.2} \end{align} \eqref{eq:sim-6.1} reads \begin{align} \Lambda_{1} {(\{z_\varsigma\}_{1}^b, z_0,\mathrm{x})} = \mathrm{p}_\mathrm{x}(\mathrm{x})^{\mu_1} \Gamma_{1} {(\{z_\varsigma\}_{1}^b, z_0,\mathrm{x})} , \label{eq:sim-6.3} \end{align} and $\Lambda_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,\mathrm{x})}$ for $\kappa\in[2:b]$ are determined by \begin{align} \Lambda_{\kappa} {(\{z_\varsigma\}_{\kappa}^b, z_0,\mathrm{x})} = \mathrm{p}_\mathrm{x}(\mathrm{x})^{\mu_\kappa} \Gamma_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,\mathrm{x})} \label{eq:sim-6.4} \end{align} where $\Gamma_\kappa {(\{z_\varsigma\}_{\kappa}^b, z_0,\mathrm{x})}$ are recursively defined as \begin{align} \Gamma_{\kappa} {(\{z_\varsigma\}_{\kappa}^b, z_0,\mathrm{x})} = \left[ \int \Gamma_{\kappa-1} {(\{z_\varsigma\}_{\kappa-1}^b, z_0,\mathrm{x})} \ \mathrm{D} z_{\kappa-1} \right]^{\tfrac{\mu_{\kappa}}{\mu_{\kappa-1}}}. \label{eq:sim-6.5} \end{align} The fixed point equation in \eqref{eq:rsb-20.1} is therefore rewritten accordingly. The above alternative representation of the $b$\ac{rsb} ansatz leads us to a new interpretation. In fact, one can define a transition system in which the vector of replica parameters denotes the state, and the decoupled single-user system defines the transition rule \cite{hansen2003algorithms,finkel1998fundamental}. We refer to this transition system as the ``replica simulator'', and define it formally as follows. \begin{definition}[Replica simulator] \label{def:rep_sim} Let $b$ be a non-negative integer. Consider the vector $\mathbf{s}$ as \begin{align} \mathbf{s}\coloneqq \left[ \chi, \mu_1, \ldots, \mu_b, p_1, \ldots, p_b,q\right] \label{eq:sim-7} \end{align} with entries satisfying the corresponding constraints in Proposition \ref{proposition:7}, and denote its support by $\mathbbmss{S}_b$.
The transition rule $\mathsf{T}^\mathsf{R}_b: \mathbbmss{S}_b \mapsto \mathbbmss{S}_b$ maps the prior state $\mathbf{s}_i\in \mathbbmss{S}_b$ to the posterior state $\mathbf{s}_f\in\mathbbmss{S}_b$ in the following way: \textit{$\mathsf{T}^\mathsf{R}_b$ realizes the $b$\ac{rsb} decoupled system considering the entries of $\mathbf{s}_i$ as the replica parameters. It then constructs the entries of $\mathbf{s}_f$ by determining a new set of replica parameters from the statistics of the decoupled system using the equivalent representation of the fixed point equations given in \eqref{eq:sim-3}-\eqref{eq:sim-6.4}.} The replica simulator of breaking order $b$ is then defined as the transition system $\mathsf{Sim}^\mathsf{R}[b] \coloneqq\left( \mathbbmss{S}_b, \mathsf{T}^\mathsf{R}_b \right)$. \end{definition} \begin{figure}[t] \begin{center} \begin{tikzpicture}[auto,>=latex'] \node [state,fill=red!20] (s_i) {$\mathbf{s}_i$}; \node [input, right of=s_i, node distance=1.8cm] (help) {}; \node [margin, below of=help, fill=yellow!20, node distance=1.6cm,minimum width=19em, minimum height=5em] (margin) {}; \node [block, below of=s_i, fill=blue!20, blue!20,node distance=1.6cm] (sys) {\textcolor{black}{\small $b$\ac{rsb} decoupling}}; \node [input, right of=sys, node distance=2cm] (mid) {}; \node [block, right of=mid, node distance=1.8cm, fill=blue!20,blue!20] (stat) {\textcolor{black}{\small $\{ \mathsf{C}_\kappa \}, \mathsf{MSE}, \cdots$}}; \node [state, right of=stat, node distance=2.2cm,fill=red!20] (s_f) {$\mathbf{s}_f$}; \node [input, right of=help, node distance=2.7cm] (h2) {}; \node [draw=none,fill=none, below of=h2, node distance=.4cm] {$\mathsf{T}^\mathsf{R}_b$}; \draw [draw,->] (s_i) -- node {} (sys) ; \draw [draw,->] (sys) -- node {$\mathrm{p}_{\hat{\mathrm{x}}|\mathrm{x}}$} (stat) ; \draw [draw,->] (stat) -- node {} (s_f) ; \end{tikzpicture} \end{center} \caption{Replica Simulator of breaking order $b$} \label{fig:4} \end{figure} The structure of the replica 
simulator is illustrated in Fig. \ref{fig:4}. For the replica simulator of breaking order $b$, a sequence of states $\left\lbrace \mathbf{s}_t \right\rbrace$ is considered to be a ``process'' if, for $t \in [1:\infty]$, \begin{align} \mathbf{s}_{t} \stackrel{\mathsf{T}^\mathsf{R}_b}{\longrightarrow} \mathbf{s}_{t+1}. \label{eq:sim-8} \end{align} The state $\mathbf{s}^\star$ is then called the ``steady state'' if setting $\mathbf{s}_t=\mathbf{s}^\star$ results in $\mathbf{s}_{t+1}=\mathbf{s}^\star$. Regarding Proposition \ref{proposition:7}, the $b$\ac{rsb} ansatz is in fact the steady state of the replica simulator which minimizes the free energy function. Our conclusion also extends to the \ac{rs} case if we set $b=0$. Considering Definition \ref{def:rep_sim}, as well as the above discussions, the $b$\ac{rsb} ansatz can be numerically investigated using the methods developed in the literature of transition systems. This approach may reduce the complexity of numerical analysis; however, it does not impact the computational complexity\footnote{We conjecture that in some cases, our $b$\ac{rsb} decoupled system represents the asymptotics of a decision-feedback system. The validity of this conjecture can further reduce the computational complexity.}. In fact, assuming that one realizes the $b$\ac{rsb} decoupled system for any desired state vector denoted in \eqref{eq:sim-7} via some method of realization, e.g., Monte Carlo simulation, the $b$\ac{rsb} ansatz can be found by means of an iterative algorithm which has been designed to find the steady state of a transition system. The latter statement can be clarified as in Scheme \ref{scheme:1}.
\begin{algorithm}[h] \floatname{algorithm}{Scheme} \algblock[Name]{Start}{End} \algblockdefx[NAME]{START}{END}% [2][Unknown]{Start #1(#2)}% {Ending} \algblockdefx[NAME]{}{OTHEREND}% [1]{Until (#1)} \begin{algorithmic} \BEGIN \Set replica simulator of breaking order $b$, $\mathsf{Sim}^\mathsf{R}[b]$ \If{$b=0$} \State $\mathsf{T}^\mathsf{R}_b$ corresponds to the \ac{rs} decoupled system \Else \State $\mathsf{T}^\mathsf{R}_b$ corresponds to the $b$\ac{rsb} decoupled system \EndIf \Initiate initial state $\mathbf{s}^0 \in \mathbbmss{S}_b$ \Evaluate $\mathbf{s}^0 \stackrel{\mathsf{T}^\mathsf{R}_b}{\longrightarrow} \mathbf{s}$ \Comment{\textsf{A}} \If{$\mathbf{s}=\mathbf{s}^0$} \State $\mathbf{s}^\star \gets \mathbf{s}$ \State \textbf{break} \Else \State \textbf{consider} mapping rule $\mathsf{I}\mathsf{M}(\cdot,\cdot): \mathbbmss{S}_b\times\mathbbmss{S}_b \mapsto \mathbbmss{S}_b$ \Comment{\textsf{B}} \State $\mathbf{s}^0 \gets \mathsf{I} \mathsf{M}(\mathbf{s}, \mathbf{s}^0)$ \State \textbf{return} \textsf{A} \EndIf \Output $\mathbf{s}^{\star}$ \ENDall \end{algorithmic} \caption{Analysis via Replica Simulator} \label{scheme:1} \end{algorithm} In Scheme \ref{scheme:1}, $\mathsf{T}^\mathsf{R}_b$ in step \textsf{A} can be realized via different methods. One may determine the input-output distribution of the single-user system analytically or simulate the system by generating impairment and source samples numerically via the Monte Carlo technique. Another degree of freedom lies in step \textsf{B}, where mapping rules with different convergence speeds can be employed. For algorithms designed based on Scheme \ref{scheme:1}, the computational complexity depends on the realization method, while the convergence speed is mainly governed by the chosen mapping rule $\mathsf{I}\mathsf{M}(\cdot,\cdot)$. The replica simulator introduces a systematic approach for investigating the replica ans\"atze based on the decoupling principle.
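As a minimal illustration of Scheme \ref{scheme:1}, the sketch below runs the replica simulator for the \ac{rs} case $b=0$, where the state is $\mathbf{s}=[\chi, q]$. Step \textsf{A} is realized here in closed form under purely illustrative assumptions, namely a standard Gaussian prior with $u(v)=v^2/2$ (so the decoupled \ac{map} estimator is linear) and the R-transform $\mathrm{R}_{\mathbf{J}}(\omega)=1/(1-\omega)$; in general, step \textsf{A} would instead be simulated, e.g., via Monte Carlo. The mapping rule in step \textsf{B} is taken to be plain damping.

```python
import numpy as np

def transition(state, lam=1.0, lam0=0.1):
    """One application of the transition rule T^R_0: realize the RS decoupled
    channel y = x + sqrt(lam_s0) z0 and read off (C0, MSE).  The closed forms
    below hold for R_J(w) = 1/(1 - w) and a standard Gaussian prior."""
    chi, q = state
    lam_s = lam + chi                        # lam / R_J(-chi/lam) for this R-transform
    lam_s0 = lam0 + q                        # effective noise level for this R-transform
    gain = 1.0 / (1.0 + lam_s)               # linear MAP estimator: x_hat = gain * y
    C0 = gain                                # E (x_hat - x) z0 / sqrt(lam_s0)
    mse = (lam_s ** 2 + lam_s0) * gain ** 2  # E (x_hat - x)^2
    return np.array([lam_s * C0, mse])       # new state: chi = lam_s C0, q = MSE

def replica_simulator(s0, gamma=0.7, tol=1e-10, max_iter=500):
    """Scheme 1 with the damping mapping rule IM(s, s0) = gamma*s + (1-gamma)*s0."""
    s = np.asarray(s0, dtype=float)
    for _ in range(max_iter):
        s_new = transition(s)
        if np.max(np.abs(s_new - s)) < tol:     # steady state reached
            return s_new
        s = gamma * s_new + (1.0 - gamma) * s   # step B: mapping rule IM
    return s

chi_star, q_star = replica_simulator([1.0, 1.0])
```

For these assumed parameters, the steady state satisfies $\chi^\star = \lambda^{\mathsf{s}\star}/(1+\lambda^{\mathsf{s}\star})$ with $\lambda^{\mathsf{s}\star}=\lambda+\chi^\star$, i.e., $\chi^\star$ solves $\chi^2+\lambda\chi-\lambda=0$.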
Moreover, it gives an intuition about the impact of symmetry breaking. To clarify the latter statement, let us consider an example. \textbf{Example.} (\ac{rs} vs. 1\ac{rsb} ansatz) Let $b=0$; thus, the \ac{rs} fixed point equations read \begin{subequations} \begin{align} q&=\mathsf{MSE}, \\ \chi &= \lambda^{\mathsf{s}} \mathsf{C}_0. \end{align} \end{subequations} Under the 1\ac{rsb} assumption, the equations are moreover given by \begin{subequations} \begin{align} q+p&=\mathsf{MSE}, \\ \chi + \mu p &= \lambda^{\mathsf{s}} \mathsf{C}_0, \\ \chi + \mu p+ \mu q &= \lambda^{\mathsf{s}} \mathsf{C}_1, \label{eq:fix2} \end{align} \end{subequations} and $\mu$ is determined through the fixed point equation \begin{align} \frac{\mu}{2\lambda^{\mathsf{s}}}\left[\mu\frac{\lambda^{\mathsf{s}}_1}{\lambda^{\mathsf{s}}}q+p\right]-\frac{1}{2\lambda}\int_0^{\mu p} \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\omega}{\lambda}) \mathrm{d} \omega &= \mathrm{I}(z_1;z_0,\mathrm{x} ) + \mathsf{D}_{\mathsf{K} \mathsf{L}} ( \mathrm{p}_{z_1} \Vert \phi ) \label{eq:fix1} \end{align} where $\mathrm{I}(\cdot;\cdot )$ and $\mathsf{D}_{\mathsf{K} \mathsf{L}} ( \cdot \parallel \cdot )$ denote the mutual information and Kullback-Leibler divergence, respectively. Assuming the system matrix to be \ac{iid} and setting $z_1$ to be independent of $z_0$ and $\mathrm{x}$, the right-hand side of \eqref{eq:fix1} tends to zero, and therefore, one concludes the solutions $\mu=0$ and $p=0$. Consequently, \eqref{eq:fix2} becomes ineffective, and the fixed point equations reduce to \ac{rs}. The latter observation can be interpreted in terms of the ``state evolution'' of the replica simulator. More precisely, assume that the initial state of the replica simulator with breaking order one is chosen such that in the corresponding decoupled system, $z_1$ is sufficiently correlated with the source and noise symbols.
In this case, by assuming the mapping rule $\mathsf{IM}(\cdot,\cdot)$ to be converging, the correlation reduces in each iteration of Scheme \ref{scheme:1}, and thus, at the steady state, $z_1$ becomes independent of $z_0$ and $\mathrm{x}$. The above discussion can be extended to replica simulators with larger breaking orders. Moreover, further properties of the \ac{rsb} ans\"atze could be studied using methods developed in the literature of transition systems\footnote{The concept of the replica simulator may further clarify the connection between \ac{rsb} ans\"atze and message passing based algorithms. Such investigations, however, are skipped here and can be considered as possible future work.}. \section{Large Compressive Sensing Systems} \label{sec:cs} Considering the setting represented in Section \ref{sec:problem_formulation}, a large compressive sensing system can be studied through our results by restricting the source's \ac{cdf} to be of the form \begin{align} \mathrm{F}_x(x)=(1-\alpha)\mathbf{1}\left\lbrace x\geq 0 \right\rbrace+\alpha \breve{\mathrm{F}}_x(x). \label{eq:cs-1} \end{align} In the large-system limit, the source vector distributed as \eqref{eq:cs-1} has $(1-\alpha)n$ entries equal to zero while the remaining $\alpha n$ entries are distributed with $\breve{\mathrm{F}}_x$. In this case, ${\boldsymbol{x}}$ is an $\alpha n$-sparse vector, and thus, \eqref{eq:sys-1} represents a large compressive sensing system with the sensing matrix $\mathbf{A}$. Considering the prior as in \eqref{eq:cs-1}, different recovery schemes are then investigated by restricting the postulated prior of the system accordingly. In this section, we study the asymptotics of several recovery schemes using our $b$\ac{rsb} decoupling principle for both the cases of continuous and finite alphabet sources.
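A source with the \ac{cdf} in \eqref{eq:cs-1} is straightforward to realize numerically. A minimal sketch, assuming for illustration that $\breve{\mathrm{F}}_x$ is the standard Gaussian \ac{cdf} (the function name and interface are ours):

```python
import numpy as np

def sparse_source(n, alpha, rng=None, nonzero_sampler=None):
    """Draw a length-n source with CDF
    F_x(x) = (1 - alpha) 1{x >= 0} + alpha F̆_x(x):
    each entry is zero with probability 1 - alpha and otherwise drawn
    from F̆_x (standard Gaussian below, as an illustrative choice)."""
    rng = np.random.default_rng(rng)
    if nonzero_sampler is None:
        nonzero_sampler = rng.standard_normal
    mask = rng.random(n) < alpha     # alpha-Bernoulli support pattern
    x = np.zeros(n)
    x[mask] = nonzero_sampler(mask.sum())
    return x
```

In the large-system limit, such a draw has close to $(1-\alpha)n$ zero entries, matching the sparsity picture above.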
\subsection{Continuous Sources} Assuming $\mathbbmss{X}=\mathbbmss{R}$, \eqref{eq:cs-1} describes a continuous random variable multiplied by an $\alpha$-Bernoulli random variable. In this case, by varying the utility function $u(\cdot)$, different reconstruction schemes are considered. Here, we address the linear, LASSO and $\ell_0$ norm recovery schemes. The results can however be employed to investigate a general $\ell_p$ norm recovery scheme \cite{zheng2017does}.\\ \exmpl{ex:1} (linear recovery scheme) The \ac{map} estimation reduces to the linear recovery scheme when the utility function is set to be \begin{align} u(v)=\frac{v^2}{2}. \label{eq:cs-2} \end{align} In fact, in this case, the \ac{map} estimator postulates the prior distribution to be a zero-mean and unit-variance Gaussian and performs considerably inefficiently when the source is sparse. Using the $b$\ac{rsb} decoupling principle, we conclude that in the large-system limit the source entry $x_j$ and the estimated entry ${\hat{x}}_j$, for any $j\in[1:n]$, converge in probability to a sparse random variable $\mathrm{x}$ distributed as in \eqref{eq:cs-1} and the estimated symbol $\hat{\mathrm{x}}\coloneqq\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]$, respectively, where the decoupled estimator reduces to \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= \frac{y}{1+\lambda^{\mathsf{s}}}, \label{eq:cs-3} \end{align} with $y$ being given by \begin{align} y=\mathrm{x}+\sum_{\kappa=0}^b \sqrt{\lambda^{\mathsf{s}}_\kappa} z_\kappa, \label{eq:cs-4} \end{align} and the scalars $\lambda^{\mathsf{s}}$ and $\left\lbrace \lambda^{\mathsf{s}}_\kappa \right\rbrace$ for $\kappa\in [0:b]$ are defined as in Proposition \ref{proposition:7}.
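The decoupled scalar channel \eqref{eq:cs-4} together with the linear estimator \eqref{eq:cs-3} is easy to evaluate by Monte Carlo simulation. A hedged sketch, again assuming a standard Gaussian nonzero part for the sparse source (the function and its interface are our own illustration):

```python
import numpy as np

def decoupled_mse(lam_s, lam_kappa, alpha, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E|x - g_map(y)|^2 for the decoupled
    channel y = x + sum_k sqrt(lam_kappa[k]) z_k of eq. (cs-4), with
    the linear estimator g_map(y) = y / (1 + lam_s) of eq. (cs-3).
    The sparse source follows (cs-1) with a standard Gaussian nonzero
    part (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples) * (rng.random(n_samples) < alpha)
    y = x + sum(np.sqrt(l) * rng.standard_normal(n_samples) for l in lam_kappa)
    x_hat = y / (1.0 + lam_s)
    return np.mean((x - x_hat) ** 2)
```

Sweeping such an estimate over the states of Scheme \ref{scheme:1} is one way to realize step \textsf{A} numerically.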
By letting $b=0$, the result reduces to the \ac{rs} decoupling principle reported in the literature, see \cite{rangan2012asymptotic,tulino2013support}; however, the result here holds for a wider set of sensing matrices and source distributions.\\ \exmpl{ex:2} (LASSO recovery scheme) To study the LASSO recovery scheme, we set \begin{align} u(v)=\abs{v}. \label{eq:cs-5} \end{align} Regarding the $b$\ac{rsb} decoupling principle, the prior distribution of the decoupled system's input $\mathrm{x}$ is postulated to be ``Laplacian'' or ``double exponential'' with unit variance. This postulation results in a better performance of the recovery scheme in many cases, since the Laplacian \ac{pdf} could be a more realistic approximation of the sparse source distribution. Consequently, the decoupled system's output is found as $\hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]$ with \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= [y-\lambda^{\mathsf{s}} \ \mathsf{sgn}(y)] \ w_{\lambda^{\mathsf{s}}}(y) \label{eq:cs-6} \end{align} where $y$ is denoted as in \eqref{eq:cs-4}, $w_{\lambda^{\mathsf{s}}}(\cdot)$ is the null window function with window width $\lambda^{\mathsf{s}}$ defined as \begin{align} w_{\lambda^{\mathsf{s}}}(y) = \left\{ \begin{array}{rl} 1 &\mbox{ $\abs{y} > \lambda^{\mathsf{s}}$} \\ 0 &\mbox{ $\abs{y} \leq \lambda^{\mathsf{s}}$} \end{array} \right. , \label{eq:cs-7} \end{align} and $\mathsf{sgn}(y)$ is the sign indicator. 
The decoupled single-user estimator in \eqref{eq:cs-6}, which is often referred to as the soft-thresholding operator, recovers the earlier \ac{rs} results by setting $b=0$ and the sensing matrix to be \ac{iid} \cite{rangan2012asymptotic}.\\ \exmpl{ex:3} ($\ell_0$ norm recovery scheme) The $\ell_0$ norm recovery scheme which considers \begin{align} u(v)= \mathbf{1} \left\lbrace v\neq 0\right\rbrace \label{eq:cs-8} \end{align} can perform significantly better than the aforementioned schemes when the sparsity increases. In this case, the prior distribution of the $b$\ac{rsb} decoupled system's input $\mathrm{x}$ is found as the limit of \begin{align} \mathrm{p}_{\mathrm{x}}^{\uptheta,\updelta}(\mathrm{x}) = \frac{1}{2\uptheta + 2 \updelta\left(e-1\right) }\left\{ \begin{array}{rl} e &\mbox{ $\phantom{0 \leq}\hspace*{1mm} \abs{\mathrm{x}} \leq \updelta$} \\ 1 &\mbox{ $\updelta <\abs{\mathrm{x}} \leq \uptheta$} \end{array} \right. , \label{eq:cs-9} \end{align} when $\uptheta \uparrow \infty$ and $\updelta\downarrow 0$. For finite values of $\uptheta$ and $\updelta$, $\mathrm{p}_{\mathrm{x}}^{\uptheta,\updelta}$ can be considered as a sparse distribution in which non-zero symbols occur uniformly. This prior explains the better performance of the $\ell_0$ norm recovery scheme compared to the linear and LASSO schemes. For this case, the output of the decoupled single-user system reads $\hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]$, such that \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= y w_{\vartheta}(y) \label{eq:cs-10} \end{align} where $y$ is denoted as in \eqref{eq:cs-4}, and $w_{\vartheta}(\cdot)$ is the null window function with $\vartheta=\sqrt{2 \lambda^{\mathsf{s}}}$. Here, $\mathrm{g}_{\mathsf{map}}[(\cdot);\lambda^{\mathsf{s}},u]$ is the hard-thresholding operator and recovers the analysis in \cite{rangan2012asymptotic} for a wider class of settings.
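The three decoupled estimators of Examples \ref{ex:1}--\ref{ex:3} are simple scalar maps of the decoupled observation $y$. A sketch (the function names and the vectorized form are ours; $\lambda^{\mathsf{s}}$ is passed in as a plain float):

```python
import numpy as np

def linear_est(y, lam_s):
    """Example 1: linear shrinkage, eq. (cs-3)."""
    return y / (1.0 + lam_s)

def lasso_est(y, lam_s):
    """Example 2: soft thresholding, eqs. (cs-6)-(cs-7):
    [y - lam_s*sgn(y)] inside the window, zero for |y| <= lam_s."""
    return np.sign(y) * np.maximum(np.abs(y) - lam_s, 0.0)

def l0_est(y, lam_s):
    """Example 3: hard thresholding with window sqrt(2*lam_s),
    eq. (cs-10)."""
    return np.where(np.abs(y) > np.sqrt(2.0 * lam_s), y, 0.0)
```

Note that the soft-thresholding form above is algebraically identical to $[y-\lambda^{\mathsf{s}}\,\mathsf{sgn}(y)]\,w_{\lambda^{\mathsf{s}}}(y)$ in \eqref{eq:cs-6}.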
The above examples have also been studied in earlier replica based studies, e.g., \cite{rangan2012asymptotic}, \cite{vehkapera2014analysis}. The given results can be analytically derived from the above expressions by considering the \ac{rs} ansatz and properly substituting the corresponding $\mathrm{R}$-transforms. We address the results of two important cases reported in the literature in the following. \subsubsection*{Special Case 1} In \cite{rangan2012asymptotic}, the authors addressed the case in which an \ac{iid} sparse source is sampled by an \ac{iid} sensing matrix whose entries are zero-mean random variables with variance vanishing proportionally to $k^{-1}$. The asymptotic performance of the estimator was then addressed when the linear, LASSO, and $\ell_0$ norm recovery schemes are employed using the \ac{rs} \ac{map} decoupling principle. The results reported in \cite{rangan2012asymptotic} can be derived directly by setting the $\mathrm{R}$-transform in Proposition \ref{proposition:4} to be \begin{align} \mathrm{R}_{\mathbf{J}} (\omega) = \frac{1}{1-\mathsf{r} \omega}. \end{align} \subsubsection*{Special Case 2} The results of \cite{rangan2012asymptotic} were extended in \cite{vehkapera2014analysis} to a larger set of sensing matrices, and the \ac{rs} prediction of the asymptotic \ac{mse} was determined for sparse Gaussian sources. The given results can be recovered by considering the distortion function \begin{align} \mathsf{d}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\norm{{\boldsymbol{\hat{x}}}-{\boldsymbol{x}}}^2, \end{align} and the source distribution to be \eqref{eq:cs-1} with $\breve{\mathrm{F}}_x$ representing the zero-mean and unit-variance Gaussian \ac{cdf}. \subsection{Finite Alphabet Sources} Our result can be further employed to study the sampling problem of finite alphabet sources.
Consider \begin{align} \mathbbmss{X} = \left\lbrace 0, t_1, \ldots, t_{\ell-1} \right\rbrace, \label{eq:cs-11} \end{align} in which the symbol $0$ occurs with probability $1-\alpha$, and the other $\ell-1$ outcomes are distributed according to $\mathrm{p}_t$. Consequently, the source distribution reads \begin{align} \mathrm{p}_x(x)=\left(1-\alpha\right) \mathbf{1} \{ x=0\} + \alpha \mathbf{1} \{ x\neq 0\} \mathrm{p}_t(x) \label{eq:cs-12} \end{align} which can be interpreted as the multiplication of the non-zero discrete random variable $t$ distributed with $\mathrm{p}_t$ and an $\alpha$-Bernoulli random variable. For the sake of brevity, we denote the sorted version of the symbols in the support $\mathbbmss{X}$ by $\mathrm{c}_1, \ldots, \mathrm{c}_\ell$ in which \begin{align} -\infty < \mathrm{c}_1 < \mathrm{c}_2 < \ldots < \mathrm{c}_\ell < + \infty. \label{eq:cs-13} \end{align} For notational compactness, we further define $\mathrm{c}_0$ and $\mathrm{c}_{\ell+1}$ to be $-\infty$ and $+\infty$, respectively. Similar to the continuous case, different choices of the utility function $u(\cdot)$ address different types of reconstruction schemes which we investigate in the sequel.\\ \exmpl{ex:4} (linear recovery scheme) We consider the case in which the finite alphabet source is reconstructed via the linear recovery scheme as introduced in Example \ref{ex:1}. Using the $b$\ac{rsb} decoupling principle, the source and estimated symbols $(x_j, {\hat{x}}_j)$ converge to the random variables $\mathrm{x}$ and $\hat{\mathrm{x}}$ for all $j\in[1:n]$ where $\mathrm{x}$ is distributed with $\mathrm{p}_x$ defined in \eqref{eq:cs-12}, and $\hat{\mathrm{x}}$ is found as $\hat{\mathrm{x}}=\mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]$ with \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= \mathrm{c}_k \qquad \text{if} \qquad y \in \left( v^{\ell_2}_k,v^{\ell_2}_{k+1} \right] \label{eq:cs-14} \end{align} for $k \in [1:\ell]$.
The scalar $y$ indicates the observation symbol in the equivalent decoupled system, and the boundary point $v^{\ell_2}_k$ is defined as \begin{align} v^{\ell_2}_k \coloneqq \frac{ 1+\lambda^{\mathsf{s}} }{2} \left( \mathrm{c}_{k-1}+\mathrm{c}_k \right). \label{eq:cs-15} \\ \nonumber \end{align} \exmpl{ex:5} (LASSO recovery scheme) Replacing the reconstruction scheme in Example \ref{ex:4} with LASSO, the single-user estimator of the $b$\ac{rsb} decoupled system is of the form \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= \mathrm{c}_k \qquad \text{if} \qquad y \in \left( v^{\ell_1}_k,v^{\ell_1}_{k+1} \right] \label{eq:cs-16} \end{align} for $k \in [1:\ell]$ where the boundary point $v^{\ell_1}_k$ reads \begin{align} v^{\ell_1}_k \coloneqq \frac{1}{2} \left( \mathrm{c}_{k-1}+\mathrm{c}_k \right) + \lambda^{\mathsf{s}} \frac{\abs{\mathrm{c}_k}-\abs{\mathrm{c}_{k-1}}}{\mathrm{c}_k-\mathrm{c}_{k-1}}. \label{eq:cs-17} \\ \nonumber \end{align} \exmpl{ex:6} ($\ell_0$ norm recovery scheme) For finite alphabet sources, the $\ell_0$ norm recovery scheme is optimal in terms of symbol error rate, since it realizes the sparse uniform distribution. In fact, for the case in which the non-zero symbols of the source are uniformly distributed, the $\ell_0$ norm utility function exactly models the source's true prior, and therefore, can be considered as the optimal scheme. Under the $\ell_0$ norm recovery scheme, the $b$\ac{rsb} decoupled system reduces to \begin{align} \mathrm{g}_{\mathsf{map}}[(y);\lambda^{\mathsf{s}},u]= \mathrm{c}_k \qquad \text{if} \qquad y \in \left( v^{\ell_0}_k,v^{\ell_0}_{k+1} \right] \label{eq:cs-18} \end{align} with the boundary point $v^{\ell_0}_k$ being defined as \begin{align} v^{\ell_0}_k \coloneqq \frac{1}{2} \left( \mathrm{c}_{k-1}+\mathrm{c}_k \right) + \lambda^{\mathsf{s}} \frac{\mathbf{1}\{\mathrm{c}_k\neq 0\}-\mathbf{1}\{\mathrm{c}_{k-1}\neq 0\}}{\mathrm{c}_k-\mathrm{c}_{k-1}}.
\label{eq:cs-19} \end{align} for $k \in [1:\ell]$. \section{Numerical Results for Large Compressive Sensing Systems} \label{sec:numerics} In this section, we numerically investigate the examples of large compressive sensing systems for some known setups. For this purpose, we simulate the decoupled systems by setting the source distribution and sensing matrix to a specific form and determine the expected distortion of the equivalent scalar system. We then discuss the validity of the \ac{rs} and \ac{rsb} assumptions for these examples. \subsection{Simulation Setups} The settings and distortion functions considered in the numerical investigations are as follows. \subsubsection{Sensing Matrices} Throughout the numerical investigations, we consider the two important cases of random ``\ac{iid}'' and ``projector'' matrices. \begin{itemize} \item \textbf{\ac{iid} Random Matrix:} In this case, the entries of $\mathbf{A}_{k\times n}$ are supposed to be generated \ac{iid} from an arbitrary distribution $\mathrm{p}_a$. Without loss of generality, we assume that the entries are zero-mean random variables with variance $k^{-1}$. This structure is the most elementary and also the most widely studied case in random matrix theory. For this matrix, regardless of $\mathrm{p}_a$, it is well known that the asymptotic empirical eigenvalue \ac{cdf} of the Gramian $\mathbf{J}$ follows the Marcenko-Pastur law which states \begin{align} \mathrm{F}_\mathbf{J}(\lambda)=\left[ 1-\mathsf{r}^{-1}\right]^+ \mathbf{1}\{\lambda>0\} + \int_{-\infty}^\lambda \frac{\sqrt{\mathsf{r}-(1-u)^2}}{2 \pi \mathsf{r} u} \mathrm{d} u \label{eq:cs-20} \end{align} where $\left[x\right]^+$ returns $x$ when $x$ is non-negative and is zero otherwise \cite{muller2013applications,tulino2004random,couillet2011random}. Using the definition of the $\mathrm{R}$-transform, it is straightforward to show that $\mathrm{R}_{\mathbf{J}} (\cdot)$ reads \begin{align} \mathrm{R}_{\mathbf{J}} (\omega) = \frac{1}{1-\mathsf{r} \omega}.
\label{eq:cs-21} \end{align} \item \textbf{Projector Matrix:} Here, the only constraint on the sensing matrix is that the row vectors are orthogonal. Such matrices are also referred to as ``row orthogonal'' matrices. For the sensing matrix $\mathbf{A}_{k \times n}$, we assume the case that the row vectors are normalized by the number of rows and $k\leq n$; thus, the~outer~product~$\mathbf{A} \mathbf{A}^\mathsf{T}$~reads \begin{align} \mathbf{A} \mathbf{A}^{\mathsf{T}}=\frac{n}{k} \mathbf{I}_k. \label{eq:cs-22} \end{align} Consequently, the Gram matrix $\mathbf{J}$ takes two different eigenvalues: $\lambda=0$ with multiplicity $n-k$, and $\lambda=n k^{-1}$ with multiplicity $k$. Considering the definition of the factor $\mathsf{r}$ in \eqref{eq:eq:sys-1.3}, the asymptotic empirical \ac{cdf} of the eigenvalues reads \begin{align} \mathrm{F}_\mathbf{J}(\lambda)=\left[ 1-\mathsf{r}^{-1} \right] \mathbf{1}\{\lambda > 0 \} + \mathsf{r}^{-1} \mathbf{1} \{ \lambda > \mathsf{r} \} \label{eq:cs-23} \end{align} which results in the $\mathrm{R}$-transform of the form \begin{align} \mathrm{R}_{\mathbf{J}} (\omega) = \frac{\mathsf{r} \omega - 1 + \sqrt{(\mathsf{r} \omega -1)^2+4 \omega}}{2 \omega}. \label{cs:24} \end{align} \end{itemize} } \subsubsection{Source Model}{ We consider the continuous and finite alphabet sources to be distributed with ``sparse Gaussian'' and ``sparse uniform'' distributions, respectively. More precisely, we assume the entries of the continuous and finite alphabet sources to be generated from a Gaussian and uniform distribution, respectively, and multiplied by a Bernoulli random variable which takes $1$ with probability $\alpha$ and $0$ with probability $1-\alpha$. Moreover, we assume the nonzero outcomes of the finite alphabet source to be of the symmetric form \begin{align} \left\lbrace \pm a, \ldots, \pm \kappa a \right\rbrace \label{eq:cs-source} \end{align} for some positive real $a$ and integer $\kappa$.
} \subsubsection{Distortion Function}{ Regarding the continuous sources, we determine the performance of different estimators by considering the \ac{mse} as the distortion function, i.e., \begin{align} \mathsf{d}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\norm{{\boldsymbol{\hat{x}}}-{\boldsymbol{x}}}^2. \label{cs:25} \end{align} For the finite alphabet sources, we moreover consider the error probability as the measure, for which \begin{align} \mathsf{d}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\sum_{i=1}^n \mathbf{1} \left\lbrace{\hat{x}}_i \neq x_i \right\rbrace \label{cs:26} \end{align} as well as the \ac{mse}. Considering either case, it is clear that the \ac{mse} obtained by any $\ell_p$ norm recovery scheme is bounded from below by the \ac{mmse} bound reported in the literature \cite{barbier2017mutual,reeves2016replica}. } \subsection{Numerical Results for Continuous Sources} Considering Examples \ref{ex:1}-\ref{ex:3}, we address the case in which a sparse Gaussian source with sparsity factor $\alpha=0.1$ is sampled via a random sensing matrix. Fig.
\ref{fig:5} shows the \ac{rs} prediction of normalized \ac{mse}, defined as \begin{align} {\mathsf{MSE}}^0=\frac{\mathsf{MSE}}{\mathsf{E}\hspace{.5mm} \abs{x}^2}=\alpha^{-1} \mathsf{MSE}, \label{cs:27} \end{align} \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/fig-1}{ \psfrag{lambda}[c][c][0.35]{$\lambda$} \psfrag{norm-mse}[c][c][0.25]{$\mathsf{MSE}^0$ in [dB]} \psfrag{AAABBBCCC001AA}[l][l][0.25]{Linear-\ac{iid}} \psfrag{AAABBBCCC001BB}[l][l][0.25]{Linear-Projector} \psfrag{AAABBBCCC002AA}[l][l][0.25]{LASSO-\ac{iid}} \psfrag{AAABBBCCC002BB}[l][l][0.25]{LASSO-Projector} \psfrag{AAABBBCCC003AA}[l][l][0.25]{$\ell_0$ norm-\ac{iid}} \psfrag{AAABBBCCC003BB}[l][l][0.25]{$\ell_0$ norm-Projector} \psfrag{0}[r][c][0.25]{$0$} \psfrag{-2}[r][c][0.25]{$-2$} \psfrag{-4}[r][c][0.25]{$-4$} \psfrag{-6}[r][c][0.25]{$-6$} \psfrag{-8}[r][c][0.25]{$-8$} \psfrag{-10}[r][c][0.25]{$-10$} \psfrag{-12}[r][c][0.25]{$-12$} \psfrag{-14}[r][c][0.25]{$-14$} \psfrag{-16}[r][c][0.25]{$-16$} \psfrag{-18}[r][c][0.25]{$-18$} \psfrag{-20}[r][c][0.25]{$-20$} \psfrag{0.05}[c][c][0.25]{$0.05$} \psfrag{0.1}[c][c][0.25]{$0.1$} \psfrag{0.15}[c][c][0.25]{$0.15$} \psfrag{0.2}[c][c][0.25]{$0.2$} \psfrag{0.25}[c][c][0.25]{$0.25$} \psfrag{0.3}[c][c][0.25]{$0.3$} \psfrag{0.35}[c][c][0.25]{$0.35$} \psfrag{0.4}[c][c][0.25]{$0.4$} \psfrag{0.45}[c][c][0.25]{$0.45$} \psfrag{0.5}[c][c][0.25]{$0.5$} }} \caption{\ac{rs} predicted normalized \ac{mse} versus the estimation parameter $\lambda$ for the linear, LASSO and $\ell_0$ norm recovery schemes considering the compression rate $\mathsf{r}=2$. The sparsity factor is set to be $\alpha=0.1$ and the noise variance $\lambda_0$ is set such that the source power to noise power ratio becomes $10$ dB. The dashed and solid lines respectively indicate the cases with random projector and \ac{iid} measurements. 
The curves match the numerical results reported in \cite{rangan2012asymptotic, vehkapera2014analysis}.} \label{fig:5} \end{figure} as a function of the estimation parameter $\lambda$. The compression rate is set to be $\mathsf{r}=2$, and both the \ac{iid} random and projector sensing matrices are considered. The curves match the results reported in \cite{vehkapera2014analysis} and \cite{rangan2012asymptotic}. As it is seen, the $\ell_0$-norm recovery scheme with the optimal choice of estimation parameter outperforms the LASSO scheme; however, the non-optimal choice of the estimation parameter can make the $\ell_0$ norm's performance even worse than the LASSO. Moreover, in contrast to the noiseless case, the projector matrix is always outperforming the \ac{iid} matrix in the noisy case; this fact has been also reported in \cite{vehkapera2014analysis}. \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/fig-2}{ \psfrag{lambda}[c][c][0.35]{$\lambda$} \psfrag{norm-mse}[c][c][0.25]{$\mathsf{MSE}^0$ in [dB]} \psfrag{AAABBBAA01}[l][l][0.25]{\ac{iid}} \psfrag{AAABBBAA02}[l][l][0.25]{Projector} \psfrag{0}[r][c][0.25]{$0$} \psfrag{-2}[r][c][0.25]{$-2$} \psfrag{-4}[r][c][0.25]{$-4$} \psfrag{-6}[r][c][0.25]{$-6$} \psfrag{-8}[r][c][0.25]{$-8$} \psfrag{-10}[r][c][0.25]{$-10$} \psfrag{-12}[r][c][0.25]{$-12$} \psfrag{-14}[r][c][0.25]{$-14$} \psfrag{-16}[r][c][0.25]{$-16$} \psfrag{-18}[r][c][0.25]{$-18$} \psfrag{-20}[r][c][0.25]{$-20$} \psfrag{0.05}[c][c][0.25]{$0.05$} \psfrag{0.1}[c][c][0.25]{$0.1$} \psfrag{0.15}[c][c][0.25]{$0.15$} \psfrag{0.2}[c][c][0.25]{$0.2$} \psfrag{0.25}[c][c][0.25]{$0.25$} \psfrag{0.3}[c][c][0.25]{$0.3$} \psfrag{0.35}[c][c][0.25]{$0.35$} \psfrag{0.4}[c][c][0.25]{$0.4$} \psfrag{0.45}[c][c][0.25]{$0.45$} \psfrag{0.5}[c][c][0.25]{$0.5$} \psfrag{r=1}[l][c][0.3]{\textcolor{blue}{$\mathsf{r}=1$}} \psfrag{r=3}[r][c][0.3]{\textcolor{black}{$\mathsf{r}=3$}} \psfrag{r=5}[r][c][0.3]{\textcolor{OliveGreen}{$\mathsf{r}=5$}} 
\psfrag{r=11}[r][c][0.3]{\textcolor{red}{$\mathsf{r}=11$}} }} \caption{\ac{rs} predicted normalized \ac{mse} versus the estimation parameter $\lambda$ for different compression rates considering LASSO recovery. The sparsity factor is set to be $\alpha=0.1$ and the true noise variance $\lambda_0$ is set to be $0.01$. The dashed and solid lines respectively indicate the random projector and \ac{iid} matrices. As the compression rate increases, $\mathsf{MSE}^0$ converges to $0$ dB.} \label{fig:6} \end{figure} In \cite{kabashima2009typical}, the authors showed that in the noiseless sampling case with an \ac{iid} Gaussian matrix, the \ac{rs} ansatz for linear and LASSO recovery schemes is locally stable against perturbations that break the symmetry of the replica correlation matrix. This result in fact agrees with the general belief that convex optimization problems do not exhibit \ac{rsb}~\cite{moustakas2007outage}. The result in \cite{kabashima2009typical}, however, indicated that for the $\ell_0$ norm reconstruction, the \ac{rs} ansatz becomes unstable, and therefore the \ac{rsb} ans\"atze are needed for accurately assessing the performance. In order to investigate the observation of \cite{kabashima2009typical}, we have plotted the normalized \ac{mse} of the LASSO recovery scheme predicted by the \ac{rs} ansatz in terms of the estimation parameter $\lambda$ in Fig. \ref{fig:6} considering different compression rates. It is observed that for a given estimation parameter $\lambda$, the normalized \ac{mse} increases as the compression rate grows. For large compression rates, the normalized \ac{mse} converges to $0$ dB which agrees with the fact that for asymptotically large compression rates, the source and observation vectors are independent, and thus, the \ac{mse} converges to the source power. To investigate the $\ell_0$ norm recovery scheme, we plot the corresponding curves for the $\ell_0$ norm scheme considering Fig. \ref{fig:6} as the reference. 
For $\ell_0$ norm recovery, Fig. \ref{fig:7} shows the normalized \ac{mse} predicted by the \ac{rs} ansatz as a function of the estimation parameter for two different compression rates. The system setup has been set to be similar to the one considered in Fig. \ref{fig:6}, and the curves have been plotted for both the \ac{iid} and projector measurements. In contrast to the LASSO recovery scheme, the \ac{rs} ansatz starts to give invalid predictions for the $\ell_0$ norm scheme as the compression rate increases. As it is observed, the normalized \ac{mse} drops unexpectedly down for an interval of the estimation parameters when the compression rate grows. \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/Fig-3-neu}{ \psfrag{lambda}[c][c][0.35]{$\lambda$} \psfrag{norm-mse}[c][c][0.25]{$\mathsf{MSE}^0$ in [dB]} \psfrag{AAABBBAA01}[l][l][0.25]{\ac{iid}} \psfrag{AAABBBAA02}[l][l][0.25]{Projector} \psfrag{0}[r][c][0.25]{$0$} \psfrag{-5}[r][c][0.25]{$-5$} \psfrag{5}[r][c][0.25]{$5$} \psfrag{-10}[r][c][0.25]{$-10$} \psfrag{-15}[r][c][0.25]{$-15$} \psfrag{-20}[r][c][0.25]{$-20$} \psfrag{-25}[r][c][0.25]{$-25$} \psfrag{0.05}[c][c][0.25]{$0.05$} \psfrag{0.1}[c][c][0.25]{$0.1$} \psfrag{0.15}[c][c][0.25]{$0.15$} \psfrag{0.2}[c][c][0.25]{$0.2$} \psfrag{0.25}[c][c][0.25]{$0.25$} \psfrag{0.3}[c][c][0.25]{$0.3$} \psfrag{0.35}[c][c][0.25]{$0.35$} \psfrag{0.4}[c][c][0.25]{$0.4$} \psfrag{0.45}[c][c][0.25]{$0.45$} \psfrag{0.5}[c][c][0.25]{$0.5$} \psfrag{0.55}[c][c][0.25]{$0.55$} \psfrag{0.6}[c][c][0.25]{$0.6$} \psfrag{0.65}[c][c][0.25]{$0.65$} \psfrag{0.7}[c][c][0.25]{$0.7$} \psfrag{0.75}[c][c][0.25]{$0.75$} \psfrag{0.8}[c][c][0.25]{$0.8$} \psfrag{0.85}[c][c][0.25]{$0.85$} \psfrag{0.9}[c][c][0.25]{$0.9$} \psfrag{0.95}[c][c][0.25]{$0.95$} \psfrag{1}[r][c][0.25]{$1$} \psfrag{r=1}[l][c][0.3]{\textcolor{blue}{$\mathsf{r}=1$}} \psfrag{r=4}[r][c][0.3]{\textcolor{black}{$\mathsf{r}=4$}} \psfrag{invalid solution}[c][c][0.3]{\textcolor{red}{invalid 
ans\"atze}} \psfrag{r=11}[r][c][0.3]{\textcolor{red}{$\mathsf{r}=11$}} \psfrag{lambda}[c][c][0.3]{$\lambda$} \psfrag{mse0}[c][c][0.3]{$\mathsf{MSE}^0$ in [dB]} }} \caption{The normalized \ac{mse} as a function of the estimation parameter $\lambda$ determined via the \ac{rs} ansatz for different compression rates considering the $\ell_0$ norm recovery scheme. The sparsity factor and the noise variance are considered to be $\alpha=0.1$ and $\lambda_0=0.01$, respectively. The dashed and solid lines respectively indicate the random projector and \ac{iid} matrices. As the compression rate increases, the \ac{rs} ansatz starts to give invalid solutions at low estimation parameters. In fact, as $\mathsf{r}$ grows, the \ac{rs} fixed point equations have no valid solutions as $\lambda$ reduces. The interval in which the solution is invalid becomes larger as the compression rate increases.} \label{fig:7} \end{figure} In fact, in this interval, the \ac{rs} fixed point equations have either an unstable solution or no solution. To illustrate this result further, let us consider Examples \ref{ex:2} and \ref{ex:3} under the \ac{rs} ansatz when an \ac{iid} sensing matrix is employed. In this case, the equivalent noise power and estimation parameter $\lambda^{\mathsf{s}}_0$ and $\lambda^{\mathsf{s}}$ read \begin{subequations} \begin{align} \lambda^{\mathsf{s}}&=\lambda+\mathsf{r} \chi \label{eq:cs-28a} \\ \lambda^{\mathsf{s}}_0&=\lambda_0+\mathsf{r} q \label{eq:cs-28b}. \end{align} \end{subequations} By increasing the compression rate, the interference increases, and thus, the \ac{mse} takes larger values. Therefore, for small $\lambda$ and $\lambda_0$, one can consider $\mathsf{r} \chi \gg \lambda$ and $\mathsf{r} q \gg \lambda_0$ as $\mathsf{r}$ takes large values and write \begin{subequations} \begin{align} \lambda^{\mathsf{s}}&\approx \mathsf{r} \chi \label{eq:cs-29a} \\ \lambda^{\mathsf{s}}_0&\approx \mathsf{r} q \label{eq:cs-29b}. 
\end{align} \end{subequations} Considering Example \ref{ex:2}, by substituting \eqref{eq:cs-29a} and \eqref{eq:cs-29b} in the \ac{rs} ansatz, the fixed point equations, as $\mathsf{r}$ grows large, are written in the following form \begin{subequations} \begin{align} u\sqrt{\mathsf{r}} \ \phi (u\sqrt{\mathsf{r}}) & \approx \int_{u \sqrt{\mathsf{r}}}^\infty z^2 \ \mathrm{D} z + \epsilon_r \label{eq:cs-30a} \\ q &\approx 2 \alpha \int_0^{u \sqrt{\mathsf{r}}} \mathrm{D} z + \epsilon_r \label{eq:cs-30b}. \end{align} \end{subequations} for some $\epsilon_r$ tending to zero as $\mathsf{r}\uparrow\infty$, and the bounded real scalar $u$ defined as \begin{align} \displaystyle u \coloneqq \frac{\chi}{\sqrt{q}}. \label{eq:cs-31} \end{align} Taking the limit $\mathsf{r}\uparrow\infty$, \eqref{eq:cs-30a} is valid for any bounded real value of $u$, and \eqref{eq:cs-30b} reduces to $q \approx \alpha$ for large compression rates. Noting that for this setup $q=\mathsf{MSE}$, one concludes that $\mathsf{MSE}^0 \approx 1$ which agrees with the results given in Fig. \ref{fig:6}. A similar approach for the $\ell_0$ norm recovery scheme in Example \ref{ex:3}, however, results in the following contradicting equations \begin{subequations} \begin{align} \int_{u}^\infty z^2 \ \mathrm{D} z &\approx \epsilon_r \label{eq:cs-32a} \\ \int_{0}^u \ \mathrm{D} z &\approx \epsilon_r \label{eq:cs-32b}. \end{align} \end{subequations} for the scalar $u$ defined as \begin{align} \displaystyle u \coloneqq \sqrt{2\frac{\chi}{q}}. \label{eq:cs-33} \end{align} Clearly, the set of equations \eqref{eq:cs-32a} and \eqref{eq:cs-32b} has no solution as $\mathsf{r} \uparrow \infty$. The approximated fixed point equations explain the invalidity of the \ac{rs} predicted performance of the $\ell_0$ norm recovery scheme for large compression rates.
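The incompatibility of \eqref{eq:cs-32a} and \eqref{eq:cs-32b} can also be checked numerically. Writing the Gaussian measure integrals in closed form, $\int_u^\infty z^2 \, \mathrm{D}z = u\,\phi(u) + \mathrm{Q}(u)$ and $\int_0^u \mathrm{D}z = \frac{1}{2}\,\mathrm{erf}(u/\sqrt{2})$, a quick grid scan (our own sanity check, not part of the derivation) confirms that the sum of the two left-hand sides stays bounded away from zero for all $u \geq 0$, so both cannot vanish simultaneously:

```python
import math

def tail_second_moment(u):
    """∫_u^∞ z² Dz = u·φ(u) + Q(u) for the standard Gaussian measure Dz."""
    phi = math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)
    q_func = 0.5 * math.erfc(u / math.sqrt(2.0))
    return u * phi + q_func

def center_mass(u):
    """∫_0^u Dz = (1/2)·erf(u/√2)."""
    return 0.5 * math.erf(u / math.sqrt(2.0))

# Scan u on a grid: the two left-hand sides of (cs-32a)-(cs-32b)
# never approach zero together.
worst = min(tail_second_moment(u) + center_mass(u)
            for u in (0.01 * k for k in range(1001)))
```

On this grid the minimum of the sum stays around $1/2$: the first integral dominates for small $u$ and the second for large $u$.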
\begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/mse_vs_rate-revised.eps}{ \psfrag{LINXXXAAA}[l][l][0.25]{Linear} \psfrag{SINXXXAAA}[l][l][0.25]{LASSO} \psfrag{ZINXXXAAA}[l][l][0.25]{$\ell_0$ norm} \psfrag{invalid ansatze}[c][c][0.3]{\textcolor{red}{invalid ans\"atze}} \psfrag{normalized mse}[c][c][0.3]{$\mathsf{MSE}^0$ in [dB]} \psfrag{0}[r][c][0.25]{$0$} \psfrag{-2}[r][c][0.25]{$-2$} \psfrag{-4}[r][c][0.25]{$-4$} \psfrag{-6}[r][c][0.25]{$-6$} \psfrag{-8}[r][c][0.25]{$-8$} \psfrag{-10}[r][c][0.25]{$-10$} \psfrag{-12}[r][c][0.25]{$-12$} \psfrag{-14}[r][c][0.25]{$-14$} \psfrag{-16}[r][c][0.25]{$-16$} \psfrag{-18}[r][c][0.25]{$-18$} \psfrag{0.5}[c][c][0.25]{$0.5$} \psfrag{1}[c][c][0.25]{$1$} \psfrag{1.5}[c][c][0.25]{$1.5$} \psfrag{2}[c][c][0.25]{$2$} \psfrag{2.5}[c][c][0.25]{$2.5$} \psfrag{3}[c][c][0.25]{$3$} \psfrag{3.5}[c][c][0.25]{$3.5$} \psfrag{4}[r][c][0.25]{$4$} \psfrag{4.5}[c][c][0.25]{$4.5$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{5.5}[c][c][0.25]{$5.5$} \psfrag{6}[c][c][0.25]{$6$} \psfrag{rate}[c][c][0.3]{compression rate $\mathsf{r}$} }} \caption{Optimal normalized \ac{mse} versus the compression rate $\mathsf{r}$ under \ac{rs} assumption considering the linear, LASSO and $\ell_0$-norm recovery schemes. The sparsity factor and the source to noise power ratio are considered to be $\alpha=0.1$ and $10$ dB, respectively. $\mathsf{MSE}^0$ has been minimized numerically over the estimation parameter $\lambda$. The solid and dashed lines show the results for random \ac{iid} and orthogonal measurements, respectively. As $\mathsf{r}$ grows, the \ac{rs} ansatz for $\ell_0$ norm recovery starts to give invalid solutions.} \label{fig:8} \end{figure} In order to further investigate the \ac{rs} ansatz, we plot the optimal normalized \ac{mse} as a function of the compression rate in Fig. \ref{fig:8}. 
Here, we consider the case with \ac{iid} sensing matrix when the sparsity factor is set to $\alpha=0.1$ and the source to noise power ratio to $10$ dB. The normalized \ac{mse} is minimized numerically over the estimation parameter $\lambda$. As the figure illustrates, the \ac{mse} of the \ac{rs} ansatz starts to drop at moderate compression rates. The observation confirms the discussion on the stability of the \ac{rs} saddle point given in \cite{kabashima2009typical,yoshida2007statistical}. In fact, the unexpected drop of the \ac{rs} ansatz is caused by the limited stability region of the \ac{rs} fixed point solutions. More precisely, for a given source to noise power ratio and estimation parameter, the \ac{rs} fixed point equations have stable solutions within a certain interval of compression rates. The interval widens as $\lambda$ grows. \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/Fig-MSE-RSB-revised}{ \psfrag{LSO-AAA-XXX-0RSB}[l][l][0.25]{LASSO-RS} \psfrag{ZNR-AAA-XXX-0RSB}[l][l][0.25]{$\ell_0$ norm-RS} \psfrag{ZNR-BBB-XXX-0RSB}[l][l][0.25]{$\ell_0$ norm-RS restricted} \psfrag{ZNR-AAA-XXX-1RSB}[l][l][0.25]{$\ell_0$ norm-1RSB} \psfrag{MSE-AAA-XXX-0RSB}[l][l][0.25]{\ac{mmse} Bound} \psfrag{norm-mse}[c][c][0.25]{$\mathsf{MSE}^0$ in [dB]} \psfrag{0}[r][c][0.25]{$0$} \psfrag{-2}[r][c][0.25]{$-2$} \psfrag{-4}[r][c][0.25]{$-4$} \psfrag{-6}[r][c][0.25]{$-6$} \psfrag{-8}[r][c][0.25]{$-8$} \psfrag{-10}[r][c][0.25]{$-10$} \psfrag{-12}[r][c][0.25]{$-12$} \psfrag{-14}[r][c][0.25]{$-14$} \psfrag{-16}[r][c][0.25]{$-16$} \psfrag{0.5}[c][c][0.25]{$0.5$} \psfrag{1}[c][c][0.25]{$1$} \psfrag{1.5}[c][c][0.25]{$1.5$} \psfrag{2}[c][c][0.25]{$2$} \psfrag{2.5}[c][c][0.25]{$2.5$} \psfrag{3}[c][c][0.25]{$3$} \psfrag{3.5}[c][c][0.25]{$3.5$} \psfrag{4}[r][c][0.25]{$4$} \psfrag{4.5}[c][c][0.25]{$4.5$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{5.5}[c][c][0.25]{$5.5$} \psfrag{6}[r][c][0.25]{$6$} \psfrag{rate}[c][c][0.3]{compression rate
$\mathsf{r}$} \psfrag{normalized mse}[c][c][0.3]{$\mathsf{MSE}^0$ in [dB]} }} \caption{\ac{rs} versus 1\ac{rsb} prediction of the optimal normalized \ac{mse} considering the $\ell_0$ norm recovery scheme and random \ac{iid} measurements. The sparsity factor and the source to noise power ratio are considered to be $\alpha=0.1$ and $10$ dB, respectively. The green dashed line shows the \ac{rs} prediction of optimal $\mathsf{MSE}^0$ when it is numerically minimized over $\lambda$, while the green solid line indicates the restricted \ac{rs} predicted $\mathsf{MSE}^0$ minimized numerically over the intervals of $\lambda$ in which the \ac{rs} ansatz is stable. The black solid line denotes the 1\ac{rsb} ansatz which deviates from both the \ac{rs} and restricted \ac{rs} curves. For the sake of comparison, the normalized \ac{mse} for LASSO recovery as well as the \ac{mmse} bound have been plotted. } \label{fig:9} \end{figure} Fig. \ref{fig:9} compares the \ac{rs} and 1\ac{rsb} ans\"atze for $\ell_0$ norm recovery. In this figure, the optimal normalized \ac{mse} is plotted for the case with \ac{iid} sensing matrix. The sparsity factor is considered to be $\alpha=0.1$ and the noise variance is set to be $\lambda_0=0.01$. For the \ac{rs} ansatz, we have considered two cases, namely when $\mathsf{MSE}^0$ is minimized over \begin{inparaenum} \item all possible estimation parameters and \item the interval of $\lambda$'s in which the \ac{rs} ansatz is valid\footnote{In fact, we consider $\lambda > \lambda_{\rm d}$, where $\lambda_{\rm d}$ is the point in Fig. \ref{fig:7} at which the normalized \ac{mse} curve is not differentiable.}. \end{inparaenum} The latter case is referred to as the ``\ac{rs} restricted'' curve. As the figure depicts, the difference between the \ac{rs} and the 1\ac{rsb} ansatz is quite small for low compression rates. The 1\ac{rsb} prediction, however, deviates from \ac{rs} at larger compression rates.
This observation indicates that the performance analysis of the $\ell_0$ norm recovery exhibits \ac{rsb}. For the sake of comparison, we also plot the curve for the LASSO recovery scheme as well as the \ac{mmse} bound. As observed, the normalized \ac{mse} for LASSO bounds the 1\ac{rsb} curve from above. The 1\ac{rsb} ansatz is moreover consistent with the \ac{mmse} bound, which is rigorously justified in the literature \cite{barbier2017mutual,reeves2016replica}. The \ac{rs} restricted $\ell_0$ norm's curve, moreover, crosses the LASSO curve. This deviation indicates that, at large compression rates, the optimal estimation parameter $\lambda$, which minimizes the \ac{mse}, lies outside the interval in which the \ac{rs} ansatz returns a valid approximation for the \ac{mse}. \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/Entropy-revised}{ \psfrag{Ent-RS}[Bc][Bc][0.3]{$\mathrm{H}^0_{\mathsf{rs}}$} \psfrag{Ent-RSB}[c][l][0.3]{$\hspace*{2mm}\mathrm{H}^{0[1]}_{\mathsf{rsb}}$} \psfrag{-0.01}[c][c][0.25]{$-0.01$\hspace{3mm}} \psfrag{-0.02}[c][c][0.25]{$-0.02$\hspace{3mm}} \psfrag{-0.03}[c][c][0.25]{$-0.03$\hspace{3mm}} \psfrag{-0.04}[c][c][0.25]{$-0.04$\hspace{3mm}} \psfrag{-0.05}[c][c][0.25]{$-0.05$\hspace{3mm}} \psfrag{-0.06}[c][c][0.25]{$-0.06$\hspace{3mm}} \psfrag{-0.07}[c][c][0.25]{$-0.07$\hspace{3mm}} \psfrag{-0.08}[c][c][0.25]{$-0.08$\hspace{3mm}} \psfrag{-1}[r][c][0.21]{$-1$} \psfrag{-2}[r][c][0.21]{$-2$} \psfrag{-3}[r][c][0.21]{$-3$} \psfrag{-10}[c][c][0.25]{$-10$\hspace{7mm}} \psfrag{0}[r][c][0.21]{$0$\hspace*{1mm}} \psfrag{1}[c][c][0.25]{$1$} \psfrag{1.5}[c][c][0.25]{$1.5$} \psfrag{2}[c][c][0.25]{$2$} \psfrag{2.5}[c][c][0.25]{$2.5$} \psfrag{3}[c][c][0.25]{$3$} \psfrag{3.5}[c][c][0.25]{$3.5$} \psfrag{4}[r][c][0.25]{$4$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{6}[c][c][0.25]{$6$} \psfrag{rate}[c][c][0.3]{compression rate $\mathsf{r}$} \psfrag{Entropy}[c][c][0.3]{zero temperature entropy $\mathrm{H}^0$} }}
\caption{$\mathrm{H}^0_{\mathsf{rs}}$ and $\mathrm{H}^{0[1]}_{\mathsf{rsb}}$ as functions of the compression rate considering the spin glass which corresponds to the $\ell_0$ norm recovery scheme. The setting is considered to be the same as in the setup investigated in Fig. \ref{fig:8} and Fig. \ref{fig:9}. The better approximation of the performance via the 1\ac{rsb} ansatz is demonstrated in this figure.} \label{fig:10} \end{figure} To check the consistency of the solutions under the \ac{rs} and 1\ac{rsb} ans\"atze, we also sketch the zero temperature entropy of the corresponding spin glass as a function of the compression rate in Fig. \ref{fig:10}, considering the same setup as in Fig. \ref{fig:8} and Fig. \ref{fig:9}. The zero temperature entropy also confirms the better accuracy of the 1\ac{rsb} ansatz. Based on the above investigations, the \ac{rsb} ans\"atze are needed to accurately study the exact performance of the $\ell_0$ norm recovery scheme. It is, however, useful to identify the region of system parameters in which the prediction given by the \ac{rs} ansatz lies approximately on the 1\ac{rsb} curve. For this purpose, we define for the $\ell_0$ norm recovery, the ``\ac{rs} break compression rate'' $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}(\cdot)$ as a function of system parameters to be the compression rate at which the prediction via the \ac{rs} ansatz starts to deviate from 1\ac{rsb}. More precisely, for a given setting with parameters in the set $\mathbbmss{P}$, the ``\ac{rs} $\epsilon$-validity region'' for a given $\epsilon\in \mathbbmss{R}^+$ is defined as \begin{align} \mathbbmss{V}_{\mathsf{br}}^{\mathsf{RS}} (\mathbbmss{P};\epsilon) \coloneqq \left\lbrace \mathsf{r} : \abs{\mathsf{MSE}^0_{\mathsf{RS}}(\mathsf{r},\mathbbmss{P})-\mathsf{MSE}^0_{1\mathsf{RSB}}(\mathsf{r},\mathbbmss{P})} < \epsilon \right\rbrace
\label{eq:cs-34} \end{align} where $\mathsf{MSE}^0_{\mathsf{RS}}(\mathsf{r},\mathbbmss{P})$ and $\mathsf{MSE}^0_{1\mathsf{RSB}}(\mathsf{r},\mathbbmss{P})$ indicate the normalized \ac{mse} at the compression rate $\mathsf{r}$ for the setup specified by the set $\mathbbmss{P}$, calculated from the \ac{rs} and 1\ac{rsb} ans\"atze, respectively. Consequently, the ``\ac{rs} break compression rate'' $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}(\cdot)$ for maximum deviation $\epsilon$ is given by \begin{align} \mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}(\mathbbmss{P};\epsilon) \coloneqq \max_{\mathbbmss{V}_{\mathsf{br}}^{\mathsf{RS}} (\mathbbmss{P};\epsilon)} \mathsf{r}. \end{align} In general, the set $\mathbbmss{P}$ includes several setup parameters such as the sensing matrix and source distributions, the sparsity factor, the noise variance and the estimation parameter. Considering a case with \ac{iid} sensing matrix and sparse Gaussian source with a given sparsity factor $\alpha$, the \ac{rs} break compression rate is a function of $\mathsf{snr}$ and the estimation parameter $\lambda$, i.e., $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}} (\mathsf{snr},\lambda;\epsilon)$, where we define the source to noise power ratio $\mathsf{snr}$ as \begin{align} \mathsf{snr} \coloneqq \frac{\mathsf{E}\hspace{.5mm} x^2}{\lambda_0} = \alpha \lambda_0^{-1}. \label{eq:cs-35} \end{align} In this case, the \ac{rs} $\epsilon$-validity region $\mathbbmss{V}_{\mathsf{br}}^{\mathsf{RS}} (\mathsf{snr},\lambda;\epsilon)$ is the area enclosed by the $\lambda$ and $\mathsf{snr}$ axes and the $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}} (\mathsf{snr},\lambda;\epsilon)$ surface.
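The definitions of the $\epsilon$-validity region and the break compression rate translate directly into a few lines of code. In the sketch below the helper names are hypothetical and the \ac{mse} arrays are illustrative placeholders, not solutions of the actual fixed point equations; the sketch only shows how $\mathbbmss{V}_{\mathsf{br}}^{\mathsf{RS}}$ and $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}$ would be recovered from sampled \ac{rs} and 1\ac{rsb} curves.

```python
def rs_validity_region(rates, mse_rs, mse_1rsb, eps):
    # V_br^RS: rates where RS and 1RSB normalized MSEs differ by less than eps
    return [r for r, a, b in zip(rates, mse_rs, mse_1rsb) if abs(a - b) < eps]

def rs_break_rate(rates, mse_rs, mse_1rsb, eps):
    # r_br^RS: the largest rate inside the eps-validity region
    region = rs_validity_region(rates, mse_rs, mse_1rsb, eps)
    return max(region) if region else None

# Illustrative placeholder curves (MSE in dB) on a grid of compression rates
rates    = [1.0, 1.5, 2.0, 2.5, 3.0]
mse_rs   = [-4.0, -6.0, -8.0, -9.0, -9.5]
mse_1rsb = [-4.0, -6.0, -8.1, -10.5, -13.0]
r_br = rs_break_rate(rates, mse_rs, mse_1rsb, eps=0.2)  # -> 2.0
```

In practice one would replace the placeholder arrays by the numerically minimized $\mathsf{MSE}^0$ curves of the two ans\"atze.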
\begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/fig-8}{ \psfrag{1}[r][c][0.25]{$1$} \psfrag{1.5}[r][r][0.25]{$1.5$} \psfrag{2}[r][r][0.25]{$2$} \psfrag{2.5}[r][r][0.25]{$2.5$} \psfrag{3}[r][c][0.25]{$3$} \psfrag{3.5}[r][c][0.25]{$3.5$} \psfrag{4}[r][c][0.25]{$4$} \psfrag{-14}[r][c][0.25]{$-14$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{10}[c][c][0.25]{$10$} \psfrag{15}[c][c][0.25]{$15$} \psfrag{0}[c][c][0.25]{$0$} \psfrag{0.01}[c][c][0.25]{$0.01$} \psfrag{0.02}[c][c][0.25]{$0.02$} \psfrag{0.03}[c][c][0.25]{$0.03$} \psfrag{0.04}[c][c][0.25]{$0.04$} \psfrag{0.05}[c][c][0.25]{$0.05$} \psfrag{0.06}[c][c][0.25]{$0.06$} \psfrag{0.07}[c][c][0.25]{$0.07$} \psfrag{0.08}[c][c][0.25]{$0.08$} \psfrag{0.09}[c][c][0.25]{$0.09$} \psfrag{0.1}[c][c][0.25]{$0.10$} \psfrag{lambda}[c][c][0.3]{$\lambda$} \psfrag{snr=12}[c][c][0.3]{\textcolor{red}{$\mathsf{snr}=12$ dB}} \psfrag{snr=9}[c][c][0.3]{$\mathsf{snr}=9$ dB} \psfrag{Rbreak}[c][c][0.3]{$\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}(\mathsf{snr},\lambda;\epsilon)$} \psfrag{RS validity region}[c][c][0.35]{$\mathbbmss{V}_{\mathsf{br}}^{\mathsf{RS}} (9,\lambda;\epsilon)$} }} \caption{$\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}}(\mathsf{snr},\lambda;\epsilon)$ in terms of the estimation parameter $\lambda$ for $\epsilon=10^{-3}$, and $\mathsf{snr}=9$ dB and $\mathsf{snr}=12$ dB considering an \ac{iid} sensing matrix. The break compression rate starts to saturate as the estimation parameter grows. As $\lambda\downarrow0$, the break compression rate takes values close to one. This observation agrees with the \ac{rs} instability reported in \cite{kabashima2009typical}.} \label{fig:12} \end{figure} Fig. \ref{fig:12} illustrates the intersection of the \ac{rs} $\epsilon$-validity region and the planes $\mathsf{snr}=9$ and $\mathsf{snr}=12$ for $\epsilon=10^{-3}$. The break compression rate increases with respect to both $\lambda$ and $\mathsf{snr}$, which agrees with intuition.
Moreover, $\mathsf{r}_{\mathsf{br}}^{\mathsf{RS}} (\mathsf{snr},\lambda;\epsilon)$ starts to saturate as $\lambda$ grows. Another extreme case arises when the estimation parameter tends to zero, in which case the \ac{map} estimator reduces to \begin{align} {\mathbf{g}}({\boldsymbol{y}})= \arg \min_{\norm{{\boldsymbol{y}}-\mathbf{A} {\boldsymbol{v}}}\leq \epsilon_0} \norm{{\boldsymbol{v}}}_0 \label{eq:cs-36} \end{align} with $\epsilon_0=\mathcal{O}(\frac{1}{\sqrt{\mathsf{snr}}})$. For this case, the break compression rate converges to the minimum compression rate $\mathsf{r}=1$. This observation in the large $\mathsf{snr}$ limit agrees with the instability of the \ac{rs} ansatz reported in \cite{kabashima2009typical}. \newpage \subsection{Numerical Results for Finite Alphabet Sources} Considering the finite alphabet source given in \eqref{eq:cs-source}, the boundary points for linear recovery read \begin{align} v_k^{\ell_2} = \pm \left(\frac{2k-1}{2} a(1 + \lambda^{\mathsf{s}}) \right) \end{align} for $k\in[1:\kappa]$. In the case of LASSO reconstruction, the boundary points are given by \begin{align} v_k^{\ell_1} = \pm \left( \frac{2k-1}{2} a + \lambda^{\mathsf{s}} \right) \end{align} for $k\in[1:\kappa]$.
Finally, by employing $\ell_0$ norm recovery, we have $v_1^{\ell_0} = \pm \left( \frac{1}{2} a+\frac{\lambda^{\mathsf{s}}}{a} \right)$ and \begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/fig-9-neu}{ \psfrag{0}[c][c][0.2]{$0$} \psfrag{-2}[c][c][0.2]{$-2$} \psfrag{-4}[c][c][0.2]{$-4$} \psfrag{-6}[c][c][0.2]{$-6$} \psfrag{-8}[c][c][0.2]{$-8$} \psfrag{-10}[c][c][0.2]{$-10$\hspace{1.5mm}} \psfrag{5}[c][c][0.25]{$5$} \psfrag{10}[r][c][0.25]{$10$} \psfrag{15}[c][c][0.25]{$15$} \psfrag{20}[c][c][0.25]{$20$} \psfrag{0.01}[c][c][0.25]{$0.01$} \psfrag{0.02}[c][c][0.25]{$0.02$} \psfrag{0.03}[c][c][0.25]{$0.03$} \psfrag{0.04}[c][c][0.25]{$0.04$} \psfrag{0.05}[c][c][0.25]{$0.05$} \psfrag{0.06}[c][c][0.25]{$0.06$} \psfrag{0.07}[c][c][0.25]{$0.07$} \psfrag{0.08}[c][c][0.25]{$0.08$} \psfrag{0.09}[c][c][0.25]{$0.09$} \psfrag{0.1}[c][c][0.25]{$0.10$} \psfrag{SUB-AAA-XXX-KKK}[l][l][0.26]{\hspace*{-.85mm}Single-user bound} \psfrag{DDD-AAA-XXX-KKK}[l][l][0.26]{\hspace*{-1.6mm}\ac{iid} matrix} \psfrag{OOO-AAA-XXX-KKK}[l][l][0.26]{\hspace*{-1.6mm}Projector matrix} \psfrag{snr}[c][c][0.3]{$\mathsf{snr}$} \psfrag{K=2}[c][c][0.3]{$\kappa=2$} \psfrag{K=4}[c][c][0.3]{$\kappa=4$} \psfrag{Error Probability}[c][c][0.3]{Error Probability} }} \caption{The error probability versus $\mathsf{snr}$ when the $\ell_0$ norm recovery scheme is employed and the compression rate is set to be $\mathsf{r}=1$. With \ac{iid} measurements, the error probability deviates from the single-user bound within an interval of $\mathsf{snr}$, while random orthogonal measurements meet the single-user bound for almost every $\mathsf{snr}$ and alphabet size. Here, the sparsity factor and estimation parameter are considered to be $\alpha = 0.1$ and $\lambda = 0.1$, respectively.} \label{fig:13} \end{figure} \begin{align} v_k^{\ell_0} = \pm \frac{2k-1}{2} a \end{align} for $k\in[2:\kappa]$.
The source to noise power ratio $\mathsf{snr}$ is then defined as \begin{align} \mathsf{snr}\coloneqq \frac{\mathsf{E}\hspace{.5mm} x^2}{\lambda_0}=\frac{\alpha a^2 (\kappa+1)(2\kappa+1)}{6 \lambda_0}. \end{align} Fig. \ref{fig:13} shows the \ac{rs} predicted error probability of the finite alphabet system as a function of $\mathsf{snr}$ for the unit compression rate when the $\ell_0$ norm recovery scheme is employed. The cases with $\kappa=2$ and $\kappa=4$ are considered for both random \ac{iid} and orthogonal measurements. Here, the sparsity factor is set to $\alpha=0.1$ and the estimation parameter to $\lambda=0.1$. As the figure illustrates, for either sensing matrix, the error probability meets the single-user bound in a relatively large $\mathsf{snr}$ regime. The deviation from the single-user bound in the \ac{iid} case, moreover, occurs in a larger interval of $\mathsf{snr}$ as $\kappa$ grows. In contrast to \ac{iid} measurements, the coincidence occurs for almost every $\mathsf{snr}$ and $\kappa$ when a projector sensing matrix is employed. This observation is intuitively justified by the orthogonality of the rows in the latter case.
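As a sanity check of the second moment entering the $\mathsf{snr}$ definition above, one can evaluate $\mathsf{E}\hspace{.5mm} x^2$ by direct enumeration. The sketch below assumes, consistently with the stated formula (this reading of the source in \eqref{eq:cs-source} is our assumption), that $x$ is zero with probability $1-\alpha$ and otherwise uniform on $\{\pm a, \pm 2a, \ldots, \pm \kappa a\}$.

```python
from fractions import Fraction

def second_moment_enum(alpha, a, kappa):
    # Assumed model: x = 0 w.p. 1 - alpha; otherwise uniform over {±k·a : k = 1..kappa}
    p = alpha / (2 * kappa)                       # mass of each nonzero level
    return sum(2 * p * (k * a) ** 2 for k in range(1, kappa + 1))

def second_moment_formula(alpha, a, kappa):
    # E x^2 = alpha a^2 (kappa+1)(2 kappa+1)/6, as in the snr definition above
    return alpha * a ** 2 * (kappa + 1) * (2 * kappa + 1) / 6

alpha = Fraction(1, 10)
assert all(second_moment_enum(alpha, 1, k) == second_moment_formula(alpha, 1, k)
           for k in (1, 2, 4))
```

Exact rational arithmetic via `Fraction` confirms the closed form $\alpha a^2 (\kappa+1)(2\kappa+1)/6$ for several alphabet sizes.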
\begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/mmse0_snr=5_vs_MAP.eps}{ \psfrag{0}[b][c][0.25]{$0$\hspace*{2mm}} \psfrag{-8}[b][c][0.25]{$-8$\hspace*{4mm}} \psfrag{-4}[b][c][0.25]{$-4$\hspace*{4mm}} \psfrag{-12}[b][c][0.25]{$-12$\hspace*{4mm}} \psfrag{-16}[b][c][0.25]{$-16$\hspace*{4mm}} \psfrag{1}[c][c][0.25]{$1$} \psfrag{1.5}[c][c][0.25]{$1.5$} \psfrag{2}[c][c][0.25]{$2$} \psfrag{2.5}[c][c][0.25]{$2.5$} \psfrag{3}[c][c][0.25]{$3$} \psfrag{3.5}[c][c][0.25]{$3.5$} \psfrag{4}[c][c][0.25]{$4$} \psfrag{4.5}[c][c][0.25]{$4.5$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{10}[r][c][0.25]{$10$} \psfrag{MMSESNRDDDB}[c][r][0.25]{\hspace*{4mm}\ac{mmse} bound} \psfrag{ZIMSESNRDDDB}[c][r][0.25]{\hspace*{-5mm}$\ell_0$ norm} \psfrag{MFE-AAA-XXX-DDD}[l][l][0.25]{\ac{rs} ansatz-\ac{iid}} \psfrag{SPO-AAA-XXX-OOO}[l][l][0.25]{FP solutions-projector} \psfrag{MFE-AAA-XXX-OOO}[l][l][0.25]{\ac{rs} ansatz-projector} \psfrag{iid matrix figure curve}[c][c][0.26]{\ac{iid} Matrix} \psfrag{orthogonal matrix curve}[c][c][0.26]{Projector Matrix} \psfrag{normMSE}[c][c][0.3]{$\mathsf{MSE}^0$ in [dB]} \psfrag{rate}[c][c][0.3]{compression rate $\mathsf{r}$} \psfrag{Error Probability}[c][c][0.3]{Error Probability} }} \caption{Normalized \ac{mse} versus the compression rate $\mathsf{r}$ when $\mathsf{snr} =5$ dB and $a=1$. Here, $\kappa=1$ and the sparsity factor is considered to be $\alpha=0.1$. The $\ell_0$ norm recovery scheme is employed for estimation and the \ac{mse} is numerically minimized over $\lambda$ considering an \ac{iid} sensing matrix. The red curve indicates the \ac{mmse} bound on the normalized \ac{mse}. The figure demonstrates a phase transition in the normalized \ac{mse}.} \label{fig:14} \end{figure} For the \ac{mse}, similar behavior as in Fig.~\ref{fig:13} is observed in Fig.~\ref{fig:14}. 
As the compression rate increases, considering either the \ac{mse} or error probability, a deviation from the single-user bound is observed for both random \ac{iid} and orthogonal measurements. Considering a fixed $\mathsf{snr}$, it is observed that the \ac{mse}-compression rate curve has a discontinuity point at which the \ac{mse} suddenly jumps from a lower value to an upper value. This phenomenon is known as a ``phase transition'', in which the macroscopic parameters suddenly change phase under a minor variation of the setting. The compression rate at which the phase transition occurs is referred to as the ``transition rate'', which increases as $\mathsf{snr}$ grows. Phase transitions have been reported in the communications and information theory literature for several problems such as turbo coding and \ac{cdma} systems \cite{tanaka2002statistical,agrawal2001turbo}. Fig. \ref{fig:14} illustrates the phase transition under the \ac{rs} assumption for the case of a sparse binary source, i.e., $\kappa=1$, when the source vector is sensed via a random \ac{iid} matrix and $\mathsf{snr}=5$ dB with $a=1$. For estimation, the $\ell_0$ norm recovery scheme is employed which is equivalent to LASSO in this particular example. The estimation parameter $\lambda$ is moreover optimized numerically such that the \ac{mse} is minimized. To determine each point on the curve, the \ac{rs} fixed point equations have been solved for the specific compression rate and all possible solutions have been found. Within this set of solutions, those which are physically stable have been considered and the stable solution with minimum free energy has been taken for the \ac{rs} ansatz. The normalized \ac{mse} at each compression rate has then been plotted using the \ac{rs} ansatz. As the figure shows, the normalized \ac{mse} suddenly jumps to an upper branch which converges to $\mathsf{MSE}^0=0$ dB.
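The solution-selection procedure just described (solve the fixed point equations from several initializations, keep the solutions the iteration actually converges to, and pick the stable one of minimal free energy) can be sketched generically. In the snippet below, the map $g$ and the ``free energy'' are toy stand-ins chosen for illustration, not the actual \ac{rs} equations of this manuscript; the toy map has two stable fixed points $\pm q^*$ and an unstable one at zero.

```python
import math

def solve_fixed_points(g, inits, iters=500, tol=1e-10):
    """Iterate q <- g(q) from several starting points and keep the
    (deduplicated) solutions the iteration converges to; unstable fixed
    points are filtered out automatically, since iteration repels them."""
    sols = []
    for q in inits:
        for _ in range(iters):
            q_new = g(q)
            if abs(q_new - q) < tol:
                break
            q = q_new
        if abs(g(q) - q) < 10 * tol and not any(abs(q - s) < 1e-6 for s in sols):
            sols.append(q)
    return sols

g = lambda q: math.tanh(2 * q)          # toy fixed-point map, stable roots ±q*
free_energy = lambda q: (q - 1) ** 2    # toy free energy, lower is preferred

stable = solve_fixed_points(g, inits=[-0.5, 0.1, 0.5])
ansatz = min(stable, key=free_energy)   # stable solution of minimal free energy
```

Near a transition rate, two stable branches coexist and the free-energy comparison is precisely what selects the branch reported by the ansatz.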
For the sake of comparison, we have also plotted the \ac{mmse} bound, which shows a similar behavior at higher rates. In Fig. \ref{fig:15}, we have further plotted the normalized \ac{mse} in terms of the compression rate for the same setting with the sensing matrix replaced by a random projector matrix. The figure shows a phase transition for orthogonal measurements at a higher transition rate compared to the case with random \ac{iid} sensing. Moreover, the improvement in terms of \ac{mse} observed in Fig.~\ref{fig:15} by employing a projector matrix extends the conclusion given for the sparse Gaussian sources in \cite{vehkapera2014analysis} to the sparse finite alphabet sources as well. In order to obtain a more accurate approximation of the performance and the transition point, one may investigate the free energy and entropy of the corresponding spin glass by also considering \ac{rsb} ans\"atze \cite{yoshida2007statistical}. The \ac{rs} ansatz, however, gives an accurate approximation of the \ac{mse} for the specific setup considered here.
\begin{figure}[t] \centering \resizebox{1\linewidth}{!}{ \pstool[width=.35\linewidth]{Figures/Orth_snr=5_vs_IID.eps}{ \psfrag{0}[b][c][0.25]{$0$\hspace*{2mm}} \psfrag{-8}[b][c][0.25]{$-8$\hspace*{4mm}} \psfrag{-4}[b][c][0.25]{$-4$\hspace*{4mm}} \psfrag{-12}[b][c][0.25]{$-12$\hspace*{4mm}} \psfrag{-16}[b][c][0.25]{$-16$\hspace*{4mm}} \psfrag{1}[c][c][0.25]{$1$} \psfrag{1.5}[c][c][0.25]{$1.5$} \psfrag{2}[c][c][0.25]{$2$} \psfrag{2.5}[c][c][0.25]{$2.5$} \psfrag{3}[c][c][0.25]{$3$} \psfrag{3.5}[c][c][0.25]{$3.5$} \psfrag{4}[c][c][0.25]{$4$} \psfrag{4.5}[c][c][0.25]{$4.5$} \psfrag{5}[c][c][0.25]{$5$} \psfrag{10}[r][c][0.25]{$10$} \psfrag{MSESNR5A}[c][r][0.25]{Projector} \psfrag{MSESNR5D}[b][r][0.25]{\hspace*{-4mm}\ac{iid}} \psfrag{MFE-AAA-XXX-DDD}[l][l][0.25]{\ac{rs} ansatz-\ac{iid}} \psfrag{SPO-AAA-XXX-OOO}[l][l][0.25]{FP solutions-projector} \psfrag{MFE-AAA-XXX-OOO}[l][l][0.25]{\ac{rs} ansatz-projector} \psfrag{iid matrix figure curve}[c][c][0.26]{\ac{iid} Matrix} \psfrag{orthogonal matrix curve}[c][c][0.26]{Projector Matrix} \psfrag{normMSE}[c][c][0.3]{$\mathsf{MSE}^0$ in [dB]} \psfrag{rate}[c][c][0.3]{compression rate $\mathsf{r}$} \psfrag{Error Probability}[c][c][0.3]{Error Probability} }} \caption{$\mathsf{MSE}^0$ in terms of the compression rate $\mathsf{r}$ for $\mathsf{snr} =5$ dB and $a=1$ considering \ac{iid} and projector sensing matrices. Here, $\alpha=0.1$ and $\kappa=1$, and the \ac{mse} is numerically minimized with respect to the estimation parameter. As the figure shows, the projector matrix enhances the reconstruction performance with respect to both the \ac{mse} and the transition rate.} \label{fig:15} \end{figure} \section{Conclusion} \label{sec:conclusion} This manuscript considered the performance of the \ac{map} estimator in the large-system limit with respect to a general distortion function. Taking a statistical mechanical approach, the replica method was employed to determine the asymptotic distortion of the system.
We deviated from the earlier approaches, e.g., \cite{rangan2012asymptotic, tulino2013support,vehkapera2014analysis}, by evaluating the general replica ansatz of the corresponding spin glass. The general ansatz let us derive the \ac{rs} as well as the $b$\ac{rsb} ansatz for the class of rotationally invariant random matrices. The results recover the previous studies \cite{rangan2012asymptotic,tulino2013support,vehkapera2014analysis} in special cases and justify the uniqueness of the zero temperature entropy's expression under the $b$\ac{rsb} assumption conjectured in \cite{zaidel2012vector}. The replica ansatz evaluated here led us to a more general form of the decoupling principle. In fact, invoking the general replica ansatz, it was shown that for any tuple of input-output entries, the marginal joint distribution converges asymptotically to the input-output empirical distribution. This means that in the large-system limit, the marginal joint distribution of the entries determined by any replica ansatz decouples into a set of identical joint distributions. The form of the asymptotic decoupled distribution, however, depends on the structure imposed on the ansatz. For the $b$\ac{rsb} ansatz, the vector-valued \ac{awgn} system estimated by a \ac{map} estimator decouples into single-user \ac{map}-estimated \ac{awgn} channels followed by some correlated impairment~terms which intuitively model the interference of the system and vanish when the $b$\ac{rsb} assumption reduces to \ac{rs}. The general decoupling principle justified here confirms the conjecture that decoupling is a generic property of \ac{map} estimators, since its validity relies only on the replica continuity assumption.
Recent results in statistical mechanics have shown that failures in finding the exact solution via the replica method are mainly caused by the structure imposed on the ansatz, and not by replica continuity \cite{talagrand2003spin,guerra2002quadratic,guerra2002thermodynamic,guerra2003broken}. The decoupling property enabled us to represent the equivalent ``replica simulator'' interpretation of the replica ansatz. In fact, the $b$\ac{rsb} fixed point equations are completely described by the statistics of the corresponding decoupled system. The $b$\ac{rsb} ansatz is therefore represented through the state evolution of a transition system which takes an initial set of ansatz parameters as the input and determines a new set of parameters via simulating the $b$\ac{rsb} decoupled system as the output. As an example of vector-valued \ac{map}-estimated systems, we considered the noisy compressive sensing system and studied different sparse recovery schemes, including the linear, LASSO, and $\ell_0$ norm recovery schemes. The numerical investigations showed that for sparse Gaussian sources, the performance of the linear and LASSO recovery schemes is accurately approximated by the \ac{rs} ansatz within a large interval of compression rates. The \ac{rs} prediction of the $\ell_0$ norm scheme's performance, however, was observed to lack stability at moderate and high compression rates. As a result, the 1\ac{rsb} ansatz for this scheme was considered, which deviates significantly from the \ac{rs} ansatz as the rate grows. For sparse finite alphabet sources, the \ac{rs} prediction of the error probability and \ac{mse} exhibited phase transitions with respect to the compression rate. The numerical results, moreover, showed a better performance of random orthogonal measurements compared to random \ac{iid} sensing in both cases, as previously reported for sparse Gaussian sources in \cite{vehkapera2014analysis}.
The current work can be pursued in several directions. As an example, the ``replica simulator'' introduced in Section \ref{sec:rep_sim} can be studied further by methods developed in the context of transition systems. The analysis may result in proposing a new framework which simplifies the evaluation of fixed point equations. Another direction is the analysis of the conditional distribution of the correlated impairment terms in the $b$\ac{rsb} decoupled system. The study can lead us to understand the necessary or sufficient conditions under which the \ac{rs} ansatz gives an accurate approximation of the system's performance. Inspired by the Sherrington-Kirkpatrick model of spin glasses, for which the full \ac{rsb} ansatz, i.e., $b\uparrow\infty$, has been proved to give a stable solution for all system parameters \cite{mezard2009information}, our conjecture is that the exact solution at all large compression rates, for the cases exhibiting \ac{rsb}, is given by a large number of breaking steps. Nevertheless, as our numerical investigations showed, even in those cases, the \ac{rs} ansatz, or the $b$\ac{rsb} ansatz with small $b$, can give good approximations of the solution up to some moderate compression rates. The latter study, furthermore, gives us some insight into the accuracy of the approximation given by a finite number of \ac{rsb} steps. Investigating the connection between the replica simulator and message passing based algorithms is another interesting topic for future work. \section{Acknowledgment} The authors would like to acknowledge the German Research Foundation, Deutsche Forschungsgemeinschaft (DFG), for supporting this work under Grant No. MU 3735/2-1. \clearpage \appendices \section{Proof of Proposition \ref{proposition:1}} \label{app:a} Starting from \eqref{eq:sm-8}, we define the moment function $\mathsf{Z}(m)$ as in \eqref{eq:sm-9a}.
Therefore, \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})=\lim_{\upbeta\uparrow\infty}\lim_{n\uparrow\infty}\lim_{h \downarrow 0} \lim_{m \downarrow 0} \frac{1}{m} \frac{1}{n} \frac{\partial}{\partial h} \log \mathsf{Z}(m). \label{eq:a-1} \end{align} Taking the expectation with respect to the noise term first, the moment function is extended as \begin{subequations} \begin{align} \mathsf{Z}(m)&=\mathsf{E}_{{\boldsymbol{x}}} \mathsf{E}_{\mathbf{A}} \mathsf{E}_{{\boldsymbol{z}}} \sum_{\{{\boldsymbol{v}}_a\}} e^{-\upbeta\sum_{a=1}^m\mathcal{E}({\boldsymbol{v}}_a|{\boldsymbol{y}},\mathbf{A})+h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:a-2a}\\ &=\mathsf{E}_{{\boldsymbol{x}}} \mathsf{E}_{\mathbf{A}} \mathsf{E}_{{\boldsymbol{z}}} \sum_{\{{\boldsymbol{v}}_a\}} e^{-\upbeta \sum_{a=1}^m\left\lbrace\frac{1}{2\lambda}({\boldsymbol{x}}-{\boldsymbol{v}}_a)^\mathsf{T} \mathbf{J} ({\boldsymbol{x}}-{\boldsymbol{v}}_a) + \frac{1}{\lambda}{\boldsymbol{z}}^{\mathsf{T}} \mathbf{A} ({\boldsymbol{x}}-{\boldsymbol{v}}_a ) + \frac{1}{2\lambda}\norm{{\boldsymbol{z}}}^2 + u({\boldsymbol{v}}_a)\right\rbrace+h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:a-2b}\\ &\stackrel{\star}{=} \left(\frac{\lambda}{\lambda+m\upbeta\lambda_0}\right)^{\frac{k}{2}} \mathsf{E}_{{\boldsymbol{x}}} \mathsf{E}_{\mathbf{A}} \sum_{\{{\boldsymbol{v}}_a\}} e^{-\upbeta \left\lbrace \sum_{a,b=1}^m \boldsymbol{\tilde{v}}_a^\mathsf{T} \mathbf{J} \boldsymbol{\tilde{v}}_b \zeta_{ab} +\sum_{a=1}^m u({\boldsymbol{v}}_a) \right\rbrace +h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:a-2c} \end{align} \end{subequations} where $\boldsymbol{\tilde{v}}_a={\boldsymbol{x}}-{\boldsymbol{v}}_a$ for $a\in[1:m]$ is the unbiased\footnote{We call it unbiased since it expresses the deviation of the replicas from the source vector.} replica vector, $\mathbf{J}$ is 
the Gramian of the system matrix defined as $\mathbf{J}\coloneqq \mathbf{A}^\mathsf{T} \mathbf{A}$ and satisfies the properties stated in Section \ref{sec:problem_formulation}, $\star$ comes from taking expectation over ${\boldsymbol{z}}$, and the factor $\zeta_{ab}$ is defined as \begin{align} \zeta_{ab} \coloneqq \frac{1}{2\lambda} \left[ \boldsymbol{1} \{ a=b \}- \frac{\lambda_0}{\lambda+m\upbeta \lambda_0} \upbeta \right] \label{eq:zeta} \end{align} with $\lambda_0$ being the true noise variance as specified in Section \ref{sec:problem_formulation}. \begin{remark} Considering \eqref{eq:a-2b}, one could drop the term $\frac{1}{2\lambda} \norm{{\boldsymbol{z}}}^2$, since it plays no role in the optimization problem \eqref{eq:int-2}. More precisely, one could redefine the Hamiltonian in \eqref{eq:int-7} to be \begin{align} \mathcal{E}_{\mathrm{new}}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})=\mathcal{E}({\boldsymbol{v}}|{\boldsymbol{y}},\mathbf{A})-\frac{1}{2\lambda} \norm{{\boldsymbol{z}}}^2 \end{align} without loss of generality. In this case, the coefficient on the right-hand side of \eqref{eq:a-2c} reduces to $1$, and $\zeta_{ab}$ reads \begin{align} \zeta_{ab}^{\mathrm{new}} \coloneqq \frac{1}{2\lambda} \left[ \boldsymbol{1} \{ a=b \}- \frac{\lambda_0}{\lambda} \upbeta \right]. \end{align} It is, however, clear from \eqref{eq:a-2c} and \eqref{eq:zeta} that as $m$ tends to zero, both approaches yield the same result. \end{remark} Considering the expression in \eqref{eq:a-2c}, we define the random variable $\mathsf{Z}(m;{\boldsymbol{x}})$ as \begin{align} \mathsf{Z}(m;{\boldsymbol{x}})\coloneqq \mathsf{E}_{\mathbf{J}} \sum_{\{{\boldsymbol{v}}_a\}} e^{-\upbeta \left\lbrace \sum_{a,b=1}^m \boldsymbol{\tilde{v}}_a^\mathsf{T} \mathbf{J} \boldsymbol{\tilde{v}}_b \zeta_{ab} +\sum_{a=1}^m u({\boldsymbol{v}}_a) \right\rbrace +h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)} ({\boldsymbol{v}}_a;{\boldsymbol{x}})}.
\label{eq:a-3} \end{align} Consequently, the moment function is given by taking the expectation of $\mathsf{Z}(m;{\boldsymbol{x}})$ with respect to ${\boldsymbol{x}}$ and multiplying by the scalar $\left(\frac{\lambda}{\lambda+m\upbeta\lambda_0}\right)^{\frac{k}{2}}$. However, we show later that for almost all realizations of the source vector, $\mathsf{Z}(m;{\boldsymbol{x}})$ converges to a deterministic value, and therefore, the expectation with respect to ${\boldsymbol{x}}$ can be dropped. For $\mathsf{Z}(m;{\boldsymbol{x}})$, we have \begin{subequations} \begin{align} \mathsf{Z}(m;{\boldsymbol{x}}) &= \mathsf{E}\hspace{.5mm}_{\mathbf{J}} \sum_{\{{\boldsymbol{v}}_a\}} e^{- \upbeta \sum_{a,b=1}^m \boldsymbol{\tilde{v}}_a^{\mathsf{T}} \mathbf{J} \boldsymbol{\tilde{v}}_b \zeta_{ab}} \times e^{- \upbeta \sum_{a=1}^m u({\boldsymbol{v}}_a) +h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:a-4a} \\ &= \sum_{\{{\boldsymbol{v}}_a\}} \left[ \mathsf{E}\hspace{.5mm}_{\mathbf{J}} e^{- n \upbeta \tr{ \mathbf{J} \mathbf{G}}} \right] \times e^{-\upbeta\sum_{a=1}^m u({\boldsymbol{v}}_a) +h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}})} \label{eq:a-4b} \end{align} \end{subequations} where $\mathbf{G}_{n \times n}$ is defined as \begin{align} \mathbf{G}\coloneqq \frac{1}{n} \sum_{a,b=1}^m \boldsymbol{\tilde{v}}_b \boldsymbol{\tilde{v}}_a^{\mathsf{T}} \zeta_{ab}. \label{eq:a-5} \end{align} Considering the eigendecomposition of the Gramian $\mathbf{J}=\mathbf{U} \mathbf{D} \mathbf{U}^{\mathsf{T}}$, the expectation in \eqref{eq:a-4b} can be expressed in terms of a spherical integral where the integral measure is the probability measure of $\mathbf{U}$. Regarding the system setup specified in Section \ref{sec:problem_formulation}, the matrix $\mathbf{U}$ is distributed over the orthogonal group $\mathbbmss{O}_n$ with Haar probability distribution.
Therefore, the corresponding spherical integral is the so-called ``Harish-Chandra'' or ``Itzykson \& Zuber'' integral. This integral has been extensively studied in the physics and mathematics literature, see for example \cite{harish1957differential}, \cite{itzykson1980planar} and \cite{guionnet2002large}. A brief discussion on the spherical integral and its closed form solution is given in Appendix~\ref{app:f}. Invoking Theorems 1.2 and 1.7 in \cite{guionnet2005fourier}, as long as $\mathrm{rank}(\mathbf{G})=\mathcal{O}(\sqrt{n})$\footnote{$\mathcal{O}(\cdot)$ indicates the order of growth, i.e., $\lim\limits_{n \uparrow \infty}[f(n)]^{-1}\mathcal{O}(f(n))=\mathsf{K}<\infty$.}, the expectation in \eqref{eq:a-4b} can be written as \begin{align} \mathsf{E}\hspace{.5mm}_{\mathbf{J}} e^{- n \upbeta \tr{ \mathbf{J} \mathbf{G}}} = e^{- n \left[\sum_{i=1}^n \int_{0}^{\upbeta \lambda^{\mathbf{G}}_i} \mathrm{R}_{\mathbf{J}}(-2 \omega) \mathrm{d} \omega \right] + \epsilon_n} \label{eq:a-6} \end{align}with $\mathbf{G}$ being defined in \eqref{eq:a-5}, and $\epsilon_n \downarrow 0$ as $n \uparrow \infty$. In order to employ the above result and substitute it in \eqref{eq:a-4b}, we need to check the rank condition. \begin{lemma} \label{lem:a-1} Let $\mathbf{G}$ be defined as in \eqref{eq:a-5}. Then, the following holds. \begin{align} \mathrm{rank}(\mathbf{G})=\mathcal{O}(\sqrt{n}).
\end{align} \end{lemma} \begin{proof} First, we rewrite $\mathbf{G}$ as \begin{subequations} \begin{align} \mathbf{G}&= \frac{1}{2\lambda n} \left[ \sum_{a=1}^m \boldsymbol{\tilde{v}}_a \boldsymbol{\tilde{v}}_a^{\mathsf{T}} - \upbeta \frac{\lambda_0}{\lambda+m\upbeta \lambda_0} (\sum_{a=1}^m \boldsymbol{\tilde{v}}_a) (\sum_{b=1}^m \boldsymbol{\tilde{v}}_b)^{\mathsf{T}} \right] \label{eq:a-7a}\\ &=\frac{1}{2\lambda n} \tilde{\mathbf{V}} (\mathbf{I}_m- \upbeta \frac{\lambda_0}{\lambda+m\upbeta \lambda_0} \mathbf{1}_m) \tilde{\mathbf{V}}^\mathsf{T} \label{eq:a-7b} \end{align} \end{subequations} where $\tilde{\mathbf{V}} = [ \boldsymbol{\tilde{v}}_1, \ldots , \boldsymbol{\tilde{v}}_m]$ is an $n \times m$ matrix with the columns being the unbiased replicas. Then, by considering \eqref{eq:a-7b}, it is obvious that $\mathbf{G}$ could be, at most, of rank $m$. As Assumption \ref{asp:2} indicates, $\mathsf{Z}(m)$ analytically continues to the real axis, and the limit with respect to $m$ is taken in a right neighborhood of $0$. Therefore, for all values of $n$ there exists a constant $\mathsf{K} \in \mathbbmss{R}^+$, such that $m \leq \mathsf{K}$. Consequently, one can write \begin{align} \lim_{n \uparrow \infty}\frac{\mathrm{rank}(\mathbf{G})}{\sqrt{n}} \leq \lim_{n \uparrow \infty} \frac{m}{\sqrt{n}} \leq \lim_{n \uparrow \infty} \frac{\mathsf{K}}{\sqrt{n}}=0 \label{eq:a-8} \end{align} which concludes that $\mathrm{rank}(\mathbf{G})=\mathcal{O}(\sqrt{n})$. 
\end{proof} Lemma \ref{lem:a-1} ensures that \eqref{eq:a-6} always holds; therefore, noting the fact that $\mathbf{G}$ has only $m$ non-zero eigenvalues, the expectation in the right hand side of \eqref{eq:a-4b} reduces to \begin{align} \mathsf{E}\hspace{.5mm}_{\mathbf{J}} e^{- n \upbeta \tr{ \mathbf{J} \mathbf{G}}} = e^{-n \mathcal{G} (\mathbf{T} \mathbf{Q}^{\mathrm{v}})+\epsilon_n} \label{eq:a-9} \end{align} where the function $\mathcal{G}(\cdot)$ is defined as \begin{align} \mathcal{G}(\mathbf{M}) \coloneqq \int_{0}^{{\upbeta}} \mathrm{Tr} \{\mathbf{M} \mathrm{R}_{\mathbf{J}}(-2\omega\mathbf{M})\} \mathrm{d} \omega \label{eq:a-10} \end{align} for some square matrix $\mathbf{M}$, $\mathbf{T}$ is an $m \times m$ deterministic matrix given by \begin{align} \mathbf{T} \coloneqq \frac{1}{2\lambda} \left[ \mathbf{I}_m- \frac{\lambda_0}{\lambda+m\upbeta \lambda_0} \upbeta \mathbf{1}_m \right], \label{eq:a-11} \end{align} and $\mathbf{Q}^{\mathrm{v}}$ is the $m \times m$ ``correlation matrix'' defined as \begin{align} &\mathbf{Q}^{\mathrm{v}} = \frac{1}{n} \tilde{\mathbf{V}}^{\mathsf{T}} \tilde{\mathbf{V}}. \label{eq:a-12} \end{align} \begin{remark} Note that although $\mathbf{Q}^{\mathrm{v}}$ is symmetric, $\mathbf{T}\mathbf{Q}^{\mathrm{v}}$ is not a symmetric matrix, in general; however, due to the symmetry of $\mathbf{G}$, the eigenvalues of $\mathbf{T}\mathbf{Q}^{\mathrm{v}}$ are real, and therefore, the sequence of integrals over the real axis in \eqref{eq:a-6} exists for all indices. \end{remark} By substituting \eqref{eq:a-9} in \eqref{eq:a-4b}, $\mathsf{Z}(m;{\boldsymbol{x}})$ is given as \begin{align} \mathsf{Z}(m;{\boldsymbol{x}}) = \sum_{ \{ {\boldsymbol{v}}_a \} } e^{-n\mathcal{G} (\mathbf{T} \mathbf{Q}^{\mathrm{v}})- \upbeta \sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}}) + \epsilon_n} . 
\label{eq:a-13} \end{align} In order to determine the sum in \eqref{eq:a-13}, we follow the technique employed in \cite{muller2008vector} and \cite{zaidel2012vector}. We split the space of all replicas into subshells defined by correlation matrices, such that all replica vectors in a subshell share the same correlation matrix. More precisely, for a given source vector ${\boldsymbol{x}}$, the subshell of the matrix $\mathbf{Q}_{m \times m}$ is defined as \begin{align} \mathbbmss{S}(\mathbf{Q}) = \{ {\boldsymbol{v}}_1, \ldots , {\boldsymbol{v}}_m | ({\boldsymbol{x}}-{\boldsymbol{v}}_a)^{\mathsf{T}} ({\boldsymbol{x}}-{\boldsymbol{v}}_b) =nq_{ab} \} \label{eq:a-14} \end{align}with $q_{ab} =[\mathbf{Q}]_{ab}$ denoting the entry $(a,b)$ of $\mathbf{Q}$. The sum in \eqref{eq:a-13} is determined first over each subshell, and then, over all the subshells as follows. \begin{subequations} \begin{align} \mathsf{Z}(m;{\boldsymbol{x}}) &= \sum_{ \{ {\boldsymbol{v}}_a \} } \left[\int e^{-n\mathcal{G}(\mathbf{T}\mathbf{Q})} \delta(\mathbf{Q}^{\mathrm{v}}-\mathbf{Q}) \mathrm{d} \mathbf{Q}\right] e^{-\upbeta\sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}}) + \epsilon_n} \label{eq:a-15a}\\ &= \int e^{-n\mathcal{G} (\mathbf{T}\mathbf{Q})+ \epsilon_n} \left[ \sum_{ \{ {\boldsymbol{v}}_a \} } \delta(\mathbf{Q}^{\mathrm{v}}-\mathbf{Q}) e^{-\upbeta\sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}}) } \right] \mathrm{d} \mathbf{Q} \label{eq:a-15b} \\ &= \int e^{-n\left[ \mathcal{G} (\mathbf{T}\mathbf{Q})- \mathcal{I}(\mathbf{Q})\right] +\epsilon_n} \mathrm{d} \mathbf{Q} \label{eq:a-15c} \end{align} \end{subequations} where $\mathrm{d} \mathbf{Q} \coloneqq \prod_{a,b=1}^{m} \mathrm{d} q_{ab}$, the integral is taken over $\mathbbmss{R}^{m\times m}$, \begin{align} \delta(\mathbf{Q}^{\mathrm{v}}-\mathbf{Q}) \coloneqq
\prod_{a,b=1}^m \delta ( \boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - nq_{ab}), \label{eq:a-16} \end{align} and the term $e^{n \mathcal{I}(\mathbf{Q})}$ which determines the probability weight of the subshell $\mathbbmss{S}(\mathbf{Q})$ is defined as \begin{align} e^{n \mathcal{I}(\mathbf{Q})} &\coloneqq \sum_{ \{ {\boldsymbol{v}}_a \} } \delta(\mathbf{Q}^{\mathrm{v}}-\mathbf{Q}) e^{- \upbeta \sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}})}. \label{eq:a-17} \end{align} \begin{remark} One may define the subshells over the transformed correlation matrix $\mathbf{T}\mathbf{Q}$ instead of the correlation matrix $\mathbf{Q}$. In this case, the subshells over $\mathbf{Q}$ defined in \eqref{eq:a-14} only rotate in the $m$-dimensional space with respect to $\mathbf{T}$. The rotation, however, does not have any impact on the analysis. \end{remark} The last step is to determine $e^{n \mathcal{I}(\mathbf{Q})}$. To do so, we represent the Dirac impulse function using its inverse Laplace transform. By defining $s_{ab}$ as the complex frequency corresponding to $\delta (\boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - nq_{ab})$, \begin{align} \delta ( \boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - nq_{ab}) &= \int e^{s_{ab} (\boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - nq_{ab})} \frac{\mathrm{d} s_{ab}}{2 \pi \mathrm{j}} \label{eq:a-18} \end{align} where the integral is taken over the vertical contour $\mathbbmss{J}=(t-\mathrm{j} \infty, t+\mathrm{j} \infty)$, for some $t \in \mathbbmss{R}$.
Consequently, by defining the frequency domain correlation matrix $\mathbf{S}$ to be an $m \times m$ matrix with $[\mathbf{S}]_{ab}=s_{ab}$, \eqref{eq:a-16} reads \begin{subequations} \begin{align} \delta (\mathbf{Q}^\mathrm{v} -\mathbf{Q})&= \int e^{\sum_{a,b=1}^m s_{ab} (\boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - nq_{ab})} \mathrm{d} \mathbf{S} \label{eq:a-19a}\\ &= \int \left[ e^{-n\tr{\mathbf{S}^{\mathsf{T}} \mathbf{Q}}} \right] e^{\sum_{a,b=1}^m s_{ab} \boldsymbol{\tilde{v}}_a^\mathsf{T} \boldsymbol{\tilde{v}}_b} \mathrm{d} \mathbf{S} \label{eq:a-19b} \end{align} \end{subequations} with $\mathrm{d} \mathbf{S}$ being defined as $\displaystyle \mathrm{d} \mathbf{S} \coloneqq \prod_{a,b=1}^{m} \frac{\mathrm{d} s_{ab}}{2 \pi \mathrm{j}}$, and the integral being taken over $\mathbbmss{J}^{m\times m}$. Substituting in \eqref{eq:a-17}, $e^{n \mathcal{I}(\mathbf{Q})}$ reduces to \begin{subequations} \begin{align} e^{n \mathcal{I}(\mathbf{Q})} &= \int e^{-n\tr{\mathbf{S}^\mathsf{T} \mathbf{Q}}} \sum_{ \{ {\boldsymbol{v}}_a \} } e^{\sum_{a,b=1}^m s_{ab}\boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - \upbeta\sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}})} \mathrm{d} \mathbf{S} \label{eq:a-20a} \\ &= \int e^{-n\tr{\mathbf{S}^\mathsf{T} \mathbf{Q}}} \ e^{n \mathcal{M}(\mathbf{S})} \mathrm{d} \mathbf{S} \label{eq:a-20b} \end{align} \end{subequations} with $\mathcal{M}(\mathbf{S})$ being defined as \begin{align} \mathcal{M}(\mathbf{S}) = \frac{1}{n} \log \sum_{ \{ {\boldsymbol{v}}_a \} } e^{\sum_{a,b=1}^m s_{ab}\boldsymbol{\tilde{v}}_a^{\mathsf{T}} \boldsymbol{\tilde{v}}_b - \upbeta\sum_{a=1}^m u({\boldsymbol{v}}_a) + h n\sum_{a=1}^m\mathsf{D}^{\mathbbmss{W}(n)}({\boldsymbol{v}}_a;{\boldsymbol{x}})}. 
\label{eq:a-21} \end{align} Thus, $\mathsf{Z}(m;{\boldsymbol{x}})$ finally reads \begin{align} \mathsf{Z}(m;{\boldsymbol{x}}) = \int \int e^{-n \{ \mathcal{G}(\mathbf{T} \mathbf{Q})+\tr{\mathbf{S}^\mathsf{T} \mathbf{Q}}- \mathcal{M}(\mathbf{S})\}+\epsilon_n} \mathrm{d} \mathbf{S} \mathrm{d} \mathbf{Q}. \label{eq:a-22} \end{align} Consequently, one needs to evaluate the expectation of $\mathsf{Z}(m;{\boldsymbol{x}})$ with respect to ${\boldsymbol{x}}$, in order to determine the moment function $\mathsf{Z}(m)$. However, for almost all realizations of ${\boldsymbol{x}}$, $\mathsf{Z}(m;{\boldsymbol{x}})$ converges to a deterministic limit as $n \uparrow \infty$, and thus, the expectation with respect to ${\boldsymbol{x}}$ can be dropped. To show the latter statement, we note that the only term in \eqref{eq:a-22} which depends on ${\boldsymbol{x}}$ is $\mathcal{M}(\mathbf{S})$. Therefore, it is sufficient to study the convergence of $\mathcal{M}(\mathbf{S})$. Lemma \ref{lem:a-2} justifies this property of $\mathcal{M}(\mathbf{S})$ using the law of large numbers and the decoupling property of the functions $u(\cdot)$ and $\mathsf{d}(\cdot;\cdot)$. \begin{lemma} \label{lem:a-2} Consider the system specified in Section \ref{sec:problem_formulation}, and let Assumption \ref{asp:2} hold.
Then, as $n \uparrow \infty$, $\mathcal{M}(\mathbf{S})$ defined in \eqref{eq:a-21} is given by \begin{align} \mathcal{M}(\mathbf{S})=\mathsf{E}\hspace{.5mm} \left\lbrace (1-\eta) \left[ \log \sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{S} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})} \right] +\eta \left[ \log \sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{S} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v}) + h \eta^{-1} \mathsf{d}(\mathbf{v}; \mathbf{x})} \right] \right\rbrace \label{eq:a-23} \end{align}where $\mathbf{v}_{m\times 1} \in \mathbbmss{X}^m$, $\mathbf{x}_{m\times 1}$ is a vector with all the elements being the random variable $x$ which is distributed with the source distribution $\mathrm{p}_x$, the expectation is taken over $\mathrm{p}_x$, and $\mathsf{d}(\mathbf{v}; \mathbf{x})$ is defined as $\mathsf{d}(\mathbf{v}; \mathbf{x})\coloneqq\sum_{a=1}^m \mathsf{d}(\mathrm{v}_a; \mathrm{x}_a)$. \end{lemma} \begin{proof} Consider the decoupling property of the functions $u(\cdot)$ and $\mathsf{d}(\cdot;\cdot)$. Define the vector $\mathbf{v}_{m\times 1}$ over the support $\mathbbmss{X}^m$, and the coefficients $\{ \mathsf{w}_i \}$ for $i \in [1:n]$ as \begin{equation} \mathsf{w}_i= \begin{cases} 0 & \text{if}\ i\notin\mathbbmss{W}(n) \\ \abs{\mathbbmss{W}(n)}^{-1} & \text{if}\ i\in\mathbbmss{W}(n). 
\end{cases} \label{eq:a-24} \end{equation} Then, $\mathcal{M}(\mathbf{S})$ reads \begin{subequations} \begin{align} \mathcal{M}(\mathbf{S}) &= \frac{1}{n} \log \sum_{ \{ {\boldsymbol{v}}_a \} } \prod_{i=1}^n e^{\sum_{a,b=1}^m s_{ab} (x_i-v_{ai}) (x_i-v_{bi}) - \upbeta \sum_{a=1}^m u(v_{ai}) + h n \sum_{a=1}^m \mathsf{w}_i \mathsf{d}(v_{ai}; x_{i})} \label{eq:a-25a} \\ &= \frac{1}{n} \log \prod_{i=1}^n \sum_{\mathbf{v}} e^{\sum_{a,b=1}^m s_{ab} (x_{i}-\mathrm{v}_a) ( x_{i}-\mathrm{v}_b ) - \upbeta \sum_{a=1}^m u(\mathrm{v}_{a}) + h n \sum_{a=1}^m \mathsf{w}_i \mathsf{d}(\mathrm{v}_{a}; x_{i})} \label{eq:a-25b} \\ &= \frac{1}{n} \left[ \sum_{i\notin \mathbbmss{W}} \mathcal{M}_0(\mathbf{S};x_{i}) + \sum_{i\in \mathbbmss{W}} \mathcal{M}_1(\mathbf{S};x_{i}) \right] \label{eq:a-25c} \end{align} \end{subequations} where the functions $\mathcal{M}_0(\cdot;\cdot)$ and $\mathcal{M}_1(\cdot;\cdot)$ are defined as \begin{subequations} \begin{align} \mathcal{M}_0(\mathbf{S};x_i) &= \log \sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{S} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})} \label{eq:a-26a}\\ \mathcal{M}_1(\mathbf{S};x_i) &= \log \sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{S} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v}) + h \tfrac{n}{\abs{\mathbbmss{W}(n)}} \mathsf{d}(\mathbf{v}; \mathbf{x})} \label{eq:a-26b} \end{align} \end{subequations} where $\mathbf{x}_{m \times 1}$ is a vector with all the elements being $x_i$, and we define $\mathsf{d}(\mathbf{v}; \mathbf{x})\coloneqq\sum_{a=1}^m \mathsf{d}(\mathrm{v}_a; \mathrm{x}_a)$ for compactness. As Assumption \ref{asp:2} suggests, the limits with respect to $m$ and $n$ can be exchanged in \eqref{eq:a-1}; thus, one can consider the asymptotics of $\mathcal{M}(\mathbf{S})$ for a given $m$ as $n$ grows large.
Since the entries of ${\boldsymbol{x}}$ are drawn \ac{iid}, the terms on the right hand side of \eqref{eq:a-25c} converge to expectations over the distribution $\mathrm{p}_x$ by the law of large numbers; more precisely, as $n\uparrow\infty$ \begin{subequations} \begin{align} \frac{1}{n} \sum_{i\notin \mathbbmss{W}} \mathcal{M}_0(\mathbf{S};x_{i}) &= \left[1-\frac{\abs{\mathbbmss{W}(n)}}{n}\right] \frac{1}{n-\abs{\mathbbmss{W}(n)}} \sum_{i\notin \mathbbmss{W}} \mathcal{M}_0(\mathbf{S};x_{i}) \longrightarrow (1-\eta) \ \mathsf{E}_{x} \hspace{.3mm} \mathcal{M}_0(\mathbf{S};x) \label{eq:a-27a}\\ \frac{1}{n} \sum_{i\in \mathbbmss{W}} \mathcal{M}_1(\mathbf{S};x_{i}) &= \left[\frac{\abs{\mathbbmss{W}(n)}}{n}\right] \frac{1}{\abs{\mathbbmss{W}(n)}} \sum_{i\in \mathbbmss{W}} \mathcal{M}_1(\mathbf{S};x_{i}) \longrightarrow \eta \ \mathsf{E}_{x} \hspace{.3mm} \mathcal{M}_1(\mathbf{S};x) \label{eq:a-27b} \end{align} \end{subequations} with $\eta$ being defined as in \eqref{eq:sys-7}. Substituting \eqref{eq:a-27a} and \eqref{eq:a-27b} in \eqref{eq:a-25c}, Lemma \ref{lem:a-2} is concluded. \end{proof} \begin{remark} Lemma \ref{lem:a-2} effectively states that the probability weight $e^{n \mathcal{I}(\mathbf{Q})}$ for a given correlation matrix $\mathbf{Q}$ converges to a deterministic weight as $n$ tends to infinity. Equivalently, for almost any given realization of the source vector, the correlation matrix converges to its expectation. In fact, considering the correlation matrix $\mathbf{Q}^\mathrm{v}$, as defined in \eqref{eq:a-12}, the entries are functions of ${\boldsymbol{x}}$, and therefore, vary randomly with the source distribution for a given $n$. Lemma \ref{lem:a-2}, however, indicates that, as $n\uparrow \infty$, the entries converge to deterministic limits for almost any realization of ${\boldsymbol{x}}$.
As an alternative approach, one could study the convergence property of the correlation matrix $\mathbf{Q}^\mathrm{v}$ by means of the law of large numbers first, and then, conclude Lemma \ref{lem:a-2} by rewriting $\mathcal{M}(\mathbf{S})$ in a proper form and replacing it with the expectation, using the fact that the probability weight $e^{n \mathcal{I}(\mathbf{Q})}$ needs to converge deterministically as $n\uparrow\infty$. Nevertheless, the approach taken here seems to be more straightforward. \end{remark} Using Lemma \ref{lem:a-2}, we drop the expectation with respect to ${\boldsymbol{x}}$ in \eqref{eq:a-2c}. Replacing in \eqref{eq:a-1}, the asymptotic distortion is found by taking the limits. As Assumption \ref{asp:2} suggests, we exchange the order of the limits and take the limit with respect to $n$ first. Noting that the probability measure defined by $e^{n \mathcal{I}(\mathbf{Q})} \mathrm{d} \mathbf{Q}$ satisfies the large deviations property \cite{dembo2009large}, we can use the saddle point approximation to evaluate the integral in \eqref{eq:a-22}; namely, as $n \uparrow \infty$ \begin{align} \mathsf{Z}(m) = \left(\frac{\lambda}{\lambda+m\upbeta\lambda_0}\right)^{\frac{k}{2}} \int \int e^{-n \{ \mathcal{G}(\mathbf{T} \mathbf{Q})+\tr{\mathbf{S}^\mathsf{T} \mathbf{Q}}- \mathcal{M}(\mathbf{S})\}} \mathrm{d} \mathbf{S} \mathrm{d} \mathbf{Q} \doteq \mathsf{K}_n e^{-n \{ \mathcal{G}(\mathbf{T} \tilde{\mathbf{Q}})+\tr{\tilde{\mathbf{S}}^\mathsf{T} \tilde{\mathbf{Q}}}- \mathcal{M}(\tilde{\mathbf{S}})\}}, \label{eq:a-28} \end{align} where we drop $\epsilon_n$ given in \eqref{eq:a-22} since it vanishes in the large limit. Here, $(\tilde{\mathbf{Q}},\tilde{\mathbf{S}})$ is the saddle point of the integrand function's exponent, $\mathsf{K}_n$ is a bounded coefficient, and $\doteq$ indicates asymptotic equivalence in exponential scale, defined as follows.
\begin{definition} \normalfont The functions $a(\cdot)$ and $b(\cdot)$ defined over the unbounded set $\mathbbmss{X}$ are said to be asymptotically equivalent in exponential scale, if \begin{align} \lim_{n \uparrow\infty} \frac{1}{n} \log \left| \frac{a(x_n)}{b(x_n)} \right| =0 \label{eq:a-29} \end{align} for an unbounded sequence $\{x_n \in \mathbbmss{X} \}$. \end{definition} As $n \uparrow \infty$, the $m$th moment can be replaced with its asymptotic equivalent in \eqref{eq:a-1}. Consequently, by substituting the equivalent term and exchanging the limits' order, we have \begin{subequations} \begin{align} \mathsf{D}^{\mathbbmss{W}}({\boldsymbol{\hat{x}}};{\boldsymbol{x}})&=\lim_{\upbeta\uparrow\infty}\lim_{m\downarrow 0}\lim_{h \downarrow 0} \lim_{n\uparrow\infty} \frac{1}{m} \frac{\partial}{\partial h} \left[ -\mathcal{G}(\mathbf{T} \tilde{\mathbf{Q}})-\tr{\tilde{\mathbf{S}}^\mathsf{T} \tilde{\mathbf{Q}}}+\mathcal{M}(\tilde{\mathbf{S}}) + \frac{\log \mathsf{K}_n}{n} \right] \label{eq:a-30a}\\ &\stackrel{\star}{=}\lim_{\upbeta\uparrow\infty}\lim_{m\downarrow 0}\lim_{h \downarrow 0} \frac{1}{m} \frac{\partial}{\partial h} \mathcal{M}(\tilde{\mathbf{S}}) \label{eq:a-30b}\\ &= \lim_{\upbeta\uparrow\infty}\lim_{m\downarrow 0} \mathsf{E}\hspace{.5mm} \frac{\sum_{\mathbf{v}} \mathsf{d}(\mathbf{v}; \mathbf{x}) e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \tilde{\mathbf{S}} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})}}{m \sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^\mathsf{T} \tilde{\mathbf{S}} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})}} \label{eq:a-30c} \end{align} \end{subequations} where $\star$ comes from the fact that $\mathsf{K}_n$ is bounded, and $\mathcal{G}(\mathbf{T} \tilde{\mathbf{Q}})$ and $\tr{\tilde{\mathbf{S}}^\mathsf{T} \tilde{\mathbf{Q}}}$ are not functions of $h$. The saddle point $(\tilde{\mathbf{Q}},\tilde{\mathbf{S}})$ is found by setting the derivatives of the exponent to zero.
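The saddle point (Laplace-type) approximation invoked in \eqref{eq:a-28} can be illustrated in one dimension: the normalized log-integral converges to the minimum of the exponent, with an error that shrinks as $n$ grows. The exponent $\varphi$ and all numerical values below are hypothetical toy choices, not the functions of the derivation.

```python
import numpy as np

# One-dimensional Laplace approximation:
#   -(1/n) log \int exp(-n*phi(q)) dq  ->  min_q phi(q)  as n grows.
phi = lambda q: (q - 1.0) ** 2 + 0.1 * q ** 4      # toy exponent
q = np.linspace(-4.0, 4.0, 400001)
dq = q[1] - q[0]

def neg_log_integral(n):
    s = -n * phi(q)
    # log-sum-exp for numerical stability of the Riemann sum
    return -(np.log(np.sum(np.exp(s - s.max())) * dq) + s.max()) / n

phi_min = phi(q).min()
err_small = abs(neg_log_integral(10) - phi_min)
err_big = abs(neg_log_integral(1000) - phi_min)    # approximation sharpens with n
```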
Using the standard definition $\displaystyle \left[ \frac{\partial}{\partial \mathbf{M}} \right]_{ab} \coloneqq \frac{\partial}{\partial [\mathbf{M}]_{ab}}$, the saddle point is given by the following fixed point equations. \begin{subequations} \begin{align} \frac{\partial}{\partial \mathbf{Q}} \left[ \mathcal{G} (\mathbf{T} \mathbf{Q}) + \tr{\mathbf{S} \mathbf{Q}} - \mathcal{M}(\mathbf{S}) \right]|_{(\tilde{\mathbf{Q}},\tilde{\mathbf{S}})}&=0 \label{eq:a-31a} \\ \frac{\partial}{\partial \mathbf{S}} \left[ \mathcal{G} (\mathbf{T} \mathbf{Q}) + \tr{\mathbf{S} \mathbf{Q}} - \mathcal{M}(\mathbf{S}) \right]|_{(\tilde{\mathbf{Q}},\tilde{\mathbf{S}})}&=0. \label{eq:a-31b} \end{align} \end{subequations} \eqref{eq:a-31a} reduces to \begin{align} \tilde{\mathbf{S}}= -\upbeta \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \tilde{\mathbf{Q}}), \label{eq:a-32} \end{align} and \eqref{eq:a-31b} results in \begin{align} \tilde{\mathbf{Q}}= \mathsf{E}\hspace{.5mm} \frac{\sum_{\mathbf{v}}(\mathbf{x} - \mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} e^{(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \tilde{\mathbf{S}} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})}}{\sum_{\mathbf{v}} e^{(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \tilde{\mathbf{S}} (\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})}}. 
\label{eq:a-33} \end{align} By replacing \eqref{eq:a-32} in \eqref{eq:a-30c} and \eqref{eq:a-33}, the expressions for the asymptotic distortion and the saddle point correlation matrix can be written as expectations over a conditional Boltzmann-Gibbs distribution $\mathrm{p}^\upbeta_{\mathbf{v}|\mathbf{x}}$ defined as \begin{align} \mathrm{p}^\upbeta_{\mathbf{v}|\mathbf{x}}(\mathbf{v}|\mathbf{x})\coloneqq \frac{e^{-\upbeta \left[(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \tilde{\mathbf{Q}}) (\mathbf{x}-\mathbf{v})+ u(\mathbf{v})\right]}}{\sum_{\mathbf{v}} e^{-\upbeta\left[(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \tilde{\mathbf{Q}}) (\mathbf{x}-\mathbf{v})+ u(\mathbf{v})\right]}} \label{eq:a-34} \end{align} which simplifies the expressions in \eqref{eq:a-30c} and \eqref{eq:a-33} to those given in Proposition \ref{proposition:1}. In general, the fixed point equation \eqref{eq:a-33} can be satisfied by several saddle points, and therefore, multiple asymptotic distortions might be found. In this case, one should note that the valid solution is the one which minimizes the free energy of the spin glass at zero temperature, i.e., $\upbeta\uparrow\infty$.
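The role of the zero-temperature limit for Boltzmann-Gibbs averages of the form above can be illustrated numerically. The scalar caricature below (one replica, with $\tilde{\mathsf{S}}=-\upbeta e$ and a toy regularizer and distortion; all concrete values are hypothetical) shows the Gibbs average of the distortion concentrating on its value at the energy minimizer as $\upbeta$ grows.

```python
import numpy as np

# Zero-temperature concentration of a scalar Boltzmann-Gibbs average:
# the Gibbs-weighted distortion tends to d(g;x) with g the minimizer
# of the Hamiltonian. All parameter choices are toy values.
x, e = 0.8, 1.0
V = np.linspace(-2.0, 2.0, 2001)           # discretized support X
u = np.abs(V)                               # hypothetical regularization term u(v)
d = (V - x) ** 2                            # squared-error distortion d(v;x)

def gibbs_distortion(beta):
    logw = -beta * (e * (x - V) ** 2 + u)   # Gibbs weight exponent
    w = np.exp(logw - logw.max())           # stabilized weights
    return np.sum(d * w) / np.sum(w)

g = V[np.argmin(e * (x - V) ** 2 + u)]      # zero-temperature minimizer
limit = (g - x) ** 2                        # distortion at the minimizer
```

For this quadratic-plus-$\ell_1$ energy the minimizer is the soft threshold of $x$ at level $1/(2e)$, so the Gibbs average approaches $(g-x)^2$ for large $\upbeta$.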
Using the $m$th moment, the free energy of the system reads \begin{subequations} \begin{align} \mathsf{F}(\upbeta) &= -\lim_{n\uparrow\infty}\lim_{h \downarrow 0} \lim_{m \downarrow 0} \frac{1}{m} \frac{1}{\upbeta} \frac{1}{n} \log \mathsf{Z}(m) \label{eq:a-35a}\\ &\stackrel{\star}{=} \lim_{m\downarrow 0}\lim_{h \downarrow 0} \frac{1}{\upbeta m} \left[ \mathcal{G}(\mathbf{T} \tilde{\mathbf{Q}})+\tr{\tilde{\mathbf{S}}^\mathsf{T} \tilde{\mathbf{Q}}}-\mathcal{M}(\tilde{\mathbf{S}}) \right] \label{eq:a-35b} \\ &\stackrel{\dagger}{=} \lim_{m\downarrow 0} \frac{1}{m} \left[\frac{1}{\upbeta} \mathcal{G}(\mathbf{T} \tilde{\mathbf{Q}}) - \tr{\tilde{\mathbf{Q}}^\mathsf{T} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \tilde{\mathbf{Q}}) } - \frac{1}{\upbeta} \mathsf{E}\log \sum_{\mathbf{v}} e^{-\upbeta (\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2\upbeta \mathbf{T} \tilde{\mathbf{Q}})(\mathbf{x}-\mathbf{v})-\upbeta u(\mathbf{v})} \right] \label{eq:a-35c} \end{align} \end{subequations} where $\star$ comes from the facts that $\mathsf{K}_n$ is bounded and the limits with respect to $m$ and $n$ can be exchanged, and $\dagger$ is deduced from \eqref{eq:a-32} and Lemma \ref{lem:a-2}. Finally, by considering the definition of $\mathcal{G}(\cdot)$, Proposition \ref{proposition:1} is concluded. \newpage \section{Proof of Proposition \ref{proposition:3}} \label{app:b} Starting from Assumption \ref{asp:3}, the replica correlation matrix is \begin{align} \mathbf{Q}= \frac{\chi }{\upbeta} \mathbf{I}_m + q \mathbf{1}_m \label{eq:b-1} \end{align} for some non-negative real $\chi$ and $q$.
Considering Definition \ref{def:replica_spin}, the Hamiltonian of the spin glass of replicas is given by \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= (\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q}) (\mathbf{x}-\mathbf{v}) + u(\mathbf{v}) \label{eq:b-2} \end{align} with $\mathbf{T}$ being defined in \eqref{eq:rep-5}. Denoting $\mathbf{R}\coloneqq\mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q})$, it is shown in Appendix \ref{app:e} that $\mathbf{R}$ has the same structure as the correlation matrix; thus, one can write \begin{align} \mathbf{R}= e \mathbf{I}_m - \upbeta \frac{f^2}{2} \mathbf{1}_m, \label{eq:b-3} \end{align} for some real $f$ and $e$ which are functions of $\chi$ and $q$. Denoting the eigendecomposition of $\mathbf{Q}$ as $\mathbf{Q}=\mathbf{V} \mathbf{D}^\mathrm{Q} \mathbf{V}^{\mathsf{T}}$, we have\footnote{Note that $\mathbf{Q}$ is full-rank and symmetric.} \begin{subequations} \begin{align} \mathbf{T}=\mathbf{V} \mathbf{D}^\mathrm{T} \mathbf{V}^{\mathsf{T}} \label{eq:b-4a} \\ \mathbf{R}=\mathbf{V} \mathbf{D}^\mathrm{R} \mathbf{V}^{\mathsf{T}} \label{eq:b-4b} \end{align} \end{subequations} where $\mathbf{D}^\mathrm{Q}$, $\mathbf{D}^\mathrm{T}$ and $\mathbf{D}^\mathrm{R}$ are the diagonal matrices of eigenvalues. Therefore, we have \begin{align} \mathbf{D}^\mathrm{R}= \mathbf{D}^\mathrm{T} \mathrm{R}_\mathbf{J}(-2\upbeta \mathbf{D}^\mathrm{T} \mathbf{D}^\mathrm{Q}) \label{eq:b-5} \end{align} which equivalently states that for $a\in[1:m]$ \begin{align} \lambda^\mathbf{R}_a= \lambda^\mathbf{T}_a \mathrm{R}_\mathbf{J}(-2\upbeta \lambda^\mathbf{T}_a \lambda^\mathbf{Q}_a) \label{eq:b-6} \end{align} with $\lambda^\mathbf{R}_a$, $\lambda^\mathbf{Q}_a$ and $\lambda^\mathbf{T}_a$ being the eigenvalue of $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$ corresponding to the $a$th column of $\mathbf{V}$. 
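The replica-symmetric structure used here, namely that matrices of the form $a\mathbf{I}_m + b\mathbf{1}_m$ commute and share the eigenbasis, with one eigenvalue $a+mb$ on the constant vector and the eigenvalue $a$ with multiplicity $m-1$, is easy to verify numerically; the concrete values below are hypothetical toy choices.

```python
import numpy as np

# Matrices of the replica-symmetric form a*I_m + b*1_m share the eigenbasis:
# eigenvalue a + m*b (multiplicity 1, constant eigenvector) and eigenvalue a
# (multiplicity m-1). Toy values for m, beta, chi, q, a, b below.
m, beta = 5, 2.0
chi, q = 0.4, 0.9
a, b = 0.7, -0.05

I, J = np.eye(m), np.ones((m, m))
Q = chi / beta * I + q * J                  # replica-symmetric Q
T = a * I + b * J                           # another matrix of the same form

eig_Q = np.sort(np.linalg.eigvalsh(Q))
eig_T = np.sort(np.linalg.eigvalsh(T))
expected_Q = np.sort([chi / beta + m * q] + [chi / beta] * (m - 1))
expected_T = np.sort([a + m * b] + [a] * (m - 1))
commute = np.allclose(Q @ T, T @ Q)         # both are polynomials in 1_m
```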
The matrices $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$ have two different corresponding eigenvalues, namely $\displaystyle \left\lbrace e-\upbeta m\frac{f^2}{2}, \frac{\chi}{\upbeta} +m q, \frac{1}{2\lambda}\left[ 1-\frac{m \upbeta\lambda_0}{\lambda+m\upbeta \lambda_0} \right] \right\rbrace$ which occur with multiplicity $1$ and $\displaystyle \left\lbrace e, \frac{\chi}{\upbeta}, \frac{1}{2\lambda} \right\rbrace$ which occur with multiplicity $m-1$. Substituting in \eqref{eq:b-6} and taking the limit when $m \downarrow 0$, $e$ and $f$ are found as \begin{subequations} \begin{align} e &= \frac{1}{2\lambda} \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}), \label{eq:b-7a} \\ f^2 &= \frac{1}{\lambda^2} \frac{\partial}{\partial \chi} \left\lbrace \left[ \lambda_0 \chi - \lambda q \right] \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}) \right\rbrace. \label{eq:b-7b} \end{align} \end{subequations} To pursue the analysis, we rewrite the Hamiltonian using \eqref{eq:b-3} \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= e \norm{\mathbf{x}-\mathbf{v}}^2 - \upbeta \frac{f^2}{2} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m } + u(\mathbf{v}), \label{eq:b-8} \end{align} and therefore, the partition function $\mathcal{Z}^{\mathsf{R}}(\upbeta|\mathbf{x})$ is given by \begin{align} \mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x})= \sum_{\{\mathrm{v}_a \} } e^{- \upbeta e \norm{\mathbf{x}-\mathbf{v}}^2 + \upbeta^2 \frac{f^2}{2} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m } -\upbeta u(\mathbf{v})}. 
\label{eq:b-9} \end{align} Using the Gaussian integral, we have \begin{align} e^{\upbeta^2 \tfrac{f^2}{2} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m }} = \int e^{-\upbeta f \left[ \sum_{a=1}^m (x-\mathrm{v}_a) \right] z} \mathrm{D} z, \label{eq:b-10} \end{align} and thus, the partition function reduces to \begin{align} \mathcal{Z}^\mathsf{R}(\upbeta|\mathbf{x})= \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} \right]^m \mathrm{D} z \label{eq:b-11} \end{align} with $v \in \mathbbmss{X}$. The parameters of the spin glass of replicas are then determined using the partition function. Starting with the normalized free energy, it reads \begin{align} \mathsf{F}^\mathsf{R}(\upbeta,m)= -\frac{1}{\upbeta m} \mathsf{E}_{x} \log \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} \right]^m \mathrm{D} z. \label{eq:b-12} \end{align} Noting that $\int \mathrm{D} z$ takes the expectation over the Gaussian measure, one can use the Riesz equality in \eqref{eq:sm-7} to show that when $m$ varies in a vicinity of $0$, \begin{align} \mathsf{F}^\mathsf{R}(\upbeta,m) &= -\frac{1}{\upbeta} \mathsf{E} \int \log \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} \mathrm{D} z + \epsilon_m \label{eq:b-13} \end{align} where $\epsilon_m$ tends to $0$ as $m\downarrow 0$ and the expectation is taken over $x\sim\mathrm{p}_x$. Consequently, as $m \downarrow 0$, the normalized free energy reads \begin{align} \mathsf{F}^\mathsf{R}(\upbeta) = \lim_{m\downarrow 0} \mathsf{F}^\mathsf{R}(\upbeta,m) = -\frac{1}{\upbeta} \mathsf{E} \int \log \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} \mathrm{D} z. \label{eq:b-13.1} \end{align} The next parameters to be specified are $\chi$ and $q$.
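The Gaussian-integral step in \eqref{eq:b-10} rests on the scalar identity $e^{c^2/2}=\int e^{-cz}\,\mathrm{D}z$, applied with $c=\upbeta f\sum_{a}(x-\mathrm{v}_a)$. A quick quadrature check of this identity (test values of $c$ are arbitrary):

```python
import numpy as np

# Scalar Hubbard-Stratonovich identity behind (b-10):
#   exp(c^2 / 2) = \int exp(-c z) Dz,  Dz the standard Gaussian measure.
# Verified by probabilists' Gauss-Hermite quadrature.
zn, zw = np.polynomial.hermite_e.hermegauss(80)   # nodes/weights for Dz
zw = zw / zw.sum()                                # normalize to a probability

def gauss_avg(c):
    return np.sum(zw * np.exp(-c * zn))           # quadrature for \int e^{-cz} Dz

checks = [abs(gauss_avg(c) - np.exp(c * c / 2)) for c in (0.0, 0.8, -1.5)]
```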
By determining the conditional distribution $\mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta$ and substituting in \eqref{eq:rep-6}, the following fixed point equations are deduced \begin{subequations} \begin{align} \left[ \frac{\chi}{\upbeta}+q \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}), \label{eq:b-14a} \\ \left[ \frac{\chi}{\upbeta}+m q \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}). \label{eq:b-14b} \end{align} \end{subequations} where \eqref{eq:b-14a} and \eqref{eq:b-14b} are found by taking the trace and the sum over all entries of both sides of \eqref{eq:rep-6}, respectively. One can directly evaluate the right hand sides of \eqref{eq:b-14a} and \eqref{eq:b-14b}; however, considering \eqref{eq:b-9}, it is straightforward to show that \begin{subequations} \begin{align} &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) = m \frac{\partial}{\partial e} \mathsf{F}^\mathsf{R}(\upbeta,m), \label{eq:b-15a} \\ &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x})= -\frac{m}{\upbeta f} \frac{\partial}{\partial f} \mathsf{F}^\mathsf{R}(\upbeta,m).
\label{eq:b-15b} \end{align} \end{subequations} After substituting and taking the limit $m \downarrow 0$, the fixed point equations finally read \begin{subequations} \begin{align} \frac{\chi}{\upbeta}+q &= \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x)^2 e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]}}{ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]}} \mathrm{D} z \label{eq:b-16a} \\ \chi &= \frac{1}{f} \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x) z e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]}}{ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} } \mathrm{D} z, \label{eq:b-16b} \end{align} \end{subequations} with $f$ and $e$ defined in \eqref{eq:b-7a} and \eqref{eq:b-7b}. In order to determine the replicas' average distortion defined in \eqref{eq:rep-10} regarding the distortion function $\mathsf{d}(\cdot;\cdot)$, we replace the Hamiltonian by \begin{align} \mathcal{E}_h^\mathsf{R}(\mathbf{v}|\mathbf{x})=\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})+h\sum_{a=1}^m \mathsf{d}(\mathrm{v}_a;x) \label{eq:b-17} \end{align} with $\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})$ given in \eqref{eq:b-8}, and follow the same steps as in \eqref{eq:b-9}-\eqref{eq:b-12} to find the modified form of the normalized free energy, i.e., $\mathsf{F}^\mathsf{R}(\upbeta,h,m)$. The replicas' average distortion is then evaluated as \begin{subequations} \begin{align} \mathsf{D}^{\mathsf{R}}(\upbeta,m)&=\frac{\partial}{\partial h} \mathsf{F}^\mathsf{R}(\upbeta,h,m)|_{h=0} \label{eq:b-18a}\\ &=\mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} \mathsf{d}(v;x) e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} }{ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + f (x-v)z + u(v)\right]} }\mathrm{D} z, \label{eq:b-18b} \end{align} \end{subequations} which does not depend on $m$; thus, taking the limit $m \downarrow 0$ is not needed. The last step is to take the zero temperature limit.
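The saddle-point argument underlying this limit can be illustrated numerically: as $\upbeta$ grows, the Gibbs averages appearing in the fixed point equations concentrate on the minimizer of the exponent. A minimal sketch, with an assumed toy alphabet and illustrative coefficients:

```python
import numpy as np

def gibbs_average(beta, phi, e, f, x, z, vs, u):
    """Gibbs average of phi(v) under weights exp(-beta H(v)), where
    H(v) = e (x-v)^2 + f (x-v) z + u(v); a shift by min(H) avoids underflow."""
    H = e * (x - vs) ** 2 + f * (x - vs) * z + u(vs)
    w = np.exp(-beta * (H - H.min()))
    return np.dot(phi(vs), w) / w.sum()

# toy instance (illustrative numbers, not taken from the derivation)
vs = np.array([-1.0, 0.0, 1.0])
x, z, e, f = 0.3, 0.7, 1.0, 0.5
u = lambda v: 0.0 * v
# minimizer of the exponent, i.e. the point the average concentrates on
g = vs[np.argmin(e * (x - vs) ** 2 + f * (x - vs) * z + u(vs))]
```

For this instance, `gibbs_average(200.0, lambda v: (v - x)**2, ...)` is within $10^{-3}$ of $(\mathrm{g}-x)^2$, while at $\upbeta=5$ the gap is still of order $10^{-1}$.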
Using the Laplace method of summation, as $\upbeta\uparrow\infty$ the fixed point equations reduce to \begin{subequations} \begin{align} q &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \ \mathrm{D} z, \label{eq:b-19a} \\ \chi &= \frac{1}{f} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z \ \mathrm{D} z, \label{eq:b-19b} \end{align} \end{subequations} with $\mathrm{g}$ being defined as \begin{align} \mathrm{g} \coloneqq \arg \min_{v} \left[ e (x-v)^2 + f (x-v)z + u(v)\right]. \label{eq:b-20} \end{align} Taking the same approach, the replicas' average distortion at zero temperature reads \begin{align} \mathsf{D}^{\mathbbmss{W}}&= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g};x) \ \mathrm{D} z. \label{eq:b-21} \end{align} The fixed point equations in \eqref{eq:b-19a} and \eqref{eq:b-19b} may admit multiple solutions, each leading to a different asymptotic for the distortion. To single out the valid solution, we need the normalized free energy of the corresponding spin glass as given in Proposition \ref{proposition:1}: among the fixed point solutions, the one which minimizes the zero temperature free energy of the system is taken, together with its corresponding asymptotic distortion. Substituting in Proposition \ref{proposition:1}, the free energy of the corresponding spin glass at the inverse temperature $\upbeta$ is found as \begin{align} \mathsf{F}(\upbeta)=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1) \right] +\mathsf{F}^{\mathsf{R}}(\upbeta) \label{eq:b-22} \end{align} where the function $\mathrm{F}^{\upbeta}(\cdot)$ is defined as \begin{align} \mathrm{F}^{\upbeta}(\omega) = \frac{\chi}{\upbeta} \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) + \left[q-\frac{\lambda_0}{\lambda} \chi \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) \right].
\label{eq:b-23} \end{align} By taking the limit as $\upbeta \uparrow \infty$, the zero temperature free energy reads \begin{align} \mathsf{F}^0=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\infty}(\omega) \mathrm{d} \omega - \mathrm{F}^{\infty} (1) \right] + \mathsf{E}\hspace{.5mm} \int \left[ e (x-\mathrm{g})^2 + f (x-\mathrm{g})z + u(\mathrm{g}) \right] \mathrm{D} z \label{eq:b-24} \end{align} with $\mathrm{g}$ being defined as in \eqref{eq:b-20} and $\mathrm{F}^{\infty}(\omega)\coloneqq \lim_{\upbeta\uparrow\infty}\mathrm{F}^{\upbeta}(\omega)$. By defining $\lambda^{\mathsf{s}}\coloneqq \left[2e\right]^{-1}$ and $\lambda^{\mathsf{s}}_0\coloneqq \left[4e^2\right]^{-1} f^2$, Proposition \ref{proposition:3} is concluded. \newpage \section{Proof of Proposition \ref{proposition:5}} \label{app:c} We take the same approach as in Appendix \ref{app:b}. Considering the replica correlation matrix to be of the form \begin{align} \mathbf{Q}= \frac{\chi}{\upbeta} \mathbf{I}_m+ p \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}} +q \mathbf{1}_m , \label{eq:c-1} \end{align} for some non-negative real $\chi$, $p$, $q$, and $\mu$, we need to evaluate the parameters of the spin glass of replicas defined in Definition \ref{def:replica_spin}. Starting with the Hamiltonian, \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= (\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q}) (\mathbf{x}-\mathbf{v}) + u(\mathbf{v}) \label{eq:c-2} \end{align} where $\mathbf{T}$ is given in \eqref{eq:rep-5}.
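The block structure in \eqref{eq:c-1} can be checked directly at integer toy sizes; the replica limit $m \downarrow 0$ is taken only formally, while the eigenvalue algebra is the same. The sizes and parameter values below are assumptions for illustration only.

```python
import numpy as np

# toy sizes: beta = 1, mu = 2, m = 6, so that m*beta/mu and mu/beta are integers
beta, mu, m = 1.0, 2, 6
chi, p, q = 0.5, 0.3, 0.2

# Q = (chi/beta) I_m + p I_{m beta/mu} (x) 1_{mu/beta} + q 1_m, as in (c-1)
Q = (chi / beta) * np.eye(m) \
    + p * np.kron(np.eye(int(m * beta / mu)), np.ones((int(mu / beta),) * 2)) \
    + q * np.ones((m, m))

# the spectrum splits into exactly three groups
eigs = np.sort(np.linalg.eigvalsh(Q))
```

The spectrum consists of $\chi/\upbeta$ with multiplicity $m-m\upbeta\mu^{-1}$, of $\chi/\upbeta+\mu p/\upbeta$ with multiplicity $m\upbeta\mu^{-1}-1$, and of $\chi/\upbeta+\mu p/\upbeta+mq$ with multiplicity $1$, in agreement with the three sets of corresponding eigenvalues used in the sequel.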
As discussed in Appendix \ref{app:e}, for a given $\mu$ the matrix $\mathbf{R}\coloneqq\mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q})$ is of the following form \begin{align} \mathbf{R}= e \mathbf{I}_m - \upbeta \frac{g^2}{2} \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}} - \upbeta \frac{f^2}{2} \mathbf{1}_m \label{eq:c-3} \end{align} where $e$, $g$ and $f$ can be found in terms of $\chi$, $p$ and $q$. Using the eigendecomposition of $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$, it is then straightforward to show that for $a\in[1:m]$ \begin{align} \lambda^\mathbf{R}_a= \lambda^\mathbf{T}_a \mathrm{R}_\mathbf{J}(-2\upbeta \lambda^\mathbf{T}_a \lambda^\mathbf{Q}_a) \label{eq:c-4} \end{align} where $\lambda^\mathbf{R}_a$, $\lambda^\mathbf{Q}_a$ and $\lambda^\mathbf{T}_a$ denote the $a$th eigenvalues of $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$, respectively. Regarding the structure of $\mathbf{Q}$ and $\mathbf{R}$, there are three different sets of corresponding eigenvalues for $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$, namely\\ \begin{itemize} \item $\displaystyle \left\lbrace e-\mu \frac{g^2}{2}-m\upbeta \frac{f^2}{2}, \frac{\chi+\mu p}{\upbeta} + m q, \frac{1}{2\lambda}\left[ 1-\frac{m\upbeta \lambda_0}{\lambda+m\upbeta \lambda_0} \right] \right\rbrace$ with multiplicity $1$, \item $\displaystyle \left\lbrace e-\mu \frac{g^2}{2}, \frac{\chi+\mu p}{\upbeta}, \frac{1}{2\lambda}\right\rbrace$ with multiplicity $\displaystyle m \upbeta\mu^{-1}-1$, and \item $\displaystyle \left\lbrace e, \frac{\chi}{\upbeta}, \frac{1}{2\lambda} \right\rbrace$ with multiplicity $\displaystyle m-m \upbeta\mu^{-1}$.\\ \end{itemize} Thus, by substituting in \eqref{eq:c-4} and taking the limit when $m \downarrow 0$, $e$, $g$ and $f$ are given as \begin{subequations} \begin{align} e &= \frac{1}{2\lambda} \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}), \label{eq:c-5a} \\ g^2 &= \frac{1}{\lambda\mu} \left[ \mathrm{R}_{\mathbf{J}}(- 
\frac{\chi}{\lambda}) - \mathrm{R}_{\mathbf{J}}(- \frac{\chi+\mu p}{\lambda}) \right], \label{eq:c-5b} \\ f^2 &= \frac{1}{\lambda^2} \frac{\partial}{\partial \chi} \left\lbrace \left[ \lambda_0 (\chi+\mu p) - \lambda q \right] \mathrm{R}_{\mathbf{J}}(- \frac{\chi+\mu p}{\lambda}) \right\rbrace. \label{eq:c-5c} \end{align} \end{subequations} The next step is to evaluate the partition function. Substituting \eqref{eq:c-3} in \eqref{eq:c-2}, the Hamiltonian reads \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= e \norm{\mathbf{x}-\mathbf{v}}^2 - \upbeta \frac{g^2}{2} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}} } - \upbeta \frac{f^2}{2} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m } + u(\mathbf{v}). \label{eq:c-6} \end{align} The partition function is then determined as in \eqref{eq:rep-8}. Substituting in \eqref{eq:rep-8} and using the equalities \begin{subequations} \begin{align} e^{\tfrac{1}{2} \upbeta^2 f^2 \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m }} &= \int e^{-\upbeta f \left[ \sum\limits_{a=1}^m (x-\mathrm{v}_a) \right] z_0} \mathrm{D} z_0, \label{eq:c-7a} \\ e^{ \tfrac{1}{2} \upbeta^2 g^2 \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}}}} &= \prod_{k=0}^{\Xi} \int e^{-\upbeta g \left[ \sum\limits_{a=\varrho_k}^{\breve{\varrho}_k} (x-\mathrm{v}_a) \right] z_1} \mathrm{D} z_1, \label{eq:c-7b} \end{align} \end{subequations} where $\varrho_k=k \mu\upbeta^{-1}+1$, $\breve{\varrho}_k=(k+1)\mu\upbeta^{-1}$, and $\Xi=m\upbeta\mu^{-1}-1$, the partition function is found as \begin{align} \mathcal{Z}^\mathsf{R}(\upbeta;\mu|\mathbf{x})= \int \left[ \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} \right]^{\frac{\mu}{\upbeta}} \mathrm{D} z_1 \right]^{\frac{m\upbeta}{\mu}} 
\mathrm{D} z_0 \label{eq:c-8} \end{align} with $v \in \mathbbmss{X}$ where we denoted $\mu$ in the argument of the partition function to indicate that the expression is determined for a given $\mu$. The normalized free energy of the spin glass of replicas then reads \begin{align} \mathsf{F}^\mathsf{R}(\upbeta,m;\mu)= -\frac{1}{\upbeta m} \mathsf{E}_{x} \log \int \left[ \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} \right]^{\frac{\mu}{\upbeta}} \mathrm{D} z_1 \right]^{\frac{m\upbeta}{\mu}} \mathrm{D} z_0. \label{eq:c-9} \end{align} Using the Riesz equality and taking the limit $m \downarrow 0$, the normalized free energy reduces to \begin{align} \mathsf{F}^\mathsf{R}(\upbeta;\mu) = \lim_{m\downarrow 0} \mathsf{F}^\mathsf{R}(\upbeta,m;\mu) = -\frac{1}{\mu} \mathsf{E} \int \log \left\lbrace \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} \right]^{\frac{\mu}{\upbeta}} \mathrm{D} z_1 \right\rbrace \mathrm{D} z_0. \label{eq:c-10} \end{align} In order to find the fixed point equations, we use \eqref{eq:rep-6}; therefore, \begin{subequations} \begin{align} \left[ \frac{\chi}{\upbeta}+q+p \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}), \label{eq:c-11a} \\ \left[ \frac{\chi}{\upbeta}+\frac{\mu p}{\upbeta}+\frac{\mu q}{\upbeta} \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}}} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}), \label{eq:c-11b}\\ \left[ \frac{\chi}{\upbeta}+\frac{\mu p}{\upbeta}+m q \right] m&= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) \label{eq:c-11c} \end{align} 
\end{subequations} where \eqref{eq:c-11a}, \eqref{eq:c-11b} and \eqref{eq:c-11c} are concluded by taking the trace, the sum over the diagonal blocks, and the sum over all entries of both sides of \eqref{eq:rep-6}, respectively. To evaluate the right hand sides of \eqref{eq:c-11a}-\eqref{eq:c-11c}, we take the same approach as in Appendix \ref{app:b} and express the expectations as derivatives of the free energy, namely \begin{subequations} \begin{align} &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) = m \frac{\partial}{\partial e} \mathsf{F}^\mathsf{R}(\upbeta,m;\mu), \label{eq:c-12a} \\ &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu}} \otimes \mathbf{1}_{\frac{\mu}{\upbeta}}} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) = -\frac{m}{\upbeta g} \frac{\partial}{\partial g} \mathsf{F}^\mathsf{R}(\upbeta,m;\mu), \label{eq:c-12b}\\ &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x})= -\frac{m}{\upbeta f} \frac{\partial}{\partial f} \mathsf{F}^\mathsf{R}(\upbeta,m;\mu).
\label{eq:c-12c} \end{align} \end{subequations} Taking the derivatives and limit $m \downarrow 0$, the fixed point equations finally reduce to \begin{subequations} \begin{align} \frac{\chi}{\upbeta}+q+p &= \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x)^2 e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v) \right]} } \tilde{\Lambda}^\upbeta \ \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:c-13a} \\ \chi+\mu p+\mu q &= \frac{1}{g} \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x) z_1 e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} }{ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} } \tilde{\Lambda}^\upbeta \ \mathrm{D} z_1 \mathrm{D} z_0, \label{eq:c-13b}\\ \chi+\mu p &= \frac{1}{f} \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x) z_0 e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} }{ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} } \tilde{\Lambda}^\upbeta \ \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:c-13c} \end{align} \end{subequations} with $\tilde{\Lambda}^\upbeta\coloneqq \left[ \int \Lambda^\upbeta \mathrm{D} z_1\right]^{-1} \Lambda^\upbeta$ and $\Lambda^\upbeta$ being defined as \begin{align} \Lambda^\upbeta\coloneqq \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} \right]^{\frac{\mu}{\upbeta}}. \label{eq:c-14} \end{align} The replicas' average distortion regarding the distortion function $\mathsf{d}(\cdot;\cdot)$ is further determined by modifying the Hamiltonian as \begin{align} \mathcal{E}_h^\mathsf{R}(\mathbf{v}|\mathbf{x})=\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})+h\sum_{a=1}^m \mathsf{d}(\mathrm{v}_a;x) \label{eq:c-15} \end{align} with $\mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})$ given in \eqref{eq:c-6}, and taking the steps as in \eqref{eq:c-6}-\eqref{eq:c-9} to find the modified form of the normalized free energy, i.e. 
$\mathsf{F}^\mathsf{R}(\upbeta,h,m;\mu)$. The replicas' average distortion then reads \begin{subequations} \begin{align} \mathsf{D}^{\mathsf{R}}(\upbeta;\mu)&= \lim_{m\downarrow0} \frac{\partial}{\partial h} \mathsf{F}^\mathsf{R}(\upbeta,h,m;\mu)|_{h=0} \label{eq:c-16a}\\ &= \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} \mathsf{d}(v;x) e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v) \right]} } \tilde{\Lambda}^\upbeta \ \mathrm{D} z_1 \mathrm{D} z_0. \label{eq:c-16b} \end{align} \end{subequations} The analysis is concluded by taking the zero temperature limit. As $\upbeta\uparrow\infty$, \eqref{eq:c-13a}-\eqref{eq:c-13c} read \begin{subequations} \begin{align} q + p &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \tilde{\Lambda} \mathrm{D} z_1 \mathrm{D} z_0 \label{eq:c-17a}, \\ \chi+ \mu q + \mu p &= \frac{1}{g} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z _1 \tilde{\Lambda} \mathrm{D} z_1 \mathrm{D} z_0, \label{eq:c-17b} \\ \chi+ \mu p &= \frac{1}{f} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z _0 \tilde{\Lambda} \mathrm{D} z_1 \mathrm{D} z_0, \label{eq:c-17c} \end{align} \end{subequations} where $\mathrm{g}$ is defined as \begin{align} \mathrm{g} \coloneqq \arg \min_{v} \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right] \label{eq:c-18} \end{align} and $\tilde{\Lambda}\coloneqq \left[\int \Lambda \mathrm{D} z_1 \right]^{-1} \Lambda$ with $\Lambda$ denoting \begin{subequations} \begin{align} \Lambda &\coloneqq \lim_{\upbeta \uparrow \infty} \Lambda^\upbeta \label{eq:c-19a} \\ &= e^{- \mu \left[ e (x-\mathrm{g})^2 + (fz_0+gz_1) (x-\mathrm{g}) + u(\mathrm{g})\right]}. \label{eq:c-19b} \end{align} \end{subequations} Moreover, the asymptotic distortion for a given $\mu$ reads \begin{align} \mathsf{D}^{\mathbbmss{W}}&= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g};x) \tilde{\Lambda} \mathrm{D} z_1 \mathrm{D} z_0. 
\label{eq:c-20} \end{align} The expressions in \eqref{eq:c-10} as well as in \eqref{eq:c-17a}-\eqref{eq:c-20} are determined in terms of $\mu$. Moreover, for a given $\mu$, multiple solutions to the fixed point equations can be found. Proposition \ref{proposition:1} suggests choosing the solution which minimizes the free energy. Therefore, one needs to find the optimal $\mu$, along with its corresponding $\chi$, $p$ and $q$, such that the free energy attains its minimum value. As the second law of thermodynamics is satisfied at any inverse temperature, we first search for the optimal $\mu$ at a given $\upbeta$ and then find the corresponding $\chi$, $p$, and $q$ which minimize the zero temperature free energy. Using Proposition \ref{proposition:1}, the free energy at the inverse temperature $\upbeta$ for a given $\mu$ is written as \begin{align} \mathsf{F}(\upbeta;\mu)=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega;\mu) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1;\mu) \right] +\mathsf{F}^{\mathsf{R}}(\upbeta;\mu) \label{eq:c-21} \end{align} where the function $\mathrm{F}^{\upbeta}(\cdot;\mu)$ is defined as \begin{align} \mathrm{F}^{\upbeta}(\omega;\mu) = \frac{1}{\mu} \frac{\mathrm{d}}{\mathrm{d} \omega} \int_{\chi \omega}^{\left[\chi+\mu p\right] \omega} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda} ) \mathrm{d} t + \frac{\chi}{\upbeta} \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) + \left[q-\lambda_0 \frac{\chi+\mu p}{\lambda} \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\mu p}{\lambda} \omega) \right].
\label{eq:c-22} \end{align} To find $\mu$ at thermal equilibrium, we let \begin{align} \frac{\partial}{\partial \mu} \mathsf{F}(\upbeta;\mu) = 0. \label{eq:c-23} \end{align} Using the equalities \eqref{eq:c-5a}-\eqref{eq:c-5c}, it follows from \eqref{eq:c-23} that $\mu$ satisfies \begin{align} \frac{1}{2\lambda} \left[ p \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) + q \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) - q \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\mu p}{\lambda}) \right] = \mathsf{F}^{\mathsf{R}}(\upbeta;\mu) + \frac{1}{2\lambda\mu} \int_{\chi}^{\chi+\mu p} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda}) \mathrm{d} t \nonumber \\ + \mathsf{E}\hspace{.5mm} \frac{1}{\upbeta} \int \log \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+gz_1) (x-v) + u(v)\right]} \right] \tilde{\Lambda}^\upbeta \mathrm{D} z_1 \mathrm{D} z_0, \label{eq:c-24} \end{align} which as $\upbeta \uparrow \infty$ reduces to \begin{align} \frac{1}{2\lambda} \left[ p \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) + q \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda}) - q \mathrm{R}_{\mathbf{J}}(-\frac{\chi+\mu p}{\lambda}) \right] = \frac{1}{2\lambda\mu} \int_{\chi}^{\chi+\mu p} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda}) \mathrm{d} t + \mathsf{E}\hspace{.5mm} \frac{1}{\mu} \int \log \tilde{\Lambda} \ \tilde{\Lambda} \mathrm{D} z_1 \mathrm{D} z_0. \label{eq:c-25} \end{align} Denoting the solution to \eqref{eq:c-25} by $\mu^\star$, the free energy of the corresponding spin glass is then given as $\mathsf{F}(\upbeta)=\mathsf{F}(\upbeta;\mu^\star)$, which at zero temperature reads \begin{align} \mathsf{F}^0=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\infty}(\omega) \mathrm{d} \omega - \mathrm{F}^{\infty} (1) \right] - \frac{1}{\mu} \mathsf{E}\hspace{.5mm} \int \log \left[ \int \Lambda \mathrm{D} z_1 \right] \ \mathrm{D} z_0 \label{eq:c-27} \end{align} with \begin{align} \mathrm{F}^{\infty}(\omega)\coloneqq \lim_{\upbeta\uparrow\infty}\mathrm{F}^{\upbeta}(\omega;\mu^\star).
\end{align} Finally, by defining $\lambda^{\mathsf{s}}\coloneqq \left[2e\right]^{-1}$, $\lambda^{\mathsf{s}}_0\coloneqq \left[4e^2\right]^{-1} f^2$ and $\lambda^{\mathsf{s}}_1\coloneqq \left[4e^2\right]^{-1} g^2$, Proposition \ref{proposition:5} is concluded. \newpage \section{Proof of Proposition \ref{proposition:7}} \label{app:d} The strategy here is to extend the approach of Appendix \ref{app:c} to a general number of breaking steps. Following Appendix \ref{app:e} and considering $\mathbf{Q}$ as \begin{align} \mathbf{Q}= \frac{\chi}{\upbeta} \mathbf{I}_m+ \sum_{\kappa=1}^b p_\kappa \mathbf{I}_{\frac{m \upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}} +q \mathbf{1}_m , \label{eq:d-1} \end{align} the frequency domain correlation matrix $\mathbf{R}\coloneqq\mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q})$ is written as \begin{align} \mathbf{R}= e \mathbf{I}_m - \upbeta \sum_{\kappa=1}^b \frac{g_\kappa^2}{2} \mathbf{I}_{\frac{m \upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}} - \upbeta \frac{f^2}{2} \mathbf{1}_m \label{eq:d-2} \end{align} for a given vector $\boldsymbol{\mu}=\left[ \mu_1, \ldots, \mu_b \right]^\mathsf{T}$ such that \begin{align} \mu_{\kappa+1} = \vartheta_{\kappa+1} \mu_{\kappa}, \label{eq:d-2.1} \end{align} with $\{ \vartheta_\kappa \}$ being non-negative integers. The coefficients $e$, $f$ and $\{g_\kappa\}$ are then found in terms of $\chi$, $q$ and $\{p_\kappa\}$ by letting \begin{align} \lambda^\mathbf{R}_a= \lambda^\mathbf{T}_a \mathrm{R}_\mathbf{J}(-2\upbeta \lambda^\mathbf{T}_a \lambda^\mathbf{Q}_a) \label{eq:d-3} \end{align} for $a\in[1:m]$, where $\lambda^\mathbf{R}_a$, $\lambda^\mathbf{Q}_a$ and $\lambda^\mathbf{T}_a$ denote the $a$th corresponding eigenvalues of $\mathbf{R}$, $\mathbf{Q}$ and $\mathbf{T}$.
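The nested constraint \eqref{eq:d-2.1} guarantees that the inner blocks of \eqref{eq:d-1} align, so the spectrum of $\mathbf{Q}$ again splits into a small number of groups. A toy numerical check with $b=2$ and assumed integer sizes (again, the replica limit is only formal; the block algebra is what is being verified):

```python
import numpy as np

# toy b = 2 instance with beta = 1, mu = (2, 4), m = 8, so mu_2 = 2 mu_1 as in (d-2.1)
m, mus = 8, (2, 4)
chi, ps, q = 0.5, (0.25, 0.125), 0.1

Q = chi * np.eye(m) + q * np.ones((m, m))
for mu_k, p_k in zip(mus, ps):
    # p_k I_{m/mu_k} (x) 1_{mu_k}: constant blocks of nested sizes, as in (d-1)
    Q += p_k * np.kron(np.eye(m // mu_k), np.ones((mu_k, mu_k)))

eigs = np.sort(np.linalg.eigvalsh(Q))
```

The $b+2=4$ distinct eigenvalues come out with the multiplicities $\Theta_0(m),\ldots,\Theta_{b+1}(m)$ used when substituting in \eqref{eq:d-3}.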
As long as the constraint in \eqref{eq:d-2.1} holds, $\mathbf{Q}$, $\mathbf{T}$ and $\mathbf{R}$ have $b+2$ different sets of corresponding eigenvalues specified by\\ \begin{itemize} \item $\displaystyle \left\lbrace e-\sum_{\kappa=1}^b \mu_\kappa \frac{g_\kappa^2}{2}-m\upbeta \frac{f^2}{2}, \frac{\chi}{\upbeta} + \sum_{\kappa=1}^b p_\kappa \frac{\mu_\kappa}{\upbeta} + m q, \frac{1}{2\lambda}\left[ 1-\frac{m\upbeta\lambda_0}{\lambda+m\upbeta\lambda_0} \right] \right\rbrace$ with multiplicity $\Theta_{b+1}(m)=1$, \item $\displaystyle \left\lbrace e-\sum_{\kappa=1}^b \mu_\kappa \frac{g_\kappa^2}{2}, \frac{\chi}{\upbeta} + \sum_{\kappa=1}^b p_\kappa \frac{\mu_\kappa}{\upbeta}, \frac{1}{2\lambda} \right\rbrace$ with multiplicity $\Theta_b(m)=m\upbeta \mu_b^{-1} -1$, \item $\displaystyle \left\lbrace e-\sum_{\varsigma=1}^{\kappa} \mu_\varsigma \frac{g_\varsigma^2}{2}, \frac{\chi}{\upbeta} + \sum_{\varsigma=1}^\kappa p_\varsigma \frac{\mu_\varsigma}{\upbeta}, \frac{1}{2\lambda} \right\rbrace$ with multiplicity $\Theta_\kappa(m)=m\upbeta \left( \mu_\kappa^{-1} -\mu_{\kappa+1}^{-1} \right)$ for $\kappa\in[1:b-1]$, and \item $\displaystyle \left\lbrace e, \frac{\chi}{\upbeta}, \frac{1}{2\lambda} \right\rbrace$ with multiplicity $\Theta_0(m)=m-m\upbeta \mu_1^{-1}$.\\ \end{itemize} Substituting in \eqref{eq:d-3}, the parameters $e$, $f$ and $\{g_\kappa\}$ for $\kappa\in[1:b]$ are determined in terms of $\chi$, $q$ and $\{p_\kappa\}$ as \begin{subequations} \begin{align} e &= \frac{1}{2\lambda} \mathrm{R}_{\mathbf{J}}(- \frac{\chi}{\lambda}), \label{eq:d-3.1a} \\ g_\kappa^2 &= \frac{1}{\lambda\mu_\kappa} \left[ \mathrm{R}_{\mathbf{J}}(- \frac{\tilde{\chi}_{\kappa-1}}{\lambda}) - \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_{\kappa}}{\lambda}) \right], \label{eq:d-3.1b} \\ f^2 &= \frac{1}{\lambda^2} \frac{\partial}{\partial \tilde{\chi}_{b}} \left\lbrace \left[ \lambda_0 \tilde{\chi}_{b} - \lambda q \right] \mathrm{R}_{\mathbf{J}}(- \frac{\tilde{\chi}_{b}}{\lambda}) \right\rbrace.
\label{eq:d-3.1c} \end{align} \end{subequations} where we define $\tilde{\chi}_0 \coloneqq \chi$ and \begin{align} \tilde{\chi}_\kappa \coloneqq \chi+\sum_{\varsigma=1}^{\kappa} \mu_\varsigma p_\varsigma \label{eq:d-21} \end{align} for $\kappa\in[1:b]$. The Hamiltonian of the spin glass of replicas is then determined as in \eqref{eq:rep-4}. Substituting the Hamiltonian in \eqref{eq:rep-8} and using the equalities \begin{subequations} \begin{align} e^{\tfrac{1}{2} \upbeta^2 f^2 \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{1}_m }} &= \int e^{-\upbeta f \left[ \sum_{a=1}^m (x-\mathrm{v}_a) \right] z_0} \mathrm{D} z_0, \label{eq:d-4a} \\ e^{ \tfrac{1}{2} \upbeta^2 g_\kappa^2 \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}}}} &= \prod_{k=0}^{\Xi_\kappa} \int e^{-\upbeta g_\kappa \left[ \sum\limits_{a=\varrho_k^\kappa}^{\breve{\varrho}_k^\kappa} (x-\mathrm{v}_a) \right] z_\kappa} \mathrm{D} z_\kappa, \label{eq:d-4b} \end{align} \end{subequations} with $\varrho_k^\kappa=k \mu_\kappa \upbeta^{-1}+1$, $\breve{\varrho}_k^\kappa=(k+1)\mu_\kappa\upbeta^{-1}$, and $\Xi_\kappa=m\upbeta\mu_\kappa^{-1}-1$, the partition function finally reads \begin{align} \mathcal{Z}^\mathsf{R}(\upbeta;\boldsymbol{\mu}|\mathbf{x})= \int \left[ \bigwedge_{\varsigma=2}^{b} \int \left[ \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} \right]^{\frac{\mu_1}{\upbeta}} \mathrm{D} z_1 \right]^{\frac{\mu_{\varsigma}}{\mu_{\varsigma-1}}} \mathrm{D} z_\varsigma \right]^{\frac{m\upbeta}{\mu_b}} \mathrm{D} z_0 \label{eq:d-5} \end{align} with $v \in \mathbbmss{X}$ where for the sequences $\{\xi_\varsigma\}$ and $\{z_\varsigma\}$ we define \begin{align} \bigwedge_{\varsigma=1}^{b} \int \mathrm{F}^{\xi_\varsigma} \mathrm{D} z_\varsigma \coloneqq \int \left[ \cdots \int \left[ \int \mathrm{F}^{\xi_1} 
\mathrm{D} z_1 \right]^{\xi_2} \mathrm{D} z_2 \cdots \right]^{\xi_b} \mathrm{D} z_b. \label{eq:d-6} \end{align} Consequently, one evaluates the free energy as in \eqref{eq:rep-9}, which by using the Riesz equality as $m \downarrow 0$ reduces to \begin{align} \mathsf{F}^\mathsf{R}(\upbeta;\boldsymbol{\mu}) = -\frac{1}{\mu_b} \mathsf{E} \int \log \left\lbrace \bigwedge_{\varsigma=1}^{b} \int \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} \right]^{\frac{\mu_{\varsigma}}{\mu_{\varsigma-1}}} \mathrm{D} z_\varsigma \right\rbrace \mathrm{D} z_0 \label{eq:d-7} \end{align} where we have defined $\mu_0=\upbeta$ for the sake of compactness. The fixed point equations are, moreover, found via \eqref{eq:rep-6} where we have \begin{subequations} \begin{align} \left[ \frac{\chi}{\upbeta}+\sum\limits_{\kappa=1}^b p_\kappa+q \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}), \label{eq:d-8a} \\ \left[ \frac{\tilde{\chi}_{\kappa-1}}{\upbeta} +\frac{\mu_\kappa}{\upbeta} \left(\sum_{\varsigma=\kappa}^b p_\varsigma + q \right) \right] m &= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}}} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}), \label{eq:d-8b}\\ \left[ \frac{\tilde{\chi}_b}{\upbeta}+m q \right] m&= \mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) \label{eq:d-8c} \end{align} \end{subequations} for $\kappa\in[1:b]$. Equations \eqref{eq:d-8a} and \eqref{eq:d-8c} are found by taking the trace and the sum over all entries of both sides of \eqref{eq:rep-6}, respectively.
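The nested operator introduced in \eqref{eq:d-6} can be implemented recursively. The sketch below evaluates it by Gauss--Hermite quadrature; the exponents $\{\xi_\varsigma\}$ and the test function are illustrative assumptions, not objects from the derivation.

```python
import numpy as np

def gh_normal(n=24):
    """Nodes and weights with sum(w * f(z)) approximating int f(z) Dz, z ~ N(0,1)."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.sqrt(2.0) * t, w / np.sqrt(np.pi)

def nested_average(F, xis, n=24):
    """Numerical sketch of the nested Gaussian operator of (d-6):
    int [ ... int [ int F^{xi_1} Dz_1 ]^{xi_2} Dz_2 ... ]^{xi_b} Dz_b,
    where F = F((z_1, ..., z_b))."""
    z, w = gh_normal(n)
    def integrate(k, outer):            # outer = (z_{k+1}, ..., z_b), already fixed
        if k == 0:
            return F(outer)             # innermost level: plain evaluation of F
        inner = np.array([integrate(k - 1, (zk,) + outer) for zk in z])
        return np.dot(w, inner ** xis[k - 1])
    return integrate(len(xis), ())
```

For instance, for $F=e^{a z_1}$ one has $\int e^{\xi_1 a z_1}\,\mathrm{D}z_1 = e^{\xi_1^2 a^2/2}$, and the remaining integrals only accumulate the exponents, so the operator returns $e^{\xi_1^2\xi_2\cdots\xi_b\, a^2/2}$; the recursion reproduces this value.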
\eqref{eq:d-8b} is moreover concluded by adding up the entries over the diagonal blocks of size $\mu_\kappa\upbeta^{-1}$. The right hand sides of \eqref{eq:d-8a}-\eqref{eq:d-8c} can then be evaluated using the equalities \begin{subequations} \begin{align} &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \norm{\mathbf{x}-\mathbf{v}}^2 \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) = m \frac{\partial}{\partial e} \mathsf{F}^\mathsf{R}(\upbeta,m;\boldsymbol{\mu}), \label{eq:d-9a} \\ &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{I}_{\frac{m \upbeta}{\mu_\kappa}} \otimes \mathbf{1}_{\frac{\mu_\kappa}{\upbeta}}} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x}) = -\frac{m}{\upbeta g_\kappa} \frac{\partial}{\partial g_\kappa} \mathsf{F}^\mathsf{R}(\upbeta,m;\boldsymbol{\mu}), \label{eq:d-9b}\\ &\mathsf{E}_{\mathbf{x}} \sum_{\mathbf{v}} \tr{(\mathbf{x}-\mathbf{v})(\mathbf{x}-\mathbf{v})^\mathsf{T} \mathbf{1}_m} \ \mathrm{p}_{\mathbf{v}|\mathbf{x}}^\upbeta(\mathbf{v}|\mathbf{x})= -\frac{m}{\upbeta f} \frac{\partial}{\partial f} \mathsf{F}^\mathsf{R}(\upbeta,m;\boldsymbol{\mu}). 
\label{eq:d-9c} \end{align} \end{subequations} Thus, the fixed point equations are finally concluded as \begin{subequations} \begin{align} &\frac{\chi}{\upbeta}+\sum\limits_{\kappa=1}^b p_\kappa+q = \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x)^2 \ e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v) \right]} } \prod_{\kappa=1}^b \tilde{\Lambda}^\upbeta_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0 \label{eq:d-10a} \\ &\tilde{\chi}_{\kappa-1}+\mu_\kappa \left(\sum_{\varsigma=\kappa}^b p_\varsigma + q \right) = \frac{1}{g_\kappa}\mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x)z_\kappa \ e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v) \right]} } \times \nonumber \\ & \hspace{12cm} \times\prod_{\kappa=1}^b \tilde{\Lambda}^\upbeta_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0 \label{eq:d-10b} \\ &\tilde{\chi}_{b} =\frac{1}{f}\mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} (v-x)z_0 \ e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v) \right]} } \prod_{\kappa=1}^b \tilde{\Lambda}^\upbeta_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0. 
\label{eq:d-10c} \end{align} \end{subequations} for $\kappa\in[1:b]$, where we denote $\tilde{\Lambda}^{\upbeta}_\kappa\coloneqq \left[ \int \Lambda^\upbeta_\kappa \mathrm{D} z_\kappa\right]^{-1} \Lambda^\upbeta_\kappa$ with $\Lambda^\upbeta_1$ given by \begin{align} \Lambda^{\upbeta}_{1} \coloneqq \left[ \sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v) \right]} \right]^{\tfrac{\mu_{1}}{\upbeta}} \label{eq:d-11} \end{align} and $\{\Lambda^\upbeta_\kappa\}$ for $\kappa\in[2:b]$ being recursively defined as \begin{align} \Lambda^{\upbeta}_{\kappa} \coloneqq \left[ \int \Lambda^\upbeta_{\kappa-1} \ \mathrm{D} z_{\kappa-1} \right]^{\tfrac{\mu_{\kappa}}{\mu_{\kappa-1}}}. \label{eq:d-12} \end{align} The replicas' average distortion regarding the distortion function $\mathsf{d}(\cdot;\cdot)$ is further determined using the Hamiltonian modification technique employed in Appendices \ref{app:b} and \ref{app:c}. After modifying the Hamiltonian and taking the derivatives, the average distortion at the inverse temperature $\upbeta$ is given by \begin{align} \mathsf{D}^{\mathsf{R}}(\upbeta;\boldsymbol{\mu}) = \mathsf{E}\hspace{.5mm} \int \frac{ \sum_{v} \mathsf{d}(v;x) e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right]} }{\sum_{v} e^{- \upbeta \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v) \right]} } \prod_{\kappa=1}^b \tilde{\Lambda}^\upbeta_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0.
\label{eq:d-13} \end{align} Finally, by taking the limit $\upbeta \uparrow \infty$, we find the asymptotic distortion as \begin{align} \mathsf{D}^{\mathbbmss{W}}&= \mathsf{E}\hspace{.5mm} \int \mathsf{d}(\mathrm{g};x) \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0 \label{eq:d-14} \end{align} where $\mathrm{g}$ is defined as \begin{align} \mathrm{g} \coloneqq \arg \min_{v} \left[ e (x-v)^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-v) + u(v)\right], \label{eq:d-15} \end{align} and $\tilde{\Lambda}_\kappa$ denotes the limiting factor $\tilde{\Lambda}_\kappa^\infty$. Considering the definition of $\tilde{\Lambda}_\kappa^\upbeta$, $\tilde{\Lambda}_\kappa$ reads $\tilde{\Lambda}_\kappa = \left[ \int \Lambda_\kappa \mathrm{D} z_\kappa\right]^{-1} \Lambda_\kappa$ with \begin{align} \Lambda_{1} \coloneqq e^{-\mu_1 \left[ e (x-\mathrm{g})^2 + (fz_0+\sum\limits_{\kappa=1}^b g_\kappa z_\kappa) (x-\mathrm{g}) + u(\mathrm{g}) \right]} \label{eq:d-16} \end{align} and $\{\Lambda_\kappa\}$ for $\kappa\in[2:b]$ \begin{align} \Lambda_{\kappa} \coloneqq \left[ \int \Lambda_{\kappa-1} \ \mathrm{D} z_{\kappa-1} \right]^{\tfrac{\mu_{\kappa}}{\mu_{\kappa-1}}}. \label{eq:d-17} \end{align} Moreover, the fixed point equations reduce to \begin{subequations} \begin{align} \sum_{\kappa=1}^b p_\kappa + q &= \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x)^2 \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0 \label{eq:d-18a}, \\ \tilde{\chi}_{\kappa-1}+\mu_\kappa \left(\sum_{\varsigma=\kappa}^b p_\varsigma + q \right) &= \frac{1}{g_\kappa} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z _\kappa \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0 , \label{eq:d-18b} \\ \tilde{\chi}_{b} &= \frac{1}{f} \mathsf{E}\hspace{.5mm} \int (\mathrm{g}-x) z _0 \prod_{\kappa=1}^b \tilde{\Lambda}_\kappa \ \mathrm{D} z_\kappa \mathrm{D} z_0, \label{eq:d-18c} \end{align} \end{subequations} for $\kappa \in [1:b]$. 
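To make \eqref{eq:d-15} concrete, consider the illustrative choice of a quadratic regularizer $u(v)=\lambda_{\mathrm{r}} v^2$ with $v$ ranging over $\mathbbmss{R}$ (a hypothetical special case chosen only for illustration; the derivation above allows a general $u$ and discrete supports). Writing $h \coloneqq fz_0+\sum_{\kappa=1}^b g_\kappa z_\kappa$, the minimizer then admits a closed form:

```latex
% Stationarity of the bracket in (d-15) for the illustrative choice
% u(v) = \lambda_r v^2 over v in R (hypothetical special case):
\begin{align*}
0 &= \frac{\mathrm{d}}{\mathrm{d} v}
     \left[ e (x-v)^2 + h (x-v) + \lambda_{\mathrm{r}} v^2 \right]
   = -2e(x-v) - h + 2\lambda_{\mathrm{r}} v , \\
\mathrm{g} &= \frac{2ex + h}{2\left(e+\lambda_{\mathrm{r}}\right)} .
\end{align*}
```

That is, $\mathrm{g}$ is a linear shrinkage of $x$ perturbed by the Gaussian fields; for a discrete support, $\mathrm{g}$ is instead found by enumerating $v$.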
As in the 1\ac{rsb} ansatz, we set $\boldsymbol{\mu}$ to be the extreme point of the free energy at a given inverse temperature $\upbeta$, in order to satisfy the second law of thermodynamics. The solution needs to be found over the set of non-negative real vectors which satisfy the constraint in \eqref{eq:d-2.1}. The parameters of the ansatz, however, are finally taken such that the zero temperature free energy is minimized. Using Proposition \ref{proposition:1}, the free energy of the corresponding spin glass for a given vector $\boldsymbol{\mu}$ is written as \begin{align} \mathsf{F}(\upbeta;\boldsymbol{\mu})=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega;\boldsymbol{\mu}) \mathrm{d} \omega - \mathrm{F}^{\upbeta} (1;\boldsymbol{\mu}) \right] +\mathsf{F}^{\mathsf{R}}(\upbeta;\boldsymbol{\mu}) \label{eq:d-19} \end{align} where the function $\mathrm{F}^{\upbeta}(\cdot;\boldsymbol{\mu})$ is defined as \begin{align} \mathrm{F}^{\upbeta}(\omega;\boldsymbol{\mu}) = \sum\limits_{\kappa=1}^b \frac{1}{\mu_\kappa} \frac{\mathrm{d}}{\mathrm{d} \omega} \int_{\tilde{\chi}_{\kappa-1} \omega}^{\tilde{\chi}_{\kappa} \omega} \mathrm{R}_{\mathbf{J}}(-\frac{t}{\lambda} ) \mathrm{d} t + \frac{\chi}{\upbeta} \mathrm{R}_{\mathbf{J}}(-\frac{\chi}{\lambda} \omega) + \left[q- \frac{\lambda_0}{\lambda} \tilde{\chi}_b \right] \frac{\mathrm{d}}{\mathrm{d} \omega} \left[ \omega \mathrm{R}_{\mathbf{J}}(-\frac{\tilde{\chi}_b}{\lambda} \omega) \right]. \label{eq:d-20} \end{align} Therefore, the vector $\boldsymbol{\mu}^\star$, for a given $\upbeta$, is set as \begin{align} \boldsymbol{\mu}^\star=\arg\min_{\boldsymbol{\mu}} \mathsf{F}(\upbeta;\boldsymbol{\mu}) \end{align} with $\boldsymbol{\mu}\in\mathbbmss{S}_{\boldsymbol{\mu}}$ where $\mathbbmss{S}_{\boldsymbol{\mu}}$ is the set of non-negative real vectors satisfying the constraint in \eqref{eq:d-2.1}. 
By substituting \eqref{eq:d-3.1a}-\eqref{eq:d-3.1c} in \eqref{eq:d-20}, $\boldsymbol{\mu}^\star$ reduces to \begin{align} \boldsymbol{\mu}^\star=\arg\min_{\boldsymbol{\mu}} \left\lbrace \frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\upbeta}(\omega;\boldsymbol{\mu}) \mathrm{d} \omega \right]+\mathsf{F}^{\mathsf{R}}(\upbeta;\boldsymbol{\mu}) - e \Delta(\boldsymbol{\mu}) \right\rbrace \label{eq:d-19.1} \end{align} with $\boldsymbol{\mu}\in\mathbbmss{S}_{\boldsymbol{\mu}}$ where $\Delta(\cdot)$ reads \begin{align} \Delta(\boldsymbol{\mu})\coloneqq \frac{1}{e} \left\lbrace\sum\limits_{\kappa=1}^b \frac{1}{\mu_\kappa} \left[ \tilde{e}_\kappa \tilde{\chi}_\kappa - \tilde{e}_{\kappa-1} \tilde{\chi}_{\kappa-1} \right] + \left[\frac{\tilde{e}_0 \tilde{\chi}_0}{\upbeta}+ \tilde{e}_b q - \frac{f^2}{2} \tilde{\chi}_b \right] \right\rbrace \end{align} with $\tilde{e}_0 \coloneqq e$ and \begin{align} \tilde{e}_\kappa \coloneqq e-\sum_{\varsigma=1}^{\kappa} \mu_\varsigma \frac{g^2_\varsigma}{2} \label{eq:d-21} \end{align} for $\kappa\in[1:b]$. The vector $\boldsymbol{\mu}^\star$ is then determined such that it minimizes the free energy. Finally, by taking the limit $\upbeta\uparrow\infty$, the zero temperature free energy is evaluated as \begin{align} \mathsf{F}^0=\frac{1}{2\lambda} \left[ \int_0^1 \mathrm{F}^{\infty}(\omega) \mathrm{d} \omega - \mathrm{F}^{\infty} (1) \right] - \frac{1}{\mu_b} \mathsf{E}\hspace{.5mm} \int \log \left[\int \Lambda_b \mathrm{D} z_b\right] \ \mathrm{D} z_0 \label{eq:d-22} \end{align} where we define \begin{align} \mathrm{F}^{\infty}(\omega)\coloneqq \lim_{\upbeta\uparrow\infty}\mathrm{F}^{\upbeta}(\omega;\boldsymbol{\mu}^\star).
\label{eq:d-23} \end{align} Denoting $\lambda^{\mathsf{s}}\coloneqq \left[2e\right]^{-1}$, $\lambda^{\mathsf{s}}_0\coloneqq \left[4e^2\right]^{-1} f^2$ and $\lambda^{\mathsf{s}}_\kappa\coloneqq \left[4e^2\right]^{-1} g_\kappa^2$ for $\kappa\in[1:b]$, and defining the sequence $\{ \zeta_\kappa \}$ such that $\zeta_0=1$ and \begin{align} \zeta_\kappa \coloneqq 1-\sum_{\varsigma=1}^\kappa \mu_{\varsigma} \frac{\lambda^{\mathsf{s}}_\varsigma}{\lambda^{\mathsf{s}}} \end{align} for $\kappa\in[1:b]$, Proposition \ref{proposition:7} is concluded. \newpage \section{General \ac{rsb} Frequency Domain Correlation Matrix} \label{app:e} Consider the spin glass of replicas defined in Definition \ref{def:replica_spin}; the Hamiltonian reads \begin{align} \mathcal{E}^\mathsf{R}(\mathbf{v}|\mathbf{x})= (\mathbf{x}-\mathbf{v})^{\mathsf{T}} \mathbf{R} (\mathbf{x}-\mathbf{v}) + u(\mathbf{v}) \label{eq:e-1} \end{align} where $\mathbf{R} \coloneqq \mathbf{T} \mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q})$ is referred to as the ``frequency domain correlation matrix''. In this appendix, we show that under the general \ac{rsb} assumption on $\mathbf{Q}$, including the \ac{rs} case, the frequency domain correlation matrix has the same structure with different scalar coefficients. To show that, let the correlation matrix be of the form \begin{align} \mathbf{Q}= q_0 \mathbf{I}_m + \sum_{i=1}^{b} q_i \mathbf{I}_{\frac{m}{\xi_i}} \otimes \mathbf{1}_{\xi_i} + q_{b+1} \mathbf{1}_m \label{eq:e-2} \end{align} for some integer $b$ where $q_0,q_{b+1} \neq 0$. Equation \eqref{eq:e-2} represents the $b$\ac{rsb} as well as \ac{rs} structures by setting the coefficients correspondingly. Considering $\mathbf{T}$ as defined in \eqref{eq:rep-5}, $\mathbf{T}\mathbf{Q}$ is then written as \begin{align} \mathbf{T}\mathbf{Q}=\frac{1}{2\lambda}\left[ \mathbf{Q}-\frac{\upbeta \lambda_0}{\lambda + m\upbeta \lambda_0}\mathbf{1}_m\mathbf{Q} \right].
\label{eq:e-3} \end{align} Defining the vector ${\boldsymbol{u}}_{m\times 1}$ as a vector with all entries equal to $1$, it is clear that ${\boldsymbol{u}}$ is an eigenvector of $\mathbf{Q}$, and therefore, by denoting the eigendecomposition of $\mathbf{Q}$ as $\mathbf{V}\mathbf{D}^{\mathrm{Q}}\mathbf{V}^\mathsf{T}$, $\mathbf{1}_m$ reads \begin{align} \mathbf{1}_m={\boldsymbol{u}}\bu^\mathsf{T}=\mathbf{V}\mathbf{D}^{1}\mathbf{V}^\mathsf{T} \label{eq:e-4} \end{align} where $\mathbf{D}^1$ is a diagonal matrix in which all the diagonal entries except the entry corresponding to the eigenvector ${\boldsymbol{u}}$ are zero. Consequently, \eqref{eq:e-3} reduces to \begin{align} \mathbf{T}\mathbf{Q}= \frac{1}{2\lambda} \mathbf{V} \left[ \mathbf{D}^{\mathrm{Q}}-\frac{\upbeta \lambda_0}{\lambda + m\upbeta \lambda_0}\mathbf{D}^{1} \mathbf{D}^{\mathrm{Q}} \right] \mathbf{V}^\mathsf{T} \label{eq:e-5} \end{align} which states that $\mathbf{T}\mathbf{Q}$ and $\mathbf{Q}$ span the same eigenspace. The eigenvalues of $\mathbf{T}\mathbf{Q}$ and $\mathbf{Q}$ are also distributed with the same multiplicities. In fact, as the eigenvalue corresponding to ${\boldsymbol{u}}$ occurs with multiplicity $1$, the second term on the right hand side of \eqref{eq:e-3} does not change the distribution of eigenvalues and only modifies the eigenvalue corresponding to ${\boldsymbol{u}}$. Therefore, $\mathbf{T}\mathbf{Q}$ can also be represented as in \eqref{eq:e-2} with different scalar coefficients. To extend the scope of the analysis to $\mathbf{R}$, we note that the function $\mathrm{R}_{\mathbf{J}}(\cdot)$ is strictly increasing for any $\mathrm{F}_{\mathbf{J}}$ different from the single mass point \ac{cdf}\footnote{In the single mass point \ac{cdf}, we have $\mathrm{F}_{\mathbf{J}}(\lambda)=\mathbf{1}\{\lambda \geq \mathsf{K}\}$ for some real constant $\mathsf{K}$.}\cite{zaidel2012vector}.
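The structure-preservation step can also be verified numerically. The sketch below uses placeholder values for $\lambda$, $\lambda_0$, $\upbeta$ and a single breaking step $b=1$ (all assumed for illustration, not taken from the text): it builds a 1RSB-structured $\mathbf{Q}$, applies the map in \eqref{eq:e-3}, and confirms that the result is again of the form \eqref{eq:e-2}.

```python
import numpy as np

# Assumed placeholder values for illustration (not from the text).
m, xi = 8, 4                 # replica count and inner-block size (xi divides m)
lam, lam0, beta = 0.7, 0.3, 2.0
q0, q1, q2 = 1.5, 0.4, 0.2   # nonzero coefficients of the 1RSB structure

I = np.eye(m)
J_xi = np.kron(np.eye(m // xi), np.ones((xi, xi)))  # I_{m/xi} (x) 1_xi
J_m = np.ones((m, m))                               # 1_m

Q = q0 * I + q1 * J_xi + q2 * J_m

# T Q = (1/2lam) [ Q - beta*lam0/(lam + m*beta*lam0) * 1_m Q ]   (eq. e-3)
c = beta * lam0 / (lam + m * beta * lam0)
TQ = (Q - c * J_m @ Q) / (2 * lam)

# Since the all-ones vector is an eigenvector of Q with eigenvalue
# q0 + xi*q1 + m*q2, we have 1_m Q = (q0 + xi*q1 + m*q2) 1_m, so T Q
# keeps the same three-component structure with modified coefficients:
r0 = q0 / (2 * lam)
r1 = q1 / (2 * lam)
r2 = (q2 - c * (q0 + xi * q1 + m * q2)) / (2 * lam)
reconstruction = r0 * I + r1 * J_xi + r2 * J_m

print("T Q retains the RSB structure:", np.allclose(TQ, reconstruction))
```

The coefficients follow because only the eigenvalue along the all-ones direction is modified, exactly as in the eigendecomposition argument above.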
Consequently, the eigenvalues' distribution remains unchanged, and thus, \begin{align} \mathbf{R}= r_0 \mathbf{I}_m + \sum_{i=1}^{b} r_i \mathbf{I}_{\frac{m}{\xi_i}} \otimes \mathbf{1}_{\xi_i} + r_{b+1} \mathbf{1}_m \label{eq:e-6} \end{align} for some real $\left\lbrace r_i \right\rbrace$. In the case that $\mathrm{F}_{\mathbf{J}}$ is the single mass point \ac{cdf}, the $\mathrm{R}$-transform becomes a constant function which results in $\mathrm{R}_{\mathbf{J}}(-2 \upbeta \mathbf{T} \mathbf{Q})=\mathsf{K}\mathbf{I}_m$ for some constant $\mathsf{K}$. Therefore, $\mathbf{R}=\mathsf{K} \mathbf{T}$ which is again represented as in \eqref{eq:e-6} by setting $r_i = 0$ for $i \in [1:b]$. This shows that $\mathbf{R}$ has the same structure as $\mathbf{Q}$ for any $\mathrm{F}_{\mathbf{J}}$. \newpage \section{Asymptotics of Spherical Integral} \label{app:f} Consider $\mu_{n}^{\zeta}$ to be the Haar measure on the orthogonal group $\mathbbmss{O}_n$ for $\zeta=1$, and on the unitary group $\mathbbmss{U}_n$ for $\zeta=2$. Let $\mathbf{G}_n$ and $\mathbf{D}_n$ be $n\times n$ matrices; then, the integral of the form \begin{align} \mathrm{I}_n^{\zeta}(\mathbf{G}_n, \mathbf{D}_n) \coloneqq \int e^{n \tr{\mathbf{U} \mathbf{G}_n \mathbf{U}^{\dagger}\mathbf{D}_n}} \mathrm{d} \mu_{n}^{\zeta}(\mathbf{U}), \label{eq:f-1} \end{align}is known as the ``spherical integral''. This integral has been extensively studied in the mathematics literature, as well as in physics, where it is often called the ``Harish-Chandra'' or ``Itzykson \& Zuber'' integral. In a variety of problems, such as ours, the evaluation of spherical integrals in the asymptotic regime is of interest, and therefore, their asymptotics have been investigated in several studies.
In \cite{guionnet2002large}, the asymptotics of the integral have been investigated for matrices $\mathbf{G}_n$ and $\mathbf{D}_n$ with $n$ distinct eigenvalues and converging spectra, and under some assumptions, a closed-form formula has been given; however, the final formula in \cite{guionnet2002large} is complicated and hard to employ. In \cite{guionnet2005fourier}, the authors showed that, for a low-rank $\mathbf{G}_n$, the asymptotics of the integral can be written directly in terms of the $\mathrm{R}$-transform corresponding to the asymptotic eigenvalue distribution of $\mathbf{D}_n$. For the replica analysis, we can utilize the result from \cite{guionnet2005fourier}, since the number of replicas can be taken small enough. In \cite{guionnet2005fourier}, Theorem 1.2, it is shown that when $\mathbf{G}_n$ is a rank-one matrix, under the assumption that the spectrum of $\mathbf{D}_n$ asymptotically converges to a deterministic \ac{cdf} $\mathrm{F}_{\mathbf{D}}$ with compact support of finite length, the asymptotics of the integral can be written in terms of the $\mathrm{R}$-transform $\mathrm{R}_{\mathbf{D}}(\cdot)$ as \begin{align} \lim_{n \uparrow \infty} \frac{1}{n} \log \mathrm{I}_n^{\zeta}(\mathbf{G}_n, \mathbf{D}_n)= \int_{0}^{\uptheta} \mathrm{R}_{\mathbf{D}}(\frac{2 \omega}{\zeta}) \mathrm{d} \omega, \label{eq:f-2} \end{align}in which $\uptheta$ denotes the single nonzero eigenvalue of $\mathbf{G}_n$.
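As a concrete instance of \eqref{eq:f-2}, suppose the spectrum of $\mathbf{D}_n$ converges to a semicircle law of variance $\sigma^2$, whose $\mathrm{R}$-transform is $\mathrm{R}_{\mathbf{D}}(\omega)=\sigma^2\omega$ (a standard free-probability example, not a distribution used elsewhere in this work). For the orthogonal case $\zeta=1$, the limit evaluates in closed form:

```python
import sympy as sp

# Evaluate the right-hand side of (f-2) for the semicircle example
# R_D(w) = sigma^2 * w with zeta = 1 (orthogonal group). Symbols are
# illustrative; sigma^2 is the variance of the semicircle law.
w, theta, sigma = sp.symbols("omega theta sigma", positive=True)
R_D = sigma**2 * w

limit_value = sp.integrate(R_D.subs(w, 2 * w), (w, 0, theta))
print(limit_value)  # equals sigma**2 * theta**2
```

Hence a rank-one $\mathbf{G}_n$ with eigenvalue $\uptheta$ contributes $\sigma^2\uptheta^2$ to the exponential rate in this example.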
The authors further showed in Theorem 1.7 that in the case of $\mathrm{rank}(\mathbf{G}_n)=\mathcal{O}(\sqrt{n})$, under the same assumption as in Theorem 1.2, the spherical integral asymptotically factorizes into a product of rank-one integrals, and therefore, \begin{align} \lim_{n \uparrow \infty} \frac{1}{n} \log \mathrm{I}_n^{\zeta}(\mathbf{G}_n, \mathbf{D}_n)= \sum_{i=1}^m \int_{0}^{\uptheta_i} \mathrm{R}_{\mathbf{D}}(\frac{2 \omega}{\zeta}) \mathrm{d} \omega, \label{eq:f-3} \end{align}with $\{ \uptheta_i \}$ denoting the nonzero eigenvalues of $\mathbf{G}_n$ for $i\in[1:m]$, and $m=\mathrm{rank}(\mathbf{G}_n)$. In Appendix \ref{app:a}, one can employ \eqref{eq:f-3} in order to evaluate the asymptotics over the system matrix consistent with the system setup illustrated in Section \ref{sec:problem_formulation}. Moreover, by using the above discussion, the investigations in Appendix \ref{app:a} can be extended to the case of complex variables. More about the spherical integral and its asymptotics can be found in \cite{guionnet2005fourier} and the references therein.
\label{list:acronyms} \begin{acronym} \acro{iid}[i.i.d.]{independent and identically distributed} \acro{pmf}[PMF]{Probability Mass Function} \acro{cdf}[CDF]{Cumulative Distribution Function} \acro{pdf}[PDF]{Probability Density Function} \acro{rs}[RS]{Replica Symmetry} \acro{1rsb}[1RSB]{One-Step Replica Symmetry Breaking} \acro{brsb}[$b$RSB]{$b$-Steps Replica Symmetry Breaking} \acro{rsb}[RSB]{Replica Symmetry Breaking} \acro{mse}[MSE]{Mean Square Error} \acro{mmse}[MMSE]{Minimum Mean Square Error} \acro{map}[MAP]{Maximum-A-Posteriori} \acro{rhs}[r.h.s.]{right hand side} \acro{lhs}[l.h.s.]{left hand side} \acro{wrt}[w.r.t.]{with respect to} \acro{lln}[LLN]{Law of Large Numbers} \acro{mpm}[MPM]{Marginal-Posterior-Mode} \acro{mimo}[MIMO]{Multiple-Input Multiple-Output} \acro{awgn}[AWGN]{Additive White Gaussian Noise} \acro{cdma}[CDMA]{Code Division Multiple Access} \acro{amp}[AMP]{Approximate Message Passing} \end{acronym} \newpage \bibliographystyle{IEEEtran}
\section{Introduction} During the last decades, colliders have provided most of our knowledge on the fundamental constituents of matter and their interactions. Particle colliders can be classified according to center-of-mass energy, colliding beams and collider types: \begin{itemize} \item Collider types: ring-ring, linac-linac and linac-ring, \item Center-of-mass energy: energy frontiers and particle factories, \item Colliding beams: hadron, lepton, photon, lepton-hadron and photon-hadron colliders. \end{itemize} The ring-ring colliders are the most advanced from the technology viewpoint and are widely used around the world. As for the linac-linac colliders, essential experience has been gained from SLC (Stanford Linear Collider \cite{key-1} with $\sqrt{s}=0.1$ TeV) operation and ILC/CLIC (International Linear Collider project \cite{key-2} with $\sqrt{s}=0.5-1$ TeV / Compact Linear Collider project \cite{key-3} with $\sqrt{s}$ up to 3 TeV) related studies. The linac-ring colliders are less familiar (for the history of linac-ring type collider proposals, see \cite{key-4}).
\begin{table}[b] \caption{Energy frontier colliders: colliding beams vs collider types.} \begin{centering} \begin{tabular}{|c|c|c|c|} \hline Colliders & Ring-Ring & Linac-Linac & Linac-Ring\tabularnewline \hline \hline Hadron & + & & \tabularnewline \hline Lepton ($e^{-}$$e^{+}$) & & + & \tabularnewline \hline Lepton ($\mu^{-}$$\mu^{+}$) & + & & \tabularnewline \hline Lepton-hadron ($eh$) & & & +\tabularnewline \hline Lepton-hadron ($\mu h$) & + & & \tabularnewline \hline Photon-hadron & & & +\tabularnewline \hline \end{tabular} \par\end{centering} \centering{} \end{table} In Table I we present the correlations between colliding beams and collider types for energy frontier colliders, where the symbol \textquotedblleft +\textquotedblright{} indicates that the given type of collider provides the maximal center-of-mass energy for this type of colliding particles (for example, linac-ring type colliders provide the opportunity to achieve the highest center-of-mass energy for $ep$ collisions). Concerning the center-of-mass energy: hadron colliders provide the highest values (for this reason they are considered \textquotedbl{}discovery\textquotedbl{} machines), while lepton colliders have an order of magnitude smaller $E_{CM}$, and lepton-hadron colliders provide intermediate $E_{CM}$. It should be mentioned that the differences in center-of-mass energies become smaller at the partonic level. From the BSM search point of view, lepton-hadron colliders are comparable with hadron colliders and essentially exceed the potential of lepton colliders for many new phenomena (see \cite{key-5} for LHC (Large Hadron Collider \cite{key-6} with $\sqrt{s}=14$ TeV at CERN), CLIC and LEP$\varotimes$LHC (Large Electron Positron Collider \cite{key-7} with $\sqrt{s}=0.1-0.2$ TeV at CERN) comparison and \cite{key-8} for LHC, ILC and ILC$\varotimes$LHC comparison).
Below we list past and future energy frontier colliders for three time periods (Tevatron \cite{key-9} denotes the $\bar{p}p$ collider with $\sqrt{s}=1.98$ TeV at FNAL, HERA \cite{key-10} denotes the $\sqrt{s}=0.3$ TeV $ep$ collider at DESY, low energy $\mu C$ denotes the Muon Collider project \cite{key-11} with $\sqrt{s}=0.126$ TeV, LHeC denotes the $\sqrt{s}=1.3$ TeV $ep$ collider project \cite{key-12}, PWFA-LC denotes the Plasma Wake-Field Accelerator-Linear Collider project \cite{key-13}, high energy $\mu C$ denotes the Muon Collider project \cite{key-11} with $\sqrt{s}$ up to 3 TeV): \begin{itemize} \item Before the LHC (<2010): Tevatron ($\bar{p}p$), SLC/LEP ($e^{-}$$e^{+}$) and HERA ($ep$), \item LHC era (2010-2030): LHC ($pp$, $AA$), ILC ($e^{-}$$e^{+}$), low energy $\mu C$ ($\mu^{-}$$\mu^{+}$), LHeC ($ep$, $eA$) and $\mu$-LHC ($\mu p$, $\mu A$), \item After the LHC (>2030): FCC ($pp$, $AA$), CLIC ($e^{-}$$e^{+}$), PWFA-LC ($e^{-}$$e^{+}$), high energy $\mu C$ ($\mu^{-}$$\mu^{+}$), and FCC based lepton-hadron and photon-hadron colliders, namely, $e$-FCC ($ep$, $eA$) and $\mu$-FCC ($\mu p$, $\mu A$) and $\gamma$-FCC ($\gamma p$, $\gamma A$). \end{itemize} Comparison of contemporary lepton and hadron colliders shows that hadron colliders have much higher center-of-mass energies even at the partonic level. Therefore, the latter give the opportunity to search for heavier new particles and/or probe smaller distances. This is why they are called ``discovery'' machines. It is known that lepton-hadron scattering has played a crucial role in our understanding of the deep structure of matter. For example, electron scattering on atomic nuclei revealed the structure of nucleons in the Hofstadter experiment \cite{key-14}. Moreover, the quark parton model originated from lepton-hadron collisions at SLAC \cite{key-15}.
Extending the kinematic region by two orders of magnitude both in high $Q^{2}$ and small $x$, HERA (the first and still unique lepton-hadron collider) with $\sqrt{s}=0.32$ TeV has shown its superiority compared to the fixed target experiments and provided parton distribution functions (PDF) for LHC and Tevatron experiments. Unfortunately, the region of sufficiently small $x$ ($<10^{-6}$) and high $Q^{2}$ ($\geq10\,GeV^{2}$), where saturation of parton densities should manifest itself, has not been reached yet. Hopefully, LHeC \cite{key-12} with $\sqrt{s}=1.3$ TeV will give the opportunity to investigate this region. Construction of linear $e^{+}e^{-}$ colliders (or a special linac) and muon colliders (or a special muon ring) tangential to the future circular collider (FCC), as shown in Fig. 1, will give the opportunity to achieve the highest center-of-mass energies in lepton-hadron and photon-hadron collisions \cite{key-16,key-17}. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Fig1} \par\end{centering} \centering{}\caption{Possible configuration for FCC, linear collider (LC) and muon collider (\textmu C). } \end{figure} FCC is the future 100 TeV center-of-mass energy $pp$ collider studied at CERN and supported by the European Union within the Horizon 2020 Framework Programme for Research and Innovation \cite{key-18}. Main parameters of the FCC $pp$ option \cite{key-19} are presented in Table II. The FCC also includes an electron-positron collider option in the same tunnel (TLEP) \cite{key-20}, as well as several $ep$ collider options \cite{key-21}. \begin{table}[H] \caption{Main parameters of proton beams in FCC.} \centering{}% \begin{tabular}{|c|c|} \hline Beam Energy (TeV) & 50\tabularnewline \hline Peak Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & 5.6\tabularnewline \hline Particle per Bunch ($10^{10}$) & 10\tabularnewline \hline Norm.
Transverse Emittance ($\mu m$) & 2.2\tabularnewline \hline \textgreek{b}{*} amplitude function at IP (m) & 1.1\tabularnewline \hline IP beam size ($\mu m$) & 6.8\tabularnewline \hline Bunches per Beam & 10600\tabularnewline \hline Bunch Spacing (ns) & 25\tabularnewline \hline Bunch length (mm) & 80\tabularnewline \hline Beam-beam parameter, $\xi_{pp}$ & $5.6\times10^{-3}$\tabularnewline \hline \end{tabular} \end{table} An energy recovery linac (ERL) with $E_{e}=60\,GeV$ is chosen as the main option for LHeC. The same ERL can also be used for the FCC based $ep$ collider \cite{key-21}. Concerning the $e$-ring in the FCC tunnel \cite{key-21}, the energy of electrons is limited ($E_{e}<200\,GeV$) due to large synchrotron radiation (the synchrotron radiation power is proportional to the fourth power of energy and inversely proportional to the square of the ring radius and to the fourth power of the particle mass). Higher electron energies can be handled only by constructing linear colliders (or a special linac) tangential to the FCC. This approach was first proposed for UNK$\varotimes$VLEPP based $ep$ colliders \cite{key-22} (UNK denotes the $pp$ collider project with $\sqrt{s}=6$ TeV at IHEP, VLEPP denotes the multi-hundred GeV $e^{+}e^{-}$ collider at BINP). Then, construction of TESLA tangential to HERA (the THERA project) was considered \cite{key-23}. This line was followed by the LC$\varotimes$LHC $ep$ collider proposals (see reviews \cite{key-24,key-25,key-26} and references therein). In this paper, we consider the main parameters of the FCC based lepton-hadron ($lp$, $lA$) and photon-hadron ($\gamma p$, $\gamma A$) colliders, especially LC$\varotimes$FCC based $ep$ collider schemes. In Section II, we estimate the luminosity of FCC based $ep$ colliders taking into account beam-beam tune shift and disruption effects.
In numerical calculations, we utilize the main parameters of the ILC (International Linear Collider) \cite{key-2} and PWFA-LC (Plasma Wake Field Accelerator - Linear Collider) \cite{key-13} using a simulation program under development for lepton-hadron colliders. Possible other options, namely $eA$, $\mu p/\mu A$ and $\gamma p/\gamma A$, are briefly discussed in Section III. In Section IV, conclusions and recommendations are presented after a comparison of the LC, FCC-$pp$ and LC$\varotimes$FCC colliders' potentials for color octet electron search. \section{LC$\varotimes$FCC Based $ep$ Colliders} \begin{singlespace} The general expression for the luminosity of FCC based $lh$ colliders is given by ($l$ denotes electron or muon, $h$ denotes proton or nucleus): \begin{eqnarray} L_{lh} & = & \frac{N_{l}N_{h}}{4\pi max[\sigma_{x_{h}},\sigma_{x_{l}}]max[\sigma_{y_{h}},\sigma_{y_{l}}]}min[f_{c_{h}},\,f_{c_{l}}]\label{eq:Denklem1} \end{eqnarray} where $N_{l}$ and $N_{h}$ are the numbers of leptons and hadrons per bunch, respectively; $\sigma_{x_{h}}$ ($\sigma_{x_{l}}$) and $\sigma_{y_{h}}$ ($\sigma_{y_{l}}$) are the horizontal and vertical hadron (lepton) beam sizes at IP; $f_{c_{l}}$ and $f_{c_{h}}$ are the LC and FCC bunch frequencies. $f_{c}$ is expressed by $f_{c}=N_{b}f_{rep}$, where $N_{b}$ denotes the number of bunches, and $f_{rep}$ means the revolution frequency for FCC and the pulse frequency for LC. In order to determine the collision frequency of the $lh$ collider, the minimum of the lepton and hadron bunch frequencies should be chosen. Some of these parameters can be rearranged in order to maximize $L_{lh}$, but one should note that there are some main limitations that should be considered. One of these limitations is the lepton beam power; however, only the parameters of the FCC hadron beam are rearranged in this study and only the nominal parameters of the linear colliders are considered. Therefore, the electron beam power does not change due to upgrades.
Other limitations for linac-ring type $lh$ colliders are due to beam-beam effects. In general, better focusing is needed to obtain high luminosity values at the interaction point (IP). However, although an intensely focused beam of charged particles with large Lorentz factor ($\gamma\gg1$) does not have a strong influence on its own particles, since the Lorentz forces cancel one another (space charge effects diminish with $1/\gamma^{2}$), this situation does not hold for the encountered beam. Deflection of particles under this electromagnetic interaction is called disruption. When this interaction causes an angular kick in the opposite beam's particles, it is called beam-beam tune shift. While the beam-beam tune shift affects hadron (proton, ion) and muon beams, disruption has influence on electron beams. \end{singlespace} The disruption parameter for the electron beam is given by: \begin{subequations} \begin{eqnarray} D_{x_{e}} & = & \frac{2\,Z_{h}N_{h}r_{e}\sigma_{z_{h}}}{\gamma_{e}\sigma_{x_{h}}(\sigma_{x_{h}}+\sigma_{y_{h}})}\label{eq:Denklem2} \end{eqnarray} $\,$ \begin{equation} D_{y_{e}}=\frac{2\,Z_{h}N_{h}r_{e}\sigma_{z_{h}}}{\gamma_{e}\sigma_{y_{h}}(\sigma_{y_{h}}+\sigma_{x_{h}})} \end{equation} \end{subequations} \noindent where $r_{e}=2.82\times10^{-15}\,m$ is the classical electron radius, $\gamma_{e}$ is the Lorentz factor of the electron beam, and $\sigma_{x_{h}}$ and $\sigma_{y_{h}}$ are the horizontal and vertical hadron beam sizes at IP, respectively. $\sigma_{z_{h}}$ is the bunch length of the hadron beam. $Z_{h}$ denotes the atomic number for ions (for electron-proton collisions $Z_{h}=1$).
The beam-beam parameter for hadron beams is given by: \begin{subequations} \begin{equation} \xi_{x_{h}}=\frac{N_{l}r_{h}\beta_{h}^{*}}{2\pi\gamma_{h}\sigma_{x_{l}}(\sigma_{x_{l}}+\sigma_{y_{l}})}\label{eq:Denklem3} \end{equation} $ $ \begin{equation} \xi_{y_{h}}=\frac{N_{l}r_{h}\beta_{h}^{*}}{2\pi\gamma_{h}\sigma_{y_{l}}(\sigma_{y_{l}}+\sigma_{x_{l}})} \end{equation} \end{subequations} where $r_{h}$ is the radius of the hadron (for the proton it is the classical radius, $r_{p}=1.54\times10^{-18}\,m$), $\beta_{h}^{*}$ is the beta function of the hadron beam at the interaction point (IP), and $\gamma_{h}$ is the Lorentz factor of the hadron beam. $\sigma_{x_{l}}$ and $\sigma_{y_{l}}$ are the horizontal and vertical sizes of the lepton beam at IP, respectively. Considering the ILC$\varotimes$FCC and PWFA-LC$\varotimes$FCC options, one should note that the bunch spacing of the electron accelerators is always greater than that of FCC, while the proton beam sizes are always greater than the electron beam sizes at IP. Details and parameters of the electron beam accelerators are given in the following subsections. In numerical calculations, we use transversely matched electron and proton beams at IP. Keeping in mind the roundness of the FCC proton beam, Eqs. (1)-(3) turn into: $\,$ \begin{equation} L_{ep}=\frac{N_{e}N_{p}}{4\pi\sigma_{p}^{2}}f_{c_{e}}\label{eq:Denklem4} \end{equation} \begin{equation} \xi_{p}=\frac{N_{e}r_{p}\beta_{p}^{*}}{4\pi\gamma_{p}\sigma_{p}^{2}}\label{eq:Denklem5} \end{equation} \begin{equation} D_{e}=\frac{N_{p}r_{e}\sigma_{z_{p}}}{\gamma_{e}\sigma_{p}^{2}}\label{eq:Denklem6} \end{equation} $\,$ In order to increase the luminosity of $ep$ collisions, LHeC-like upgrades of the FCC proton beam parameters have been used. Namely, the number of protons per bunch is increased 2.2 times ($2.2\times10^{11}$ instead of $10^{11}$), and the $\beta$-function of the proton beam at IP is arranged to be 11 times lower (0.1 m instead of 1.1 m), which corresponds to the THERA \cite{key-23} and LHeC \cite{key-12} designs.
Therefore, the IP beam size of the proton beam, $\sigma_{p}$, is decreased $\sim$3.3 times according to the relation $\sigma_{p}=\sqrt{\varepsilon_{p}^{N}\beta_{p}^{*}/\gamma_{p}}$. Details of the parameter calculations for the ILC$\varotimes$FCC and PWFA-LC$\varotimes$FCC $ep$ colliders are given in subsections II.A and II.B, respectively. Numerical calculations have been performed using a new simulation software for $ep$ colliders which is currently being developed. Details are given in subsection II.C. \subsection{ILC$\varotimes$FCC} Main parameters of the ILC electron beam are given in Table III. One can see from the table that the bunch spacing of ILC is 554 ns, which is about 22 times greater than the FCC bunch spacing of 25 ns. Therefore, most of the proton bunches circulating in FCC would not participate in $ep$ collisions unless the parameters of FCC (especially the bunch spacing) are rearranged. For FCC, the parameter $N_{p}$ can be increased while the number of bunches is decreased regarding the dissipation. The transverse beam size of the proton is much greater than that of the electron for ILC$\varotimes$FCC. If the beam sizes are matched, this leads $L_{ep}$ to decrease, since the luminosity is inversely proportional to $\sigma_{p}^{2}$ as can be seen from Eq. (\ref{eq:Denklem4}). To increase the luminosity, the upgraded value of the $\beta_{p}^{*}$ parameter is set to be 0.1 m and therefore $\sigma_{p}$ to be 2.05 $\mu m$. Calculated values of the $L_{ep}$, $D_{e}$ and $\xi_{p}$ parameters for ILC$\varotimes$FCC based $ep$ colliders with both the nominal and upgraded FCC proton beam cases are given in Table IV. In addition, in Table V, the disruption parameter is fixed at the limit value of $D_{e}=25$ (for motivation see \cite{key-272727,key-282828}) and the corresponding $N_{p}$ and $L_{ep}$ values are given.
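Equations \eqref{eq:Denklem4}-\eqref{eq:Denklem6} can be cross-checked against the quoted numbers. The sketch below evaluates $L_{ep}$, $D_{e}$ and $\xi_{p}$ for the ILC 250 GeV electron beam colliding with the nominal FCC proton beam, using only the parameters of Tables II and III (the electron and proton rest masses inserted to compute the Lorentz factors are standard values, not taken from the text); the output reproduces the nominal-FCC row of Table IV.

```python
import math

# ILC 250 GeV electron beam (Table III) and nominal FCC proton beam (Table II)
N_e      = 2.00e10            # electrons per bunch
N_b_e    = 1312               # bunches per beam
f_rep_e  = 5.0                # pulse frequency, Hz
E_e      = 250.0              # electron beam energy, GeV
N_p      = 1.0e11             # protons per bunch
E_p      = 50.0e3             # proton beam energy, GeV
beta_p   = 1.1                # proton beta* at IP, m
sigma_p  = 6.8e-6             # proton IP beam size, m
sigma_zp = 80e-3              # proton bunch length, m

r_e = 2.82e-15                # classical electron radius, m
r_p = 1.54e-18                # classical proton radius, m
gamma_e = E_e / 0.511e-3      # standard rest masses in GeV (assumption)
gamma_p = E_p / 0.938

f_c = N_b_e * f_rep_e         # electron bunch frequency (the smaller one)

# Eq. (4): luminosity, converted from m^-2 to cm^-2 (1 m^2 = 1e4 cm^2)
L_ep = N_e * N_p / (4 * math.pi * sigma_p**2) * f_c / 1e4
# Eq. (5): proton beam-beam tune shift
xi_p = N_e * r_p * beta_p / (4 * math.pi * gamma_p * sigma_p**2)
# Eq. (6): electron disruption
D_e = N_p * r_e * sigma_zp / (gamma_e * sigma_p**2)

print(f"L_ep = {L_ep:.2e} cm^-2 s^-1")  # ~2.26e30, cf. Table IV
print(f"D_e  = {D_e:.2f}")              # ~1.0
print(f"xi_p = {xi_p:.2e}")             # ~1.09e-3
```

Repeating the evaluation with the upgraded values $N_{p}=2.2\times10^{11}$ and $\sigma_{p}=2.05\,\mu m$ recovers the upgraded-FCC row of Table IV as well.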
\begin{table}[b] \caption{Main parameters of electron beams in ILC \cite{key-2}.} \centering{}% \begin{tabular}{|c|c|c|} \hline Beam Energy (GeV) & $250$ & $500$\tabularnewline \hline Peak Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & $1.47$ & $4.90$\tabularnewline \hline Particle per Bunch ($10^{10}$) & $2.00$ & $1.74$\tabularnewline \hline Norm. Horiz. Emittance ($\mu m$) & $10.0$ & $10.0$\tabularnewline \hline Norm. Vert. Emittance (nm) & $35.0$ & $30.0$\tabularnewline \hline Horiz. \textgreek{b}{*} amplitude function at IP (mm) & $11.0$ & $11.0$\tabularnewline \hline Vert. \textgreek{b}{*} amplitude function at IP (mm) & $0.48$ & $0.23$\tabularnewline \hline Horiz. IP beam size (nm) & $474$ & $335$\tabularnewline \hline Vert. IP beam size (nm) & $5.90$ & $2.70$\tabularnewline \hline Bunches per Beam & $1312$ & $2450$\tabularnewline \hline Repetition Rate (Hz) & $5.00$ & $4.00$\tabularnewline \hline Beam Power at IP (MW) & $10.5$ & $27.2$\tabularnewline \hline Bunch Spacing (ns) & $554$ & $366$\tabularnewline \hline Bunch length (mm) & $0.300$ & $0.225$\tabularnewline \hline \end{tabular} \end{table} \begin{table}[h] \caption{Main parameters of ILC$\varotimes$FCC based $ep$ collider.} \noindent \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{Nominal FCC}\tabularnewline \hline $E_{e}(GeV)$ & $\sqrt{s}(TeV)$ & $L_{ep},\,cm^{-2}s^{-1}$ & $D_{e}$ & $\xi_{p}$ \tabularnewline \hline 250 & 7.08 & $2.26\times10^{30}$ & 1.0 & $1.09\times10^{-3}$ \tabularnewline \hline 500 & 10.0 & $2.94\times10^{30}$ & 0.5 & $9.40\times10^{-4}$ \tabularnewline \hline $E_{e}(GeV)$ & $\sqrt{s}(TeV)$ & \multicolumn{3}{c|}{Upgraded FCC}\tabularnewline \hline 250 & 7.08 & $55.0\times10^{30}$ & 24 & $1.09\times10^{-3}$\tabularnewline \hline 500 & 10.0 & $70.0\times10^{30}$ & 12 & $9.40\times10^{-4}$\tabularnewline \hline \end{tabular} \end{table} \begin{table}[H] \caption{Main parameters of ILC$\varotimes$FCC based $ep$ collider corresponding to the disruption limit 
$D_{e}=25$. } \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline $E_{e}(GeV)$ & $\sqrt{s}(TeV)$ & $N_{p}(10^{11})$ & $L_{ep},\,cm^{-2}s^{-1}$ & $\xi_{p}$\tabularnewline \hline 250 & 7.08 & 2.3 & $57\times10^{30}$ & $1.09\times10^{-3}$\tabularnewline \hline 500 & 10.0 & 4.6 & $149\times10^{30}$ & $9.40\times10^{-4}$\tabularnewline \hline \end{tabular} \end{table} \subsection{PWFA-LC$\varotimes$FCC} Beam-driven plasma wakefield technology has made great progress for linear accelerators recently. This method enables an electron beam to reach high accelerating gradients over much shorter distances than radio-frequency resonance based accelerators \cite{key-13}. In other words, more compact linear accelerators can be built utilizing PWFA to obtain a specified beam energy. In Table VI, the main electron beam parameters of the PWFA-LC accelerator are listed. As in the ILC$\varotimes$FCC case, the transverse beam size of the proton beam is greater than that of all PWFA $e$-beam options. The same proton beam upgrade is applied ($N_{p}=2.2\times10^{11}$, $\beta_{p}^{*}=0.1$ m) and the final values of the luminosity, disruption and beam-beam parameters are given in Table VII for both nominal and upgraded FCC proton beam cases. In Table VIII, the disruption parameter is fixed at the limit value of $D_{e}=25$ and the corresponding $ep$ collider parameters are given. \begin{table}[h] \caption{Main parameters of electron beams in PWFA-LC \cite{key-13}.} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline Beam Energy (GeV) & 250 & 500 & 1500 & 5000\tabularnewline \hline Peak Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & 1.25 & 1.88 & 3.76 & 6.27\tabularnewline \hline Particle per Bunch ($10^{10}$) & 1 & 1 & 1 & 1\tabularnewline \hline Norm. Horiz. Emittance ($10^{-5}$ m) & $1.00$ & $1.00$ & $1.00$ & $1.00$\tabularnewline \hline Norm. Vert. Emittance ($10^{-8}$ m) & $3.50$ & $3.50$ & $3.50$ & $3.50$\tabularnewline \hline Horiz.
\textgreek{b}{*} function at IP ($10^{-3}$ m) & $11$ & $11$ & $11$ & $11$\tabularnewline \hline Vert. \textgreek{b}{*} function at IP ($10^{-5}$ m) & $9.9$ & $9.9$ & $9.9$ & $9.9$\tabularnewline \hline Horiz. IP beam size ($10^{-7}$ m) & $4.74$ & $3.36$ & $1.94$ & $1.06$\tabularnewline \hline Vert. IP beam size ($10^{-10}$ m) & $26.7$ & $18.9$ & $10.9$ & $5.98$\tabularnewline \hline Bunches per Beam & 1 & 1 & 1 & 1\tabularnewline \hline Repetition Rate ($10^{3}$ Hz) & 20 & 15 & 10 & 5\tabularnewline \hline Beam Power at IP (MW) & 8 & 12 & 24 & 40\tabularnewline \hline Bunch Spacing ($10^{4}$ ns) & $5.00$ & $6.67$ & $10.0$ & $20.0$\tabularnewline \hline Bunch length ($10^{-5}$ m) & $2.00$ & $2.00$ & $2.00$ & $2.00$\tabularnewline \hline \end{tabular} \end{table} \begin{table}[t] \caption{Main parameters of PWFA-LC$\varotimes$FCC based $ep$ collider.} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{Nominal FCC}\tabularnewline \hline $E_{e}(GeV)$ & $\sqrt{s}(TeV)$ & $L_{ep},\,cm^{-2}s^{-1}$ & $D_{e}$ & $\xi_{p}$ \tabularnewline \hline 250 & 7.08 & $3.44\times10^{30}$ & 1.00 & $5.47\times10^{-4}$ \tabularnewline \hline 500 & 10.0 & $2.58\times10^{30}$ & 0.50 & $5.47\times10^{-4}$ \tabularnewline \hline 1500 & 17.3 & $1.72\times10^{30}$ & 0.17 & $5.47\times10^{-4}$ \tabularnewline \hline 5000 & 31.6 & $0.86\times10^{30}$ & 0.05 & $5.47\times10^{-4}$ \tabularnewline \hline $E_{e}(GeV)$ & $\sqrt{s}(TeV)$ & \multicolumn{3}{c|}{Upgraded FCC}\tabularnewline \hline 250 & 7.08 & $82.6\times10^{30}$ & 24 & $5.47\times10^{-4}$\tabularnewline \hline 500 & 10.0 & $61.9\times10^{30}$ & 12 & $5.47\times10^{-4}$\tabularnewline \hline 1500 & 17.3 & $41.3\times10^{30}$ & 4.0 & $5.47\times10^{-4}$\tabularnewline \hline 5000 & 31.6 & $20.8\times10^{30}$ & 1.2 & $5.47\times10^{-4}$\tabularnewline \hline \end{tabular} \end{table} \begin{table*}[t] \caption{Main parameters of PWFA-LC$\varotimes$FCC based $ep$ collider corresponding to the disruption limit
$D_{e}=25$. } \centering{}% \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{1}{*}{$E_{e}(GeV)$} & \multirow{1}{*}{$\sqrt{s}(TeV)$} & \multirow{1}{*}{$N_{p}(10^{11})$} & \multirow{1}{*}{$L_{ep},\,cm^{-2}s^{-1}$} & \multirow{1}{*}{$\xi_{p}$} & \multicolumn{1}{c|}{$\tau_{IBS,x}$ (h)}\tabularnewline \hline 125 & 5.00 & 1.15 & $65.0\times10^{30}$ & $5.47\times10^{-4}$ & 171\tabularnewline \hline 250 & 7.08 & 2.30 & $86.0\times10^{30}$ & $5.47\times10^{-4}$ & 85\tabularnewline \hline 500 & 10.0 & 4.60 & $129\times10^{30}$ & $5.47\times10^{-4}$ & 43\tabularnewline \hline 1500 & 17.3 & 13.8 & $258\times10^{30}$ & $5.47\times10^{-4}$ & 14\tabularnewline \hline 5000 & 31.6 & 45.8 & $433\times10^{30}$ & $5.47\times10^{-4}$ & 4\tabularnewline \hline \end{tabular} \end{table*} As one can see from the third column of Table VIII, the bunch populations required for the highest energy electron beam options are huge. Certainly, a bunch population an order of magnitude above the FCC design value requires a radical change of the injector chain, which needs a separate study. Another critical issue is the IBS growth time. For this reason we estimate the horizontal IBS growth times using the Wei formula \cite{key-27}: \begin{spacing}{0} \[ \left[\begin{array}{c} \frac{1}{\sigma_{pf}}\frac{d\sigma_{pf}}{dt}\\ \frac{1}{\sigma_{x}}\frac{d\sigma_{x}}{dt}\\ \frac{1}{\sigma_{y}}\frac{d\sigma_{y}}{dt} \end{array}\right]=\frac{Z^{4}Nr_{0}^{2}cL_{c}}{8\pi A\gamma^{2}\sigma_{s}\sigma_{pf}\beta\epsilon_{x}\epsilon_{y}}\times \] \end{spacing} \begin{equation} \frac{(1+a^{2}+b^{2})I(\frac{a^{2}+b^{2}}{2})-3}{1-(\frac{a^{2}+b^{2}}{2})}\left[\begin{array}{c} (1-d^{2})\,n_{b}\\ d^{2}-(a^{2}/2)\\ -b^{2}/2 \end{array}\right] \end{equation} \noindent where \textit{Z} and \textit{A} are the charge and atomic mass numbers of the particle (for protons $Z=A=1$), respectively.
$L_{c}\approx\ln\left[4\beta_{rel}^{2}\bar{b}\sigma_{pf}^{2}(1-d^{2})/r_{0}(a^{2}+b^{2})\right]$ is the Coulomb logarithm factor \cite{key-303030}, $\beta_{rel}\approx1$ for ultra-relativistic particles, $a=\beta_{x}d/D_{h}\gamma$, $b=(\beta_{y}\sigma_{x}/\beta_{x}\sigma_{y})\,a$, $d=D_{h}\sigma_{pf}/(\sigma_{x}^{2}+D_{h}^{2}\sigma_{pf}^{2})^{1/2}$, $\sigma_{pf}$ is the fractional momentum deviation, $\sigma_{s}$ is the rms bunch length, and $\sigma_{x}$ and $\sigma_{y}$ are the horizontal and vertical beam sizes, respectively. $D_{h}$ is the horizontal dispersion, whose average value is equal to \cite{key-2828,key-28}: \noindent \begin{equation} \frac{l_{c}\theta_{c}}{4}(\frac{1}{sin^{2}\frac{\mu}{2}}-\frac{1}{12}) \end{equation} \noindent where $l_{c}$ is the FODO cell length and $\mu$ is the phase advance. The bending angle per cell is taken as $\theta_{c}=2\pi/N_{c}$, where $N_{c}$ is the number of FODO cells. Finally, the function $I(\chi)$ is expressed as: \begin{equation} I(\chi)=\begin{cases} \begin{array}{c} \frac{1}{\sqrt{\chi(\chi-1)}}Arth\sqrt{\frac{\chi-1}{\chi}}\\ \frac{1}{\sqrt{\chi(1-\chi)}}Arctan\sqrt{\frac{1-\chi}{\chi}} \end{array} & \begin{array}{c} \chi\geqslant1\\ \chi<1 \end{array}\end{cases}. \end{equation} The obtained results for the horizontal IBS growth times, $\tau_{IBS,x}=\sigma_{x}/(d\sigma_{x}/dt)$, at $E_{p}=50$ TeV are presented in the last column of Table VIII. In the numerical calculations we used the baseline FCC FODO cell length $l_{c}$=203.0 m considered in \cite{key-28}. It is seen that the IBS growth times are acceptable even for the $E_{e}=5000$ GeV case. \subsection{Collision Point Simulator for FCC Based Lepton-Hadron and Photon-Hadron Colliders} There are several beam-beam simulation programs for linear $e^{+}e^{^{-}}$ and photon colliders (see for example \cite{key-29,key-30}). Unfortunately, no similar (open-access) programs exist for $ep$ colliders.
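The piecewise function $I(\chi)$ entering the Wei-formula IBS estimate above can be coded directly. This is a hedged sketch in which the $\chi<1$ branch uses the real-valued prefactor $1/\sqrt{\chi(1-\chi)}$, matching the arctangent branch, and the two branches join smoothly at $\chi=1$ with limit $I(1)=1$:

```python
import math

# Piecewise function I(chi) from the Wei-formula IBS estimate.
# Assumption: for chi < 1 the prefactor is 1/sqrt(chi*(1-chi)),
# the real-valued counterpart of the chi >= 1 branch.
def I(chi: float) -> float:
    if abs(chi - 1.0) < 1e-12:
        return 1.0  # both branches approach 1 as chi -> 1
    if chi > 1.0:
        return math.atanh(math.sqrt((chi - 1.0) / chi)) / math.sqrt(chi * (chi - 1.0))
    return math.atan(math.sqrt((1.0 - chi) / chi)) / math.sqrt(chi * (1.0 - chi))

print(I(0.5), I(1.0), I(2.0))  # monotonically decreasing in chi
```

For example, $I(0.5)=\pi/2$ exactly, since $\arctan(1)=\pi/4$ and the prefactor is $2$.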
In order to understand and analyze electron-proton beam interactions at the collision point, we have started to develop a numerical program that models beam dynamics with the aim of optimizing the electron and proton beam parameters to obtain maximal luminosity values. At this stage, the luminosity, beam-beam tune shift, disruption and beam lifetime formulae (Equations 1-3, 7-9, 12-20) are included, and the numerical results of this paper are calculated using the current software. The aim of the software is to optimize the main parameters of lepton-hadron colliders. It is obvious that luminosity values for nominal beam parameters can be calculated analytically. However, when beam dynamics is analyzed in detail, taking into account the time evolution of the beam structures, analytical solutions become almost impossible, since these effects become time-dependent due to the varying beam sizes. Work on the upgraded version, which will include the time-dependent behaviour of the beams during collision as well as $\gamma p$ collider options, is in progress. In addition, in order to achieve the highest luminosity values at the collision, the beam parameters should be optimized; for this reason an additional interface is being developed, which will optimize the luminosity and give the required beam parameters within a pre-determined parameter interval. The current version of the program is a Java based environment and is therefore platform-independent. It is available at http://alohep.hepforge.org and at our group web page (http://yef.etu.edu.tr/ALOHEP\_eng.html). \section{FCC Based $\mu p$, $eA$, $\mu A$, $\gamma p$ and $\gamma A$ Colliders} This section is devoted to a brief discussion of additional options for FCC based $lh$ and $\gamma h$ colliders. \subsection{$\mu p$ Colliders} Muon-proton colliders were proposed almost two decades ago.
Construction of an additional proton ring in the $\sqrt{s}=$ 4 TeV muon collider tunnel was suggested in \cite{key-31} in order to realize a $\mu p$ collider with the same center-of-mass energy. However, the quoted luminosity value, namely $L_{\mu p}=3\times10^{35}cm^{-2}s^{-1}$, was extremely overestimated; the realistic value for this option is three orders of magnitude smaller \cite{key-26}. Later, construction of an additional 200 GeV muon ring in the Tevatron tunnel in order to realize a $\sqrt{s}=$ 0.9 TeV $\mu p$ collider with $L_{\mu p}=10^{32}cm^{-2}s^{-1}$ was considered in \cite{key-32}. \begin{singlespace} In this paper we consider another design, namely, construction of a muon ring close to the FCC (see Fig. 1). For round beams, the general expression for the luminosity given in Eq. (\ref{eq:Denklem1}) transforms to: \begin{eqnarray} L_{pp} & = & f_{pp}\frac{N_{p}^{2}}{4\pi\sigma_{p}^{2}}\label{eq:Denklem7} \end{eqnarray} \end{singlespace} \begin{center} \begin{eqnarray} L_{\mu\mu} & = & f_{\mu\mu}\frac{N_{\mu}^{2}}{4\pi\sigma_{\mu}^{2}}\label{eq:Denklem8} \end{eqnarray} \par\end{center} \noindent for FCC-$pp$ and $\mu C$, respectively. Concerning muon-proton collisions, one should use the larger transverse beam size and the smaller collision frequency. Keeping in mind that $f_{\mu\mu}$ is an order of magnitude smaller than $f_{pp}$, the following relation between the $\mu p$ and $\mu\mu$ luminosities takes place: \begin{center} \begin{eqnarray} L_{\mu p} & = & (\frac{N_{p}}{N_{\mu}})(\frac{\sigma_{\mu}}{max[\sigma_{p},\,\sigma_{\mu}]})^{2}L_{\mu\mu}\label{eq:Denklem9} \end{eqnarray} \par\end{center} \begin{table}[H] \caption{Nominal muon collider parameters \cite{key-11}.} \centering{}% \begin{tabular}{|c|c|c|c|} \hline $\sqrt{s}$, TeV & 0.126 & 1.5 & 3.0 \tabularnewline \hline Avg.
Luminosity, $10^{34}cm^{-2}s^{-1}$ & 0.008 & 1.25 & 4.4 \tabularnewline \hline Circumference, km & 0.3 & 2.5 & 4.5 \tabularnewline \hline Repetition Rate, Hz & 15 & 15 & 12 \tabularnewline \hline $\beta^{\star}$, cm & 1.7 & 1 & 0.5 \tabularnewline \hline No. muons/bunch, $10^{12}$ & 4 & 2 & 2 \tabularnewline \hline No. bunches/beam & 1 & 1 & 1 \tabularnewline \hline Norm. Trans. Emit., $\pi\:mm-rad$ & 0.2 & 0.025 & 0.025 \tabularnewline \hline Bunch length, cm & 6.3 & 1 & 0.5\tabularnewline \hline Beam Size at IP, $\mu m$ & 75 & 6 & 3\tabularnewline \hline Beam-beam parameter / IP , $\xi_{\mu\mu}$ & 0.02 & 0.09 & 0.09\tabularnewline \hline \end{tabular} \end{table} Using the nominal parameters of the $\mu\mu$ colliders given in Table IX, the parameters of the FCC based $\mu p$ colliders are calculated according to Eq. (\ref{eq:Denklem9}) and presented in Table X. Utilizing Eq. (3) for round beams, we obtain: \begin{eqnarray} \xi_{p} & = & \frac{N_{\mu}r_{p}\beta_{p}^{*}}{4\pi\gamma_{p}\sigma_{\mu}^{2}}\label{eq:Denklem10} \end{eqnarray} The beam-beam parameter for muons is given by: \begin{eqnarray} \xi_{\mu} & = & \frac{N_{p}r_{\mu}\beta_{\mu}^{*}}{4\pi\gamma_{\mu}\sigma_{p}^{2}}\label{eq:Denklem11} \end{eqnarray} \noindent where $r_{\mu}=1.37\times10^{-17}$ m is the classical muon radius. \begin{table}[H] \caption{Main parameters of the FCC based $\mu p$ colliders.} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline Collider & \multirow{2}{*}{$\sqrt{s}$, TeV } & $L_{\mu p},\,cm^{-2}s^{-1}$ & \multirow{2}{*}{$\xi_{p}$} & \multirow{2}{*}{$\xi_{\mu}$}\tabularnewline Name & & (Avg.)
& & \tabularnewline \hline \hline $\mu63$-FCC & 3.50 & $0.20\times10^{31}$ & $1.8\times10^{-3}$ & $5.4\times10^{-4}$\tabularnewline \hline $\mu750$-FCC & 12.2 & $49\times10^{31}$ & $1.1\times10^{-1}$ & $3.3\times10^{-3}$\tabularnewline \hline $\mu1500$-FCC & 17.3 & $43\times10^{31}$ & $1.1\times10^{-1}$ & $8.3\times10^{-4}$\tabularnewline \hline \end{tabular} \end{table} As one can see from Table X, where the nominal parameters of the FCC proton beam are used, $\xi_{p}$ for the energy frontier $\mu p$ colliders is unacceptably high and should be decreased to 0.01. According to Eq. (\ref{eq:Denklem10}), this can be achieved by decreasing $\beta_{p}^{*}$ and/or increasing $\sigma_{\mu}$. For example, decreasing $\beta_{p}^{*}$ from 1.1 m to 0.1 m (as in the upgraded option of the proton beams considered in Section II) seems to solve this problem. The luminosity values presented in Table X assume simultaneous operation with the $pp$ collider. These values can be increased by an order of magnitude using a dedicated proton beam with larger bunch population \cite{key-26}. \subsection{$eA$ and $\mu A$ Colliders} It is known that the FCC also includes a $Pb$-$Pb$ collider option \cite{key-18,key-28}. Therefore, construction of an LC and a $\mu C$ tangential to the FCC will provide the opportunity to realize $e$-Pb and $\mu$-Pb collisions. In order to estimate the luminosity of FCC based lepton-nucleus colliders, we use the parameters of the $Pb$ beam for the $p$-$Pb$ option from \cite{key-28}, presented in Table XI. \begin{table}[H] \caption{Main parameters of $Pb$ beam in FCC $p$-$Pb$ option.} \centering{}% \begin{tabular}{|c|c|} \hline Beam Energy (GeV) & 4100\tabularnewline \hline Peak Luminosity ($10^{30}\,cm^{-2}s^{-1}$) & 1.24\tabularnewline \hline Particle per Bunch ($10^{10}$) & 1.15\tabularnewline \hline Norm.
Transverse Emittance ($\mu m$) & 3.75\tabularnewline \hline \textgreek{b}{*} amplitude function at IP (m) & 1.1\tabularnewline \hline IP beam size ($\mu m$) & 8.8\tabularnewline \hline Bunches per Beam & 432\tabularnewline \hline Bunch length (mm) & 80\tabularnewline \hline Beam-beam parameter, $\xi_{pp}$ & $3.7\times10^{-4}$\tabularnewline \hline \end{tabular} \end{table} The luminosity, disruption and beam-beam tune shift for $e$-$Pb$ collisions are given by: \begin{eqnarray} L_{ePb} & = & \frac{N_{e}N_{Pb}}{4\pi\sigma_{Pb}^{2}}f_{c_{e}}\label{eq:Denklem12} \end{eqnarray} \begin{eqnarray} D_{e} & = & \frac{Z_{Pb}N_{Pb}r_{e}\sigma_{z_{Pb}}}{\gamma_{e}\sigma_{Pb}^{2}}\label{eq:Denklem13} \end{eqnarray} \begin{eqnarray} \xi_{Pb} & = & \frac{N_{e}r_{Pb}\beta_{Pb}^{*}}{4\pi\gamma_{Pb}\sigma_{Pb}^{2}}\label{eq:Denklem14} \end{eqnarray} \noindent respectively. In Eq. (\ref{eq:Denklem14}), $\gamma_{Pb}=E_{Pb}/m_{Pb}$ and $r_{Pb}=(Z_{Pb}^{2}/A_{Pb})\,r_{p}$. Calculated luminosity values for LC$\varotimes$FCC based $e$-$Pb$ colliders are given in Table XII (here upgraded FCC means $\beta_{Pb}^{*}=0.1$ m). One can see that sufficiently high luminosities can be achieved with reasonable $D_{e}$ and $\xi_{Pb}$ values.
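The ion quantities entering Eqs. (12)-(14) follow from simple scalings. Below is a hedged numerical sketch for the Pb beam; the charge and mass numbers ($Z=82$, $A=208$) and the interpretation of the 4100 GeV Table XI entry as a per-nucleon energy are assumptions, as is the value of the classical proton radius:

```python
# Hedged sketch: effective classical radius and Lorentz factor of the Pb beam
# entering Eqs. (12)-(14). Assumptions: Z = 82, A = 208 for Pb; the 4100 GeV
# beam energy of Table XI is per nucleon; r_p ~ 1.535e-18 m, m_u ~ 0.931 GeV.
Z, A = 82, 208
r_p = 1.535e-18            # classical proton radius [m]
m_u = 0.931                # atomic mass unit [GeV]

r_Pb = (Z ** 2 / A) * r_p  # effective classical radius (Z^2/A scaling)
gamma_Pb = 4100.0 / m_u    # Lorentz factor per nucleon

print(f"r_Pb = {r_Pb:.2e} m, gamma_Pb = {gamma_Pb:.0f}")
```

The $Z^{2}/A$ scaling reflects that the ion's electromagnetic coupling grows with $Z^{2}$ while its inertia grows with $A$.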
\begin{table}[H] \caption{Main parameters of LC$\varotimes$FCC based $e$-$Pb$ colliders.} \noindent \centering{}{}% \begin{tabular}{|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{Nominal FCC}\tabularnewline \hline Collider & \multirow{2}{*}{$E_{e}(GeV)$ } & \multirow{2}{*}{$L_{ep},\,cm^{-2}s^{-1}$ } & \multirow{2}{*}{$D_{e}$ } & \multirow{2}{*}{$\xi_{Pb}$ }\tabularnewline Name & & & & \tabularnewline \hline \multirow{2}{*}{ILC$\varotimes$FCC} & 250 & $6.1\times10^{28}$ & 2.2 & 0.021\tabularnewline \cline{2-5} & 500 & $8.0\times10^{28}$ & 1.1 & 0.019\tabularnewline \hline \multirow{4}{*}{PWFA-LC$\varotimes$FCC} & 250 & $9.4\times10^{28}$ & 2.2 & 0.011\tabularnewline \cline{2-5} & 500 & $7.0\times10^{28}$ & 1.1 & 0.011\tabularnewline \cline{2-5} & 1500 & $4.7\times10^{28}$ & 0.4 & 0.011\tabularnewline \cline{2-5} & 5000 & $2.3\times10^{28}$ & 0.1 & 0.011\tabularnewline \hline & & \multicolumn{3}{c|}{Upgraded FCC}\tabularnewline \hline Collider & \multirow{2}{*}{$E_{e}(GeV)$ } & \multirow{2}{*}{$L_{ep},\,cm^{-2}s^{-1}$ } & \multirow{2}{*}{$D_{e}$ } & \multirow{2}{*}{$\xi_{Pb}$ }\tabularnewline Name & & & & \tabularnewline \hline \multirow{2}{*}{ILC$\varotimes$FCC} & 250 & $68\times10^{28}$ & 24.5 & 0.021\tabularnewline \cline{2-5} & 500 & $88\times10^{28}$ & 12.2 & 0.019\tabularnewline \hline \multirow{4}{*}{PWFA-LC$\varotimes$FCC} & 250 & $103\times10^{28}$ & 24 & 0.011\tabularnewline \cline{2-5} & 500 & $77\times10^{28}$ & 12 & 0.011\tabularnewline \cline{2-5} & 1500 & $51\times10^{28}$ & 4 & 0.011\tabularnewline \cline{2-5} & 5000 & $26\times10^{28}$ & 1.2 & 0.011\tabularnewline \hline \end{tabular} \end{table} Luminosity and beam beam tune shifts for $\mu$-$Pb$ colliders are given by: \begin{center} \begin{eqnarray} L_{\mu Pb} & = & (\frac{N_{Pb}}{N_{\mu}})(\frac{\sigma_{\mu}}{max[\sigma_{Pb},\,\sigma_{\mu}]})^{2}L_{\mu\mu}\label{eq:Denklem15} \end{eqnarray} \par\end{center} $\,$ \begin{eqnarray} \xi_{\mu} & = & 
\frac{Z_{Pb}N_{Pb}r_{\mu}\beta_{\mu}^{*}}{4\pi\gamma_{\mu}\sigma_{Pb}^{2}}\label{eq:Denklem16} \end{eqnarray} \begin{eqnarray} \xi_{Pb} & = & \frac{N_{\mu}r_{Pb}\beta_{Pb}^{*}}{4\pi\gamma_{Pb}\sigma_{\mu}^{2}}\label{eq:Denklem17} \end{eqnarray} \noindent Calculated luminosity values for $\mu C$$\varotimes$FCC based $\mu$-$Pb$ colliders with nominal parameters are given in Table XIII. It is seen that the nominal parameters lead to unacceptably high $\xi_{Pb}$ values. The straightforward way to reduce $\xi_{Pb}$ is a substantial decrease of $N_{\mu}$. According to Eq. (\ref{eq:Denklem15}), this leads to a corresponding decrease of the luminosity, as seen from the last column of Table XIII. \begin{table}[H] \centering{}\caption{Main parameters of $\mu$C$\varotimes$FCC based $\mu$-$Pb$ colliders.} \begin{tabular}{|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{Nominal parameters}\tabularnewline \hline Collider & \multirow{2}{*}{$E_{\mu},\:TeV$ } & $L_{\mu Pb},\,cm^{-2}s^{-1}$ & \multirow{2}{*}{$\xi_{Pb}$} & \multirow{2}{*}{$\xi_{\mu}$}\tabularnewline Name & & (Avg.) & & \tabularnewline \hline $\mu63$-FCC & 0.063 & $1.1\times10^{31}$ & 0.1 & $1.5\times10^{-1}$\tabularnewline \hline $\mu750$-FCC & 0.75 & $1.3\times10^{31}$ & 12 & $7.3\times10^{-3}$\tabularnewline \hline $\mu1500$-FCC & 1.5 & $1.1\times10^{31}$ & 47 & $1.8\times10^{-3}$\tabularnewline \hline & & \multicolumn{3}{c|}{Upgraded parameters}\tabularnewline \hline Collider & \multirow{2}{*}{$E_{\mu},\:TeV$ } & $L_{\mu Pb},\,cm^{-2}s^{-1}$ & \multirow{2}{*}{$\xi_{Pb}$} & \multirow{2}{*}{$N_{\mu}$}\tabularnewline Name & & (Avg.)
& & \tabularnewline \hline $\mu63$-FCC & 0.063 & $110\times10^{28}$ & 0.01 & $4\times10^{11}$\tabularnewline \hline $\mu750$-FCC & 0.75 & $1.1\times10^{28}$ & 0.01 & $1.67\times10^{9}$\tabularnewline \hline $\mu1500$-FCC & 1.5 & $0.23\times10^{28}$ & 0.01 & $4.26\times10^{8}$\tabularnewline \hline \end{tabular} \end{table} \begin{figure*}[t] \centering{}\includegraphics[scale=0.4]{Fig2}\caption{Discovery limits for color octet electron at different pp, $e^{+}e^{-}$ and $ep$ colliders.} \end{figure*} \subsection{$\gamma p$ and $\gamma A$ Colliders} In the 1980s, the idea of using high energy photon beams, obtained by Compton backscattering of laser light off a beam of high energy electrons, was considered for $\gamma e$ and $\gamma\gamma$ colliders (see the review \cite{key-33} and references therein). The same method was later proposed for constructing $\gamma p$ colliders on the basis of linac-ring type $ep$ machines in \cite{key-34}. Rough estimations of the main parameters of $\gamma p$ collisions are given in \cite{key-35}. The dependence of these parameters on the distance between the conversion region (CR) and the interaction point (IP) was analyzed in \cite{key-30}, where some design problems were considered. It should be noted that $\gamma p$ colliders are a unique feature of linac-ring $ep$ colliders and could not be constructed on the basis of standard ring-ring type $ep$ machines (for arguments see \cite{key-35,key-36}). Concerning FCC based $\gamma p$ colliders, the center-of-mass energy and luminosity are approximately the same as those of the corresponding $ep$ colliders ($\sqrt{s_{\gamma p}}\thickapprox0.9\sqrt{s_{ep}}$; $L_{\gamma p}\thickapprox L_{ep}$) for one-pass linacs. Let us mention that energy recovery is not effective for $\gamma p$ colliders, since the electron bunches are destroyed during conversion (for details see \cite{key-36}).
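The relation $\sqrt{s_{\gamma p}}\approx0.9\sqrt{s_{ep}}$ can be illustrated with a short numerical sketch. The assumption here (not stated in this section) is that the Compton parameter is tuned to $x=4.8$, the usual choice just below the $e^{+}e^{-}$ pair-production threshold in laser-electron collisions, so that the maximum backscattered-photon energy is $E_{\gamma}^{max}=[x/(1+x)]\,E_{e}\approx0.83\,E_{e}$:

```python
import math

# Hedged sketch: maximum backscattered-photon energy and the resulting
# sqrt(s_gamma_p)/sqrt(s_ep) ratio, assuming the Compton parameter x = 4.8.
x = 4.8
E_e = 500.0                         # electron beam energy, GeV (ILC 500 GeV option)
E_gamma_max = x / (1.0 + x) * E_e   # maximum photon energy after conversion

# s_gamma_p = 4*E_gamma*E_p and s_ep = 4*E_e*E_p, so the ratio of
# center-of-mass energies is sqrt(E_gamma_max / E_e), independent of E_p.
ratio = math.sqrt(E_gamma_max / E_e)

print(f"E_gamma_max = {E_gamma_max:.0f} GeV, sqrt(s) ratio = {ratio:.2f}")
```

With these assumptions the ratio comes out as $\sqrt{4.8/5.8}\approx0.91$, consistent with the $\approx0.9$ factor quoted in the text.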
According to the analyses performed for THERA and the LHeC, $\gamma p$ colliders are superior to the corresponding $ep$ colliders for many SM and BSM phenomena (small $x_{g}$, $q^{*}$ and so on). Similar studies should be performed for FCC based $\gamma p$ colliders. Certainly, FCC based $\gamma A$ colliders will open up great opportunities for QCD and nuclear physics research. For example, the $\gamma A$ option will give an opportunity to investigate quark-gluon plasma at very high temperatures but relatively low nuclear density (according to VMD, the proposed machine will at the same time be a $\rho$-nucleus collider). Different aspects of THERA based $\gamma p$ colliders have been considered in \cite{key-37}. In \cite{key-38,key-39}, Linac$\varotimes$LHC based $\gamma p$ colliders have been considered for different linac scenarios. Similar work on FCC based $\gamma p$ and $\gamma A$ colliders is in progress. \section{Conclusions} \noindent In this study it is shown that for ILC$\varotimes$FCC and PWFA-LC$\varotimes$FCC based $ep$ colliders, luminosity values up to $L_{ep}\sim10^{32}\,cm^{-2}s^{-1}$ are achievable with an LHeC-like upgrade of the FCC proton beam. Even with this moderate luminosity, the BSM search potential of $ep$ colliders essentially exceeds that of the corresponding linear colliders. It may also exceed the search potential of the FCC-$pp$ option for many BSM phenomena. As an example of a BSM process, production of the color octet electron ($e_{8}$) at the FCC, LC$\varotimes$FCC and LC has been analyzed in \cite{key-40}. Mass discovery limits for $e_{8}$ in the $\Lambda=M_{e8}$ case (where $\Lambda$ is the compositeness scale) are presented in Figure 2. If the FCC discovers the $e_{8}$, LC$\varotimes$FCC will give the opportunity to determine the Lorentz structure of the $e_{8}$-$e$-$g$ vertex using the longitudinal polarization of the electron beam, as well as to probe the compositeness scale up to hundreds of TeV.
In principle, the ``dynamic focusing'' scheme \cite{key-41}, which was proposed for THERA, could provide $L_{ep}\sim10^{33}\,cm^{-2}s^{-1}$ for all $ep$ collider options considered in this study. Concerning ILC$\varotimes$FCC based $ep$ colliders, a new scheme for energy recovery proposed for the higher-energy LHeC (see Section 7.1.5 in \cite{key-12}) may give an opportunity to increase the luminosity by one or two additional orders of magnitude, resulting in $L_{ep}$ exceeding $10^{34}\,cm^{-2}s^{-1}$. Unfortunately, this scheme cannot be applied at the PWFA-LC$\varotimes$FCC. Acceleration of ion beams at the FCC will make it possible to reach multi-TeV center-of-mass energies in electron-nucleus collisions. In addition, the electron beam can be converted into a high energy photon beam using Compton backscattering of laser photons, which will make it possible to construct LC$\varotimes$FCC based $\gamma p$ and $\gamma A$ colliders. In conclusion, construction of the ILC or PWFA-LC tangential to the FCC will essentially enlarge the physics search potential for both SM and BSM phenomena. Therefore, systematic study of the accelerator, detector and physics search potential issues of LC$\varotimes$FCC based electron-hadron and photon-hadron colliders, as well as of the $\mu C$$\otimes$FCC based muon-hadron collider, is essential to plan the future of high energy physics. Concerning the viability of the different options, the ILC$\varotimes$FCC option seems to be the most realistic one among the linac-ring type $ep$ machine proposals, while the viability of the PWFA-LC$\varotimes$FCC and $\mu C$$\varotimes$FCC based colliders depends on the resolution of the technical aspects of the PWFA-LC and the muon collider. Possible construction of a dedicated $e$-linac and/or muon ring tangential to the FCC requires a separate study. \section*{Acknowledgments} This study is supported by TUBITAK under the grant No 114F337. A. Akay and S.
Sultansoy are grateful to the organizers of the FCC Week 2016 for the opportunity to present our results at this distinguished conference.
Finding the Fourth Beatle: John, Paul, George and their 18 drummers by David Bedford and Garry Popper The Beatles phenomenon is one amazing story that John Lennon tried to sum up by stating: "I met Paul and said, 'Do you want to join me band?' and then George joined, and then Ringo joined. We were just a band who made it very, very big." That is one of the biggest understatements ever, because it was so much more complicated than that, and the story involves 18 drummers. Neil Aspinall once said that "the story of the Beatles always seemed to be about John, Paul, George and a drummer." When examined closely, that is exactly what happened, yet nobody has concentrated on the story of those drummers, and the crises in the evolution of The Beatles that always seemed to be around losing, or gaining, a drummer. How many drummers can you count that played with the Fab Three between 1956 and 1970? We have found 18! In a new book, and forthcoming documentary film, Finding the Fourth Beatle tells the story of The Beatles from 1956-1970 through the 18 drummers, including Colin Hanton, Pete Best and Jimmie Nicol, and some you will not have heard of before. The book and film explore the Beatles' crises, changes of musical direction, getting a record deal, and finding the drummer who would put the beat into The Beatles: Ringo Starr, the Fourth Beatle. The Case for Authenticity: 'Love Me Tender' by Stuart Sutcliffe by Liscio What Stuart Sutcliffe fan hasn't wished to learn as much as possible about the fascinating young artist and Beatle? His time with us was short yet incredibly creative; every surfacing artwork, picture, letter or anecdote is pored over with relish by admirers.
But some things Sutcliffe-lovers were sadly certain they would never get to know: for instance—his voice. That's why the digital release of "Love Me Tender", sung by Stuart himself, is an astonishing event generating stunned excitement and questions about the song's origin and authenticity. "Love Me Tender" was Stuart's signature song; a ballad he performed so well in Hamburg it received the best applause during the Beatles' sets at the Kaiserkeller and Star Club. Sutcliffe also performed Carl Perkins' "Matchbox" and Elvis Presley's "Wooden Heart". But "Love Me Tender" is the song most associated with his name. His newly-released song, now available to the public for the first time in 50 years, is compelling listening: Stu's voice strains just slightly ending the first refrain, and he gives us a very sexy exhale at the end of another. In between, the notes are confident, strong, on pitch and melodic. Sutcliffe has made this version of Presley's tune unabashedly his own. In fact the track is so good, some listeners maintain they don't even care if it is Stuart (though they hope it is) and skeptics are accusing the Sutcliffe family of overdubbing the voice of a professional singer. (One might point out that as a paid member of a hard-working rock band, Stuart was a professional singer). Another quick discrediting attempt claimed the song originated from a 1979 American movie—that version has none of the soft nasality indicative of Liverpool accents, clearly evident in Stuart's singing. Noting this, listeners say Stuart sounds like John or George. David Bedford, author of "Liddypool: Birthplace of the Beatles"—and a life-long Liverpudlian—confirms, "Yes, nasal talking is a scouse thing for sure. As Stuart's parents were Scottish, his accent was different to John's and would sound different too – it differs on where in Liverpool you are from." So—where has such a sensational piece of musical history been hiding for the past 50 years? 
Stuart's sister Pauline says, "I never expected to receive this recording of Stuart singing 'Love Me Tender' because I was told the only recording which existed was locked away forever by a private collector." But quite unexpectedly in 2009, Stuart's Estate became aware that a copy was available through another source. Once they'd obtained it, a substantial effort of time and money was spent trying to trace its provenance. "As far as we know for certain, Stuart's 'Love Me Tender' track was recorded in Hamburg, probably 1961—after Stuart officially left the Beatles to pursue his art, " says Pauline. "On one occasion we were told that it was a one-sided German Polydor acetate. Another source tells us that we have a copy from a reel-to-reel recording. We've also been advised that new instrumentation has been overdubbed." Though gaps in the history remain, one thing is unequivocally certain: it is Stuart. Says Pauline, "The family do know Stuart's voice when they hear it – and this is Stuart's voice." Those who are surprised that Sutcliffe could sing suffer from the same myopic misconception that had them believing he couldn't play bass guitar. David Bedford reminds us that as a young lad in Liverpool, Stuart was head chorister for St. Gabriel's church in Huyton, leading the singing for Sunday services and weddings. The former choirboy still sounds youthful and earnest—some say his voice on "Love Me Tender" is "angelic"—some say "haunting"—while others are reminded of Phil and Don Everly's sweet harmonies. In a recent phone conversation, Pauline revealed that once the Estate possessed the recording, they were just "trying to get comfortable with it". One can only wonder what it was like for a sister to hold in her hands an object containing a special voice from so very long ago . A missing piece had at last come home. In time, those responsible for overseeing Stuart's Estate were curious to know whether the tape could be cleaned up. 
Help came in the form of Dan Whitelock-Wainwright, Pauline's techno-expert great-nephew, currently at University and a member of the rock band Groan. Dan's cousin Alex Whitelock-Wainwright (at University in Liverpool) also possessed a copy of the original tape and he wrote in his blog: "The original I have has a constant hiss throughout; that's all that has been modified with the released version and the sound levels are higher. Talking to my cousin, who first tried to clean the track up, (he) believes that the noise frequencies have been totally cleaned out which has removed some instruments and they have been overdubbed back onto the track." It was the 24/7 division of IODA that finished the mastering, leaving Stuart's voice unmanipulated, only louder. [Correction (11/3/2011): "24 Hour Service Station Distribution" and not "24/7 division of IODA" handled the cleaning up of the track. Marshall Dickson contacted us and explained: "I personally coordinated the sonic recovery, and also have strong reason to believe the original recording comes from an acetate, since the source file we possess has the sound of a needle sliding across a record after the music ends."] There was never any doubt that the voice was Stuart's. But the Estate has another reason to know the tape is genuine: they know Stuart. The young bohemian led an accelerated life, traveling incredibly far in a very short time. And his time in Hamburg was likely his most innovative. Eduardo Paolozzi, Stuart's art instructor at the School of Fine Arts in Germany, wrote: "He (Stuart) had so much energy and was so very inventive."(1) Musician and artist Klaus Voorman said, "Every second of Stuart's short time he was doing something. His imagination was fantastic."(2) Everybody was aware of and amazed by Stu's energy and the ease with which he was able to work in a variety of artistic areas. It was completely in character for Stuart to have made this recording. 
And the family's got it in Stuart's own writing that he planned to do just that. Copyright: Stuart Sutcliffe Estate Some of his Hamburg letters, reproduced here, reveal Sutcliffe's interest in a new art project: his desire to make a movie with an accompanying soundtrack. The text reads: "Yes! Tomorrow comes Paolozzi and Tuesday we go once more to that ship-breaking yard which we visited last semester. I will have with me a film camera I borrowed of Theo, Astrid's cousin. I'm very quickly trying to learn the technique as I'm enthralled by the possibilities but it's so expensive. He has many films including some of Astrid from a few years ago, very sweet as you can imagine. I'll have to take advantage of the few days I'll have it; I'll probably tire of it all the more quickly because of the complete inaccessibility of all the equipment required." "I made a film last week when I was at the ship-breaking yard and I have really caught a feeling for filming, the desire that is. I made another today and wish to make a long film accompanied by a tape-recording." "Thank you for your letter and the catalogues. I should have written before but have been busy with various odds and ends. We started the week very tired after working all weekend making photos, or rather Astrid worked while I grew tired looking on. She was working on a commission for Polydor making photos of this singer Sheridan and made some marvelous ones in black and white and color." Stuart was well acquainted with Tony Sheridan. While performing in Hamburg between 1960 and 1963, Sheridan employed various backup bands, most of which were really "pickup bands", or simply an amalgam of various musicians, rather than a group proper.(3) It was Polydor's A&R (Artists and Repertoire) man, Bert Kaempfert, who arranged in 1961 for the Beatles to back Sheridan for an LP called "My Bonnie". The standard (and decidedly incomplete) story is that Stuart was present during this session, but did not participate.
But both John Lennon and Tony Sheridan swore that there were several other Beatle tracks that were recorded during the two-day session, and that either they were not preserved OR something else happened to them.(3) Tony Sheridan (left) and Stuart Sutcliffe Copyright Astrid Kirchherr; Pauline Sutcliffe private collection Another group recording for Polydor was a German band called The Bats. "They (the Bats) went through the usual Star Club routine…(they) recorded mainly for Polydor. Drummer Toni Cavanaugh came from the circle of musicians connected with Tony Sheridan (and) also played drums for Sheridan's Beat Brothers/Star combo. The band's crew changed…once in a while ex-Beatle Stuart Sutcliffe joined in."(4) Hamburg's music scene in '61 was open and inclusive, with musicians intermingling on stage and in the studio. Astrid was there with her camera, recording visual tracks while the bands made musical ones. Stuart was right in the midst of it. He'd been to the studio, played with the bands, knew Kaempfert, had all the right connections. It's not implausible to think that at some time during that year his voice was captured on "a German Polydor acetate". Or perhaps Stu recorded his own voice, and instruments were tracked in later. The fact is that Sutcliffe intended to make a recording. Since "Love Me Tender" was the cool bassist's spotlight song, one he'd sung a hundred times or more and was the ballad he'd dedicated to his darling Astrid, it was the natural choice. Those free Hamburg days were unparalleled—a pivotal time for art and music. Timing can be so deadly crucial—why did Stuart's Estate choose to release "Love Me Tender" now? It wasn't a decision made lightly. Pauline has balanced two missions for nearly 50 years: working determinedly to ensure her talented brother's legacy, and striving to protect his image from harm. In the documentary "The Lost Beatle" she reminisces that Stuart "used to be my elder brother.
But now he's my kid brother…I want to take care of him…to protect him." Regarding "Love Me Tender", she was wisely aware of those who would cry foul even if the Sutcliffes presented a recording contract with Stuart's signature at the bottom. But recent events: a partnership with promotional agency CMG Worldwide; the successful stage production of Backbeat, now showing in London's West End; the launch of Stuart's Official Fan Club (www.stuartsutcliffefanclub.com); and next year's world tour art exhibition "Conversation With Stuart Sutcliffe", convinced the Estate there was no better time to release Stuart's song than now. There has been a shift in perspective regarding the Beatle who left the band because he loved art and Astrid Kirchherr. The media is now far less likely to depict Sutcliffe shoved aside in his shades to an obscure corner…the reluctant, incapable bassist. Commentaries adhering to that badly-sketched-in picture show their inaccuracy and age. With every unexpected and exciting new event, the remarkably talented Sutcliffe is now receiving the worldwide accolade he deserves. Some things are worth waiting for—even if it takes 50 years. "Love Me Tender" was definitely worth the wait. Thanks, Stu, for making certain we'd hear your voice. [Editor's Note: Those in Beatles history who knew Stuart at the time this song was believed to be recorded (i.e., Astrid Kirchherr, Klaus Voormann, Paul McCartney, Ringo Starr) have not yet commented on their personal knowledge of the existence of this recording.] © 2011 Daytrippin' – This article including photos/images may not be reproduced without permission from the author and Daytrippin.com. A brief excerpt may be reprinted with a link to the article and proper credit. Update: More in-depth analysis on this recording has been done by David Bedford, author of Liddypool: Birthplace of The Beatles. You can read his article here: http://www.stuartsutcliffefanclub.com/lovemetenderdb.html Update (Nov.
4, 2011): The Beatles Examiner has obtained quotes from Klaus Voormann, Tony Sheridan and Bill Harry concerning their opinions on the recording. (1) John Willett 1967 "Art In The City" (2) The Beatles In Hamburg/Bill Hillman Tracks (hillmanweb.com) (3) Tony Sheridan Wikipedia (4) Discogs/The Bats (discogs.com) For more Beatles news, follow us on Twitter and Facebook Categories: Beatles News, Commentary, New Beatles merchandise | Tags: astrid kirchherr, beatles, hamburg, love me tender, pauline sutcliffe, stuart sutcliffe, tony sheridan | Permalink. The Ballad of John, Yoko, Stuart and Astrid An in-depth exploration of how John Lennon's love for Yoko filled the void left by Astrid and Stu by Josh Kennedy It split the Beatles, this affair of the heart. She was an artist from an upper class family. She came from a foreign country that the previous generation in Britain had fought an all-out war to defeat. One Beatle was besotted with her, ready and willing to forsake the band for his new romance. She was always at his side; the intense couple even began dressing and wearing their hair alike. Paul McCartney was jealous, venting his frustration in petty ways that boiled over into the group's professional work. The name of this lady was… Astrid Kirchherr. It would happen again, and eerily so, when Yoko Ono appeared on the scene six years later. The personalities involved were different, but a similar stew of forces was present in both situations. When the Beatles story is examined as a whole, Yoko can be seen as an amalgam, combining the earlier roles of Astrid – the influential, foreign artistic woman – and of Stuart Sutcliffe – the brilliant but musically limited force who occupied much of John's attention at the group's expense. These striking parallels are worth exploring for any light they may shed on the eventual breakup of the Beatles. When the Beatles met Astrid in Hamburg, there is no doubt they were impressed. 
As Cynthia Lennon wrote in her 1978 memoir, "John's letters were full of Astrid… particularly her way of dress, her avant-garde way of life, and her marvelous photography." John even went so far as to call her the "German Brigitte Bardot." This comparison is illuminating. Bardot was the icon of John's adolescent fantasies, to the point where he encouraged Cynthia to dye her own hair blonde in emulation. Very shortly before taking up with Yoko in 1968, Lennon would meet the real Bardot in person. He showed up stoned for the appointment, and had what he later described as a "fucking terrible evening – even worse than meeting Elvis." Any illusions he still harbored about Bardot as the ideal woman were then shattered, and with them, perhaps, some regard for his own wife's dyed-blonde image. Yet Bardot was not John's only ideal. As he recalled in a posthumously published reminiscence, "I'd always had a fantasy about a woman who would be a beautiful, intelligent, high-cheek-boned, free-spirited artist a la Juliette Greco." He went on to say that this ideal morphed slightly during a Beatles visit to Asia, becoming an artistic oriental woman. But back in Hamburg, "oriental" was not yet part of the idea. Astrid was not only a "beautiful, intelligent, high-cheek-boned, free-spirited artist" but was also, like Greco, a continental European. As Kirchherr later told BBC radio: "We got inspired by all the French artists and writers, because that was the closest we could get. England was so far away, and America was out of the question. So France was the nearest. So we got all the information from France, and we tried to dress like the French existentialists. … We wanted to be free, we wanted to be different, and tried to be cool, as we call it now." Small wonder that Cynthia felt intimidated about meeting her. Of course, Astrid fell in love with Stuart Sutcliffe, the most bohemian Beatle, with his dark sunglasses and brooding James Dean image. 
"I fell in love with Stuart that very first night," Astrid told author Philip Norman. "So pale, but very, very beautiful. He was like a character from a story by Edgar Allan Poe." 'They were the big love," Paul McCartney says of this period, and Pete Best remembers the couple as being "like one of those fairy stories." Before long, according to Norman, Astrid was employing her own artistic talents "to model him (Stuart) into an appearance echoing and complementing her own." Much has been made of Astrid's visual influence on the Beatles' haircut and fashion, and as an early band photographer. More overlooked is the impact all of this had on John's ideal of a relationship. John may have joined his band mates in ridiculing Stuart at times, but as he later admitted to biographer Hunter Davies, "I used to explain afterwards to him that we didn't dislike him." Privately John admired his friend, and the intense partnership of Stu and Astrid might be seen as something of a model for John's later, all-encompassing infatuation with Yoko. Certainly the two situations produced some similar outcomes, for in both cases, Paul McCartney reacted badly. Lennon noted the cause of an onstage fistfight between McCartney and Sutcliffe: "Paul was saying something about Stu's girl, and he was jealous because she was a great girl, and Stu hit him on stage." Later, when John found his own soul mate in Yoko, Paul tried to accept it, even inviting the couple to live in his house during the summer of 1968. This was a time when Paul was in a fragile state, having recently broken with his fiancée Jane Asher. As reported by Paul's summer girlfriend Francie Schwartz, Paul's true feelings of envy slipped out in a cruel jest. A note left on the mantle warned John: "You and your Jap tart think you're hot shit." Paul admitted leaving the note as a joke, but the dark underpinnings of this incident were crystal clear. 
Indeed, jealousy was at the heart of the other Beatles' relationships with both Stuart and Yoko. Stuart was a formidable presence in his own right. Cynthia Lennon recalled: "It was a very beautiful friendship John had with Stu. John, even though he'd gone into the music end of the art world and left his art behind, he still desperately wanted to be a painter, and Stuart was a fantastic and dedicated artist. They totally understood each other and gave to each other what they knew, what they had to offer." Stuart was hardly a musician, but joined the group because John liked having him around. "When he came into the band… we were a little jealous of him; it was something I didn't deal with very well," Paul admitted years later in The Beatles Anthology. "We were always slightly jealous of John's other friendships… when Stuart came in it felt as if he was taking the position away from George and me. We had to take a bit of a back seat." George agreed, saying "..with all the stress we were under, a little bitching went on and Paul and he (Stu) used to punch each other out a bit." "We'd had a few ding-dongs, partly out of jealousy for John's friendship, and Stuart, being his mate from art school, had a lot of his time and we were jealous of that," Paul continued. "Also, I was keen to see the group be as good as it could be, so I would make the odd remark. Oh, you don't play that right." Here was evidence of the strict perfectionism which Paul would later direct towards George and Ringo in the studio. Curiously, John would never lose his taste for inviting musically limited friends to join his band simply because he liked them. This trend had begun with John's boyhood friend Pete Shotton scraping a washboard in the Quarrymen. Of Stuart joining the Beatles, Shotton wrote: "Thus continued the pattern that had begun with me in 1956, and would once again manifest itself with Yoko Ono in the late sixties. 
Since music came so naturally to John, it simply never occurred to him that anyone to whom he felt especially close could not also participate." Philip Norman's 2008 biography Lennon shrewdly probes John's decision to bring Yoko to Beatles recording sessions in 1968: "Whatever John's inner thoughts, he remained a fully paid-up Beatle, subject to the remorseless manufacturing cycle, which, in late May, had summoned them back to Abbey Road Studios… at the back-to-school session on May 30, his initial intention became clear: not to break up the old gang, but to augment it. 'He wanted me to be part of the group,' Yoko says. 'He created the group, so he thought the others should accept that. I didn't particularly want to be part of them… I couldn't see how I would fit in, but John was certain I would. He kept saying, 'They're very sensitive … Paul is into Stockhausen… They can do your thing…' He thought the other Beatles would go for it; he was trying to persuade me.'" Lennon confirmed this remarkable notion himself, in his 1970 Rolling Stone interview: "Yoko played me tapes I understood. I know it was very strange and avant-garde music is very tough to assimilate… but I've heard the Beatles playing avant-garde music when nobody was looking for years. But they're artists, and all artists have fuckin' big egos… and when a new artist came into the group, they were never allowed. Sometimes George and I would like to bring somebody in like Billy Preston, that was exciting, we might have had him in the group. We were fed up with the same old shit… and I would have expanded the Beatles… she came in and she would expect to perform with them like you would with any group…" In his 2006 memoir, recording engineer Geoff Emerick noted a shift in Yoko's role as the White Album sessions dragged on: "I could see that she (Yoko) was gaining confidence. She seemed to feel she was part of the group now. In her mind, and in John's mind, she had become the fifth Beatle." 
Lennon later expressed indignation when scenes of Yoko vocalizing to a Beatles jam were cut from the Let it Be movie. Clearly, he took Yoko's presence as a quasi-band member seriously. Furthermore, John sought to enforce these wishes at a time when he was trying to reassert himself as leader of the Beatles. It was a role John had occupied during the early days, when Stuart had joined the group. By contrast, many Beatles ideas in 1967 had originated with Paul. Privately, Lennon simmered, as he told Rolling Stone: "When Paul felt like it, he would come in with about twenty good songs… and I suddenly had to write a fucking stack of songs. Pepper was like that. And Magical Mystery Tour was another." Perhaps, following the critical panning which greeted the Magical Mystery Tour film, John felt it was time for a change. Or perhaps, being with Yoko simply gave him renewed confidence. John further told Rolling Stone: "Bit by bit over a two-year period, I had destroyed me ego. I didn't believe I could do anything. I just was nothing. I was shit… and she (Yoko) made me realize that I was me and that it's all right. That was it; I started fighting again, being a loudmouth again and saying, "I can do this. Fuck it. This is what I want," you know. "I want it, and don't put me down." With Yoko, John felt he had reawakened his own crucial sense of personal authenticity. Years later, he gave this assessment of the Beatles' split: "…That's how the Beatles ended. Not because Yoko split the Beatles, but because she showed me what it was to be Elvis Beatle and to be surrounded by sycophants and slaves who were only interested in keeping the situation as it was. She said to me, you've got no clothes on. Nobody had dared tell me that before." Nobody, perhaps, except for Stuart Sutcliffe. In the early sixties, John wrote long, honest letters to Sutcliffe, sharing John's inner thoughts, as he would later do with Yoko. 
Tellingly, in 1967, John remembered Stu with these words: "I looked up to Stu. I depended on him to tell me the truth." Feeling he was once more being true to himself, John was furious when Paul got the credit for announcing the Beatles' split to the press in 1970. Lennon would continue to try to set the record straight for the rest of his life. It seems ironic that John's wife has been lambasted for years for supposedly splitting the group up, an act for which John himself publicly sought credit. Those who blame Yoko Ono for breaking up the Beatles may have a hard time facing the truth: that John Lennon broke up the Beatles. As he confidently wrote in the late seventies, "I started the band. I disbanded it. It's as simple as that." John elaborated on his decision to leave in a 1980 interview with Playboy: "What I did… in my own cowardly way was use Yoko… it was like now I have the strength to leave because I know there is another side to life." This other side to life included a host of different artistic projects, many of them employing John's latent art school talents. He collaborated with Yoko on a whirlwind of films, lithographs, and art shows, just as Stu had resumed his dedication to painting once the distraction of the rock band was removed. Yoko, then, became the escape from the Beatles that John had already been looking for. The template for this particular kind of escape had been established years before. We must remember that John was barely 29 years old when he told the other Beatles he was quitting the group in September 1969. For John, the best example of an appealing alternate life had been seen a mere eight years before, in the bohemian path of art and love chosen by his close friend Stu. Pete Shotton remembers John describing his new romance with Yoko: "It's just like how we used to fall in love when we were kids." John certainly remembered "when we were kids." He remembered Stu and Astrid. 
Copyright Daytrippin' – This article may not be reproduced without the permission of the author For more Beatles news, follow us on Twitter Categories: Commentary | Tags: astrid kirchherr, beatles, hamburg, john lennon, Paul McCartney, stuart sutcliffe, yoko ono | Permalink. Beatles Hamburg photos featured in new book, 'Astrid Kirchherr: A Retrospective' The companion book to the photography exhibit "Astrid Kirchherr: A Retrospective" which is currently on display at the Victoria Gallery & Museum in Liverpool was officially released today. What a treat for those of us who can't make it to Liverpool before the exhibit closes on January 29, 2011! With or without the exhibit, "Astrid Kirchherr: A Retrospective" is an important historical document in Beatles history. Compared to her limited edition coffee table books from Genesis costing several hundred dollars, this is one of the few times Astrid Kirchherr has compiled a collection of her legendary photos of The Beatles in an affordable edition. Not only was Astrid a photographer who took the first professional shots of The Beatles back in Hamburg in late 1960, but she also became their friend. Aside from her love affair with The Beatles' former bassist Stuart Sutcliffe, Astrid formed the closest bond with George Harrison. In an interview with Astrid featured in the book, she says: "George was always my favorite, his kindness and wit. He was just a wonderful person and whenever I was in trouble, like with money and things, he was always looking after me and he invited me a couple of times to London and later on to Henley. I just miss him terribly because he was like a little guardian angel for me. I feel like I am in a way lost without him." Astrid, her ex-boyfriend, Klaus Voormann, and friend Jurgen Vollmer had a huge impact on The Beatles during their time in Hamburg. 
The Beatles traded in their matching sports jackets for leather attire due to the fashion influence of their new Hamburg friends, and eventually combed their hair forward in the "moptop" style due to Astrid and Jurgen's influence. The first black and white photos that Astrid took of The Beatles at the funfair at a municipal park in Hamburg are regarded as some of the best photos of The Beatles ever taken. These photos, as well as many never-before-seen photos, are featured in the book, including pictures of Astrid with Paul, George and Ringo on a holiday vacation in Tenerife in 1963. At 208 pages, "Astrid Kirchherr: A Retrospective" offers not only famous photos of The Beatles, but also uncropped and alternate shots. Featuring in-depth interviews with Astrid, Klaus Voormann, Ulf Kruger and Gibson Kemp, we learn much more about this young female photographer who, at the time, had no idea that her friends and photography subjects would become the biggest band in the world. — Trina Yannicos "Astrid Kirchherr: A Retrospective" is available for order on Amazon.com Editor's Note: Read about Astrid Kirchherr's first US appearance in Chicago in 1997 in the first issue of Daytrippin' (available in PDF or hard copy format) Categories: New Beatles merchandise, Reviews | Tags: astrid kirchherr, beatles, beatles photos, book review, hamburg | Permalink. by Daytrippin' The Beatles photography exhibit by Astrid Kirchherr opens in Liverpool, England (Photos) Copyright Astrid Kirchherr German photographer, Astrid Kirchherr, was the first photographer to take professional quality photos of the Beatles. Her famous black and white portraits taken in Hamburg in the early 1960s show The Beatles dressed in leather jackets and pants–quite different from the Edwardian suits they wore when they became famous.
Over 70 images covering Astrid's career from 1960 until she ultimately abandoned photography in 1967 are on display at the Victoria Gallery & Museum in Liverpool in an exhibit which opened today. "Astrid Kirchherr: A Retrospective" contains a wide range of images from the early days when Astrid first met the Beatles in Hamburg to her involvement photographing The Beatles on the set of "A Hard Day's Night" in 1964 for STERN magazine which brought her back to Liverpool. Fans outside the Cavern Astrid first became aware of The Beatles through her friend, artist Klaus Voormann. Voormann discovered the Beatles when they were playing at the Kaiserkeller club in Hamburg, Germany in 1960. He immediately brought Astrid to hear the Beatles play. Astrid, Klaus, and another photographer, Jurgen Vollmer formed a tight-knit friendship with the Beatles during the time they spent in Hamburg. In 1960, Astrid convinced The Beatles to pose for photographs at an old fairground in Hamburg which shows The Beatles dressed like "Teddy boys" sporting leather jackets, leather pants, and slicked-back Elvis-style haircuts. Later on, she did studio-style portraits of them. "They trusted me, and that is the most important thing for a photographer if you take portraits of people," Astrid told Daytrippin' Magazine in an exclusive interview. "If they don't trust you, then you can forget it." In 1964, Astrid, accompanied by another photographer, Max Scheler, was granted special access to photograph The Beatles on the set of "A Hard Day's Night" in London. She also visited Liverpool and took many photos of The Beatles' hometown. These photos appeared in the book "Yesterday: The Beatles Once Upon A time." For the avid Beatle fan, this new exhibit offers some previously unpublished images of the Beatles, some well-known images of the Beatles in their original format and some rare images of the Beatles holidaying in Tenerife. 
It also includes portraits of key individuals from the period, including Rory Storm, Gibson Kemp and Klaus Voorman, according to a museum press release. Astrid Kirchherr self-portrait This exhibition is accompanied by a fully illustrated exhibition catalog called "Astrid Kirchherr: A Retrospective" published by Liverpool University Press. This book, available for purchase on Amazon, also contains a series of in-depth interviews with Astrid, Gibson Kemp, Ulf Krüger and Klaus Voorman by Colin Fallows. "Astrid Kirchherr: A Retrospective" runs through January 29, 2011. Admission is free. The Victoria Gallery & Museum is located at the University of Liverpool, Ashton Street, Liverpool L69 3DR. For more information, visit http://www.liv.ac.uk/vgm/ For more Beatles news, follow Daytrippin' on Facebook and Twitter Categories: Beatles News, Beatles Travel: UK | Tags: astrid kirchherr, beatles, george harrison, hamburg, john lennon, liverpool, photography exhibit | Permalink.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,076
Boult first bowled danger man Fakhar Zaman off his pads, before Babar Azam nicked through to slip and Mohammad Hafeez was trapped in front, with the home side in a funk at 3/8. It's not his job, his job is to do batting and if he concentrate on that it's better. Taylor finished with a top score of 80 in New Zealand's 47-run win while Hafeez finished with figures of 0-23 from six overs of work. Latham, who boasts an exceptional record in Asia, joined Taylor at the crease as the duo set about hitting over half of their team's total. "New Zealand have good new ball bowlers and early wickets pushed us too much". Boult also smashed a maximum off the last ball by Junaid Khan as Pakistan conceded 50 runs off the final five overs in an innings full of ebb and flow. Hafeez was bowling to Taylor during the first innings in Abu Dhabi when the Kiwi star made a "chucking" gesture towards umpires.
{ "redpajama_set_name": "RedPajamaC4" }
96
Q: lmfit saving function in Python

I've got an issue with save_modelresult(result, 'S:\Doc\Python\Results\modelresult.csv'). The save completes, but the organization of this data is very poor. Does anyone know of any tricks/ways to store my results in organized columns? Cheers!

A: Lmfit's model.save_modelresult() function saves the ModelResult as JSON that is intended to be loaded with load_modelresult, which will turn that saved representation into a working ModelResult in another Python session. It's not necessarily meant to be human-readable. Then again, it can be read in with the json library if you want. For organizing that output in a human-readable form, I would suggest looking at the fit_report() method of ModelResult and the lmfit.printfuncs.fit_report() function that it uses. The simplest thing to do is probably just save that fit report to a file, say like this:

    # save fit report to a file:
    with open('fit_result.txt', 'w') as fh:
        fh.write(result.fit_report())
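If what you actually want is organized columns (a .csv you can open in a spreadsheet), another option is to write the fitted parameters out yourself with the standard csv module. This is only a sketch: the params dictionary below is a stand-in for lmfit's result.params; with a real ModelResult you would iterate result.params.items() and read par.value and par.stderr instead.

```python
import csv
import io

# Stand-in for lmfit's result.params: name -> (value, stderr).
# With a real ModelResult, loop over result.params.items() and
# use par.value / par.stderr instead of this dictionary.
params = {
    'amplitude': (5.32, 0.11),
    'center': (1.98, 0.03),
    'sigma': (0.42, 0.02),
}

buf = io.StringIO()  # swap in open('fit_params.csv', 'w', newline='') to write a real file
writer = csv.writer(buf)
writer.writerow(['name', 'value', 'stderr'])   # header row
for name, (value, stderr) in params.items():
    writer.writerow([name, value, stderr])     # one tidy row per parameter

csv_text = buf.getvalue()
print(csv_text)
```

The result is one row per parameter with name, value and standard error in separate columns, which most spreadsheet tools will read directly.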
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,378
\section{Introduction} Cobalt disulphide, CoS$_2$, a metallic compound with the cubic pyrite-type crystal structure~\cite{1}, exhibits a phase transition to a ferromagnetic state at $T_c\sim122$ K (Ref.~\cite{2}) that magnetic and electrical properties indicate is itinerant in nature~\cite{2,3,4,5,6}. Upon entering the ferromagnetic state, CoS$_2$ becomes nearly a half-metal, with a significant decrease in the density of states at the Fermi level that is reflected in an increase in the resistivity below $T_c$~\cite{7,8}. The temperature-pressure phase diagram of CoS$_2$ has been studied to high pressures, where experiments show that the ferromagnetic transition temperature decreases monotonically and trends to zero at $\sim$6 GPa~\cite{9,10,11,12}. A change from a continuous to a first-order phase transition, with a tricritical point close to ambient pressure, was suggested in Refs.~\cite{11,12}, which accounts for the absence of a non-Fermi-liquid temperature dependence of the resistivity near this critical pressure~\cite{12}. The conclusion of a strong first-order quantum phase transition in CoS$_2$ was based on indirect arguments~\cite{12}; consequently, direct thermodynamic studies are needed to shed light on the evolution and properties of the phase transition in CoS$_2$ at high pressures. \section{Experimental} We report an X-ray study of the lattice parameter change at the phase transition in CoS$_2$ along the pressure-dependent transition line. Single crystals of CoS$_2$ were grown by chemical vapor transport~\cite{12}, and some crystals were ground in an agate mortar to prepare a powder sample. X-ray diffraction studies of both crystals and powders of CoS$_2$ were performed at the HPCAT (16BM-D) beam line of the Advanced Photon Source (APS), Argonne National Laboratory.
Single-crystal data were collected using the rotation method ($\omega$-axis rotation rate of $\pm17^\circ/500$ sec, X-ray wave length $\lambda=0.424603 {\AA}$) and unit-cell parameters were calculated from the (610) and (440) reflections. In the powder-diffraction experiments, the X-ray wave length $\lambda=0.354300 {\AA}$ was chosen to get the ten strongest reflections. In both experiments, an image plate detector MAR345, calibrated with fine CeO$_2$ standard powder, was used for data collection. Examples of the diffraction patterns are given in Fig.~\ref{fig1}. The collected data were subjected to the full profile analysis using the GSAS software package~\cite{Larson,Toby}, with a resulting accuracy of the unit-cell parameter determination of $\pm1\times10^{-4} {\AA}$. For the diffraction experiments, high pressures were generated in a diamond anvil cell with $\sim$400 $\mu m$ culet diameters and $\sim60^\circ$ aperture. A 200 $\mu m$ hole was drilled in the pre-indented stainless-steel gasket. A powder or single crystal sample of CoS$_2$ ($\sim50\times50\times10\ \mu m^3$ in size) and ruby chips were placed in the sample chamber that was filled with helium to a pressure of $\sim$200 MPa. A gas-membrane device equipped with a pressure-control system was employed to pressure-load the cell. For these experiments, the cell was attached to a cold-finger type cryostat, which provided temperatures down to 15 K measured by a Cernox thermometer. Pressure was measured by the ruby luminescence technique with accuracy $\pm2\times10^{-5}$ GPa making use of the standard ruby calibration scale and with the appropriate temperature correction, in correspondence with procedures accepted in the HPCAT. \begin{figure}[htb] \includegraphics[width=80mm]{fig1.eps} \caption{\label{fig1} (Color online) Position of (601) reflection from a single crystal of CoS$_2$ as a function of pressure at 23 K. 
Reflection shift between 5.6 and 5.8 GPa corresponds to the first order phase transition.} \end{figure} The X-ray study was supplemented by ac specific heat measurements at high pressures~\cite{14Sidorov}, which were generated in a miniature clamped toroid-type anvil pressure cell~\cite{15Petrova} with a glycerol/water (3:2 by volume) mixture as the pressure medium. Pressure at low temperatures was estimated from the superconducting transition temperature of lead~\cite{16Eiling}. \section{Results} Figure~\ref{fig2} gives typical results for the variation in unit-cell volume around the phase boundary. Experimental conditions, unfortunately, introduce sufficient error in accurately determining the pressure and temperature that it is not possible to distinguish in these data the difference between a continuous phase transition and a jump inherent to a first-order phase transition. We take the change in cell volume at the phase transition to be given by the dashed curves in Fig.~\ref{fig2}. Consequently, it is not possible to draw conclusions from these data about the tricritical behavior in CoS$_2$ (Ref.~\cite{11,12}). On the other hand, these experimental uncertainties are inconsequential in determining the isothermal change in cell volume as a function of pressure in the high pressure regime that is plotted in Fig.~\ref{fig3}. In the next section, results of Figs.~\ref{fig2} and ~\ref{fig3} will be compared to heat capacity data plotted in Fig.~\ref{fig4}. These data demonstrate an evolution of the phase-transition heat with temperature and pressure. The initial rise of the heat capacity peak with increasing pressure probably signifies the crossover from second to first order transitions. The pressure dependence of the phase-transition temperature deduced from the current experimental data is shown in Fig.~\ref{fig5}. 
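For a first-order segment of this line, the slope is directly tied to the discontinuities at the transition. With $\Delta V = V_P - V_F$ and $\Delta S = S_P - S_F$ the volume and entropy differences between the paramagnetic and ferromagnetic phases, the Clapeyron-Clausius relation (a standard thermodynamic identity, written out here for orientation) reads
\begin{equation}
\frac{dT_c}{dP} = \frac{\Delta V}{\Delta S},
\qquad
\Delta S = \Delta V \left( \frac{dT_c}{dP} \right)^{-1},
\end{equation}
so that the measured volume jump, combined with the slope of the transition line, determines the transition entropy (curve 1 in Fig.~\ref{fig8}).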
As seen, the phase line obtained from X-ray diffraction differs significantly from that determined from electrical resistivity, susceptibility, and heat capacity data~\cite{12}. Evidently, the calibration of the low-temperature ruby pressure scale and that of the scale based on the superconducting transition temperature of lead disagree considerably. It is worth noting that the phase-transition line in CoS$_2$ that had also been established in Refs.~\cite{10,12} using the "lead" high-pressure scale differs from the current data as well, probably due to nonhydrostatic experimental conditions. Figures~\ref{fig6} and~\ref{fig7} summarize the pressure-dependent evolution of $\Delta V$ and $\Delta V/V_F$ along the transition line. Here $\Delta V =V_P-V_F$, where $V_P$ and $V_F$ are the unit-cell volumes calculated from lattice-parameter data in the paramagnetic and ferromagnetic phases, respectively. We emphasize the unusual behavior of $\Delta V$: its absolute value $|\Delta V|$ increases with pressure, which probably indicates the involvement of some nontrivial physics. Despite an increase of $|\Delta V|$ by an order of magnitude as the transition temperature falls from 120 to 20 K, the maximum ratio $\Delta V/V_F$ reaches only $\sim$0.1\%, so there is no strong first-order phase transition in CoS$_2$ at high pressure. \begin{figure}[htb] \includegraphics[width=80mm]{fig2.eps} \caption{\label{fig2} (Color online) Examples of the temperature dependence of the unit-cell volume of CoS$_2$ in the vicinity of its ferromagnetic phase transition. Dashed curves are guides to the eye. It is tempting to ascribe the two apparent anomalies at the phase transition shown in the middle panel to a splitting of the phase transition; however, the heat capacity data do not support this supposition.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig3.eps} \caption{\label{fig3} (Color online) Compression isotherm of CoS$_2$. 
As shown by the discontinuity in these data, a first-order phase transition occurs between 5.6 and 5.8 GPa. Error bars correspond to the circle size.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig4.eps} \caption{\label{fig4} (Color online) Heat capacity of CoS$_2$ in the vicinity of its phase transition. Numbers above the peaks correspond to pressure values in GPa. In the inset, the heat capacity peaks at 3.39, 3.88, and 4.24 GPa are shown on an enlarged scale.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig5.eps} \caption{\label{fig5} (Color online) Phase diagram of CoS$_2$ at high pressure. Curve 1 was determined using a "lead" manometer to measure pressure~\cite{12}. Curve 2 was determined by X-ray diffraction, with the pressure measured by a ruby sensor.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig6.eps} \caption{\label{fig6} (Color online) Volume change at the ferromagnetic phase transition in single-crystal and powder samples of CoS$_2$ as a function of pressure. The dashed curve is a guide to the eye.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig7.eps} \caption{\label{fig7} (Color online) Relative volume change at the ferromagnetic phase transition in CoS$_2$ as a function of pressure. The dashed curve is a guide to the eye.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig8.eps} \caption{\label{fig8} (Color online) Entropy of the phase transition in CoS$_2$ as a function of temperature. Symbols denote the entropy measured at the phase transition and dashed curves are guides to the eye. Curve 1: entropy change calculated from the Clausius-Clapeyron equation; curve 2: entropy change calculated from heat capacity data (Fig.~\ref{fig4}); point 3: calorimetry result at ambient pressure~\cite{13Ogawa}; point 4: value calculated from dilatometric measurements at ambient pressure~\cite{12}. 
Pressure measurements in the experiments leading to curves 1 and 2 were based on different pressure scales (Fig.~\ref{fig5}). To compare these results on an equal footing, they are plotted as functions of temperature.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig9.eps} \caption{\label{fig9} Compressibility discontinuity at the phase transition in CoS$_2$ at high pressure, calculated from the compression isotherm (Fig.~\ref{fig3}). The compressibility of the paramagnetic phase exceeds that of the ferromagnetic phase.} \end{figure} \begin{figure}[htb] \includegraphics[width=80mm]{fig10.eps} \caption{\label{fig10} a) Jumps of the heat capacity divided by temperature at the phase transition in CoS$_2$ at T = 1.28 K and T = 60 K. The values of $C_p/T$ correspond to the heat capacity of the sample plus an uncertain amount of the pressure-transmitting medium. The heat capacity of the medium stays constant at the phase transition, making it possible to calculate an absolute value of $\Delta C_p/T$, as shown in the figure. Note that the jump at 60 K is much higher than at 1.28 K. b) Jumps of the heat capacity divided by temperature at the phase transition in CoS$_2$ as a function of the transition temperature $T_c$.} \end{figure} The presented data permit calculation of the entropy of the phase transition in CoS$_2$. Two sets of calculations are presented in Fig.~\ref{fig8}. Set 1 is calculated from the volume change at the phase transition (Fig.~\ref{fig6}) together with the Clausius-Clapeyron equation $dT/dP = \Delta V/\Delta S$, which relates the slope of the transition curve to the volume and entropy differences. Set 2 was obtained by integrating the heat capacity data (Fig.~\ref{fig4}). 
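As a hedged illustration of the Set-1 procedure, the sketch below evaluates the Clausius-Clapeyron relation numerically. The volume change and slope are order-of-magnitude placeholders chosen only for demonstration (they are not fitted values from this work); the factor $Z=4$ reflects the four formula units of the pyrite-type unit cell.

```python
# Clausius-Clapeyron: dT/dP = DeltaV/DeltaS  =>  DeltaS = DeltaV / (dT/dP).
# All numbers below are illustrative placeholders, not measured values.
dV_cell = -0.17e-30      # volume change per unit cell, m^3 (hypothetical)
dT_dP = -20e-9           # slope of the transition line, K/Pa (hypothetical)
N_A = 6.02214076e23      # Avogadro constant, 1/mol
Z = 4                    # formula units per pyrite-type unit cell
dV_molar = dV_cell / Z * N_A   # molar volume change, m^3/mol
dS = dV_molar / dT_dP          # transition entropy, J/(mol K)
```

Since both $\Delta V$ and $dT/dP$ are negative, $\Delta S = S_P - S_F$ comes out positive, consistent with the paramagnetic phase carrying the larger entropy.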
Both curves in Fig.~\ref{fig8} agree qualitatively, reflecting a fast quantum degradation of the transition entropy as the temperature is lowered from values as high as 110 K. The quantitative difference between the two sets is attributed to many factors, including uncertainty in the heat capacity of CoS$_2$ due to imperfect subtraction of the contribution from the pressure-transmitting medium, the accuracy in determining the volume change and the slope of the phase-transition line, etc. \section{Discussion} As seen from the experimental data, the volume change at the ferromagnetic phase transition in CoS$_2$ grows steeply with pressure. This behavior could be connected, at least in part, with the tricritical crossover supposedly observed in CoS$_2$ at its magnetic phase transition close to ambient pressure~\cite{11,12}. In this case, first-order features should grow on compression, which is in fair agreement with the general evolution of $\Delta V$. On the other hand, a negative volume change at a phase transition, such as that seen in CoS$_2$, is typical of itinerant magnets and is a manifestation of a magnetovolume effect or volume magnetostriction~\cite{18}. Neglecting spin fluctuations, the volume change due to band polarization and the associated spontaneous magnetization can be written as~\cite{19,20} $$\Delta V/V_F=(C/K)M^2,$$ where $M$ is the magnetization, $K$ is the bulk modulus, and $C$ is the magnetovolume coupling constant. As seen in Fig.~\ref{fig7}, $\Delta V/V_F$ changes by an order of magnitude along the transition line. To explain this observation, the magnetization $M$ of the ferromagnetic phase of CoS$_2$ at the transition line would have to increase considerably with pressure, which seems implausible and contradicts both the experimental data and calculations~\cite{21,22}. At the same time, the behavior of $\Delta V$ along the phase-transition line necessarily implies that the compressibility of the ferromagnetic phase is always lower than that of the paramagnetic phase. 
This conclusion is clearly supported by the compressibility calculated from the compression isotherm (Fig.~\ref{fig9}). This situation appears to be uncommon. For instance, in the case of the helical magnet MnSi, the reverse relationship is observed between the compressibilities of the coexisting magnetic and paramagnetic phases~\cite{23}. However, the nearly half-metallic nature of the ferromagnetic phase of CoS$_2$ creates a new situation. The point is that most of the electrons of the minority band become localized in the nearly half-metallic state. This influences the repulsive interaction between electrons in CoS$_2$ in such a way as to decrease the compressibility. An analogous situation is realized in a high-pressure study of the model half-metal CrO$_2$~\cite{24}. This conclusion is supported by the pressure-dependent heat capacity at 1.28 K, which shows (Fig.~\ref{fig10}) a positive jump at the transition from the ferromagnetic to the paramagnetic state. At this low temperature, phonon and magnon degrees of freedom are practically frozen out. As a result, electronic contributions dominate $C/T$, and therefore the jump indicates an increase of the electronic density of states in the paramagnetic phase, as expected for a transition from a half-metallic to a true metallic state. Note that at higher temperatures the corresponding jumps increase significantly (Fig.~\ref{fig10}b), which may imply decreasing half-metallicity with pressure, provided the phonon and magnon contributions can be neglected. It should be mentioned that in the Landau theory the compressibility and heat capacity changes at a phase transition have the same sign, which seemingly contradicts the current observation~\cite{25}. In the Landau theory, however, only the anomalous parts of thermodynamic quantities are considered, whereas the regular background contributions, which can change drastically at a phase transition, cannot be treated in a general way. 
In CoS$_2$, the sign of the compressibility change is evidently determined by the background, which eliminates the contradiction with the Landau theory. \section{Conclusion} The volume change and heat capacity were measured at the ferromagnetic phase transition in CoS$_2$ at high pressure, and the transition entropy was calculated from these experimental data. The transition entropy drops along the transition line due to quantum degradation, as required by the Nernst law. The volume change increases substantially along the transition line; this is explained by the compressibility difference between the coexisting phases, which results from the nearly half-metallic nature of the ferromagnetic phase of CoS$_2$. \ack This work was supported by the Russian Foundation for Basic Research (grant 12-02-00376-a), the Program of the Physics Department of RAS on Strongly Correlated Electron Systems, and the Program of the Presidium of RAS on Strongly Compressed Matter. Work at Los Alamos National Laboratory was performed under the auspices of the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. T.A.L. wishes to acknowledge research performed at Ames Laboratory. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University. A portion of this work was performed at HPCAT (Sector 16), Advanced Photon Source (APS), Argonne National Laboratory. HPCAT operations are supported by DOE-NNSA under Award No. DE-NA0001974 and DOE-BES under Award No. DE-FG02-99ER45775, with partial instrumentation funding by NSF. APS is supported by DOE-BES under Contract No. DE-AC02-06CH11357. F.E. and I.Z. greatly appreciate the help of C. Kenney-Benson, D. Ikuta, and D. Popov. \section*{References}
\section{Introduction} Discrete linear convolution sums based on the fast Fourier transform (FFT) algorithm~\cite{Gauss1866,Cooley65} have become important tools for image filtering, digital signal processing, and correlation analysis. They are also widely used in periodic domains to solve nonlinear partial differential equations, such as the Navier--Stokes equations. In some of these applications, such as direct numerical pseudospectral simulations of turbulent fluids, memory usage is a critical limiting factor, and self-sorting in-place multidimensional Fourier transforms~\cite{Temperton91} are typically used to reduce the memory footprint of the required spectral convolutions. It is important to remove aliases from FFT-based convolutions applied to non\-periodic (wavenumber-space) data because they assume cyclic input and produce cyclic output. Typically the input data arrays are extended by padding them with enough zeros so that wave beats of the positive frequencies cannot wrap around and contaminate the negative frequencies. A cyclic convolution $\sum_{p=0}^{N-1} f_p g_{k-p}$ is then performed using a correspondingly larger Fourier transform size~$N$. If the cost of computing a complex Fourier transform of size~$N$ is asymptotic to $K N\log_2 N$ as \hbox{$N\goesto\infty$} (the lowest bound currently achievable is $K=34/9$~\cite{Johnson07,Lundy07}), the asymptotic cost of computing the convolution of two vectors of unpadded length~$m$ is $6Km\log_2 m$ (using three Fourier transforms with $N=2m$). Another important case in practice is the centered Hermitian~1D convolution, dealiased by the {\it 2/3 padding rule} \cite{Orszag71}. Since the computational cost of complex-to-real and real-to-complex Fourier transforms of size~$N=3m$ is asymptotic to $\half K N\log_2 N$, the FFT-based Hermitian convolution $\sum_{p=k-m+1}^{m-1} f_p g_{k-p}$ requires three transforms and hence $\fr{9}{2}Km\log_2 m$ operations. 
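As a minimal sketch of the explicit zero-padding technique described above (using numpy's FFT conventions; the function names here are ours, not from any library):

```python
import numpy as np

def linear_convolution_fft(f, g):
    """Dealiased linear convolution h_k = sum_{p=0}^k f_p g_{k-p} of two
    length-m vectors via zero padding to N = 2m, so that wave beats of
    the positive frequencies cannot wrap around and contaminate the
    negative frequencies."""
    m = len(f)
    N = 2 * m
    # np.fft.fft(x, N) zero-pads x to length N before transforming.
    h = np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(g, N))
    return h[:m]  # the first m modes of the cyclic result are alias-free

def linear_convolution_direct(f, g):
    """Direct evaluation of the defining sum, for comparison."""
    m = len(f)
    return np.array([sum(f[p] * g[k - p] for p in range(k + 1))
                     for k in range(m)])
```

Three size-$2m$ transforms are used, matching the $6Km\log_2 m$ operation count quoted above.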
Alternatively, phase shift dealiasing~\cite{Patterson71,Canuto06} can be used to cancel out the aliasing errors between two convolutions with different phase shifts. However, this second technique is rarely used in practice since, in addition to doubling the memory requirements, it is computationally more expensive, requiring $6K m\log_2 m$ operations for a centered Hermitian convolution. An explicit application of the zero-padding technique involves the rather obvious inefficiency of summing over a large number of data values that are known {\it a~priori\/} to be zero. However, it is worthwhile to consider the response provided by Steven G. Johnson to a frequently asked question about the possibility of avoiding this expense~\cite{fftwprune}: \begin{quotation}\label{Johnson} {\it The most common case where people seem to want a pruned FFT is for zero-padded convolutions, where roughly 50\% of your inputs are zero (to get a linear convolution from an FFT-based cyclic convolution). Here, a pruned FFT is hardly worth thinking about, at least in one dimension. In higher dimensions, matters change (e.g.\ for a 3d zero-padded array about 1/8 of your inputs are non-zero, and one can fairly easily save a factor of two or so simply by skipping 1d sub-transforms that are zero). } \end{quotation} The reasoning behind the assertion that such one-dimensional pruned FFTs are not worth thinking about is that if only $m$ of the $N$ inputs are nonzero, the computational cost is reduced only from $N\log_2 N$ to $N\log_2 m$. For example, if $m=N/2$, the savings is a minuscule $1/\log_2 N$. Moreover, since a zero-padded Fourier transform of size~$N$ yields~$N$ (typically nonzero) output values, no storage savings appear possible in one dimension. 
Nevertheless, in this work we demonstrate that pruning the zero-padded elements of one-dimensional convolutions is still worth thinking about, primarily because this provides useful building blocks for constructing more efficient multidimensional convolutions. The key observation is this: although the memory usage of our implicitly padded 1D convolution is identical to that of a conventional explicitly padded convolution, the additional temporary memory need not be contiguous with the user data. In a multidimensional context, this external work buffer can be reused for other low dimensional convolutions. As a result, in $d$ dimensions an implicitly dealiased convolution asymptotically uses $1/2^{d-1}$ of the storage space needed for an explicitly padded convolution. When the Fourier origin is centered in the domain, memory usage is reduced to $(2/3)^{d-1}$ of the conventional amount. If saving memory alone were the goal, this reduction could also be achieved with explicit zero padding by copying the data for the innermost convolution to an external padded buffer, but such extra data communication would degrade overall performance. The fact that our one-dimensional convolution does not require this extra copying is the main feature that was exploited to obtain simultaneous improvements in memory usage and speed. Nevertheless, the task of writing an efficient implicitly dealiased one-dimensional convolution is onerous, particularly if one tries to compete with a problem-dependent, architecture-adaptive FFT algorithm (like that provided by the award-winning FFTW \cite{Frigo05} library, which empirically predetermines a near optimal butterfly graph at each subdivision). Effectively one wants to perform the outer FFT subdivision manually, dropping the zero terms and deferring the computation of the inner transforms to a standard library routine. But this also restricts the set of available platform-dependent algorithms that can be used at the highest level. 
Fortunately, several notable features of our algorithm help to offset this disadvantage. First, if the goal is to produce a convolution, bit reversal for the hand-optimized outermost subdivision is unnecessary: the scrambled Fourier subtransforms of the two input vectors can be multiplied together as they are produced (perhaps while still accessible in the cache). Second, the implicit method allows most of the subtransforms for an in-place convolution to be optionally computed as out-of-place transforms, which typically execute faster than in-place transforms (cf.~Figs.~1--6 of~\cite{Frigo05}) since they require no extra (pre-, post-, or interlaced) bit-reversal stage. These savings help keep our one-dimensional in-place implicit convolution competitive with an explicitly padded convolution based on the same highly optimized library. In Section~\ref{1d}, we develop an algorithm for Fourier transforming a one-dimensional zero-padded vector without the need for explicit padding. We show how this algorithm can be used to calculate implicitly padded convolutions for both general and Hermitian inputs. We describe how implicit padding may be applied to compute the discrete Fourier transform of an input vector padded beyond an arbitrary fraction~$p/q$ of its length. Building on these one-dimensional algorithms, implicitly padded convolutions are implemented for two- and three-dimensional input in Section~\ref{multid}. Finally, in Section~\ref{hyperconv}, we show for both general and Hermitian data how implicit padding may be used to dealias higher-order convolutions efficiently. \section{Implicitly dealiased 1D convolutions}\label{1d} In this section we describe the optimized 1D building blocks that are used in subsequent sections to construct higher-dimensional implicitly dealiased convolutions. \subsection{Complex convolution} The Fourier origin for standard convolutions is located at the first (zero) index of the array. 
Therefore, input data vectors of length~$m$ must be padded with zeros to length $N\ge 2m-1$ to prevent modes with wavenumber $m-1$ from beating together to contaminate mode~$N\equiv 0\mod N$. However, since FFT sizes with small prime factors in practice yield the most efficient implementations, it is normally desirable to extend the padding to $N=2m$. In terms of the $N$th primitive root of unity, $\zeta_N\doteq \exp\(2 \pi i/N\)$ (here $\doteq$ is used to emphasize a definition), the unnormalized backward discrete Fourier transform of a complex vector $\{U_k: k=0,\ldots,N-1\}$ may be written as $$ u_j\doteq\sum_{k=0}^{N-1}\zeta_N^{jk} U_k,\qquad j=0,\ldots,N-1. $$ The fast Fourier transform method exploits the properties that $\zeta_N^r=\zeta_{N/r}$ and $\zeta_N^N=1$. On taking $N=2m$ with $U_k=0$ for $k \ge m$, one can easily avoid looping over the unphysical zero Fourier modes by decimating in wavenumber: for $\ell=0,1,\ldots, m-1$: \begin{equation} u_{2\ell} =\ds\sum_{k=0}^{m-1}\zeta_{2m}^{2\ell k} U_k =\ds\sum_{k=0}^{m-1}\zeta_m^{\ell k} U_k, \quad u_{2\ell+1} =\ds\sum_{k=0}^{m-1}\zeta_{2m}^{(2\ell+1) k} U_k =\ds\sum_{k=0}^{m-1}\zeta_m^{\ell k} \zeta_{2m}^kU_k. \label{cconv1backward} \end{equation} This requires computing two subtransforms, each of size $m$, for an overall computational scaling of order $2m\log_2 m=N\log_2 m$. 
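The decimation identity above can be checked numerically. The sketch below assumes numpy, whose `ifft` carries a $1/N$ factor relative to the paper's unnormalized backward transform, so each result is rescaled by its transform size:

```python
import numpy as np

# With U_k = 0 for k >= m, the size-2m backward transform
# u_j = sum_k zeta_{2m}^{jk} U_k splits into two size-m transforms:
# one of U_k (even outputs) and one of zeta_{2m}^k U_k (odd outputs).
m = 16
rng = np.random.default_rng(1)
U = rng.standard_normal(m) + 1j * rng.standard_normal(m)

u_full = 2 * m * np.fft.ifft(U, 2 * m)              # explicit zero padding
zeta = np.exp(2j * np.pi * np.arange(m) / (2 * m))  # zeta_{2m}^k
u_even = m * np.fft.ifft(U)                         # u_{2l}
u_odd = m * np.fft.ifft(zeta * U)                   # u_{2l+1}
```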
The odd and even terms of the convolution can then be computed separately (without the need for a bit reversal stage), multiplied term by term, and finally transformed again to Fourier space using the (scaled) forward transform \ben 2mU_k&=&\sum_{j=0}^{2m-1}\zeta_{2m}^{-kj} u_j =\sum_{\ell=0}^{m-1}\zeta_{2m}^{-k2\ell} u_{2\ell} +\sum_{\ell=0}^{m-1}\zeta_{2m}^{-k(2\ell+1)} u_{2\ell+1}\endl &=&\sum_{\ell=0}^{m-1}\zeta_m^{-k\ell} u_{2\ell} +\zeta_{2m}^{-k}\sum_{\ell=0}^{m-1}\zeta_m^{-k\ell} u_{2\ell+1}, \qquad k\hiderel=0,\ldots,m-1.\label{cconv1forward} \een The implicitly padded transforms described by~\Eqs{cconv1backward} and~\hEq{cconv1forward} are implemented as Procedure~{\tt\ref{fftpadBackward}} and~{\tt\ref{fftpadForward}}. These algorithms are combined in Function~{\tt\ref{cconv}} to compute a dealiased convolution of unpadded length~$m$ using two arrays of size~$m$ as input vectors instead of one array of size $2m$. This seemingly trivial distinction is the key to the improved efficiency and reduced storage requirements of the higher-dimensional implicit convolutions described in Section~\ref{multid}. Moreover, in Function~{\tt\ref{cconv}} we see that implicit padding allows each of the six complex Fourier transforms of size $m$ to be done out of place. In the listed pseudocode, an asterisk ($*$) denotes an element-by-element (vector) multiply. 
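A compact numpy transcription of this scheme, sketching the logic of Function cconv rather than the optimized in-place, out-of-place-subtransform implementation, is:

```python
import numpy as np

def implicit_cconv(f, g):
    """Implicitly dealiased convolution of two length-m complex vectors:
    backward-transform onto the even and odd sample grids separately,
    multiply pointwise, then recombine with the forward transform."""
    m = len(f)
    zeta = np.exp(2j * np.pi * np.arange(m) / (2 * m))  # zeta_{2m}^k
    # Backward transforms (m * ifft matches the unnormalized convention):
    u = (m * np.fft.ifft(f)) * (m * np.fft.ifft(g))                # j = 2l
    v = (m * np.fft.ifft(zeta * f)) * (m * np.fft.ifft(zeta * g))  # j = 2l+1
    # Forward recombination: 2m U_k = FFT_m(u)_k + zeta_{2m}^{-k} FFT_m(v)_k
    return (np.fft.fft(u) + np.conj(zeta) * np.fft.fft(v)) / (2 * m)

def direct_cconv(f, g):
    """Direct evaluation of the defining sum, for comparison."""
    m = len(f)
    return np.array([sum(f[p] * g[k - p] for p in range(k + 1))
                     for k in range(m)])
```

Only size-$m$ transforms appear, and the two work arrays u and v need not be contiguous with the input data, which is the key property exploited in the multidimensional case.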
In principle, the stable trigonometric recursion described by Buneman~\cite{Buneman87}, which requires two small precomputed tables, each of size~$\log_2 N$, could be used to compute the required roots of unity $\zeta_N^k$ that appear in \Eqs{cconv1backward} and~\hEq{cconv1forward}.\footnote{We note that, in terms of the smallest positive number $\e$ satisfying $1+\e > 1$ in a given machine representation, the singularity in Buneman's scheme can be removed by replacing $\sec{\pi/2}$ with $5/\e$, $\sin 4\pi$ with $-\e/2$, and $\sin{2\pi}$ and $\sin{\pi}$ each with $-\e/10$.} While Buneman's recursion has the same average accuracy as an FFT itself~\cite{Tasche02}, on modern hardware a factorization method that does not rely on successive table updates turns out to be more efficient~\cite{Johnson09}, at the expense of somewhat higher memory usage. We instead calculate the $\zeta_N^k$ factors with a single complex multiply, using two short precomputed tables $H_a=\zeta_N^{as}$ and $L_b=\zeta_N^b$, where $k=as+b$ with $s=\floor{\sqrt m}$, \hbox{$a=0,1,\ldots,\ceil{m/s}-1$}, and $b=0,1,\ldots,s-1$. Since these one-dimensional tables require only $\O(\sqrt{m})$ complex words of storage and our focus is on higher-dimensional convolutions anyway, we do not account for them in our storage estimates. Referring to the computation times shown in Fig.~\ref{timing1c}, we see that the implicit padding algorithm described by \Eqs{cconv1backward} and~\hEq{cconv1forward} can thus be implemented to be competitive with explicitly padded convolutions. The error bars indicate the lower and upper {\it one-sided standard deviations} $$ \sigma_L=\sqrt{\fr{1}{\fr{n}{2}-1}}\sum_{i=1\atop{t_i < T}}^n (t_i-T)^2, \qquad \sigma_H=\sqrt{\fr{1}{\fr{n}{2}-1}}\sum_{i=1\atop{t_i > T}}^n (t_i-T)^2, $$ where $T$ denotes the mean execution time of $n$ samples. 
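The two-table factorization of the roots of unity described above can be sketched as follows (the sizes are arbitrary examples):

```python
import numpy as np

# Compute zeta_N^k for k = 0..m-1 by writing k = a*s + b with
# s = floor(sqrt(m)): precompute H_a = zeta_N^{as} and L_b = zeta_N^b,
# then recover each factor with a single complex multiply, using only
# O(sqrt(m)) table storage instead of O(m).
N, m = 2048, 1024
s = int(np.floor(np.sqrt(m)))
H = np.exp(2j * np.pi * s * np.arange((m + s - 1) // s) / N)  # zeta_N^{as}
L = np.exp(2j * np.pi * np.arange(s) / N)                     # zeta_N^b
k = np.arange(m)
zeta = H[k // s] * L[k % s]
```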
Both the FFTW-3.2.2 library and the convolution layer we built on top of it were compiled with the Intel C/C++ 11.0 20081105 compiler, using the optimization options {\tt -ansi-alias -malign-double -fp-model fast=2} on a 64-bit 3 GHz Intel E5450 Xeon processor with 6 MB cache. Like the FFTW library, our algorithm was vectorized for this architecture with specialized single-instruction multiple-data (SIMD) code. To compare the normalized error for the two methods, we considered the input vectors $f_k=F e^{ik}$ and $g_k=G e^{ik}$ for $k=0,\ldots,m-1$, with $F=\sqrt3+ i\sqrt 7$ and $G=\sqrt5+ i\sqrt {11}$. The Fourier transforms of these vectors have nonzero components for all transform sizes. In Fig.~\ref{error1c} we compare the normalized~$L^2$ error $\sqrt{\sum_{k=0}^{m-1}\Abs{h_k-H_k}^2}/\sqrt{\sum_{k=0}^{m-1} \Abs{H_k}^2}$ for each of the computed solutions $h_k$ relative to the exact solution $H_k=\sum_{p=0}^k f_p g_{k-p}=FG (k+1) e^{ik}$. \SetProcFnt{\textnormal} \setlength{\algomargin}{0.6em} \SetAlCapSkip{3pt} \def\tt{\tt} \def{\tt fft}{{\tt fft}} \def{\tt crfft}{{\tt crfft}} \def{\tt rcfft}{{\tt rcfft}} \def{\tt cconv}{{\tt cconv}} \def{\tt conv}{{\tt conv}} \def{\tt tconv}{{\tt tconv}} \def{\tt build}{{\tt build}} \def{\tt fftpadBackward}{{\tt fftpadBackward}} \def{\tt fftpadForward}{{\tt fftpadForward}} \def{\tt fft0padBackward}{{\tt fft0padBackward}} \def{\tt fft0padForward}{{\tt fft0padForward}} \def{\tt ffttpadBackward}{{\tt ffttpadBackward}} \def{\tt ffttpadForward}{{\tt ffttpadForward}} \def{\tt fft0tpadBackward}{{\tt fft0tpadBackward}} \def{\tt fft0tpadForward}{{\tt fft0tpadForward}} \SetKwData{xf}{f} \SetKwData{xu}{u} \SetKwData{xFk}{F} \SetKwData{xA}{A} \SetKwData{xB}{B} \SetKwData{xg}{g} \SetKwData{xh}{h} \SetKwData{xv}{v} \SetKwData{xw}{w} \SetKwData{xA}{A} \SetKwData{xB}{B} \SetKwData{xC}{C} \SetKwData{xD}{D} \SetKwData{xF}{F} \SetKwData{xG}{G} \SetKwData{xS}{S} \SetKwData{xT}{T} \SetKwData{xU}{U} \SetKwData{xV}{V} \SetKwData{xW}{W} 
\begin{figure}[htbp] \begin{minipage}{0.5\linewidth} \begin{procedure}[H] \KwIn{vector \xf} \KwOut{vector \xf, vector \xu} \For{$k=0$ \KwTo $m-1$}{ $\xu[k] \leftarrow \z_{2m}^k\xf[k]$\; } $\xf \leftarrow {\tt fft}\inv(\xf)$\; $\xu \leftarrow {\tt fft}\inv(\xu)$\; \caption{fftpadBackward({\sf f},{\sf u}) stores the scrambled $2m$-padded backward Fourier transform values of a vector {\sf f} of length $m$ in {\sf f} and an auxiliary vector~{\sf u} of length $m$.}\label{fftpadBackward} \end{procedure} \def{\tt fftpadBackward}{{\tt fftpadBackward}} \begin{procedure}[H] \KwIn{vector \xf, vector \xu} \KwOut{vector \xf} $\xf \leftarrow {\tt fft}(\xf)$\; $\xu \leftarrow {\tt fft}(\xu)$\; \For{$k=0$ \KwTo $m-1$}{ $\xf[k] \leftarrow \xf[k] + \z_{2m}^{-k}\xu[k]$\; } \Return f/(2m)\; \caption{fftpadForward({\sf f},{\sf u}) returns the inverse of {\tt fftpadBackward}({\sf f},{\sf u}).}\label{fftpadForward} \end{procedure} \end{minipage} \begin{minipage}{0.49\linewidth} \begin{function}[H] \KwIn{vector \xf, vector \xg} \KwOut{vector \xf} $\xu \leftarrow {\tt fft}\inv(\xf)$\; $\xv \leftarrow {\tt fft}\inv(\xg)$\; $\xu \leftarrow \xu * \xv$\; \For{$k=0$ \KwTo $m-1$}{ $\xf[k] \leftarrow \z_{2m}^k\xf[k]$\; $\xg[k] \leftarrow \z_{2m}^k\xg[k]$\; } \medskip $\xv \leftarrow {\tt fft}\inv(\xf)$\; $\xf \leftarrow {\tt fft}\inv(\xg)$\; $\xv \leftarrow \xv * \xf$\; \medskip $\xf \leftarrow {\tt fft}(\xu)$\; $\xu \leftarrow {\tt fft}(\xv)$\; \medskip \For{$k=0$ \KwTo $m-1$}{ $\xf[k] \leftarrow \xf[k] + \z_{2m}^{-k}\xu[k]$\; } \Return f/(2m)\; \caption{cconv({\sf f},{\sf g},{\sf u},{\sf v}) computes an in-place implicitly dealiased convolution of two complex vectors {\sf f} and {\sf g} using two temporary vectors {\sf u} and {\sf v}, each of length~$m$.}\label{cconv} \end{function} \end{minipage} \end{figure} \begin{figure}[htbp] \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing1c} \caption{Comparison of computation times for explicitly and implicitly dealiased complex 
in-place 1D convolutions of length $m$. The storage requirements of the two algorithms are identical.} \phantomsection{}\label{timing1c} \end{center} \end{minipage} \, \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{error1c} \caption{Normalized $L^2$ error for explicitly and implicitly dealiased complex in-place 1D convolutions of length $m$.} \phantomsection{}\label{error1c} \end{center} \end{minipage} \end{figure} \subsection{Implicitly dealiased centered Fourier transform}\label{fft0} A basic building block for constructing multidimensional centered convolutions is an implicitly dealiased centered Fourier transform, where the input data length is odd, say $2m-1$, with the Fourier origin at index $m-1$. Here, one needs to pad to $N\ge 3m-2$ to prevent modes with wavenumber $m-1$ from beating together to contaminate the mode with wavenumber $-m+1$. The ratio of the number of physical to total modes, $(2m-1)/(3m-2)$, is asymptotic to $2/3$ for large $m$ \cite{Orszag71}. For explicit padding, one usually chooses the padded vector length $N$ to be a power of $2$, with $m=\floor{(N+2)/3}$. However, for implicit padding, it is advantageous to choose $m$ itself to be a power of $2$ since the algorithm reduces to computing FFTs of length $m$. Moreover, it is convenient to pad implicitly slightly beyond $3m-2$, to $N=3m$, as this allows the use of a radix $3$ subdivision at the outermost level, so that only two of the three subtransforms of length $m$ need to be retained. Suppose then that $U_k=0$ for $k\ge m$. 
On decomposing $j=(3\ell+r)\mod N$, where $r\in\{-1,0,1\}$, the substitution $k'=m+k$ allows us to write the backward transform as \begin{equation} u_{3\ell +r}\hiderel=\sum_{k=-m+1}^{m-1}\z_m^{\ell k} \z_{3m}^{rk} U_k =\sum_{k'=1}^{m-1}\z_m^{\ell k'} \z_{3m}^{r(k'-m)} U_{k'-m} +\sum_{k=0}^{m-1}\z_m^{\ell k} \z_{3m}^{rk} U_k =\sum_{k=0}^{m-1}\z_m^{\ell k} w_{k,r},\label{fft0backwardA} \end{equation} where \begin{dmath} w_{k,r}\doteq \cases{ U_0&if $k=0$,\cr \z_{3m}^{rk}(U_k+\z_3^{-r}U_{k-m})&if $1\le k\le m-1$.\cr }\label{fft0backwardB} \end{dmath} The forward transform is then \begin{dmath} {3m}U_k=\sum_{r=-1}^{1}\zeta_{3m}^{-rk}\sum_{\ell=0}^{m-1}\zeta_m^{-\ell k} u_{3\ell+r}, \qquad k\hiderel =-m+1,\ldots,m-1.\label{fft0forward} \end{dmath} The use of the remainder $r=-1$ instead of $r=2$ allows us to exploit the optimization $\zeta_{3m}^{-k}=\conj{\zeta_{3m}^ k}$ in \Eqs{fft0backwardB} and~\hEq{fft0forward}. The number of complex multiplies needed to evaluate \Eq{fft0backwardB} for $r=\pm 1$ can be reduced by computing the intermediate complex quantities \begin{dgroup*} \begin{dmath*} A_k\doteq \zeta_{3m}^k\(\mathop{\rm Re}\nolimits U_k+\zeta_3^{-1} \mathop{\rm Re}\nolimits U_{k-m}\), \end{dmath*} \begin{dmath*} B_k\doteq i\zeta_{3m}^k\(\mathop{\rm Im}\nolimits U_k+\zeta_3^{-1} \mathop{\rm Im}\nolimits U_{k-m}\), \end{dmath*} \end{dgroup*} where $\zeta_3^{-1}=(-\half,-\fr{\sqrt{3}}{2})$, so that for $k > 0$, $w_{k,1}=A_k+B_k$ and $w_{k,-1}=\conj{A_k-B_k}$. The resulting transforms, Procedures {\tt\ref{fft0padBackward}} and {\tt\ref{fft0padForward}}, each have an operation count asymptotic to $3Km\log_2 m$. We were able to implement strided multivector versions of these algorithms since they operate fully in place on their arguments, with no additional storage requirements. 
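The centered radix-3 decimation can be verified numerically. The numpy sketch below uses our own array layout, with the Fourier origin of the centered data at index $m-1$, and compares each residue class against an explicitly zero-padded size-$3m$ transform:

```python
import numpy as np

# For centered data U_k, |k| < m, padded to N = 3m, the backward-transform
# samples u_{3l+r} (r = -1, 0, 1) are size-m backward FFTs of w_{k,r}.
m = 8
N = 3 * m
rng = np.random.default_rng(3)
U = rng.standard_normal(2 * m - 1) + 1j * rng.standard_normal(2 * m - 1)
Uk = lambda k: U[k + m - 1] if -m < k < m else 0.0  # origin at index m-1

# Explicit reference: zero-padded size-N backward transform.
S = np.zeros(N, dtype=complex)
for k in range(-m + 1, m):
    S[k % N] = Uk(k)
u_full = N * np.fft.ifft(S)

zeta3 = np.exp(2j * np.pi / 3)                  # zeta_3
zeta3m = np.exp(2j * np.pi * np.arange(m) / N)  # zeta_{3m}^k
ok = all(
    np.allclose(
        m * np.fft.ifft(np.array(
            [Uk(0)] + [zeta3m[k]**r * (Uk(k) + zeta3**(-r) * Uk(k - m))
                       for k in range(1, m)])),
        u_full[(3 * np.arange(m) + r) % N])
    for r in (-1, 0, 1))
```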
\begin{procedure}[htbp] \KwIn{vector \xf} \KwOut{vector \xf, vector \xu} $\xu[0] \leftarrow \xf[m-1]$\; \For{$k=1$ \KwTo $m-1$}{ $\xA \leftarrow \zeta_{3m}^k\[\mathop{\rm Re}\nolimits \xf[m-1+k]+\(-\half,-\fr{\sqrt{3}}{2}\)\mathop{\rm Re}\nolimits \xf[k]\]$\; $\xB \leftarrow i\zeta_{3m}^k\[\mathop{\rm Im}\nolimits \xf[m-1+k]+\(-\half,-\fr{\sqrt{3}}{2}\)\mathop{\rm Im}\nolimits \xf[k]\]$\; $\xf[m-1+k] \leftarrow \xA+\xB$\; $\xu[k] \leftarrow \conj{\xA-\xB}$\; $\xf[0] \leftarrow \xf[k]$\; $\xf[k] \leftarrow \xf[k]+\xf[m-1+k]$\; } $\xf[0,\ldots,m-1] \leftarrow {\tt fft}\inv(\xf[0,\ldots,m-1])$\; $\xu[m] \leftarrow \xf[m-1]$\; $\xf[m-1] \leftarrow \xu[0]$\; $\xf[m-1,\ldots,2m-2] \leftarrow {\tt fft}\inv(\xf[m-1,\ldots,2m-2])$\; $\xu[0,\ldots,m-1] \leftarrow {\tt fft}\inv(\xu[0,\ldots,m-1])$\; \caption{fft0padBackward({\sf f},{\sf u}) stores the scrambled $3m$-padded centered backward Fourier transform values of a vector {\sf f} of length $2m-1$ in {\sf f} and an auxiliary vector~{\sf u} of length $m+1$.}\label{fft0padBackward} \end{procedure} \begin{procedure}[htbp] \KwIn{vector \xf, vector \xu} \KwOut{vector \xf} $\xf[m-1,\ldots,2m-2] \leftarrow {\tt fft}(\xf[m-1,\ldots,2m-2])$\; $\xu[m] \leftrightarrow \xf[m-1]$\; $\xf[0,\ldots,m-1] \leftarrow {\tt fft}(\xf[0,\ldots,m-1])$\; $\xu[0,\ldots,m-1] \leftarrow {\tt fft}(\xu[0,\ldots,m-1])$\; $\xu[m] \leftarrow \xf[0]+\xu[m]+\xu[0]$\; \For{$k=1$ \KwTo $m-1$}{ $\xf[k-1]=\xf[k]+\(-\half,\fr{\sqrt{3}}{2}\)\zeta_{3m}^{-k}\xf[m-1+k]+\(-\half,-\fr{\sqrt{3}}{2}\)\zeta_{3m}^k\xu[k]$\; $\xf[m-1+k]=\xf[k]+\zeta_{3m}^{-k}\xf[m-1+k]+\zeta_{3m}^k\xu[k]$\; } $\xf[m-1] \leftarrow \xu[m]$\; \Return \xf/(3m)\; \caption{fft0padForward({\sf f},{\sf u}) returns the inverse of {\tt fft0padBackward}({\sf f},{\sf u}).} \label{fft0padForward} \end{procedure} \subsection{Centered Hermitian convolution} In this frequently encountered case (relevant to the pseudospectral method), each input vector is the Fourier transform of real-valued data; that 
is, it satisfies the {\it Hermitian symmetry} $U_{-k}=\conj{U_k}$. While the physical data represented is of length $2m-1$, centered about the Fourier origin, the redundant modes (corresponding to negative wavenumbers) are not included in the input vectors. The input vectors are thus of length $m$, with the Fourier origin at index $0$. Just as in Section~\ref{fft0}, the unsymmetrized physical data needs to be padded with at least $m-1$ zeros. Hermitian symmetry then requires us to pad the $m$ non-negative wavenumbers with at least $c\doteq\floor{m/2}$ zeros. The resulting $2/3$ padding ratio (for even $m$) turns out to work particularly well for developing implicitly dealiased centered Hermitian convolutions. As in the centered case, we again choose the Fourier size to be $N=3m$. Given that $U_k=0$ for $k\ge m$, the backward (complex-to-real) transform appears as \Eq{fft0backwardA}, but now with \begin{dmath} w_{k,r}\doteq \cases{ U_0&if $k=0$,\cr \z_{3m}^{rk}\(U_k+\z_3^{-r}\conj{U_{m-k}}\)&if $1\le k\le m-1$.\cr } \end{dmath} We note that $w_{k,r}$ obeys the Hermitian symmetry $w_{k,r}=\conj{w_{m-k,r}}$, so that the Fourier transform $\sum_{k=0}^{m-1}\z_m^{\ell k} w_{k,r}$ in \Eq{fft0backwardA} will indeed be real valued. This allows us to build a backward implicitly dealiased centered Hermitian transform using three complex-to-real Fourier transforms of the first $c+1$ components of $w_{k,r}$ (one for each $r\in\{-1,0,1\}$). The forward transform is given by \begin{dmath} {3m}U_k =\sum_{r=-1}^{1}\zeta_{3m}^{-rk}\sum_{\ell=0}^{m-1}\zeta_m^{-\ell k} u_{3\ell+r}, \qquad k\hiderel =0,\ldots,m-1.\label{fft0forwardB} \end{dmath} Since $u_{3\ell+r}$ is real, a real-to-complex transform can be used to compute the first $c+1$ frequencies of $\sum_{\ell=0}^{m-1}\zeta_m^{-\ell k} u_{3\ell+r}$; the remaining $m-c-1$ frequencies needed in \Eq{fft0forwardB} are then computed using Hermitian symmetry. 
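The claimed symmetry is easy to check directly; the following NumPy sketch (our own illustration, with hypothetical variable names) confirms that $w_{k,r}=\conj{w_{m-k,r}}$ and hence that the length-$m$ backward transform of $w_{k,r}$ is real, for each $r\in\{-1,0,1\}$:

```python
import numpy as np

m = 8
rng = np.random.default_rng(6)
# spectral data U_k for k = 0..m-1; Hermitian symmetry of the physical
# data only requires the Fourier origin U_0 to be real
U = rng.standard_normal(m) + 1j * rng.standard_normal(m)
U[0] = U[0].real

zeta3m = np.exp(2j * np.pi / (3 * m))
zeta3 = np.exp(2j * np.pi / 3)
k = np.arange(1, m)
for r in (-1, 0, 1):
    w = np.empty(m, dtype=complex)
    w[0] = U[0]
    w[1:] = zeta3m**(r * k) * (U[k] + zeta3**(-r) * np.conj(U[m - k]))
    # w_k = conj(w_{m-k}), so the backward transform of w is real valued
    assert np.allclose(w[1:], np.conj(w[1:][::-1]))
    assert np.allclose(np.fft.ifft(w).imag, 0)
print("w is Hermitian for r = -1, 0, 1")
```

In an implementation, this symmetry is what permits the use of complex-to-real transforms on the first $c+1$ components of $w_{k,r}$.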
Since there are two input vectors and one output vector, the complete convolution requires a total of nine Hermitian Fourier transforms of size $m$, for an overall computational scaling of $\fr{9}{2}K m \log_2 m$ operations, in agreement with the leading-order scaling of an explicitly padded centered Hermitian convolution. For simplicity, we document here only the practically important case $m=2c$; minor changes are required to implement the case $m=2c+1$. We see in Function~{\tt\ref{conv}} that seven out of the nine Fourier transforms can be performed out of place using the same amount of memory, $2(\floor{N/2}+1)=6c+2$ words, as would be used to compute a centered Hermitian convolution with explicit padding. To facilitate an in-place implementation of the backward transform, we store the conjugate of the transformed values for $r=1$ in reverse order in the upper half of the input vector, using the identity (for real $u_j$) $$ u_j=\conj{u_j}=\sum_{k=-c+1}^{c}\zeta_m^{-jk} \conj{U_k} =\sum_{k'=m-1}^{0}\zeta_m^{j(k'-c)} \conj{U_{c-k'}} =(-1)^j\sum_{k=0}^{m-1}\zeta_m^{jk} \conj{U_{c-k}} $$ obtained with the substitution $k'=c-k$. One can omit the factors of $(-1)^j$ here since they will cancel during the real term-by-term multiplication of the two transformed input vectors. As seen in Fig.~\ref{timing1r}, the efficiency of the resulting implicitly dealiased centered Hermitian convolution is comparable to an explicit implementation. For each algorithm, we benchmark only those vector lengths that yield optimal performance. To check the accuracy of our implementation, we used the test case $f_k=F e^{ik}$ and $g_k=G e^{ik}$ for $k=0,\ldots,m-1$, with $F=\sqrt 3$ and $G=\sqrt 5$, noting that Hermitian symmetry requires that $F$ and $G$ be real. The exact solution is $H_k=FG\sum_{p=k-m+1}^{m-1} e^{ip}e^{i(k-p)}=FG(2m-1-k)e^{ik}$. The normalized $L^2$ errors for implicit and explicit padding are compared in Fig.~\ref{error1r}. 
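The exact solution above can be reproduced with a direct $O(m^2)$ sum; the following sketch (ours, illustrative only) evaluates the centered Hermitian convolution by brute force and compares it with $H_k=FG(2m-1-k)e^{ik}$:

```python
import numpy as np

m = 6
F, G = np.sqrt(3), np.sqrt(5)
k = np.arange(m)
f = F * np.exp(1j * k)          # non-negative modes only
g = G * np.exp(1j * k)

def centered_hermitian_conv(f, g):
    # direct evaluation of H_k = sum_{p=k-m+1}^{m-1} f_p g_{k-p},
    # with negative modes supplied by Hermitian symmetry
    m = len(f)
    idx = lambda p: p + m - 1                    # wavenumber -> array index
    ff = np.concatenate([np.conj(f[:0:-1]), f])  # full range -m+1..m-1
    gg = np.concatenate([np.conj(g[:0:-1]), g])
    H = np.zeros(m, dtype=complex)
    for k in range(m):
        for p in range(k - m + 1, m):
            H[k] += ff[idx(p)] * gg[idx(k - p)]
    return H

H = centered_hermitian_conv(f, g)
exact = F * G * (2 * m - 1 - k) * np.exp(1j * k)
print(np.allclose(H, exact))
```

The sum over $p$ contains $2m-1-k$ terms, each equal to $FGe^{ik}$, which is the origin of the exact solution quoted above.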
\begin{function}[htbp] \KwIn{vector \xf, vector \xg} \KwOut{vector \xf} $\xF \leftarrow \xf[c]$\; {\tt build}(\xf,\xu)\; $\xC \leftarrow \xf[c]$\; $\xf[c] \leftarrow 2\mathop{\rm Re}\nolimits \xF$\; $\xu[c] \leftarrow \mathop{\rm Re}\nolimits \xF+\sqrt{3}\mathop{\rm Im}\nolimits \xF$\; \medskip $\xG \leftarrow \xg[c]$\; {\tt build}(\xg,\xv)\; $\xD \leftarrow \xg[c]$\; $\xg[c] \leftarrow 2\mathop{\rm Re}\nolimits \xG$\; $\xv[c] \leftarrow \mathop{\rm Re}\nolimits \xG+\sqrt{3}\mathop{\rm Im}\nolimits \xG$\; \medskip $\xu \leftarrow {\tt crfft}(\xu)$\; $\xv \leftarrow {\tt crfft}(\xv)$\; $\xv \leftarrow \xv * \xu$\; $\xu \leftarrow {\tt rcfft}(\xv)$\; \medskip $\xv \leftarrow {\tt crfft}(\xf[0,\ldots,c])$\; $\xf[0,\ldots,c] \leftarrow {\tt crfft}(\xg[0,\ldots,c])$\; $\xv \leftarrow \xv * \xf[0,\ldots,c]$\; $\xf[0,\ldots,c] \leftarrow {\tt rcfft}(\xv)$\; \medskip $\xS\leftarrow \xf[c-1]$\; $\xT\leftarrow \xf[c]$\; $\xf[c-1]=\mathop{\rm Re}\nolimits \xF-\sqrt{3}\mathop{\rm Im}\nolimits \xF$\; $\xf[c]=\xC$\; $\xg[c-1]=\mathop{\rm Re}\nolimits \xG-\sqrt{3}\mathop{\rm Im}\nolimits \xG$\; $\xg[c]=\xD$\; \medskip $\xv \leftarrow {\tt crfft}(\xg[c-1,\ldots,2c-1])$\; $\xg[c-1,\ldots,2c-1] \leftarrow {\tt crfft}(\xf[c-1,\ldots,2c-1])$\; $\xg[c-1,\ldots,2c-1] \leftarrow \xg[c-1,\ldots,2c-1] * \xv$\; $\xv\leftarrow {\tt rcfft}(\xg[c-1,\ldots,2c-1])$\; \medskip \For{$k=1$ \KwTo $c-2$}{ $\xf[k]=\xf[k]+\zeta_{6c}^{-k}\xv[k]+\zeta_{6c}^k\xu[k]$\; $\xf[2c-k]=\conj{\xf[k]}+\(-\half,-\fr{\sqrt{3}}{2}\)\zeta_{6c}^k\conj{\xv[k]}+\(-\half,\fr{\sqrt{3}}{2}\)\zeta_{6c}^{-k}\conj{\xu[k]}$\; } $\xf[c-1]=\xS+\zeta_{6c}^{1-c}\xv[c-1]+\zeta_{6c}^{c-1}\xu[c-1]$\; $\xf[c]=\xT-\(-\half,\fr{\sqrt{3}}{2}\)\xv[c]-\(-\half,-\fr{\sqrt{3}}{2}\)\xu[c]$\; \If{$c > 1$}{ $\xf[c+1]=\conj{\xS}+\(-\half,-\fr{\sqrt{3}}{2}\)\zeta_{6c}^{c-1}\conj{\xv[c-1]}+\(-\half,\fr{\sqrt{3}}{2}\)\zeta_{6c}^{1-c}\conj{\xu[c-1]}$\; } \Return \xf/(6c)\; \caption{conv({\sf f},{\sf g},{\sf u},{\sf v}) uses Procedure~{\tt build}\ to compute
an in-place implicitly dealiased convolution of centered Hermitian vectors {\sf f} and {\sf g} of length~$2c$ using temporary vectors {\sf u} and {\sf v} of length $c+1$.}\label{conv} \end{function} \begin{figure}[htbp] \begin{minipage}{0.51\linewidth} \begin{procedure}[H] \KwIn{vector \xf} \KwOut{vector \xf, vector \xu} $\xu[0] \leftarrow \xf[0]$\; $\xFk \leftarrow \conj{\xf[2c-1]}$\; $\xf[2c-1] \leftarrow \xf[0]$\; \For{$k=1$ \KwTo $c-1$}{ $\xA \leftarrow \zeta_{6c}^k \[\mathop{\rm Re}\nolimits \xf[k]+\(-\half,\fr{\sqrt{3}}{2}\)\mathop{\rm Re}\nolimits \xFk\]$\; $\xB \leftarrow -i\zeta_{6c}^k \[\mathop{\rm Im}\nolimits \xf[k]+\(-\half,\fr{\sqrt{3}}{2}\)\mathop{\rm Im}\nolimits \xFk\]$\; $\xf[k] \leftarrow \xf[k]+\xFk$\; $\xu[k] \leftarrow \xA-\xB$\; $\xFk \leftarrow \conj{\xf[2c-1-k]}$\; $\xf[2c-1-k] \leftarrow \xA+\xB$\; } \caption{build({\sf f},{\sf u}) builds the FFT arrays required for Function~{\tt\ref{conv}} from an unpadded vector {\sf f} of length $2c$ into {\sf f} and an auxiliary vector~{\sf u} of length $c+1$.}\label{build} \end{procedure} \end{minipage} \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing1r} \caption{Comparison of computation times for explicitly and implicitly dealiased centered Hermitian in-place 1D convolutions of length $2m-1$.} \phantomsection{}\label{timing1r} \end{center} \end{minipage} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics{error1r} \caption{Normalized $L^2$ error for explicitly and implicitly dealiased centered Hermitian in-place 1D convolutions of length $m$.} \phantomsection{}\label{error1r} \end{center} \end{figure} \subsection{General padding}\label{pq} Implicit padding corresponding to an arbitrary~$p/q$ rule is also possible. Suppose that $pm$ data modes are zero padded to $N=qm$, where $p$ and $q$ are relatively prime. One decomposes $j=q\ell+r$, where $\ell=0,\dots,m-1$ and $r=0,\dots,q-1$.
Similarly, one expresses $k=t m+s$, where $t=0,\dots,p-1$ and $s=0,\dots, m-1$: $$ u_{q\ell+r}=\sum_{k=0}^{pm-1}\z_{qm}^{(q\ell+r)k} U_k =\sum_{s=0}^{m-1}\sum_{t=0}^{p-1} \z_m^{\ell (t m+s)}\z_{qm}^{r(t m+s)} U_{t m+s} =\sum_{s=0}^{m-1}\z_m^{\ell s}\sum_{t=0}^{p-1}\z_{qm}^{r(t m+s)} U_{t m+s}. $$ Since there are $q$ choices of $r$, the problem reduces to computing $q$ Fourier transforms of length~$m$, which requires $K q m\log_2 m=K N\log_2 (N/q)$ operations. Likewise, the forward implicit transform $$ {qm}U_k =\sum_{\ell=0}^{m-1}\sum_{r=0}^{q-1} \zeta_{qm}^{-(q\ell+r)k} u_{q\ell+r} =\sum_{r=0}^{q-1}\zeta_{qm}^{-rk}\sum_{\ell=0}^{m-1}\zeta_m^{-k\ell} u_{q\ell+r}, \qquad k\hiderel =0,\ldots,pm-1 $$ also requires $q$ Fourier transforms of length $m$. Again, the computational savings for a one-dimensional transform is marginal. \section{Higher-dimensional convolutions}\label{multid} The algorithms developed in Section~\ref{1d} can be used as building blocks to construct efficient implicitly padded higher-dimensional convolutions. \subsection{Complex 2D convolution} The implicitly padded 2D convolution shown in Function {\tt\ref{cconv2}} is designed for data stored with a stride of one in the $y$ direction. Efficient multivector versions of Procedures {\tt\ref{fft0padBackward}} and {\tt\ref{fft0padForward}} are used for the transform in the $x$ direction; this allows a single $\zeta_{3m}^k$ factor to be applied to a consecutive column of data at constant $x$. In principle, one could also develop a multivector version of Function~{\tt\ref{cconv}} to perform simultaneous convolutions in the $y$ direction, but our timing tests indicate that this would only slightly enhance the overall performance (since the data for constant $y$ is not stored consecutively in memory) and would prevent the 1D convolution work arrays from being reused for the~$y$ convolution at each fixed $x$. 
The memory savings in our method come precisely from this reuse of temporary storage, which in turn requires that the~$y$ convolutions be computed serially. As shown in Fig.~\ref{timing2c}, the resulting implicit 2D algorithm dramatically outperforms the explicit version: a $1024^2$ implicit complex convolution is $1.91$ times faster. \begin{figure}[htbp] \begin{center} \includegraphics{timing2c} \caption{Comparison of computation times for explicitly and implicitly dealiased complex in-place 2D convolutions of size $m^2$.} \phantomsection{}\label{timing2c} \end{center} \end{figure} The third sentence of the quote from Steven G. Johnson on page~\ref{Johnson} suggests a sensible optimization for explicitly padded 2D convolutions: one can omit the backward and forward Fourier transforms in the $x$ direction for $m \le y < 2m$. However, potential data locality optimizations may be lost when a 2D convolution is expressed directly in terms of 1D transforms: as observed in Fig.~\ref{timing2c}, while a $1024^2$ $y$-pruned explicit convolution is $1.26$ times faster than a conventional explicit implementation, the pruned method becomes $1.80$ times {\it slower} for the $8192^2$ case. Our implicitly dealiased convolution is also subject to these same optimization losses, but the savings due to implicit padding, out-of-place subtransforms, the neglect of high-level bit reversal, and the immediate convolution of constant $x$ columns (while still possibly in cache) outweigh these losses. Because the same temporary arrays $u$ and $v$ are used for each column of the convolution, the memory requirement is $4m_xm_y+2m_y$ complex words, far less than the $8m_xm_y$ complex words needed for an explicitly padded convolution. \subsection{Centered Hermitian 2D convolution} In two dimensions, the Fourier-centered Hermitian symmetry appears as $U_{-k,-\ell}=\conj{U_{k,\ell}}$.
This symmetry is exploited in the centered Hermitian convolution algorithm described in Function~{\tt\ref{conv2}}. As shown in Fig.~\ref{timing2r}, implicit padding again yields a dramatic improvement in speed. When $m_y$ is even, the memory usage for an implicitly dealiased $(2m_x-1)\times (2m_y-1)$ centered Hermitian convolution is $2(2m_x-1)m_y+2(m_x+1)m_y+2(m_y/2+1)=6m_xm_y+m_y+2$ complex words, compared with a minimum of $2(3m_x-2)(3m_y/2)=9m_xm_y-6m_y$ complex words required for an explicitly dealiased convolution. \begin{figure}[htbp] \begin{minipage}{0.5\linewidth} \begin{function}[H] \KwIn{matrix \xf, matrix \xg} \KwOut{matrix \xf} \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fftpadBackward}(\xf^T[j],\xU^T[j])$\; ${\tt fftpadBackward}(\xg^T[j],\xV^T[j])$\; } \For{$i=0$ \KwTo $m_x-1$}{ ${\tt cconv}(\xf[i],\xg[i],\xu,\xv)$\; ${\tt cconv}(\xU[i],\xV[i],\xu,\xv)$\; } \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fftpadForward}(\xf^T[j],\xU^T[j])$\; } \Return \xf\; \caption{cconv2({\sf f},{\sf g},{\sf u},{\sf v},{\sf U},{\sf V}) returns an in-place implicitly dealiased convolution of \hbox{$m_x\times m_y$} matrices {\sf f} and {\sf g} using temporary \hbox{$m_x\times m_y$} matrices {\sf U} and {\sf V} and temporary vectors {\sf u} and {\sf v} of length $m_y$.}\label{cconv2} \end{function} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{function}[H] \KwIn{matrix \xf, matrix \xg} \KwOut{matrix \xf} \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fft0padBackward}(\xf^T[j],\xU^T[j])$\; ${\tt fft0padBackward}(\xg^T[j],\xV^T[j])$\; } \For{$i=0$ \KwTo $2m_x-2$}{ ${\tt conv}(\xf[i],\xg[i],\xu,\xv)$\; } \For{$i=0$ \KwTo $m_x$}{ ${\tt conv}(\xU[i],\xV[i],\xu,\xv)$\; } \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fft0padForward}(\xf^T[j],\xU^T[j])$\; } \Return \xf\; \caption{conv2({\sf f},{\sf g},{\sf u},{\sf v},{\sf U},{\sf V}) returns an in-place implicitly dealiased centered Hermitian convolution of $(2m_x-1)\times m_y$ matrices {\sf f} and {\sf g} using temporary $(m_x+1)\times m_y$ matrices {\sf U} and~{\sf V} 
and vectors {\sf u} and {\sf v} of length~$m_y$. }\label{conv2} \end{function} \end{minipage} \end{figure} \begin{figure}[htbp] \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing2r} \caption{Comparison of computation times for explicitly and implicitly dealiased centered Hermitian in-place 2D convolutions of size \hbox{$(2m-1)\times m$}.} \phantomsection{}\label{timing2r} \end{center} \end{minipage} \, \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing3c} \caption{Comparison of computation times for explicitly and implicitly dealiased complex in-place 3D convolutions of size $m^3$.} \phantomsection{}\label{timing3c} \end{center} \end{minipage} \end{figure} \subsection{2D pseudospectral application} In our implementation, we allow the convolution inputs to be arrays of vectors, $f_i$ and~$g_i$ ($i=1,\ldots,M$), interpreting in Functions~{\tt\ref{cconv}},~{\tt\ref{conv}}, and~{\tt\ref{tconv}}, the product $f * g$ as the element-by-element dot product $\sum_i f_i * g_i$. Convolving $M$ input data blocks simultaneously like this enables, for example, the nonlinear term of the 2D incompressible Euler equation to be computed using five Fourier transforms (instead of three for each of the $M=2$ input pairs). Specifically, the advective term $-\vu\dot\grad\w=-(\zhat\cross\grad \del^{-2}\w)\dot \grad \w$, which appears in Fourier space as $$ \sum_{\vp} \frac{p_xk_y - p_y k_x}{|\vk-\vp|^2}\w_{\vp}\w_{\vk-\vp}, $$ can be computed efficiently with the call ${\tt conv2}(i k_x\w,i k_y\w,i k_y \w/k^2,-ik_x\w/k^2,{\sf u},{\sf v})$, where {\sf u} and {\sf v} are work arrays. \subsection{Complex and centered Hermitian 3D convolutions} The decoupling of the 2D work arrays in Function~{\tt\ref{cconv2}} facilitates the construction of an efficient 3D implicit complex convolution, as described in Function~{\tt\ref{cconv3}}. 
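Returning to the $M$-input convolution of the previous subsection, the transform-count saving is easy to illustrate with explicit padding (a NumPy sketch of the idea, not the implicit routines themselves): accumulating the $M$ products in physical space requires only $2M+1$ transforms instead of $3M$:

```python
import numpy as np

m, M = 8, 2
N = 2 * m                          # zero-padded size for a standard linear convolution
rng = np.random.default_rng(3)
f = rng.standard_normal((M, m)) + 1j * rng.standard_normal((M, m))
g = rng.standard_normal((M, m)) + 1j * rng.standard_normal((M, m))

pad = lambda x: np.concatenate([x, np.zeros(N - m)])

# 2M backward transforms, one accumulated product, one forward transform
acc = np.zeros(N, dtype=complex)
for i in range(M):
    acc += np.fft.ifft(pad(f[i])) * np.fft.ifft(pad(g[i]))
H = np.fft.fft(acc)[:m] * N        # first m modes of sum_i f_i * g_i

# reference: sum of M separate direct convolutions
direct = sum(np.convolve(f[i], g[i])[:m] for i in range(M))
print(np.allclose(H, direct))
```

For $M=2$ this yields five transforms instead of six, which is the saving exploited in the Euler-equation example above.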
As shown in Fig.~\ref{timing3c}, an implicit $256^3$ convolution is $2.38$ times faster than the explicit version, while an $xz$-pruned version is only $1.24$ times faster. The memory usage of an implicitly padded 3D $m_x\times m_y\times m_z$ convolution is $4m_xm_ym_z+2m_y m_z+2m_z$ complex words, far less than the $16m_xm_ym_z$ complex words required by an explicit convolution based on power-of-two transforms. A $(2m_x-1)\times (2m_y-1)\times (2m_z-1)$ implicit centered Hermitian 3D convolution was also implemented in an analogous manner. It required $$ 6m_x(2m_y-1)m_z+2(m_y+1)m_z+2(m_z/2+1)=12m_xm_ym_z-6m_xm_z+2m_ym_z+3m_z+2 $$ complex words, in comparison with the usual requirement of $27m_xm_ym_z$ complex words for explicit padding with power-of-two transforms. \begin{figure}[htbp] \begin{minipage}{0.55\linewidth} \begin{function}[H] \KwIn{matrix \xf, matrix \xg} \KwOut{matrix \xf} \For{$j=0$ \KwTo $m_y-1$}{ \For{$k=0$ \KwTo $m_z-1$}{ ${\tt fftpadBackward}(\xf^T[k][j],\xU^T[k][j])$\; ${\tt fftpadBackward}(\xg^T[k][j],\xV^T[k][j])$\; } } \For{$i=0$ \KwTo $m_x-1$}{ ${\tt cconv2}(\xf[i],\xg[i],\xu_1,\xv_1,\xu_2,\xv_2)$\; ${\tt cconv2}(\xU[i],\xV[i],\xu_1,\xv_1,\xu_2,\xv_2)$\; } \For{$j=0$ \KwTo $m_y-1$}{ \For{$k=0$ \KwTo $m_z-1$}{ ${\tt fftpadForward}(\xf^T[k][j],\xU^T[k][j])$\; } } \Return \xf\; \caption{cconv3({\sf f},{\sf g}) returns an in-place implicitly dealiased convolution of \hbox{$m_x\times m_y\times m_z$} matrices {\sf f} and {\sf g} using temporary \hbox{$m_x\times m_y\times m_z$} matrices ${\sf U}$ and~${\sf V}$, $m_y\times m_z$ matrices ${\sf u}_2$ and ${\sf v}_2$, and vectors ${\sf u}_1$ and ${\sf v}_1$ of length~$m_z$.}\label{cconv3} \end{function} \end{minipage} \begin{minipage}{0.46\linewidth} \begin{procedure}[H] \KwIn{vector \xf} \KwOut{vector \xf, vector \xu} $\xu[0]=\xf[0]=0$\; \For{$k=1$ \KwTo $2m-1$}{ $\xu[k] \leftarrow -i\z_{4m}^k\xf[k]$\; } $\xf \leftarrow {\tt fft}\inv(\xf)$\; $\xu \leftarrow {\tt fft}\inv(\xu)$\; \Return \xf\;
\caption{fft0tpadBackward({\sf f},{\sf u}) stores the scrambled signed~$4m$-padded centered backward Fourier transform values of a vector {\sf f} of length~$2m$ in {\sf f} and an auxiliary vector~{\sf u} of length $2m$.}\label{fft0tpadBackward} \end{procedure} \begin{procedure}[H] \KwIn{vector \xf, vector \xu} \KwOut{vector \xf} $\xf \leftarrow {\tt fft}(\xf)$\; $\xu \leftarrow {\tt fft}(\xu)$\; \For{$k=1$ \KwTo $2m-1$}{ $\xf[k] \leftarrow \xf[k] +i \z_{4m}^{-k}\xu[k]$\; } \Return \xf/(4m)\; \caption{fft0tpadForward({\sf f},{\sf u}) returns the inverse of Procedure {\tt fft0tpadBackward}({\sf f},{\sf u}).} \label{fft0tpadForward} \end{procedure} \end{minipage} \end{figure} \section{Implicitly dealiased ternary convolutions}\label{hyperconv} In this section, we show that implicit padding is well suited to dealiasing the centered ternary convolution $$ \sum_{p=-m+1}^{m-1}\sum_{q=-m+1}^{m-1}\sum_{r=-m+1}^{m-1} f_p g_q h_r\d_{p+q+r,k}, $$ which, for example, is required to compute the time evolution of the Casimir invariant $\int \w^4\,d\vx$ associated with the nonlinearity of two-dimensional incompressible flow expressed in terms of the scalar vorticity~$\w$. The basic building blocks for this problem are again the centered Fourier transform and Hermitian convolution. \subsection{Implicit double-dealiased centered Fourier transform} \label{fft0bi} Here the input data length is $2m-1$, with the Fourier origin at index $m-1$, so one needs to pad to $N\ge 4m-3$ to prevent contamination due to wave beating. Implicit padding is most efficiently implemented by padding the input vector with a single zero complex word at the beginning, to yield a vector of length $2m$, with the Fourier origin at index $m$. We choose $m$ to be a power of $2$ and $N=4m$, with $U_k=0$ for $k=-m$ and $k\ge m$. 
On decomposing $j=2\ell+r$, where $\ell=0,\ldots, 2m-1$ and $r\in\{0,1\}$, we find on substituting $k'=k+m$ that \begin{equation} u_{2\ell +r}\hiderel=\sum_{k=-m}^{m-1}\z_{2m}^{\ell k} \z_{4m}^{rk} U_k =\sum_{k'=0}^{2m-1}\z_{2m}^{\ell (k'-m)} \z_{4m}^{r(k'-m)} U_{k'-m} =(-1)^\ell i^{-r}\sum_{k=0}^{2m-1}\z_{2m}^{\ell k} \z_{4m}^{rk} U_{k-m}. \label{fft0tbackward} \end{equation} The forward transform is then given for $k=-m+1,\ldots,m-1$ by \begin{dmath} {4m}U_k=\sum_{r=0}^{1}\zeta_{4m}^{-rk}\sum_{\ell=0}^{2m-1}\zeta_{2m}^{-\ell k} u_{2\ell+r} =\sum_{r=0}^{1}\zeta_{4m}^{-r(k'-m)}\sum_{\ell=0}^{2m-1}\zeta_{2m}^{-\ell (k'-m)} u_{2\ell+r} =\sum_{r=0}^{1}\zeta_{4m}^{-rk'}i^r\sum_{\ell=0}^{2m-1}\zeta_{2m}^{-\ell k'}(-1)^\ell u_{2\ell+r}, \qquad k'\hiderel =1,\ldots,2m-1.\label{fft0tforward} \end{dmath} For a ternary convolution, the product of the three factors $(-1)^\ell$ (one for each input vector) arising from \Eq{fft0tbackward} and the factor $(-1)^\ell$ in \Eq{fft0tforward} cancels. Procedures {\tt\ref{fft0tpadBackward}} and {\tt\ref{fft0tpadForward}} each have an operation count asymptotic to $4Km\log_2 m$. As they operate fully in place on their arguments, with no additional storage requirements, it is straightforward to implement strided multivector versions of these algorithms. \subsection{Implicitly dealiased centered Hermitian 1D ternary convolution} Let us now consider a centered Hermitian ternary convolution with $N=4m$, where $m$ is a power of $2$. For explicit padding, one needs to pad the $m$ non-negative wavenumbers with $m+1$ zeros, for a total vector length of $2m+1$. On decomposing $j=2\ell+r$, where $\ell=0,\ldots, 2m-1$ and $r\in\{0,1\}$, the backward transform is given by \begin{dmath*}[compact] u_{2\ell +r}=\sum_{k=-m}^{m-1} \z_{2m}^{\ell k} \z_{4m}^{rk} U_k. \end{dmath*} If we set $U_m=0$, the real components $u_{2\ell +r}$ can thus be computed by taking a complex-to-real transform of $\{\z_{4m}^{rk} U_k : k=0,\ldots, m\}$.
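As a numerical check of \Eq{fft0tbackward} (a NumPy sketch of our own, not the in-place procedures), the two size-$2m$ backward transforms combined with the $(-1)^\ell i^{-r}$ phases reproduce the explicitly $4m$-padded centered transform:

```python
import numpy as np

m = 8
N = 4 * m
rng = np.random.default_rng(4)
# shifted array S[k] holds U_{k-m}, k = 0..2m-1, with U_{-m} = 0
S = rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)
S[0] = 0

# explicit: embed into length N = 4m and take one big transform
V = np.zeros(N, dtype=complex)
for k in range(2 * m):
    V[(k - m) % N] = S[k]
u_explicit = np.fft.ifft(V) * N

# implicit: two backward transforms of length 2m
zeta4m = np.exp(2j * np.pi / N)
ell = np.arange(2 * m)
u_implicit = np.zeros(N, dtype=complex)
for r in (0, 1):
    w = zeta4m**(r * np.arange(2 * m)) * S
    u_implicit[2 * ell + r] = (-1.0)**ell * (1j)**(-r) * np.fft.ifft(w) * 2 * m
print(np.allclose(u_explicit, u_implicit))
```

In the ternary convolution itself the $(-1)^\ell$ phases need never be applied, as noted above.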
The forward transform is \begin{dmath*} {4m}U_k=\sum_{r=0}^{1}\zeta_{4m}^{-rk} \sum_{\ell=0}^{2m-1} \zeta_{2m}^{-\ell k} u_{2\ell+r}, \qquad k\hiderel =-m+1,\ldots,m-1. \end{dmath*} The resulting implicitly padded centered Hermitian ternary convolution, Function {\tt\ref{tconv}}, has an operation count of $8Km\log_2 m$. Five of the eight required Fourier transforms can be done out of place. In Fig.~\ref{timing1t} we show that this algorithm is competitive with explicit padding. Function {\tt\ref{tconv}} requires $6(m+1)$ complex words of storage, slightly more than the $3(2m+1)=6m+3$ complex words needed for explicit padding. Just as for convolutions, the performance and memory benefits of dealiasing higher-order convolutions {\it via\/} implicit padding manifest themselves only in higher dimensions. For example, in Fig.~\ref{timing2t}, we observe for $m_x=m_y=4096$ that the implicit $(2m_x-1)\times m_y$ centered Hermitian ternary convolution computed with Function~{\tt\ref{tconv2}} is $2.28$ times faster than an explicit version. The memory usage for a $(2m_x-1)\times m_y$ implicit centered Hermitian ternary convolution is $6\cdot 2m_x(m_y+1)+3(m_y+1)=12m_xm_y+12m_x+3m_y+3$ complex words, compared with $3\cdot 4m_x(2m_y+1)=24m_xm_y+12m_x$ complex words (using power-of-two transforms) for an explicit version. In contrast, the corresponding $y$-pruned convolution requires the same amount of storage as, but is $1.42$ times faster than, an explicitly padded version. 
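The adequacy of the $N=4m\ge 4m-3$ padding for a centered ternary convolution can also be verified directly (an illustrative NumPy sketch using explicit padding; variable names are ours):

```python
import numpy as np

m = 5
N = 4 * m                      # N >= 4m - 3 prevents aliasing of the ternary sum
rng = np.random.default_rng(5)
kk = np.arange(-(m - 1), m)    # centered wavenumbers -m+1..m-1
f, g, h = (rng.standard_normal(2 * m - 1) + 1j * rng.standard_normal(2 * m - 1)
           for _ in range(3))

def embed(x):
    # place centered data into a length-N circular buffer
    V = np.zeros(N, dtype=complex)
    V[kk % N] = x
    return V

# padded FFT evaluation: multiply the three physical-space arrays
phys = np.fft.ifft(embed(f)) * np.fft.ifft(embed(g)) * np.fft.ifft(embed(h))
T_fft = (np.fft.fft(phys) * N * N)[kk % N]

# direct evaluation of sum_{p+q+r=k} f_p g_q h_r for |k| <= m-1
T_direct = np.zeros(2 * m - 1, dtype=complex)
for i, p in enumerate(kk):
    for j, q in enumerate(kk):
        for l, r in enumerate(kk):
            s = p + q + r
            if abs(s) <= m - 1:
                T_direct[s + m - 1] += f[i] * g[j] * h[l]
print(np.allclose(T_fft, T_direct))
```

Since $|p+q+r|\le 3m-3$ and the retained modes satisfy $|k|\le m-1$, no beat mode can alias onto a retained mode modulo $N=4m$.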
\begin{figure}[htbp] \begin{minipage}{0.48\linewidth} \begin{function}[H] \KwIn{vector \xf, vector \xg, vector \xh} \KwOut{vector \xf} \For{$k=0$ \KwTo $m-1$}{ $\xu[k] \leftarrow \z_{4m}^k \xf[k]$\; $\xv[k] \leftarrow \z_{4m}^k \xg[k]$\; $\xw[k] \leftarrow \z_{4m}^k \xh[k]$\; } \medskip $\xu[m]=\xv[m]=\xw[m]=0$\; $\xu \leftarrow {\tt crfft}\inv(\xu)$\; $\xv \leftarrow {\tt crfft}\inv(\xv)$\; $\xw \leftarrow {\tt crfft}\inv(\xw)$\; $\xv \leftarrow \xu * \xv * \xw$\; $\xu \leftarrow {\tt rcfft}(\xv)$\; \medskip $\xf[m]=\xg[m]=\xh[m]=0$\; $\xv \leftarrow {\tt crfft}\inv(\xf)$\; $\xw \leftarrow {\tt crfft}\inv(\xg)$\; $\xg \leftarrow {\tt crfft}\inv(\xh)$\; $\xv \leftarrow \xv * \xw * \xg$\; $\xf \leftarrow {\tt rcfft}(\xv)$\; \medskip \For{$k=0$ \KwTo $m-1$}{ $\xf[k] \leftarrow \xf[k] + \z_{4m}^{-k}\xu[k]$\; } \Return f/(4m)\; \caption{tconv({\sf f},{\sf g},{\sf h},{\sf u},{\sf v},{\sf w}) computes an in-place implicitly dealiased centered Hermitian ternary convolution of three centered Hermitian vectors {\sf f}, {\sf g}, and {\sf h}, using three temporary vectors {\sf u}, {\sf v}, and {\sf w}, each of length~$m+1$.}\label{tconv} \end{function} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{function}[H] \KwIn{matrix \xf, matrix \xg, matrix \xh} \KwOut{matrix \xf} \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fft0tpadBackward}(\xf^T[j],\xU^T[j])$\; ${\tt fft0tpadBackward}(\xg^T[j],\xV^T[j])$\; ${\tt fft0tpadBackward}(\xh^T[j],\xW^T[j])$\; } \For{$i=0$ \KwTo $2m_x-1$}{ ${\tt tconv}(\xf[i],\xg[i],\xh[i],\xu,\xv,\xw)$\; ${\tt tconv}(\xU[i],\xV[i],\xW[i],\xu,\xv,\xw)$\; } \For{$j=0$ \KwTo $m_y-1$}{ ${\tt fft0tpadForward}(\xf^T[j],\xU^T[j])$\; } \Return \xf\; \caption{tconv2({\sf f},{\sf g},{\sf h}) returns an in-place implicitly dealiased centered Hermitian ternary convolution of \hbox{$2m_x\times (m_y+1)$} matrices~{\sf f},~{\sf g}, and~{\sf h} using temporary \hbox{$2m_x\times (m_y+1)$} matrices~{\sf U}, {\sf V}, and~{\sf W} and vectors {\sf u}, {\sf v} and~{\sf w} of 
length~$m_y+1$. }\label{tconv2} \end{function} \end{minipage} \end{figure} \begin{figure}[htbp] \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing1t} \caption{Comparison of computation times for explicitly and implicitly dealiased centered Hermitian in-place 1D ternary convolutions of length $m$.} \phantomsection{}\label{timing1t} \end{center} \end{minipage} \, \begin{minipage}{0.49\linewidth} \begin{center} \includegraphics{timing2t} \caption{Comparison of computation times for explicitly and implicitly dealiased centered Hermitian in-place 2D ternary convolutions of size $(2m-1)\times m$.} \phantomsection{}\label{timing2t} \end{center} \end{minipage} \end{figure} \section{Concluding remarks} Explicitly padded Fourier transforms are frequently used to dealias convolutions in one or more dimensions. In this work we have developed an efficient method for avoiding explicit zero padding in multidimensional convolutions, thereby saving both memory and computation time. The key idea that was exploited was the decoupling of temporary storage and user data, which in higher dimensions allows the reuse of storage space. The resulting increased data locality significantly enhanced performance by as much as a factor of $2$. The savings in memory use, obtained by computing the Fourier transformed data in blocks rather than all at once, was equally significant: asymptotically, as $m_x\goesto\infty$, an implicit complex convolution requires one-half of the memory needed for a zero-padded convolution in two dimensions and one-quarter in three dimensions. In the centered Hermitian case, the memory use in two dimensions is $2/3$ of the amount used for an explicit convolution and $4/9$ of the corresponding storage requirement in three dimensions. Even in one dimension, where implicit padding can be implemented competitively with conventional methods, the method has notable advantages. 
For the intended application to partial differential equations, there is flexibility in the choice of the exact convolution size. This is why we consider for each algorithm only those vector lengths that maximize performance. On the other hand, for those applications where the size of the convolution is dictated by external criteria, implicit padding effectively expands the available set of efficient convolution sizes to include integral powers of~$2$, a case of practical significance. Canuto \etal~\cite[p.136]{Canuto06} point out that if only a power-of-two transform were available for a centered convolution, zero padding a vector of length $m=2^k$ would require a transform size of $2m$, yielding an even slightly higher operation count, $6Km\log_2(2m)$, than the $6K m\log_2m$ operations required for phase-shift dealiasing. The availability of implicitly dealiased convolutions now makes this argument moot. Another advantage of implicit padding is the ability of the algorithm to work directly on raw unpadded user data without the inconvenience or extra storage requirements of a separate padding buffer. Having a prewritten, well-tested dealiased convolution that takes care of dealiasing internally is a major convenience for the average user. For 2D and 3D Hermitian convolutions, a prepackaged routine should also automatically enforce Hermitian symmetry of the data along the $x$ axis or the~$xy$ plane, respectively. With the highly optimized implementations of the algorithms developed in this work made available in the open source package {\tt FFTW++}~\cite{fftwpp}, writing a pseudospectral code for solving nonlinear partial differential equations should now be a relatively straightforward exercise.
Voice of VOIPSA Collective thoughts and musings on the state of VoIP security today. Sipera Systems Relaunches Their Online Presence While I wouldn't normally write about simply an updated website for a company, this particular company is Sipera Systems, one of the small number of companies focused pretty much entirely on VoIP security… er… "Unified Communications Security". (And hey, "UC Security" sounds a whole lot better to say!) Given that part of my regular work is working with web sites, I commend them on their new nice, clean look. They've revamped their blog as well. Good to see, and I wish them continued success in this space. If you found this post interesting or helpful, please consider either subscribing via RSS or following VOIPSA on Twitter. This entry was posted in VoIP Security Companies on November 10, 2010 by Dan York. New Threats, Old Friends On a lightning visit to the Infosec show in London, I chanced to meet with Ari Takanen of Codenomicon (fuzzing and quality assurance experts). Ari has a new book out: "Fuzzing for Software Security Testing and Quality Assurance", from Artech House, available at Amazon.com and (as they say) all good bookstores. Of course, just because there's a credit crunch doesn't mean that security is any less of a problem, and it doesn't mean that software defects are any less common. It sounds like Codenomicon have a pretty good market niche. Facetime were talking about their new Unified Security Gateway. This appliance goes beyond URL blocking and reporting, and implements reporting for VoIP and Skype, and the whole range of IM and P2P applications. In addition, they have some pretty granular tools for finding out what the usage of social sites like Facebook (FB) and Myspace might be, and the resulting bandwidth usage.
You can even drill down into the subsections being used (apps, music etc), which will be useful as FB is increasingly used for legitimate messaging and networking purposes in business. Facetime's "special guest" on the stand was an original Enigma encryption device, brought down from Bletchley Park (a.k.a. "Station X"), the UK's premier code-breaking museum. This is a refurbished and fully working Enigma, and on the Facetime stand they were even allowing us to have a go. I can report that it is satisfyingly mechanical to use. AEP were also there showing some high-grade encryption equipment for enabling remote sites with access to secure systems. Law enforcement and government customers have a legal duty to protect the data that they handle, and even remote users (or temporary sites) must protect data from snooping. Data at rest is a particular risk, and UK government agencies have embarrassingly lost large numbers of laptops and pen drives in recent years. It's safer to leave the data in the secure site (rather than on a USB stick) and access it over secure links when needed. The AEP solution fits into a laptop bag, and enables a team of people to share secure data and VoIP links to a central site, routed over any convenient satellite, 3G or WAN links. The Infosec show is still on today and tomorrow at Earls Court exhibition centre in London. This entry was posted in Conferences, Skype, VoIP Security, VoIP Security Companies and tagged encryption, VoIP, VoIPsecurity tools on April 29, 2009 by Martyn Davies. Want to learn about voice biometrics? VoiceVerified to be interviewed tomorrow (July 10, 2008) Are you interested in using voice for authentication, also known as voice biometrics? Would you like to know how far voice biometrics has come since that 1992 film "Sneakers" with "My voice is my password"?
If you are free tomorrow, July 10, 2008, at 11am US Eastern time you can join in a conference call/podcast where I'll be interviewing David Standig with VoiceVerified.com about voice biometrics in general and VoiceVerified's specific offering. If you can't join us at 11am, the interview will be available as a "Squawk Box" podcast later in the day. The deal is that Alec Saunders, the regular host/producer of the daily Squawk Box podcast is away on vacation and I've been guest-hosting this week in his absence. The daily shows have been about a range of topics (today was a great one about P2PSIP) and tomorrow's show actually gets into VoIP security in terms of voice verification/biometrics. If you would like to join into the show, there are two ways you can do so: If you are a Facebook user, go to: http://apps.facebook.com/calliflower/conf/show/34614 You'll be prompted to install the "Calliflower" Facebook app. If you don't use Facebook – or don't want to install the app, you can go to Calliflower.com at: http://apps.calliflower.com/conf/show/34614 You'll need to register for a free account. In either case, you'll get access to the telephone number you need to call and, during the call, will also have access to the live chat session that is used. If you aren't able to attend (or don't want to use the app), you can listen to the show after I post it on Alec's Saunderslog.com sometime later tomorrow, probably in the evening. Also, if you are interested in being on Alec's Squawk Box show, my guest hosting is done tomorrow but drop me a note and I'll be glad to suggest your name to Alec after he returns. I frequently participate and they've been enjoyable shows to be a part of. P.S. In the interest of full transparency and disclosure, I should note that VoiceVerified is actually a business partner of my employer, Voxeo, as I outlined in a blog post. 
That fact, however, did not influence my decision to bring them on the show – I was just looking for interesting companies to interview and they were one that caught my eye. voiceverified, voice, voip, voip security, security, authentication, squawk box This entry was posted in Podcasts, VoIP Security, VoIP Security Companies on July 10, 2008 by Dan York. Sipera looking to hire a few good VoIP security researchers… Want a job in VoIP security? Jason Ostrom, who recently joined Sipera Systems as director of their VIPER Lab, passed along word to us that they are looking to hire two new positions related to VoIP security: VIPER Security Consultant VIPER Vulnerability Research Engineer Job descriptions and information about applying can be found over on Sipera's "Careers in VoIP Security" page. (i.e. please do not leave comments here about these jobs or contact us as we have nothing to do with the jobs). voip, voip security, security, sipera, sipera systems This entry was posted in VoIP Security, VoIP Security Companies on July 1, 2008 by Dan York. Information Week interviews SecureLogix about VoIP security While I was sick at VoiceCon and didn't record any of the videos I was planning to do, it's great to see that Fritz Nelson over at Information Week did capture this video of Mark Collier of SecureLogix: The TechWeb folks did a nice job on the video, particularly in cutting in to some of the slides explaining what Mark was talking about. Fritz has an article accompanying the video as well. Oh, yeah, Mark was great, too! 🙂 P.S. For those who don't know, Mark has been involved with VOIPSA and in fact was on a panel I moderated on VoIP security there at VoiceCon. voip, voip security, securelogix, mark collier, security This entry was posted in Security, Videos, VoIP Security, VoIP Security Companies on April 18, 2008 by Dan York. 
Hackers Attack International Space Station Email — Let's Hope VoIP Isn't Next On April 1st VuNet reported that hackers had taken down the International Space Station's email capabilities. So, this was a good April Fool's joke, right? Three astronauts onboard the Space Station reported last night that email was no longer working. Hackers are thought to have planted a Trojan in the computer systems at Houston and used the infection to ride the satellite uplink to the Space Station. What is especially troubling is the email system's reliance upon older Microsoft operating systems that are no longer supported by Microsoft. "I am sorry but there is nothing we can do. It is past its deadline," said Professor Brian Offin, Microsoft's head of obsolete operating systems. Again, a good April Fool's joke, right? However, this false article brings to light the fact that as newer technologies replace legacy systems, we must bear in mind that the new technologies will, over time, themselves become legacy systems, subject to the same obsolescence, lack of support and insecurities that plagued the very legacy systems they replaced. So what's this have to do with VoIP and the International Space Station? Well, details are thin, but way back in 2000 VoIP Group Inc. was awarded a contract to provide a VoIP replacement for the ISS to "bring about significant cost reductions as it supplements and then replaces an existing legacy system." Initially deployed at NASA's Marshall Space Flight Center in Huntsville, Alabama, and later at other International Space Station operations centers, the solution will consist of VoIP Group's gateways connected to the Internet and to Raytheon voice switches and CUseeMe conference servers to support voice conferencing. The system is designed to link together researchers, NASA operations personnel, and potentially ISS crew, to support collaboration during Space Station experiment planning and operations. 
Because users can access the system using a standard Internet browser on an inexpensive multimedia PC, they can be located at NASA centers, universities, and companies throughout the world, and still connect in real-time, 24 x 7. I hope that the sharp folks at NASA and VoIPgroup are taking proactive steps to avoid security problems with critical communications with the ISS. This entry was posted in Security, VoIP Attacks in the News, VoIP Security, VoIP Security Companies on April 4, 2008 by Shawn Merdinger. SIPTap Author forms VoIP Security Company Some of you may remember Peter Cox who put out an eavesdropping tool, SIPTap, last November. For those who have a short memory, SIPTap monitors "multiple voice-over-IP call streams, listening in and recording them for remote inspection as .wav files." At the time, however, the tool didn't appear to me to be much of a threat because it only worked on the VLAN it was attached to and only if it saw the traffic. Meaning that if you weren't attached to a span port or a hub, or used another tool such as Ettercap, you wouldn't be able to do much recording. BUT the tool served Peter Cox's purpose. Apparently for some time now, Peter Cox has been preaching VoIP security to anyone who will listen… and if he's like most IA people I know, anyone who doesn't want to listen, but needs to. The tool, therefore, appeared to be aimed at educating people outside the IA world about the importance of VoIP security and how easy it is to eavesdrop on calls. Now Peter Cox has started a new company, UM Labs, where his goal is to develop and deliver products that provide VoIP security in a world where the traditional security foundation of voice and data separation no longer applies. They are already announcing three products, described on the company's website and here. New VoIP security products are always welcome, and UM Labs appears to be looking towards the future to find ways to meet some of the upcoming security challenges of unified networks. 
This entry was posted in VoIP Security, VoIP Security Companies, VoIP Security Tools on February 21, 2008 by Craig Bowser. Phil Zimmermann's "Zfone Project" has new website and new beta release Perhaps it has been up for a while, but I just noticed today the new Zfone Project Home Page. Previously Phil Zimmermann had Zfone as a subset of his philzimmermann.com website, but now it's off on its own sharp-looking site. There's also news of a new beta for download as of February 9th. Kudos to Phil and his team for launching the new site and, as always, we're definitely interested in hearing what people think (okay, at least I am). This entry was posted in Cryptography, VoIP Security, VoIP Security Companies on February 14, 2007 by Dan York. What's all the Fuzz about? I'm guessing there's going to be a resurgence soon in protocol fuzzing against different VoIP phones, PBXs, and especially VoIP softphones. The practice of fuzzing, otherwise known as robustness testing or functional protocol testing, has been around for a while in the security community. The practice has proven itself to be pretty effective at automating vulnerability discovery in applications and devices that support a target protocol. The prize for the most prolific university fuzzing results to date belongs to the PROTOS project of Oulu University's Secure Programming Group. Through various incarnations of student projects, the PROTOS group has been faithfully discovering vulnerabilities in a variety of protocol implementations, including SIP and H.323. Ari Takanen of that group eventually graduated and went on to cofound a commercial fuzzing tool company called Codenomicon, along with others from Oulu. 
In just the last year alone, the market has seen several other new commercial fuzzing entrants, including Mu Security's Mu-4000, Gleg.net's ProtoVer Professional, Beyond Security's BeStorm, and Security Innovation's Hydra. Today, VoIP is starting to become a more interesting target for security researchers as the technology becomes more affordable and popular among enterprise customers. While it would be ideal if all VoIP vendors tested their own products internally for security bugs, the reality is that not all of them have the time, resources, or even the security DNA to find them all ahead of time. For a great list of other fuzzing tools and presentations, check out Matthew Franz's wiki. This entry was posted in VoIP Security, VoIP Security Companies, VoIP Security Tools, VoIP Vulnerabilities on May 23, 2006 by David Endler.
This section is meant for seeking advice about tulpae. Want to know if you can tulpaforce into an object? This is the right place, but if you're asking "How big is your tulpa?" then you're probably best off in General Discussion. There is a chance that your question has already been answered, so before creating a duplicate topic, it's recommended that you do a few things first. First, if you haven't yet, I would recommend checking out the Guides section, especially the Frequently Asked Questions to see if your question is already answered there. Next, before making a new topic in this board, please first search to see if there is an existing topic on the subject. If you can't find a satisfactory answer to your question using the resources above, feel free to make one of your own. Please remember to name your thread in such a way that it is easy for others to search for, and makes it clear what your thread is about. Don't call it "Help!", or "I have a question", but rather "Can tulpas create other tulpas?", or "Can I visualize with my eyes open?", etc.
Dynamic, home run hitter, versatile. That's the main package of what Darrell Henderson brings to the table as a running back. He's a weapon more than anything in the backfield, and his production is off the charts. As long as Henderson is able to stay featured enough with an NFL offense, he'll find plenty of success. Henderson's slippery and can reach the open field in a blink of an eye. Memphis's zone run system fit him so well. His ability to feel the defense and make sudden cuts in between defenders leads to missed tackles and those long yardage runs. Henderson presents the smoothness of former Chiefs running back Jamaal Charles, back when he was in his prime. Likewise, his shiftiness and power in space present a style similar to Dalvin Cook or Derrick Henry. That ability to read the defense leads to many cutbacks by Henderson. He's very smart to know when to wait behind pulling blockers before hitting his burst. With that zone run scheme, Henderson was typically better with stretch/outside runs. When running in between the tackles, it's hit or miss. Sometimes being too quick on hitting the hole comes back to bite him. Henderson's speed is above average. But, it's that second or even third gear that he reaches in open space that allows him to break for long runs. His lateral quickness is wicked, which makes him look like a bowling ball, bouncing off of defenders. When running straighter, Henderson is not as successful. It's that change of direction with no wasted movement that allows him to get out in space. I think this is Henderson's greatest attribute to help lead him to success. His cuts and twists are smooth because of his ability to pinball off of contact. His legs never stop between the whistles. This constant running style leads to so many missed tackles, and his momentum makes defenders indecisive on how to play him. Henderson will also run over anyone in the open field, with a fine pad level and absorption of contact. 
Like his non-stop running style, Henderson has great effort in pass protection. But that effort doesn't always lead to success. His smaller frame is a weakness against interior defensive linemen. When stepping up, Henderson's results are better. Too often, though, this doesn't happen, and he will sit back and take on defenders flat-footed. Memphis was not shy in using Henderson in the screen game, on swing routes and even as a slot receiver. He's not always clean in making catches, though this shows up more as a slot receiver. When running screen routes, there's more success because Henderson is already in open space looking for big yards-after-the-catch numbers. I think his body control is more of a plus than not as a receiver too. With how much the Chiefs run out of the shotgun, Henderson's game would translate well to the offense. Certainly, Kareem Hunt's time in KC ended on the wrong note, but Henderson could provide the kind of production and success that would be similar to Hunt's game. He's someone you can rely upon due to his effort and also his non-stop motor. While the Chiefs may switch to more of a running-back-by-committee approach, give the ball to Henderson late in games with the lead. Therefore, you still have the chance that he could break one to extend the lead. Thank you for reading. For more draft coverage, stay tuned to Full Press Coverage. – Braden Holecek is the Kansas City Chiefs managing editor for Full Press Coverage. He covers the NFL. Like and follow on Twitter (@ebearcat9, @FPC_Chiefs) and Facebook. – Kyle Senra is an editor for the Kansas City Chiefs on Full Press Coverage. Like and follow on Twitter (@nyama_ks, @FPC_NFL) and Facebook.
\section{Introduction}
\mylabel{sec:Intro}
Understanding the role of disorder in electronic conduction has been a central theme in all of condensed matter physics \cite{Lee_RMP_1985, Kramer_RPP_1993, Janssen_PR_1997,Evers_RMP_2008}. Apart from being fundamentally interesting from a theoretical perspective, these problems hold immense significance as they directly bring out (or hide) novel physics in various experimental systems \cite{Abrahams_RMP_2001}. A major milestone in this pursuit has been the scaling theory of localization, which stated that any infinitesimal amount of disorder will inhibit any conductivity in a thermodynamically large $2D$ system \cite{Anderson_PR_1958, Abrahams_PRL_1979}. However, a comprehensive understanding of transport in $2D$ is far from complete -- two dimensional systems continue to spring surprises with various phenomena where mesoscopic physics, interactions, disorder and topology interplay \cite{Stormer_RMP_1999, Novoselov_RMP_2011,Sarma_RMP_2011, Hasan_RMP_2010}. One of the essential probes in condensed matter is the magnetic field. Its effect on a $2D$ electron gas leads to the integer and fractional quantum Hall effects \cite{Klitzing_PRL_1980, Stormer_RMP_1999}. The same physics on an idealized square lattice leads to the Hofstadter model \cite{Hofstadter_PRB_1976}. Interestingly, this physics has now been realized both in cold-atomic systems \cite{Aidelsburger_PRL_2013, Miyake_PRL_2013} and material systems \cite{Hunt_Science_2013, Yu_NatPhys_2014}. These systems also possess non-trivial topology, with its signatures appearing in transport \cite{Thouless_PRL_1982, Osadchy_JMP_2001}. Not surprisingly, the effect of disorder on quantum Hall physics has received its due attention \cite{Chalker_JPC_1988, Cain_PRB_2001, Galstyan_PRB_1997, Kramer_PR_2005}. 
For the continuum model, this question can be posed in two ways: how does the conductance change when the disorder is increased at fixed magnetic field, or when the magnetic field is reduced at fixed disorder? The evolution of the Landau levels in a 2D electron gas, and in the lattice setting, with a weakening magnetic field has been a matter of debate \cite{Huckestein_RMP_1995, Ortuno_PRB_2011}. For a recent review refer to \cite{Dolgopolov_PHU_2014}. It was earlier suggested that, to be consistent with the scaling hypothesis \cite{Abrahams_PRL_1979}, the Landau levels would float up to higher energies with decreasing magnetic field or increasing disorder \cite{Khmelnitskii_PLA_1984,Laughlin_PRL_1984,Yang_PRL_1996}. However, some numerical calculations have hinted otherwise and have instead suggested that the system undergoes a Chern insulator to normal insulator transition as a function of the strength of the disorder \cite{Sheng_PRL_1997}. A two-parameter scaling theory has been suggested to understand this transition and a phase diagram was also proposed \cite{Pruisken_PRB_1985,Sheng_PRL_1998,Sheng_PRB_2000}. Disorder, however, comes in several varieties -- Anderson disorder \cite{Anderson_PR_1958} is the most famous of them all. In this, random onsite potentials are added to each site of the lattice. Another, stronger kind of disorder is percolation disorder. It comes in two varieties -- site and bond. In the former, one randomly removes sites from the lattice; in the latter, bonds. To date, quantum site and bond percolation in $2D$, even in the absence of a magnetic field, are poorly understood and highly debated -- the central question being whether the physics here is different from Anderson disorder \cite{Kirkpatrick_RMP_1973, Mookerjee_Book_2009}. 
In fact, a delocalization-localization transition has been predicted in $2D$ for site-dilution on square lattices \cite{Koslowski_PRB_1990, Islam_PRE_2008, Gong_PRB_2009, Dillon_EPJB_2014}. In this work we limit ourselves to the discussion of bond percolation. If we define $p_b$ as the probability of a link being present between two neighbouring sites, then in classical bond percolation the percolation threshold occurs at $p_c=1, 0.5$ and $0.2488$ for hyper-cubic lattices in dimensions ($D$) $=1,2$ and $3$ respectively \cite{stauffer_Book_1991, Isichenko_RMP_1992}. This threshold signifies the point below which there exists no geometrical connecting path between two sides of a lattice. One expects the quantum bond percolation threshold $p_q$ to be $>p_c$, since interference effects will tend to further localize the system even if classically a path may exist. Notice that, unlike Anderson disorder, here there exists a natural bound on $p_q$ due to the presence of $p_c$. While finite size scaling analysis shows the existence of a percolation threshold $p_q$ in $3D$ \cite{Soukoulis_PRB_1991}, results for $2D$ are still not settled \cite{Mookerjee_Book_2009}. Some of the previous works have predicted non-zero conductance for $p_b>p_c$ while others have predicted that all states get localized even for infinitesimal disorder \cite{Odagaki_PRB_1983, Raghavan_PRB_1981, Shapir_PRL_1982, Taylor_JPCM_1989, Soukoulis_PRB_1991}. A study of transport in a bond-percolating system and its comparison with classical Drude theory expectations has also been performed \cite{Schmidtke_PRE_2014}. Recently, bond (and site) percolation on a honeycomb lattice has received major attention in order to understand the nature of the divergence of the density of states at $E=0$ \cite{Sanyal_arXiv_2016, Hafner_PRL_2014, Ostrovsky_PRL_2014, Zhu_PL_2016}. As far as the effect of magnetic field is concerned, most of the studies above \cite{Sheng_PRL_1997,Sheng_PRL_1998,Sheng_PRB_2000} have been performed for diagonal Anderson disorder. 
A study of banded off-diagonal disorder was performed in \cite{Liu_JPCM_2003}. However, the role of percolation disorder on the Hofstadter model has been little investigated. A periodic dependence of $p_q$ was found as a function of magnetic flux in $3D$, while that in $2D$ was also conjectured \cite{Mier_PRL_1986}. Since for bond percolation disorder the exact value of $p_q$ itself is an open question, it is of particular interest to find if there exists a metal-insulator transition before we cross the classical percolation threshold in the presence of a magnetic field. In this paper, our motivation is twofold. The first part involves understanding the effect of bond percolation disorder on the Hofstadter butterfly pattern as a function of $p_b$. We study the model at both high and low concentrations of bond dilution. We find that even at high amounts of bond dilution, butterfly-like patterns are present in the system. We also look at the effect of bond dilution on the band gaps and wavefunctions of the system. We provide an understanding of the key features of our results by analyzing small clusters and finite-size rings. This provides some physical reasoning behind the results and also contrasts them with the case of Anderson disorder. The second part involves calculation of the transport quantities ($\sigma_{xy}$), where a numerical study based on the calculation of Chern numbers is performed using the coupling matrix approach \cite{Zhang_CPB_2013}. We study the effect of bond percolation disorder on Hofstadter bands and show that there indeed is a metal-insulator transition with decreasing $p_b$ for $p_b>p_c$. We also find that the Chern bands close to the band edges are more stable to disorder than the ones close to the band center, which means that it takes a higher disorder strength to achieve the metal-insulator transition at the edge of the band than at the center. 
This result in the low-disorder limit is consistent with the findings for the Anderson disorder case \cite{Sheng_PRL_1997}. We now present the plan of the manuscript. In the next section (\ref{sec:Hofs}) we provide a brief review of the Hofstadter model and present some of the results for finite rings in the presence of a magnetic field. These will be used later in our study. In the same section we also introduce the percolation problem. Section \ref{sec:Kill} contains our results and related discussions on the effect of bond percolation disorder on the Hofstadter butterfly. Here we also discuss the effect of bond dilution on band gaps and on wavefunctions using inverse participation ratios (IPRs). Section \ref{sec:trans} contains the essential details about the coupling matrix approach to calculate the Chern number in the presence of disorder and the corresponding results and discussions. In Section \ref{sec:Summary} we summarize our results and speculate on some future directions.

\section{Formulation and Prelude}
\mylabel{sec:Hofs}

\begin{figure}
\centering
\includegraphics[width=6cm]{lattice_schematic}
\caption{(Color online) A schematic of a square lattice with some of the bonds removed. $p_b$ is the probability that a bond is present. Therefore, at $p_b=1$ we have an ideal square lattice. $t$ is the hopping amplitude, which is set to 1. $\phi$ is the Peierls phase, which includes the effect of the magnetic field {\bf B}.}
\mylabel{fig:schematiclattice}
\end{figure}

\subsection{Hamiltonian}
The Hamiltonian of our interest is
\begin{equation}
{\cal H} = \sum_{\langle i,j \rangle}-t e^{i\phi_{ij}} c^{\dagger}_{i} c_{j} + h.c.
\end{equation}
where $c^{\dagger}_{i}$, $c_{j}$ are the creation and annihilation operators for the electrons at sites $i$ and $j$ respectively (see Fig.~\ref{fig:schematiclattice}). The $\langle i,j \rangle$ signifies that the sum is over the nearest neighbors on a square lattice. 
$\phi_{ij}$ is the Peierls phase which takes into account the effect of a perpendicular magnetic field $\bB$ on the lattice and is given by
\begin{equation}
\phi_{ij} = \frac{e}{\hbar}\int_{\br_j}^{\br_i}\bA\cdot\bd\br,
\end{equation}
where $\bA$ is the corresponding vector potential. $t$ is the hopping integral and is set to $1$. $\br_{i(j)}$ denotes the position coordinates of site $i(j)$. We work in the Landau gauge where $\bA=(0,Bx,0)$. This conveniently allows for complex phases only in the hoppings in the vertical direction. The flux per plaquette is given by $\alpha$ in units of $h/e$. We will ignore the spin of the fermions.

\subsection{Hofstadter butterfly}
In the gauge we are using, the system has a translational symmetry in the $y$ direction -- therefore $k_y$ is a good quantum number. For a generic $\alpha$, the system does not have translational symmetry in the $x$ direction. However, when $\alpha = p/q$, where $p,q$ are co-prime integers, the problem can be mapped to a reduced Brillouin zone. The eigenvalues can be plotted as a function of $\alpha$, and this leads to the famous Hofstadter butterfly pattern. This self-similar, fractal pattern was first obtained by Hofstadter \cite{Hofstadter_PRB_1976}, and is reproduced in Fig. \ref{fig:infhofs}.

\begin{figure}
\includegraphics[width=8cm]{infhofs}
\caption{(Color online) The plot of the energy dispersion as a function of the magnetic flux $\alpha$. This is the Hofstadter butterfly as originally reported by Hofstadter \cite{Hofstadter_PRB_1976}.}
\mylabel{fig:infhofs}
\end{figure}

\subsection{Polygon in a magnetic field}
Next, let us consider an $N$-sided polygon in a magnetic field. The eigenvalues, indexed by $M$, are given by
\begin{equation}
E_N(M,\alpha_p) = -2 t \cos\left(\frac{2\pi}{N}(M+ \alpha_p)\right)
\mylabel{polygondis}
\end{equation}
where $M = \{0,1, \ldots N-1\}$ and $\alpha_p$ is the flux going through the polygon \cite{Analytis_AJP_2004}. 
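The ring dispersion \eqn{polygondis} is easy to check numerically. The short Python sketch below (our own illustration, not part of the original analysis) spreads the total flux $\alpha_p$ uniformly over the $N$ bonds of the ring -- a gauge choice that does not affect the spectrum -- and compares exact diagonalization with the closed form:

```python
import numpy as np

def ring_hamiltonian(N, alpha_p, t=1.0):
    """N-site ring threaded by a total flux alpha_p (in units of h/e).
    The flux is spread uniformly, so each bond carries phase 2*pi*alpha_p/N."""
    phi = 2 * np.pi * alpha_p / N
    H = np.zeros((N, N), dtype=complex)
    for j in range(N):
        H[j, (j + 1) % N] = -t * np.exp(1j * phi)  # hop j -> j+1 with Peierls phase
    H += H.conj().T                                # add the reverse hops
    return H

N, alpha_p = 8, 0.3
evals = np.sort(np.linalg.eigvalsh(ring_hamiltonian(N, alpha_p)))
analytic = np.sort([-2 * np.cos(2 * np.pi * (M + alpha_p) / N) for M in range(N)])
assert np.allclose(evals, analytic)
```

Since the phases around the loop only enter through their sum, the spectrum depends on $\alpha_p$ alone, consistent with \eqn{polygondis}.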
Note that $\alpha_p$ is different from the flux per unit plaquette $\alpha$ as introduced in the previous subsection. Fig. \ref{fig:finhofs} shows the dispersion for a few representative finite-size rings in the presence of a uniform magnetic field. As can be seen from Fig. \ref{fig:finhofs}(c) and (d), both polygons have 8 sides, but the total flux inside the loops is different. While (c) has $\alpha_p = 3 \alpha$, the latter (d) has $\alpha_p = 4\alpha$. These lead to different dispersions ((g)-(h)). These, as we will see, will be useful in understanding the results in the presence of percolation.

\begin{figure*}
\centering
\includegraphics[width=14cm]{smalllat}
\caption{(Color online) (a) A connected square with a unit flux $\alpha$ per plaquette has the dispersion shown in (e). (b) A polygon with 6 sides has two unit squares inside; this corresponds to $\alpha_p =2 \alpha$ in \eqn{polygondis} and has the dispersion in (f). An 8-sided polygon can have $\alpha_p=3\alpha$ (shown in (c)) or $\alpha_p=4\alpha$, as shown in (d). The corresponding dispersions are shown in (g) and (h) respectively.}
\mylabel{fig:finhofs}
\end{figure*}

\subsection{Disorder and Percolation}
Next we define what we precisely mean by percolating the lattice. $p_b$ is defined as the probability of a link being present between two neighboring sites. This implies that at $p_b=1$ we have an ideal square lattice. For any value of $p_b<1$ some of the bonds are removed from the lattice (see Fig.~\ref{fig:schematiclattice}). For a square lattice there is a classical percolation threshold at $p^c_b \equiv p_c =\frac{1}{2}$. At any value of $p_b<p_c$ there does not exist a classical geometrical path connecting the two sides of the square lattice \cite{Mookerjee_Book_2009}. Percolation transitions have their own universality classes and distinct critical exponents \cite{Isichenko_RMP_1992}. Percolation is, therefore, a special kind of disorder. 
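For concreteness, here is a minimal sketch (our own, with open boundaries assumed and a function name of our choosing) of how such a bond-diluted Hofstadter Hamiltonian can be assembled in the Landau gauge, where only the vertical bonds in column $x$ carry the phase $2\pi\alpha x$:

```python
import numpy as np

def percolated_hofstadter(L, alpha, p_b, t=1.0, seed=0):
    """Open L x L square lattice; each bond is kept independently with
    probability p_b. Landau gauge A=(0, B x, 0): only vertical bonds in
    column x carry the Peierls phase 2*pi*alpha*x."""
    rng = np.random.default_rng(seed)
    idx = lambda x, y: x * L + y                     # flatten (x, y) -> matrix index
    H = np.zeros((L * L, L * L), dtype=complex)
    for x in range(L):
        for y in range(L):
            if x + 1 < L and rng.random() < p_b:     # horizontal bond, no phase
                H[idx(x, y), idx(x + 1, y)] = -t
            if y + 1 < L and rng.random() < p_b:     # vertical bond, Peierls phase
                H[idx(x, y), idx(x, y + 1)] = -t * np.exp(2j * np.pi * alpha * x)
    H += H.conj().T
    return H

H = percolated_hofstadter(12, 1 / 3, 0.75)
assert np.allclose(H, H.conj().T)                    # Hermitian by construction
```

Note that removing bonds never changes the matrix dimension ($L^2$ for all $p_b$), only the connectivity; and since only bonds between the two sublattices are removed, the spectrum remains symmetric about $E=0$ at every $p_b$.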
Even in quantum transport, note that each bond removal is of the energy scale $t$, which is of the same order as the bandwidth. However, the density of bonds removed, quantified as $(1-p_b)$, is considered the tuning parameter for the disorder strength. Another kind of percolation problem is the site percolation problem. Here sites are randomly removed from a lattice. We state that although both bond and site percolation problems retain the sublattice symmetry, the bond problem has some `nicer' features than site percolation. Once a site is removed from a lattice, it effectively reduces the Hilbert space of the problem. Given an imbalance between the number of sites belonging to the two sublattices, one finds $zero$ energy modes in the system, which need to be removed `by hand' to keep track of non-trivial zero modes \cite{Sanyal_arXiv_2016, Weik_arXiv_2016}. On the contrary, removing bonds on the lattice keeps the dimension of the Hilbert space the same and only modifies the connectivity between the sites. As was mentioned in the introductory section, the most well-studied disorder problem is Anderson disorder \cite{Anderson_PR_1958}. Here the onsite potential at each site is chosen randomly from a distribution (mostly `box') in $[-\frac{W}{t}, \frac{W}{t}]$. Thus $W$ is the parameter characterizing the strength of disorder. A review of numerical results on this can be found in \cite{Markos_APS_2006}.

\section{Killing the Butterfly}
\mylabel{sec:Kill}

In Fig. \ref{fig:bondpercorep} the evolution of the Hofstadter butterfly as a function of $p_b$ is shown for some representative values of $p_b$. While $1-p_b$ can be considered the `strength' of disorder (like $W$ in the Anderson disorder case), we will see that these two kinds of disorder are quite different in the high-disorder limit. Let us first look at the case when $p_b$ is very small and $p_b \ll p_c$ (high disorder limit). 
\subsubsection{$p_b \ll p_c$}
In this limit, the system is below the classical percolation threshold and therefore the lattice has already geometrically broken up into disconnected fragments. As can be seen from Fig.~\ref{fig:bondpercorep}(j) and (l), one finds that there are many bands which do not disperse with $\alpha$. This can be understood from the fact that most of these structures do not have closed loops with any magnetic flux passing through them. We also see that the energies cluster around specific values. To understand these, consider a single site $(j=1)$ connected to $N$ other sites $(j=2,\ldots,N+1)$ with equal hopping strength $-t$, with no other site connected to any other. Let the eigenvalues be $\varepsilon_i$, where $i \in (1,\ldots, N+1)$. For a generic $N$, one finds only two non-zero eigenvalues, given by $\pm \sqrt{N}t$. The corresponding eigenvectors are $\frac{1}{\sqrt{2}}(\mp 1, \underbrace{\frac{1}{\sqrt{N}}, \ldots, \frac{1}{\sqrt{N}}}_N)^T$. The other eigenvectors, corresponding to $zero$ eigenvalues, are of the form $\frac{1}{\sqrt{2}}(0, 0,.., \underbrace{1}_i, .., \underbrace{-1}_j, \ldots 0)^T$, where $i, j$ denote site indices taking values in $(2, \ldots,N+1)$, and there are therefore $N-1$ such solutions. Since the maximum coordination number for a square lattice problem is $4$, the corresponding non-zero eigenvalues are $\pm t\,(N=1)$, $\pm\sqrt{2}t\,(N=2)$, $\pm \sqrt{3}t\,(N=3)$ and $\pm 2t\,(N=4)$. Note that all of these structures have no loops and therefore the eigen-energies will not change with $\alpha$. Therefore, in the low-$p_b$ limit, as shown in Fig.~\ref{fig:bondpercorep}(l), the system has no closed loops and breaks into disconnected fragments. The probability of these structures appearing is $\propto p_b^N$ \cite{Isichenko_RMP_1992}. This therefore also implies that in this limit the eigenvalues segregate at a set of discrete energies, and the DOS peaks only at these specific eigenvalues. 
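The hub-and-spokes spectrum quoted above can be verified directly; the short sketch below (our own illustration) diagonalizes the $(N{+}1)$-site star and recovers the $\pm\sqrt{N}t$ pair plus the $N-1$ zero modes for every coordination number possible on the square lattice:

```python
import numpy as np

def star_spectrum(N, t=1.0):
    """Spectrum of one hub site coupled with hopping -t to N otherwise
    disconnected sites (the dominant fragments at very low p_b)."""
    H = np.zeros((N + 1, N + 1))
    H[0, 1:] = -t      # hub (index 0) to each spoke
    H[1:, 0] = -t      # and back
    return np.sort(np.linalg.eigvalsh(H))

# Coordination numbers possible on a square lattice: N = 1, ..., 4
for N in range(1, 5):
    expected = np.sort(np.concatenate([[-np.sqrt(N), np.sqrt(N)], np.zeros(N - 1)]))
    assert np.allclose(star_spectrum(N), expected)
```

This reproduces the discrete set $\pm t, \pm\sqrt{2}t, \pm\sqrt{3}t, \pm 2t$ around which the eigenvalues cluster at low $p_b$.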
Note that this limit of the Hofstadter model in the presence of bond percolation is completely distinct from Anderson disorder. Anderson disorder does not change the connectivity of the lattice points, and therefore at no value of $W$ do we expect eigenvalues that fail to disperse with $\alpha$. Similarly, increasing $W$ will never cause the eigenvalues to segregate at a select set of energies. On the other hand, in bond percolation at $p_b=0$ the DOS shows a $\delta$ function peak at $E=0$. These states are distinct from the weak-disorder $E=0$ states discussed in \cite{Ziman_PRB_1982}; they are instead strongly localized states on individual sites with very high IPR, as will be discussed in detail later. \begin{figure} \includegraphics[height=20cm]{bondpercorep} \caption{(Color online) Representative lattices of size $12 \times 12$ at different values of $p_b$ and their energy dispersions as a function of the magnetic flux $\alpha$. The lattice shown in (a) corresponds to $p_b=1$; (c) $p_b = 0.9$, (e) $p_b=0.75$, (g) $p_b = 0.50$, (i) $p_b=0.25$ and (k) $p_b=0.1$. The corresponding dispersions as a function of $\alpha$ are shown in (b), (d), (f), (h), (j) and (l) respectively. (b) is the Hofstadter butterfly for a finite $12\times 12$ system. While increasing disorder destroys the butterfly pattern, non-varying lines remain in the dispersion. In (j) one finds only two dispersing bands, while in (l) one finds none.} \mylabel{fig:bondpercorep} \end{figure} \subsubsection{$p_b < p_c$} As $p_b$ is slightly increased, dispersing bands appear, as can be seen in Fig. \ref{fig:bondpercorep} (j) and (h). While the complete lattice still does not have a spanning cluster, there are clearly states in the system which disperse with the magnetic flux $\alpha$. These are due to small clusters which contain closed loops. Take for example the representative plot shown in Fig.
\ref{fig:bondpercorep} (j) and compare the dispersing curve with Fig. \ref{fig:finhofs} (e). As can be clearly seen, they are exactly the same. Therefore, the low-$p_b$ ``Hofstadter butterfly'' is dominated only by these small finite-size loops, as shown in Fig. \ref{fig:finhofs}. This has implications for oscillations in the magnetization, which we will discuss in more detail later. Note that none of these states can contribute to transport, since they reside only on small clusters. \subsubsection{$p_b = 1$} We now discuss the other limit, i.e. the clean system. Even the finite-size Hofstadter butterfly shown in Fig. \ref{fig:bondpercorep}(b) bears some resemblance to the infinite Hofstadter butterfly shown in Fig. \ref{fig:infhofs}, and increasing the lattice size makes this similarity more and more apparent \cite{Analytis_AJP_2004}. However, even the finite lattice system has some interesting gap structure at $E=0$, which we now discuss (see Fig. \ref{fig:finhofsE0}). Any finite square lattice of dimensions $L\times L$ shows a number of bands dispersing linearly from $E=0$. Their number and slopes increase in an interesting fashion, which can be guessed from our discussion of finite-size polygons in a magnetic field. Note that an $L \times L$ square ring has $(L-1)^2\alpha$ flux passing through it. Substituting $N=4L-4$, $M=N/4$ in Eqn. \ref{polygondis}, we find the low energy dispersion of the form, \begin{eqnarray} E &=& -2t\cos\left(\frac{2\pi}{N}\left((L-1) + (L-1)^2\alpha\right)\right) \\ &\approx & (L-1)\pi \alpha\, t \end{eqnarray} Now an $L \times L$ square lattice can contain states on concentric square rings of side $2, 4, \ldots, L$, and the states on these rings have small-$\alpha$ dispersions $\pi \alpha\, t, 3 \pi \alpha\, t, \ldots, (L-1)\pi\alpha\, t$ near $E=0$. This can be clearly seen from Fig. \ref{fig:finhofsE0}. As expected, the states indeed lie predominantly on the concentric rings.
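The ring dispersion used here (an $N$-site ring threading flux $\Phi$ has levels $E_m=-2t\cos(2\pi(m+\Phi)/N)$) can be cross-checked numerically by spreading the Peierls phase uniformly over the bonds; a minimal sketch, not tied to the paper's own code:

```python
import numpy as np

t, N, phi = 1.0, 8, 0.3                 # 8-site ring, flux phi (in units of h/e)
peierls = np.exp(2j * np.pi * phi / N)  # phase spread uniformly over the N bonds
H = np.zeros((N, N), complex)
for j in range(N):
    H[j, (j + 1) % N] = -t * peierls
    H[(j + 1) % N, j] = -t * np.conj(peierls)
E_num = np.sort(np.linalg.eigvalsh(H))
# Analytic spectrum: the flux enters only through the combination m + phi.
E_ana = np.sort(-2 * t * np.cos(2 * np.pi * (np.arange(N) + phi) / N))
assert np.allclose(E_num, E_ana)
```

The same construction with $N=4L-4$ underlies the linear small-$\alpha$ slopes of the concentric rings.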
\begin{figure} \centering \includegraphics[width=8cm]{finitesize} \caption{(Color online) Band dispersion for square lattices with open boundary conditions, of size $L\times L$ for (a) $2 \times 2$, (b) $4 \times 4$, (c) $6 \times 6$ and (d) $8 \times 8$, for $\alpha$ and $E$ close to $zero$. The slope of the dispersion approximately follows $\pi (L-1) \alpha\, t$.} \mylabel{fig:finhofsE0} \end{figure} \subsubsection{$p_b \gg p_c$} We now look at the effect of low bond disorder on the finite-size Hofstadter butterfly, focusing on the band gap structure at $E=0$. We see from Fig.\ref{fig:finbandgap} that the pure $4\times4$ and $12\times 12$ lattices ((a) and (b)) have a set of gapless points and large band gaps at some other values of $\alpha$. Increasing disorder opens up gaps at the gapless points and reduces the otherwise large band gaps. This, in some sense, is the usual effect of any disorder, i.e. a spreading of the $DOS$, and as expected the effect grows with increasing disorder. This is clearer from the inset of Fig.\ref{fig:finbandgap}, where the variance of the gap is plotted. We also note that the gap opened up at $\alpha=0$ is much smaller than at the other gapless points. \begin{figure} \centering \includegraphics[width=8cm]{midgap} \caption{(Color online) The band gap at $E=0$ for four values of $p_b = 1.0, 0.95, 0.90$ and $0.85$ as a function of $\alpha$ for (a) a $4\times4$ lattice and (b) a $12 \times 12$ lattice. For the latter three values of $p_b$, averages over $400$ configurations are plotted. The $blue$ line is for the pure system, which has many gapless points and large band gaps, including one at $\alpha=0.5$. Increasing the disorder opens up gaps at the gapless points and reduces the magnitude of the larger gaps of the pure system. The lower the value of $p_b$, the larger the effect. In the inset, the variance ($\equiv (\Delta E/t)_\sigma$) for the latter three values of $p_b$ is plotted.
It shows that the variance of $\Delta E/t$ increases with decreasing $p_b$. All this is consistent with our understanding that generic weak disorder spreads out the $DOS$ and opens up gaps at the gapless points.} \mylabel{fig:finbandgap} \end{figure} To further understand the effect of bond percolation disorder, we study the Inverse Participation Ratio (IPR) of the different wavefunctions. The IPR of a unit-normalized wavefunction $|\psi \rangle$, expanded in the site basis as \begin{equation} | \psi \rangle = \sum_i \psi_i |i \rangle \end{equation} is given by \begin{equation} IPR = \sum_i |\psi_i|^4 \end{equation} This quantity estimates the spread of a wavefunction in real space. For a delocalized wavefunction spread uniformly over an area $A$, IPR $\propto 1/A$, which decreases with increasing area. If a wavefunction is localized over a few sites, then IPR $\sim 0.1$--$1$ and does not change significantly with increasing system size. This diagnostic therefore provides a means to demarcate localized and delocalized states. To estimate the effect on the IPR, we consider 400 configurations of the lattice at a given $p_b$. For each configuration we diagonalize the Hamiltonian and enumerate the energy eigenvalues as $n=1 \ldots L^2$. For each value of $n$ we average over the eigenvalues to find the average energy, and over their IPRs to find the average IPR. This gives us the average IPR of the full system as a function of energy, shown in Fig. \ref{fig:IPR}. As $p_b$ is reduced, the IPR at certain values of $E$ becomes very large. Interestingly, these values are $\pm t$, $\pm \sqrt{2}t$ and $\pm \sqrt{3}t$, corresponding to the small clusters mentioned before. The corresponding IPRs are $1, 1/2$ and $1/3$. As $p_b$ is decreased further some of these peaks vanish, and only the central peak remains. Peaks at other values correspond to the solutions for an open tight-binding chain.
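Both the limiting IPR values quoted above and the open-chain energies can be checked with a few lines of numerics; a minimal numpy sketch (illustrative only):

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a unit-normalised wavefunction."""
    return np.sum(np.abs(psi) ** 4)

# Limiting cases: uniform spread over A sites gives IPR = 1/A,
# a state confined to a single site gives IPR = 1.
A = 900
uniform = np.full(A, 1.0 / np.sqrt(A))
single = np.zeros(A)
single[0] = 1.0
assert np.isclose(ipr(uniform), 1.0 / A)
assert np.isclose(ipr(single), 1.0)

# Open 4-site tight-binding chain: eigenvalues -2t*cos(m*pi/5), m = 1..4,
# i.e. approximately -1.618t, -0.618t, +0.618t, +1.618t.
t, n = 1.0, 4
H = np.diag([-t] * (n - 1), 1) + np.diag([-t] * (n - 1), -1)
E = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(E, [-1.618, -0.618, 0.618, 1.618], atol=1e-3)
```

The chain eigenvalues account for the additional IPR peaks seen away from the star-cluster energies.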
For an $n$-site chain the dispersion is given by $-2t\cos k$, where $k = \frac{m\pi}{n+1}$ with $m \in (1,2,\ldots, n)$. For example, a 4-site open chain has eigenvalues at $\pm 2t\cos(\frac{\pi}{5}) (\sim \pm 1.62t)$ and $\pm 2t\cos(\frac{2\pi}{5}) (\sim \pm 0.62t)$, which can also be clearly seen in Fig. \ref{fig:IPR} (a). In Fig. \ref{fig:IPR}(b) we look at relatively smaller values of $p_b$ and at the effect of increasing the system size. The variation of the IPR signifies whether the system comprises localized or delocalized states. For example, at $p_b=1.0$ one finds that the IPR is quite low ($\sim 5 \times 10^{-3}$) and decreases with increasing system size. Decreasing $p_b$, a strong peak appears at $E=0$, implying the appearance of localized states. Decreasing $p_b$ further, more high-IPR peaks start to appear at distinct energy values, as discussed in detail above. Interestingly, the average IPR of the system increases, and the value for $30\times30$ starts to overlap with that for $24\times24$, implying localization over the full spectrum. This feature becomes quite prominent at $p_b \lesssim 0.65$. Note that the average IPR is about $0.025$ at $p_b=0.60$, signaling that a wavefunction resides on an average of only 40 sites, in an otherwise 900-site lattice. This signals that the wavefunctions become localized well before the classical percolation threshold is reached. This will be investigated more directly through transport calculations in the next section. \begin{figure} \centering \includegraphics[width=7cm]{IPR} \caption{(Color online) Variation of the IPR as a function of bond probability. (a) IPR for a $30 \times 30$ lattice for $p_b$ from $1.0$ (bottom-most) to $0.05$ (topmost) in intervals of $0.05$. One notices the appearance of peaks at specific values of $E/t$, which go away on further reducing $p_b$.
(see text). (b) IPR for system sizes $18\times18$ (blue), $24 \times 24$ (orange) and $30 \times 30$ (yellow) compared with each other at $\alpha = 1/4$, averaged over $400$ configurations. At $p_b=1.0$ the IPR is quite small ($\sim 5 \times 10^{-3}$) and decreases with increasing system size. Decreasing $p_b$, a strong peak appears at $E=0$, implying the appearance of localized states. Decreasing $p_b$ further, more high-IPR peaks appear at distinct energy values. The average IPR of the system also increases for the $30\times30$ lattice and overlaps with that of the $24\times24$ one, implying overall localization of the spectrum. This feature becomes prominent for $p_b \lesssim 0.65$. Error bars are not shown for clarity of the figure.} \mylabel{fig:IPR} \end{figure} \section{Effect on Chern numbers and Transport} \mylabel{sec:trans} The Hofstadter model, apart from the structure of its eigenspectrum, also hosts an interesting structure of topological invariants \cite{Thouless_PRL_1982, Osadchy_JMP_2001, Fradkin_Book_1991}, and it is interesting to understand the effect of disorder on such topological invariants. This requires the calculation of Chern numbers. Note that in the presence of disorder the system no longer has translational symmetry, and therefore a momentum integral over the Brillouin zone will not suffice to calculate the Chern number. We therefore calculate the Chern numbers using the method outlined in \cite{Zhang_CPB_2013}. This numerical technique is motivated by the fact that the Chern number can also be calculated from an integral over twisted boundary conditions. For completeness we briefly include some definitions and a discussion of the method, following \cite{Zhang_CPB_2013}.
For a $2D$ lattice comprising $N=L\times L$ unit cells, the single particle wavefunctions satisfy the boundary conditions $\phi_{\theta}(x+L,y)=e^{i\theta_x}\phi_{\theta}(x,y)$ and $\phi_{\theta}(x,y+L)=e^{i\theta_y}\phi_{\theta}(x,y)$, where $\theta = (\theta_x,\theta_y)$ with $0 \le \theta_x,\theta_y \le 2 \pi$. For a given filling, let $M$ states be occupied, and let the many body wavefunction of these $M$ states be $\Psi_{\theta}$. Then the Chern number of the ground state is given by \begin{equation} C = \frac{1}{2\pi i}\int_{T_\theta} d\theta \langle \nabla_{\theta} \Psi_{\theta}|\times| \nabla_{\theta} \Psi_{\theta} \rangle \end{equation} where $T_{\theta}$ denotes the torus of allowed $(\theta_x, \theta_y)$ values \cite{Niu_PRB_1985}. Since $\Psi$ takes into consideration all the filled states, $C$ here is the sum of the Chern numbers of the individual bands below the chemical potential; hence the quantity evaluated can be interpreted as $\sigma_{xy}$ in units of $e^2/h$. The calculation of $\sigma_{xy}$ can be done directly using the Lehmann representation of the Kubo formula \cite{Dutta_JAP_2012}; however, finding the Chern number of each band requires numerically diagonalizing the system a large number of times \cite{Sheng_PRL_1997}. The recently developed coupling matrix approach provides a much simpler and numerically inexpensive method \cite{Zhang_CPB_2013}. The idea is to convert the integral over $T_\theta$ into an integral over a path in momentum space. This integral can then be evaluated as a product of matrices whose components are inner products of wavefunctions determined solely from the system under periodic boundary conditions. The essential simplifying step is to do away with the necessity of diagonalizing the system at different values of the boundary conditions. This approach can also take the effect of real space disorder into account in a natural way.
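In the clean limit ($p_b=1$) the band Chern numbers can be cross-checked independently of the coupling matrix approach, for instance with the standard lattice field-strength method of Fukui, Hatsugai and Suzuki applied to the magnetic Bloch Hamiltonian. The sketch below is not the method used in the paper (the Landau-gauge form and grid size are illustrative assumptions); it recovers the well-known pattern $(1,-2,1)$ for $\alpha=1/3$, up to an overall sign fixed by the field direction:

```python
import numpy as np

def hofstadter_bloch(kx, ky, p, q, t=1.0):
    """q x q magnetic Bloch Hamiltonian of the square-lattice Hofstadter
    model in Landau gauge, at flux alpha = p/q per plaquette."""
    alpha = p / q
    H = np.zeros((q, q), complex)
    for m in range(q):
        H[m, m] = -2 * t * np.cos(ky + 2 * np.pi * alpha * m)
        H[m, (m + 1) % q] += -t * np.exp(1j * kx)
        H[(m + 1) % q, m] += -t * np.exp(-1j * kx)
    return H

def chern_numbers(p, q, nk=30):
    """Band Chern numbers from plaquette Berry fluxes (Fukui-Hatsugai-Suzuki).
    The k-grid covers [0, 2*pi)^2, which traverses the magnetic Brillouin
    zone q times in kx, hence the division by q at the end."""
    ks = 2 * np.pi * np.arange(nk) / nk
    u = np.empty((nk, nk, q, q), complex)  # eigenvector columns, band-sorted
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(hofstadter_bloch(kx, ky, p, q))[1]
    c = np.zeros(q)
    for n in range(q):
        flux = 0.0
        for i in range(nk):
            for j in range(nk):
                ii, jj = (i + 1) % nk, (j + 1) % nk
                # gauge-invariant Wilson loop around one plaquette
                loop = (np.vdot(u[i, j, :, n], u[ii, j, :, n])
                        * np.vdot(u[ii, j, :, n], u[ii, jj, :, n])
                        * np.vdot(u[ii, jj, :, n], u[i, jj, :, n])
                        * np.vdot(u[i, jj, :, n], u[i, j, :, n]))
                flux += np.angle(loop)
        c[n] = flux / (2 * np.pi) / q
    return np.round(c).astype(int)

cherns = chern_numbers(1, 3)
assert abs(cherns[1]) == 2 and abs(cherns[0]) == abs(cherns[2]) == 1
assert cherns.sum() == 0
```

For disordered configurations no Bloch Hamiltonian exists, and the coupling matrix method of the text remains the appropriate tool.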
Now we present the results of our calculations. In Fig.\ref{fig:flux1by4finsz} we show the variation of $\sigma_{xy}$ with bond occupation probability $p_b$ for $p/q=1/4$. The filling is kept constant at $1/4$, and for each configuration the chemical potential is self-consistently evaluated. The lattice size is systematically increased from $12\times 12$ to $24 \times 24$ in steps of $4$ sites per side. We find that increasing the lattice size makes the transition sharper, and the conductance clearly goes to zero well before $p_c$, which occurs at $p_b=0.5$. The transition seems to occur close to $p_b \approx 0.65$, which is also roughly the value indicated by the IPR results of the previous section. \begin{figure}[h!] \includegraphics[width=8cm]{flux1by4finitesize} \caption{(Color online) Plot of $\sigma_{xy}$ against bond occupation probability $p_b$ at $p/q=1/4$ for different lattice sizes. The filling is kept constant at $1/4$ and the lattice size is increased from $12\times 12$ to $24 \times 24$. The red line with moon points is for the lattice size of $24\times 24$. The results are averaged over 400 disorder configurations. With increasing lattice size we find the transition becoming sharper around $p_b \approx 0.65$. The $standard$ $error$ of the mean is of the order of the point size or lower and hence is not shown.} \mylabel{fig:flux1by4finsz} \end{figure} In Fig.\ref{fig:flux1by16fill} we show the variation of $\sigma_{xy}$ with bond occupation probability $p_b$ for $p/q=1/16$ at different fillings. We find that with increasing filling the Chern insulator plateau remains stable only for lower strengths of bond disorder. Note that with increasing filling from $1/16$ to $7/16$ we move from the bottom of the spectrum to the band center. This resembles what was found for Anderson disorder in earlier studies \cite{Sheng_PRL_1997}.
\begin{figure}[H] \centering \includegraphics[width=8cm]{flux1by16diffpl} \caption{(Color Online) $\sigma_{xy}$ against bond occupation probability $p_b$ for $p/q=1/16$ at different fillings. The lattice size is $32 \times 32$. The results are averaged over 400 disorder configurations. In the clean limit ($p_b=1$), $\sigma_{xy}$ for filling $n/16$ is $n \frac{e^2}{h}$, where $n \in 1,\ldots,7$. With increasing filling one moves from the bottom of the full spectrum to the center. The Chern insulator plateaus are less stable to bond disorder as one moves closer to the band center.} \mylabel{fig:flux1by16fill} \end{figure} To understand the underlying mechanism, one first notes that the physics of the Hofstadter problem is different from that of the continuum $2D$ model. Unlike in the continuum, $\sigma_{xy}$ can be negative in the lattice setting \cite{Fradkin_Book_1991}, because the bands here may carry negative Chern numbers. For an even $q$, a band of negative Chern number $-2(q-1)$ lies at the band center. It has been argued that with increasing onsite disorder this central band mixes with the other bands, explaining why the bands close to the band center are the first to show a transition to a normal insulator \cite{Sheng_PRL_1997}. We infer that similar considerations indeed apply in this limit of the bond percolation problem. There is also much interest in understanding the physics in the limit of $zero$ magnetic field. We expect that with weakening magnetic field, the amount of bond percolation disorder required for the Anderson insulating transition will decrease, i.e. the critical $p_b$ will slowly approach a larger value. This can be anticipated from Fig. \ref{fig:flux1by4finsz} and Fig. \ref{fig:flux1by16fill}. The magnetic flux is kept high $(p/q=1/4)$ in Fig. \ref{fig:flux1by4finsz} and the transition occurs at $p_b \approx 0.65$, while in Fig.
\ref{fig:flux1by16fill} the magnetic flux is kept low $(p/q=1/16)$, and the transitions for all values of the filling occur at $p_b>0.65$. This suggests that with further decrease in the magnetic field, the transition will occur at still higher values of $p_b$ (i.e. with fewer bonds removed). The exact form of this variation and its filling dependence would be interesting to investigate. \section{Summary and Future Directions} \mylabel{sec:Summary} To summarize, we have studied the effect of bond percolation disorder on the Hofstadter bands which form when a perpendicular magnetic field is applied to a square lattice. We have followed the evolution of the Hofstadter butterfly as the bond percolation disorder is increased. We find that at low values of $p_b$, unlike for Anderson disorder, the eigenspectrum does not disperse with the magnetic field. This we attribute to the open clusters of sites which do not enclose any magnetic flux. With a slight increase in $p_b$ we find a few dispersing states, which are due to disconnected rings; their dispersion is compared with that of finite-size ring structures. At large values of $p_b$ we find that bond percolation spreads out the energy eigenvalues and opens up gaps at the gapless points. We also analyze the IPR of the wavefunctions as a function of $p_b$, and the effect of this disorder on the band gaps and the states close to $E=0$. To understand some features of our results we discussed the properties of finite-size rings and clusters. Next we investigated the effect of disorder on the Chern bands, and found that they undergo a direct transition to the normal insulator state with increasing bond percolation disorder. This happens at a bond occupation probability $p_b$ higher than the classical percolation threshold. We also find that the bands at the band bottom are more stable to disorder than those at the band center.
The calculations were performed using a recently developed method of calculating Chern numbers via the coupling matrix approach \cite{Zhang_CPB_2013}. These results are in accordance with the insights obtained from the diagonal Anderson disorder problem \cite{Sheng_PRL_1997}. We now mention some future directions. In our study, we have looked at two aspects of the physics of bond percolation on square lattices in the presence of a uniform magnetic field: one, the effect on the energy dispersion, which leads to the effective ``killing'' of the Hofstadter butterfly; and two, the effect on the transverse conductivity $\sigma_{xy}$. It will be interesting to look at the magnetic oscillations in this system for a fixed density of particles. The magnetization ($M$) is determined by the change of the energy of the system as a function of the magnetic field, $M = -\frac{\partial E}{\partial \alpha}$ \cite{Analytis_SM_2005}. If the energy spectrum does not disperse with the magnetic field ($\alpha$), as is the case when $p_b \ll p_c$, then this quantity is identically $zero$. Therefore the absence of oscillations with increasing bond percolation disorder is a signature of reaching the high-disorder limit. The exact form of this change and its variation with $p_b$ would be interesting to understand. Further, while we study the effect of bond percolation on $\sigma_{xy}$, it would be interesting to correlate this with the effect on $\sigma_{xx}$. It would be intriguing to understand whether the two-parameter scaling theory, as has been tested for other disorder problems in quantum Hall physics \cite{Sheng_PRL_1998}, is also applicable to bond percolation disorder. There are other interesting parallels between the Anderson transition and the percolation transition which would be worth pursuing.
It was shown in \cite{Kaneko_JPSJ_2003} that both the percolation and Anderson transitions have a characteristic exponent governing how the radius of a wavepacket spreads with time. It would be interesting to study these exponents in the presence of a magnetic field and to investigate the transition from the Hall state to an Anderson insulator state. In \cite{Goldenfeld_PRB_2006} the existence of weaker Anderson transitions was shown for diagonal disorder in $2D$. It would also be interesting to realize this physics in the case of percolation disorder and to study its interplay with the effects of a perpendicular magnetic field. \begin{acknowledgement} Financial support from CSIR, India is gratefully acknowledged. AA is grateful to Vijay B. Shenoy for suggestions, discussions and computational resources. AA also thanks Sambuddha Sanyal, Jayantha P. Vyasanakere and Ajit C. Balram for discussions and for comments on the manuscript. \end{acknowledgement} \bibliographystyle{unsrt}
Los Angeles: Hospitalizations, Deaths and New Cases Continue to Increase
Larry Thompson, Wednesday, July 08, 2020
The Los Angeles County Department of Public Health (Public Health) has confirmed 65 new deaths and 2,496 new cases of COVID-19. The daily positivity rate (a composite of a 7-day rolling average) is 10.4%, a rate that Los Angeles County hasn't seen since late April. There are more than 2,000 people currently hospitalized; 26% of these confirmed cases are in the ICU and 17% are on ventilators. This remains substantially higher than the 1,350 to 1,450 daily hospitalizations seen four weeks ago. To date, Public Health has identified 123,004 positive cases of COVID-19 across all areas of LA County, and a total of 3,642 deaths. Testing results are available for over 1,229,000 individuals, with 9% of all people testing positive. "Each day, as we share this information with you, we know there are people across our community who have suffered tremendous loss. For those of you mourning the passing of a loved one, we wish you healing and peace," said Barbara Ferrer, PhD, MPH, MEd, Director of Public Health. "We need our residents to repeat what we did just weeks ago if we are going to flatten the curve again. If we can't get the infection numbers back under control by the end of July, we will see thousands more people that require hospitalizations and that could easily overwhelm our health care system." Of the 65 people who passed away, 34 were over the age of 65, 23 were between the ages of 41 and 65, and five were between the ages of 18 and 40. Fifty had underlying health conditions, including 33 people over the age of 65, 13 between the ages of 41 and 65, and four between the ages of 18 and 40.
Three deaths were reported by the City of Long Beach. Ninety-three percent of people who died had underlying health conditions. Of those who died, information about race and ethnicity is available for 3,389 people (99 percent of the cases reported by Public Health); 45% of deaths occurred among Latino/Latinx residents, 27% among White residents, 16% among Asian residents, 11% among African American/Black residents, less than 1% among Native Hawaiian/Pacific Islander residents and 1% among residents identifying with other races. Upon further investigation, 31 cases and two deaths reported earlier were not LA County residents. Business owners and residents must take immediate action in order to stop the spread of COVID-19. Stay home if you are elderly or have serious underlying health conditions. Everyone else should stay home as much as possible, and limit activities outside of your home to what is essential – work, getting groceries and medicine, and medical visits. Always wear a face covering and keep physical distance when you are outside your home. And wash your hands frequently. The actions of LA County residents to slow the spread cannot wait; we need to act now. The Reopening Protocols, COVID-19 Surveillance Interactive Dashboard, Roadmap to Recovery, Recovery Dashboard, and additional things you can do to protect yourself, your family and your community are on the Public Health website, www.publichealth.lacounty.gov.
King Lear - Sketch of Lear questioning Cordelia / Alfred the Great - Compositional Sketches
By Ford Madox Brown
Accession number: 1906P754
Iron gall pen and ink over pencil on paper.
In 1844, whilst living in Paris, Brown created a set of drawings to illustrate Shakespeare's tragic play 'King Lear.' The majority of these drawings are now at the Whitworth Art Gallery, Manchester, but the Birmingham collection holds four related sheets of sketches. This sheet depicts act 1 scene 1, in which Lear questions Cordelia's love for him. Already the vitality and zeal of the series are apparent in Lear's intense glare and strong diagonal pose, which cuts across the composition. Although the basic poses of the main characters remain the same, the position of the map, the extravagant design of the throne and the prominence of the intermediary figure, most likely representing Kent, are altered in the final version at the Whitworth. On the left is a pencil sketch for the third drawing in the series, 'Cordelia Parting from her Sisters.' The finished drawing for this scene is also at the Whitworth and became the basis for Brown's first printed illustration, published in the Germ in 1850 (1979P217.4 and 20007.1800). In the top left corner is another, faintly drawn pencil sketch of a female figure restrained by two figures either side of her. This does not appear to have been used in the drawings but may have been an early idea for a group of figures depicting Cordelia being led away from her father.
On the reverse are sketches for Alfred the Great. He is shown drawing on the ground and surrounded by figures in front of a landscape background. Several of the figures have separate sketches around the edges of the paper where Brown has worked out individual poses.
There is no recorded painting by Brown of Alfred the Great, but these drawings show that Brown had already come up with a possible composition and was seriously considering it as a viable subject for a painting (see also 1978P513 and 1906P755). LM
Purchased and presented by subscribers, 1906. © Birmingham Museums and Art Gallery
Artist: Ford Madox Brown
Inscription (reverse, bottom left), handwritten, in the artist's hand: FMB / entered / enterred / simillar / similar / simmilar. FMB in pencil; spelling variations in brown pen and ink.
Exhibitions:
Shakespeare in Pictures (4), Ulster Museum, 1964
Ford Madox Brown: The Unofficial Pre-Raphaelite (2), Birmingham Museum & Art Gallery, 2008-08-27 to 2008-12-14
Bibliography:
A. E. Whitley, City of Birmingham Museum and Art Gallery: Catalogue of the Permanent Collection of Drawings, Bemrose & Sons Ltd, Derby, 1939, p. 29
Laura MacCulloch, Tessa Sidey, D. Giles Limited, London, 2008, pp. 49, 67; repr. p. 11
D. Giles Limited, London, 2008, p. 9
William Shakespeare (author); Alfred the Great (associated with)
Related work and resources:
King Lear - Sketches of Lear imagining his..., Ford Madox Brown, 1843 - 1849
Alfred The Great - Sketch of Alfred Drawing in the..., Ford Madox Brown, 1843 - 1850
Q: Error displaying dots on a multi-series line-and-dots chart As you can see in the snippet, I want to display dots on my multi-series line chart, but I have a serious problem: my dots are not plotting correctly and none of them are functional. My data is in JSON format. I think the issue is located in points.selectAll("circle") .data(function(d){return d.values}) .enter() .append("circle") .attr("r", 3) .attr("cx", function(d) { return xScale(d.log_time); }) .attr("cy", function(d) { return yScale(d.temperature); }) .style("fill", function(d,i,j) { return color(products[j].id); }); Do you know how to fix this? There should be 15 dots per series. Should I change the shape of my lines? //set the screen dimensions var margin = {top: 20, right: 200, bottom: 150, left: 50}, margin2 = { top: 430, right: 10, bottom: 20, left: 40 }, width = 1600 - margin.left - margin.right, height = 600 - margin.top - margin.bottom, height2 = 500 - margin2.top - margin2.bottom; // Time parser matching the datalog format var parseTime = d3.time.format("%m/%d/%Y %H:%M:%S %p").parse; var bisectDate = d3.bisector(function(d) { return d.log_time; }).left; var xScale = d3.time.scale() .range([0, width]), xScale2 = d3.time.scale() .range([0, width]); // Duplicate xScale for brushing reference var yScale = d3.scale.linear() .range([height, 0]); // 40 Custom DDV colors var color = d3.scale.ordinal().range(["#48A36D", "#56AE7C", "#64B98C", "#72C39B", "#80CEAA", "#80CCB3", "#7FC9BD", "#7FC7C6", "#7EC4CF", "#7FBBCF", "#7FB1CF", "#80A8CE", "#809ECE", "#8897CE", "#8F90CD", "#9788CD", "#9E81CC", "#AA81C5", "#B681BE", "#C280B7", "#CE80B0", "#D3779F", "#D76D8F", "#DC647E", "#E05A6D", "#E16167", "#E26962", "#E2705C", "#E37756", "#E38457", "#E39158", "#E29D58", "#E2AA59", "#E0B15B", "#DFB95C", "#DDC05E", "#DBC75F", "#E3CF6D", "#EAD67C", "#F2DE8A"]); var xAxis = d3.svg.axis() .scale(xScale) .orient("bottom"), xAxis2 = d3.svg.axis() // xAxis for brush slider .scale(xScale2) .orient("bottom"); var
yAxis = d3.svg.axis() .scale(yScale) .orient("left"); var line = d3.svg.line() .interpolate("basis") .x(function(d) { return xScale(d.log_time); }) .y(function(d) { return yScale(d.temperature); }) .defined(function(d) { return d.temperature; }); // Hiding line value defaults of 0 for missing data var maxY; // Defined later to update yAxis var svg = d3.select("body").append("svg") .attr("width", width + margin.left + margin.right) .attr("height", height + margin.top + margin.bottom) //height + margin.top + margin.bottom .append("g") .attr("transform", "translate(" + margin.left + "," + margin.top + ")"); // Create invisible rect for mouse tracking svg.append("rect") .attr("width", width) .attr("height", height) .attr("x", 0) .attr("y", 0) .attr("id", "mouse-tracker") .style("fill", "white"); var data = {"Header":{"add_data":"N/A","add_data_2":"N/A","add_data_3":"N/A","device_type":"PCU12_r7p1_ELFR","diags":"PCU12_r7p1_72h.s19" ,"entered_qty":"84","file_path":"C:\\Winapps\\MBI\\logs\\PCU12\\248649_7_PCU12_M2_ELFR72H_DriverMonitor.log","found_qty":"84","infos_system":"System ZFR11-CC3-11 MBI Burn In Wizard 6.8.10 - ","lot_ID":"248649_7_PCU12_M2_ELFR72H" ,"start_log_date":"4/13/2018 08:31:28 AM","system_version":"6.8.10"},"Session_test":[{"datas_lines":[{"bib":"No data","bin2_tests":{},"datas_line":[],"driver":"No data","error_code":"No data","log_time":"No data","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{"5":[]},"slot":"No data","start_time":"No data"},{"bib":"No data","bin2_tests":{},"datas_line":[],"driver":"No data","error_code":"No data","log_time":"No data","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{"7":[]},"slot":"No data","start_time":"No 
data"},{"bib":"01074601150135","bin2_tests":{},"datas_line":["169","170","169","169","169","168","168","170","170","170","171","169","170","167","169","169","169","169","168","168","168","168","168","168","169","167","167","166"],"driver":"00000000000000","error_code":"120","log_time":"4/13/2018 8:37:45 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"3","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150140","bin2_tests":{},"datas_line":["169","169","169","170","168","168","167","169","169","169","170","170","169","167","168","168","169","170","168","170","167","169","169","170","170","170","169","167"],"driver":"00000000000000","error_code":"120","log_time":"4/13/2018 8:37:47 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"5","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150129","bin2_tests":{},"datas_line":["169","169","167","167","167","167","165","170","170","169","169","167","167","165","168","168","170","168","168","167","165","167","169","169","168","167","166","165"],"driver":"00000000000000","error_code":"120","log_time":"4/13/2018 8:37:50 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"7","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150135","bin2_tests":{},"datas_line":["170","171","171","172","171","169","169","171","171","172","173","171","171","168","171","171","171","171","170","169","168","168","169","169","170","168","168","167"],"driver":"00000000000000","error_code":"122","log_time":"4/13/2018 8:40:06 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"3","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150135","bin2_tests":{},"datas_line":["173","174","174","175","174","172","172","173","174","174","176","173","175","171","173","174","173","174","173","172","171","171","172","172","173","171","171","169"], "driver":"00000000000000","error_code":"123","log_time":"4/13/2018 8:40:06 
AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"3","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150135","bin2_tests":{},"datas_line":["172","174","173","174","173","172","171","173","173","174","175","173","174","170","172","173","173","173","172","172","171","170","171","171","172","170","170","169"],"driver":"00000000000000","error_code":"124","log_time":"4/13/2018 8:40:06 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"3","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150135","bin2_tests":{},"datas_line":["170","172","171","172","171","170","169","171","171","171","173","171","172","168","170","171","170","171","170","169","168","168","169","169","170","168","169","167"],"driver":"00000000000000","error_code":"125","log_time":"4/13/2018 8:40:07 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"3","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150140","bin2_tests":{},"datas_line":["170","171","171","172","170","170","168","171","171","171","173","173","171","169","168","170","171","172","170","171","168","169","170","171","172","171","170","168"],"driver":"00000000000000","error_code":"122","log_time":"4/13/2018 8:40:08 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"5","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150140","bin2_tests":{},"datas_line":["173","174","174","175","173","173","171","174","173","174","175","175","174","172","171","172","173","175","173","174","171","172","173","174","174","174","173","171"],"driver":"00000000000000","error_code":"123","log_time":"4/13/2018 8:40:09 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"5","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150140","bin2_tests":{},"datas_line":["173","173","173","174","173","173","170","173","173","173","175","175","174","171","171","171","173","175","172","174","170","171","172","174","173","174","172","170"], 
"driver":"00000000000000","error_code":"124","log_time":"4/13/2018 8:40:09 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"5","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150140","bin2_tests":{},"datas_line":["170","171","171","172","171","171","168","171","170","170","173","173","171","169","169","169","171","172","170","172","168","169","170","172","172","172","170","168"],"driver":"00000000000000","error_code":"125","log_time":"4/13/2018 8:40:09 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[], "nrds":{},"slot":"5","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150129","bin2_tests":{},"datas_line":["172","171","170","171","171","170","168","172","172","172","172","171","170","167","169","170","173","171","170","170","167","168","170","171","170","169","168","167"],"driver":"00000000000000","error_code":"122","log_time":"4/13/2018 8:40:11 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"7","start_time":"4/13/2018 8:32:57 AM"},{"bib":"01074601150129","bin2_tests":{},"datas_line":["174","174","173","174","174","172","170","175","174","174","174","173","173","170","172","173","175","174","173","173","170","171","173","173","173","172","171","169"],"driver":"00000000000000","error_code":"123","log_time":"4/13/2018 8:42:11 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"7","start_time":"4/13/2018 8:34:57 AM"},{"bib":"01074601150129","bin2_tests":{},"datas_line":["174","173","173","173","173","172","169","174","174","174","174","173","172","169","172","173","175","173","173","172","170","170","172","172","173","171","170","169"],"driver":"00000000000000","error_code":"124","log_time":"4/13/2018 8:42:12 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"7","start_time":"4/13/2018 8:34:57 
AM"},{"bib":"01074601150129","bin2_tests":{},"datas_line":["172","171","170","171","171","170","167","172","172","172","172","171","171","167","170","171","173","171","171","170","168","168","170","171","171","169","169","167"],"driver":"00000000000000","error_code":"125","log_time":"4/13/2018 8:42:12 AM","lotID":"248649_7_PCU12_M2_ELFR72H","lot_removed":[],"nrds":{},"slot":"7","start_time":"4/13/2018 8:34:57 AM"}],"session_number":0}]}; var data = data["Session_test"][0]["datas_lines"]; // To be able to map data, we extract data and place them into a new array of objects let data_converted = []; data.forEach(obj => { if (obj.log_time !== "No data") { let iterator = 0; let final = {}; obj.datas_line.forEach(d => { final[iterator.toString()] = d; iterator++; }); final.log_time = parseTime(obj.log_time); data_converted.push(final); } }); var dateKey = d3.keys(data_converted[0]); color.domain(dateKey.filter(function(key) { // Set the domain of the color ordinal scale to be all the JSON id except "log_time", matching a color to an issue return key !== "log_time"; })); var i = dateKey.indexOf("log_time") if(i != -1) { dateKey.splice(i, 1); } var products = color.domain().map(function(d) { return { id:d, values: data_converted.map( function(e) { return { log_time: e.log_time, temperature: e[d] }; }), visible: true } }); xScale.domain(d3.extent(data_converted, function(d) { return d.log_time; })); // extent = highest and lowest points yScale.domain([ d3.min(products, function(c) { return d3.min(c.values, function(d) { return d.temperature; }); }), d3.max(products, function(c) { return d3.max(c.values, function(d) { return d.temperature; }); }) ]); xScale2.domain(xScale.domain()); // Setting a duplicate xdomain for brushing reference later // draw line graph svg.append("g") .attr("class", "x axis") .attr("transform", "translate(0," + height + ")") .call(xAxis); // text label for the x axis svg.append("text") .attr("transform", "translate(" + (width/2) + " ," + (height + 
margin.top + 20) + ")") .style("text-anchor", "middle") .text("Time"); svg.append("g") .attr("class", "y axis") .call(yAxis) .append("text") // text label for the y axis .attr("transform", "rotate(-90)") .attr("y", 0 - margin.left) .attr("x", 0 - (height / 2)) .attr("dy", "1em") .style("text-anchor", "middle") .text("Temperature (°C)"); var points = svg.selectAll(".points") .data(products) // Select nested data and append to new svg group elements .enter().append("g") .attr("class", "points"); points.append("path") .attr("class", "line") .style("pointer-events", "none") // Stop line interferring with cursor .attr("id", function(d) { return "line-" + d.id; // Give line id of line }) .attr("d", function(d) { return d.visible ? line(d.values) : null; // If array key "visible" = true then draw line, if not then don't }) .attr("clip-path", "url(#clip)")//use clip path to make irrelevant part invisible .style("stroke", function(d) { return color(d.id); }); points.selectAll("circle") .data(function(d){return d.values}) .enter() .append("circle") .attr("r", 3) .attr("cx", function(d) { return xScale(d.log_time); }) .attr("cy", function(d) { return yScale(d.temperature); }) .style("fill", function(d,i,j) { return color(products[j].id); }); body { font: 12px sans-serif; } .axis path, .axis line, .axis1 path, .axis1 line { fill: none; stroke: #E6E7E8; shape-rendering: crispEdges; } .x.axis path, .x.axis1 path { display: none; } .line { fill: none; stroke: steelblue; stroke-width: 1.5px; } .legend-box { cursor: pointer; } #mouse-tracker { stroke: #E6E7E8; stroke-width: 1px; } .dot { fill: white; stroke: steelblue; stroke-width: 1.5px; } .hover-line { stroke: #E6E7E8; fill: none; stroke-width: 1px; left: 10px; shape-rendering: crispEdges; opacity: 1e-6; } .hover-text { stroke: none; font-size: 30px; font-weight: bold; fill: #000000; } .tooltip { font-weight: normal; } .brush .extent { stroke: #FFF; shape-rendering: crispEdges; } p { font-size: 18px; } <script 
src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.3.13/d3.min.js"></script> A: The circles are in the correct position. You intuitively answered your own question when you asked "Should I change the shape of my lines?". Yes, you should. The problem is that the "basis" interpolation doesn't pass through the data points. For instance, have a look at this figure (from the v5 API, but the principle is the same): you can see that the line doesn't cross all the points. The solution* is changing the interpolation, for instance: .interpolate("monotone") The resulting snippet is identical to the one in the question, except that the line generator uses .interpolate("monotone") instead of .interpolate("basis"). * I'm not saying that you should use that interpolation, the result is awful. I'm just answering your question regarding the circles' positions.
\section{Introduction} The tree structure of an XML document can be conveniently represented as an ordered unranked tree~\cite{DBLP:conf/dbpl/Suciu01,DBLP:journals/sigmod/Neven02}. For tree structures of common XML documents \emph{dags} (\emph{directed acyclic graphs}) offer high compression ratios: the number of edges of the minimal dag is only about 10\% of the number of edges of the original unranked tree~\cite{DBLP:conf/vldb/KochBG03} In a minimal dag, each distinct subtree is represented only once. A dag can be exponentially smaller than the represented tree. Dags and their linear average time construction via hashing are folklore in computer science (see e.g.~\cite{DBLP:journals/cacm/Ershov58}); they are a popular data structure used for sharing of common subexpressions (e.g., in programming languages) and in binary decision diagrams, see~\cite{DBLP:books/sp/MeinelT98}. Through a clever pointer data structure, worst-case linear time construction is shown in~\cite{DBLP:journals/jacm/DowneyST80}. Unranked trees of XML tree structures are often represented using binary trees, see~\cite{DBLP:journals/jcss/Schwentick07} for a discussion. A common encoding is the \emph{first child/next sibling encoding}~\cite{DBLP:conf/vldb/Koch03} (in fact, this encoding is well-known, see Paragraph~2.3.2 in Knuth's first book~\cite{DBLP:books/aw/Knuth68}). The binary tree $\text{fcns}(t)$ is obtained from an unranked tree $t$ as follows. Each node of $t$ is a node of $\text{fcns}(t)$. A node $u$ is a left child of node $v$ in $\text{fcns}(t)$ if and only if $u$ is the first child of $v$ in $t$. A node $u$ is the right child of a node $v$ in $\text{fcns}(t)$ if and only if $u$ is the next sibling of $v$ in $t$. From now on, when we speak of the size of a graph we mean its number of edges. Consider the minimal dag of $\text{fcns}(t)$ (called \emph{bdag} for \emph{binary dag} in the following) in comparison to the minimal dag of $t$. 
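As an illustration, the fcns encoding just described can be sketched in a few lines (a minimal sketch in Python, using our own ad-hoc tuple representation of trees; it is not code from the paper):

```python
# Minimal sketch of the first child/next sibling (fcns) encoding.
# An unranked tree is a pair (label, [children]); a binary tree is a
# triple (label, left, right), where left = first child in t and
# right = next sibling in t (None where no such node exists).

def fcns_forest(forest):
    """Encode a sequence of sibling subtrees; returns the binary node
    of the first sibling, chained to its next siblings via 'right'."""
    if not forest:
        return None
    label, children = forest[0]
    return (label, fcns_forest(children), fcns_forest(forest[1:]))

def fcns(tree):
    """fcns encoding of a single unranked tree."""
    return fcns_forest([tree])

def edges(bnode):
    """Number of edges of a binary tree (label, left, right)."""
    if bnode is None:
        return 0
    _, left, right = bnode
    return ((left is not None) + (right is not None)
            + edges(left) + edges(right))

# Example: t = f(a, b, c) has 3 edges, and so does fcns(t).
t = ("f", [("a", []), ("b", []), ("c", [])])
assert fcns(t) == ("f", ("a", None, ("b", None, ("c", None, None))), None)
assert edges(fcns(t)) == 3
```

Since fcns is a bijection on nodes and every non-root node contributes exactly one incoming edge in both $t$ and $\text{fcns}(t)$, the encoding preserves the edge size: $|\text{fcns}(t)| = |t|$.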
It was observed in~\cite{DBLP:journals/is/BusattoLM08} that the sizes of these dags may differ, in both directions. For some trees the difference is dramatic, which motivates the work of this paper: to study the precise relationship between the two dags, and to devise a new data structure that is guaranteed to be of equal or smaller size than the minimum size of the two dags. Intuitively, the dag of $t$ shares \emph{repeated subtrees}, while the dag of $\text{fcns}(t)$ shares \emph{repeated sibling end sequences}. \begin{figure}[ht] \centerline{ \input tn.pdf_t } \caption{The unranked tree $t_n$ and $\text{dag}(t_n)$.} \label{fig:tn} \end{figure} Consider the tree $t_n$ in the left of Figure~\ref{fig:tn}. Its minimal dag is shown on the right. As can be seen, each repeated subtree is removed in the dag. The dag consists of $n+1$ edges while $t_n$ consists of $2n$ edges. Moreover, $\text{fcns}(t_n)$ does not have any repeated subtrees (except for leaves), i.e., the bdag of $t_n$ has $2n$ edges as well. \begin{figure}[ht] \centerline{ \input sn.pdf_t } \caption{The unranked tree $s_n$ and $\text{bdag}(s_n)$.} \label{fig:tn2} \end{figure} Next, consider the tree $s_n$ in the left of Figure~\ref{fig:tn2}. Its bdag is shown on the right, it has $3n-2$ edges. On the other hand, $s_n$ has $n^2$ edges and the same is true for the dag of $s_n$ since this tree has no repeated subtrees (except for leaves). These two examples show that (i)~the size of the dag of an unranked tree can be half the size of the dag of the fcns encoded tree and (ii)~the size of the dag of the fcns encoded tree can be quadratically smaller than the size of the dag of the unranked tree. We prove in this paper that these ratios are maximal: The size of the dag of the unranked tree is (i)~lower bounded by half of the size of the bdag and (ii)~upper bounded by the square of the size of the bdag. 
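The example sizes above are easy to check programmatically with the folklore hashing construction of the minimal dag mentioned earlier. Below is a minimal Python sketch (our own encoding of trees as (label, [children]) pairs); we assume here that $t_n$ is the tree $f(g(a),\dots,g(a))$ with $n$ identical subtrees $g(a)$, matching Figure~\ref{fig:tn}:

```python
# Folklore minimal-dag construction via hashing: each distinct subtree
# is canonicalized to an id, so repeated subtrees are represented once.

def minimal_dag(tree):
    """Return (node_size, edge_size) of the minimal dag of 'tree'."""
    table = {}       # canonical key (label, child ids) -> dag node id
    edge_count = 0

    def visit(node):
        nonlocal edge_count
        label, children = node
        key = (label, tuple(visit(c) for c in children))
        if key not in table:
            table[key] = len(table)      # create a fresh dag node ...
            edge_count += len(children)  # ... with one edge per child
        return table[key]

    visit(tree)
    return (len(table), edge_count)

def tree_edges(node):
    """Edge size of the unranked tree itself."""
    _, children = node
    return len(children) + sum(tree_edges(c) for c in children)

# t_n = f(g(a), ..., g(a)) with n = 5 repeated subtrees g(a):
n = 5
t_n = ("f", [("g", [("a", [])])] * n)
assert tree_edges(t_n) == 2 * n         # |t_n| = 2n
assert minimal_dag(t_n) == (3, n + 1)   # dag(t_n) has n+1 edges
```

The hash table makes each `visit` constant time on average, which gives the linear average-time construction cited in the introduction.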
Actually, we derive these bounds from stronger statements concerning a combination of the unranked dag and the binary dag, called the \emph{hybrid dag}, which combines both ways of sharing. The idea is as follows. Given an unranked tree, we compute its minimal dag. The dag can be naturally viewed as a regular tree grammar: Introduce for each node $v$ of the dag a nonterminal $A_v$ for the grammar. If a node $v$ is labeled with the symbol $f$ and its children in the dag are $v_1,\ldots, v_n$ in this order, then we introduce the production $A_v \to f(A_{v_1},\ldots,A_{v_n})$. We now apply the fcns encoding to all right-hand sides of this grammar. Finally, we compute the minimal dag of the forest consisting of all these fcns encoded right-hand sides. See Figure~\ref{fig:hdag} which shows a tree $t$ of size $9$. Its unranked and binary dags are each of size $6$. The hybrid dag consists of a start tree plus one rule, and is of size $5$. For the XML document trees of our corpus, the average size of the hybrid dag is only $76\%$ of the average size of the unranked dag. We show that the size of the hybrid dag is always bounded by the minimum of the sizes of the unranked dag and the binary dag. Moreover, we show that (i) the size of the hdag is at least half of the size of the binary dag and (ii) the size of the unranked dag is at most the square of the size of the hdag. The above mentioned bounds for the unranked dag and binary dag are direct corollaries of these bounds. The tree grammar of a hybrid dag is not a regular tree grammar anymore (because identifier nodes may have a right child). It can be unfolded in three passes: first undoing the sharing of tree sequences, then the binary decoding, and then undoing sharing of subtrees. 
We show that these grammars can be translated into a well-known type of grammars: straight-line linear context-free tree grammars, for short \emph{SLT grammars} (produced by BPLEX~\cite{DBLP:journals/is/BusattoLM08} or TreeRePair~\cite{lohmanmen13}). This embedding increases the size only slightly. One advantage is that SLT grammars can be unfolded into the original tree in one pass. Moreover, it is known that finite tree automata (even with sibling equality constraints) and tree walking automata can be executed in polynomial time over trees represented by SLT grammars~\cite{DBLP:journals/tcs/LohreyM06,DBLP:journals/jcss/LohreyMS12,DBLP:journals/corr/abs-1012-5696}. While in the theoretical limit the binary dag can be smaller than the dag, it was observed in~\cite{DBLP:journals/is/BusattoLM08} that for common XML document trees $t$, the dag of~$t$ is almost always smaller than the binary dag of~$t$. One explanation is that $t$ contains many small repeated subtrees, which are seldom part of a repeating sibling end sequence. For each repetition we (possibly) pay a ``penalty'' of one extra edge in the dag of $\text{fcns}(t)$; see the tree $t_n$, which has penalty $n$. On the other hand, there are very few repeating sibling end sequences in common XML; this is because optional elements typically appear towards \emph{the end} of a child sequence. Hence, the additional feature of sharing sibling sequences is not useful for XML. On real XML documents, our experiments show that the ``reverse binary dag'' that arises from the \emph{last child/previous sibling encoding} is typically smaller than the binary dag, and almost as small as the dag. Moreover, for our test corpus, the average size of the \emph{reverse hybrid dag} built from the last child/previous sibling encoding of the dag is only $62\%$ of the average size of the minimal dag.
Observe that in the second sharing phase of the construction of the hybrid dag, only sequences of identifiers (nonterminals of the regular tree grammar corresponding to the dag) are shared. Thus, we are sharing repeated string suffixes in a sequence of strings. We experimented with applying a grammar-based string compressor to this sequence of strings. It is not difficult to incorporate the output into an SLT grammar. As our experiments show, the obtained grammars are smaller than those of the hybrid dag and almost as small as TreeRePair's grammars. Moreover, they have the advantage that checking equivalence of subtrees is simple (each distinct subtree is represented by a unique identifier), a property not present for arbitrary SLT grammars. For hybrid dags, even equality of sibling end sequences can be checked efficiently. \noindent {\bf Average Size Analysis of DAGs.}\quad Given a tree over $n$ nodes and $m$ labels, what is the average size of its minimal dag? This problem was studied for unlabeled full binary trees by Flajolet, Sipala, and Steyaert~\cite{FlaSipStey1990}. They present exact expressions and show that the expected node size of the minimal dag of a full binary tree with $n$ nodes is asymptotically \[ \kappa \cdot \frac{n}{\sqrt{\ln n}} \cdot \left(1 + O \left(\frac{1}{\ln n} \right) \right) \] where the constant $\kappa$ is explicitly determined. One problem with the paper by Flajolet et al.\ is that the proof of the result above is rather sketchy, and at certain places contains large gaps. Here we fill these gaps, and extend their results, giving detailed proofs of: \begin{itemize} \item exact expressions, in terms of their generating functions, for the average node and edge sizes of dags and bdags of unranked trees over $n$ nodes and $m$ labels, and of \item the asymptotic behavior of these averages. We show that these asymptotic behaviors are also of the form $C \frac{n}{\sqrt{\log n}}$, where $C$ is again explicitly determined.
\end{itemize} The proofs of these results assume basic knowledge about combinatorial classes and generating functions. Details on these can be found in textbooks, e.g., the one by Flajolet and Sedgewick~\cite{AnalyticCombinatorics}. Our proofs of the asymptotics are rather involved and can be found in the Appendix. A preliminary version of this paper (not containing average-case sizes of dags) appeared as~\cite{DBLP:conf/icdt/LohreyMN13}. \section{Trees and dags}\label{sec:trees_and_dags} Let $\Sigma$ be a finite set of node labels. An {\em ordered $\Sigma$-labeled multigraph} is a tuple $M = (V,\gamma,\lambda)$, where \begin{itemize} \item $V$ is a finite set of nodes \item $\gamma : V \to V^*$ assigns to each node a finite word over the set of nodes \item $\lambda : V \to \Sigma$ assigns to each node a label from $\Sigma$. \end{itemize} The idea is that for a node $v \in V$, $\gamma(v)$ is the ordered list of $v$'s successor nodes. The {\em underlying graph} is the directed graph $G_M = (V,E)$, where $(u,v) \in E$ if and only if $v$ occurs in $\gamma(u)$. The \emph{node size} of $M$, denoted by $ \| M \|$, is the cardinality of $V$, and the \emph{edge size} or simply \emph{size} of $M$ is defined as $|M| = \sum_{v \in V} |\gamma(v)|$ (here $|w|$ denotes the length of a word $w$). Note that the labeling function $\lambda$ does not influence the size of $M$. The motivation for this is that the size of $M$ can be seen as the number of pointers that are necessary in order to store $M$ and that these pointers mainly determine the space consumption for $M$. \begin{figure*}[t] \centerline{ \input hybrid.pdf_t } \caption{\label{fig:hdag} Top: a tree $t$, its dag, its fcns encoding, and its bdag.
Bottom: its hybrid dag is shown in the box.} \end{figure*} Two ordered $\Sigma$-labeled multigraphs $M_1 = (V_1,\gamma_1,\lambda_1)$ and $M_2 = (V_2,\gamma_2,\lambda_2)$ are isomorphic if there exists a bijection $f : V_1 \to V_2$ such that for all $v \in V_1$, $\gamma_2(f(v)) = f(\gamma_1(v))$ and $\lambda_2(f(v)) = \lambda_1(v)$ (here we implicitly extend $f$ to a morphism $f : V_1^* \to V_2^*$). We do not distinguish between isomorphic multigraphs. In particular, in our figures a node $v \in V$ is not represented by the symbol $v$, but by the label $\lambda(v)$. An {\em ordered $\Sigma$-labeled dag} is a $\Sigma$-labeled ordered multigraph $d = (V,\gamma,\lambda)$ such that the underlying graph $G_d$ is acyclic. The nodes $r \in V$ for which there is no $v \in V$ such that $(v,r)$ is an edge of $G_d$ ($r$ has no incoming edges) are called the {\em roots} of $d$. An ordered $\Sigma$-labeled {\em rooted} dag is an ordered $\Sigma$-labeled dag with a unique root. In this case every node of $d$ is reachable in $G_d$ from the root node. The nodes $\ell \in V$ for which there is no $v \in V$ such that $(\ell,v)$ is an edge of $G_d$ ($\ell$ has no outgoing edges) are called the {\em leaves} of $d$. An {\em ordered $\Sigma$-labeled tree} is an ordered $\Sigma$-labeled rooted dag $t = (V,\gamma,\lambda)$ such that every non-root node $v$ has exactly one occurrence in the concatenation of all strings $\gamma(u)$ for $u \in V$. In other words, the underlying graph $G_t$ is a rooted tree in the usual sense and in every string $\gamma(u)$, every $v \in V$ occurs at most once. We define $\mathcal{T}(\Sigma)$ as the set of all ordered $\Sigma$-labeled trees. We denote ordered $\Sigma$-labeled trees by their usual term notation, i.e., for every $a \in \Sigma$, $n \geq 0$, and all trees $t_1,\dots,t_n \in \mathcal{T}(\Sigma)$, we also have $a(t_1,\ldots,t_n) \in \mathcal{T}(\Sigma)$. 
Note that trees from $\mathcal{T}(\Sigma)$ are \emph{unranked} in the sense that the number of children of a node does not depend on the label of the node. We therefore frequently speak of unranked trees for elements of $\mathcal{T}(\Sigma)$. Let $d= (V,\gamma,\lambda)$ be an ordered $\Sigma$-labeled dag. With every node $v \in V$ we associate a tree $\text{eval}_d(v) \in \mathcal{T}(\Sigma)$ inductively as follows: We set $$\text{eval}_d(v) = f(\text{eval}_d(v_1),\ldots, \text{eval}_d(v_n)), $$ if $\lambda(v)=f$ and $\gamma(v)=v_1\cdots v_n$ (where $f(\varepsilon)= f$). Intuitively, $\text{eval}_d(v)$ is the tree obtained by unfolding $d$ starting in the node $v$. If $d$ is an ordered $\Sigma$-labeled rooted dag, then we define $\text{eval}(d) = \text{eval}_d(r)$, where $r$ is the root node of $d$. Note that if $t$ is an ordered $\Sigma$-labeled tree and $v$ is a node of $t$, then $\text{eval}_t(v)$ is simply the subtree of $t$ rooted at $v$ and is written as $t/v = \text{eval}_t(v)$ in this case. If for nodes $u \neq v$ of $t$ we have $t/u = t/v$, then the tree $t/u = t/v$ is a \emph{repeated subtree} of $t$. Let $t = (V,\gamma,\lambda) \in \mathcal{T}(\Sigma)$ and let $G_t = (V,E)$ be the underlying graph (which is a tree). For an edge $(u,v) \in E$, $v$ is a \emph{child} of $u$, and $u$ is the \emph{parent} of $v$. If two nodes $v$ and $v'$ have the same parent node $u$, then $v$ and $v'$ are \emph{siblings}. If moreover $\gamma(u)$ is of the form $u_1 v v' u_2$ for $u_1, u_2 \in V^*$ then $v'$ is the \emph{next sibling} of $v$, and $v$ is the \emph{previous sibling} of $v'$. If a node $v$ does not have a previous sibling, it is a \emph{first child}, and if it does not have a next sibling, it is a \emph{last child}. For many tree-processing formalisms (e.g.\ standard tree automata), it is useful to deal with ranked trees, where the number of children of a node is bounded. There is a standard binary encoding of unranked trees, which we introduce next. 
A \emph{binary $\Sigma$-labeled dag} $d$, or short \emph{binary dag}, can be defined as an ordered $(\Sigma \cup \{\Box\})$-labeled dag $d = (V,\gamma,\lambda)$, where $\Box \not\in \Sigma$ is a special dummy symbol such that the following holds: \begin{itemize} \item For every $v \in V$ with $\lambda(v) \in \Sigma$ we have $|\gamma(v)|=2$ \item for every $v \in V$ with $\lambda(v) =\Box$ we have $|\gamma(v)|=0$. \end{itemize} For a binary dag, $d = (V, \gamma,\lambda)$, we alter our definitions of node and edge sizes by disregarding all dummy nodes. That is, the node size is now $ \|d\| = |\{ v \in V \mid \lambda(v) \neq \Box\}|$ and the (edge) size is $|d| = \sum_{v \in V} |\gamma(v)|_{\Sigma}$, where $|v_1v_2 \cdots v_m|_{\Sigma} = |\{ i \mid 1 \leq i \leq m, \lambda(v_i) \neq \Box\}|$. Accordingly, the dummy nodes are not represented in our figures. A \emph{binary $\Sigma$-labeled tree} $t$, or short \emph{binary tree}, is a binary dag which is moreover an ordered $(\Sigma \cup \{\Box\})$-labeled tree. The symbol $\Box$ basically denotes the absence of a left or right child of a node. For instance, $g(a,\Box)$ denotes the binary tree that has a $g$-labeled root with an $a$-labeled left child but no right child (as shown in the bottom of Figure~\ref{fig:hdag}). Note that $g(a,\Box)$ and $g(\Box,a)$ are different binary trees. \begin{comment} Instead of introducing the dummy symbol $\Box$ one may introduce four copies $a_{i,j}$ ($i,j \in \{0,1\}$) for every label $a \in \Sigma$, where $i=0$ (resp., $j=0$) means that the node does not have a left (resp., right) child. \end{comment} Let $\mathcal{B}(\Sigma)$ denote the set of binary $\Sigma$-labeled trees.
We define a mapping $\text{fcns} : \mathcal{T}(\Sigma)^* \rightarrow \mathcal{B}(\Sigma)$, where as usual $\mathcal{T}(\Sigma)^*$ denotes the set of all finite words (or sequences) over the set $\mathcal{T}(\Sigma)$, as follows (``fcns'' refers to ``first child/next sibling''): For the empty word $\varepsilon$ let $\text{fcns}(\varepsilon) = \Box $ (the empty binary tree). If $n \geq 1$, $t_1, \ldots, t_n \in \mathcal{T}(\Sigma)$ and $t_1 = f(u_1,\ldots,u_m)$ with $m \geq 0$, then $$ \text{fcns}(t_1 t_2\cdots t_n) = f(\text{fcns}(u_1 \cdots u_m),\text{fcns}(t_2 \cdots t_n)). $$ Note that $\text{fcns}(a) = a(\Box,\Box)$ for $a \in \Sigma$. In the following we always simply write $a$ for $a(\Box,\Box)$. The encoding fcns is bijective, hence the inverse $\text{fcns}^{-1} : \mathcal{B}(\Sigma) \to \mathcal{T}(\Sigma)^*$ is defined. Moreover, for every $t \in \mathcal{T}(\Sigma)$, we have $\|\text{fcns}(t)\| =\|t\|$, see e.g.~\cite{DBLP:books/aw/Knuth68}. The fcns encoding is also known as rotation correspondence (see, e.g.~\cite{marckert}) and as Rabin encoding. \begin{example} Let $t_1 = f(a_1,a_2,a_3)$ and let $t_2 = g(b_1,b_2)$. Then $\text{fcns}(t_1 t_2) = f(a_1(\Box,a_2(\Box,a_3)),g(b_1(\Box,b_2),\Box))$. \end{example} As mentioned in the Introduction, one can construct $\text{fcns}(t)$ by keeping all nodes of $t$ and creating edges as follows: For each node $u$ of $t$, the left child of $u$ in $\text{fcns}(t)$ is the first child of $u$ in $t$ (if it exists) and the right child of $u$ in $\text{fcns}(t)$ is the next sibling of $u$ in $t$ (if it exists). An ordered tree can be compacted by representing occurrences of repeated subtrees only once. Several edges then point to the same subtree (which we call a \emph{repeated} subtree), thus making the tree an ordered dag. It is known that the minimal dag of a tree is unique and that it can be constructed in linear time (see e.g., ~\cite{DBLP:journals/jacm/DowneyST80}). 
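For concreteness, the fcns encoding can be sketched in a few lines of Python. The nested-tuple tree representation `(label, children)` and the use of `None` for the dummy symbol $\Box$ are our own conventions for this sketch, not taken from the paper:

```python
# Sketch of the fcns (first child/next sibling) encoding.
# A tree is a pair (label, children) with children a tuple of trees;
# None plays the role of the dummy symbol (the "box").

def fcns(forest):
    """Binary encoding of a forest (a tuple of unranked trees)."""
    if not forest:
        return None                       # empty forest -> dummy node
    (label, kids), rest = forest[0], forest[1:]
    # left child: encoding of the children; right child: remaining siblings
    return (label, (fcns(kids), fcns(rest)))

# Example from the text: t1 = f(a1, a2, a3), t2 = g(b1, b2)
a = lambda x: (x, ())
t1 = ('f', (a('a1'), a('a2'), a('a3')))
t2 = ('g', (a('b1'), a('b2')))
print(fcns((t1, t2)))
```

Running the sketch on the example reproduces $f(a_1(\Box,a_2(\Box,a_3)),g(b_1(\Box,b_2),\Box))$, with `None` in place of $\Box$.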
For later purposes it is useful to define the minimal dag $\text{dag}(d)$ for every ordered $\Sigma$-labeled dag $d= (V,\gamma,\lambda)$. It can be defined as $$ \text{dag}(d) = (\{ \text{eval}_d(u) \mid u \in V\}, \gamma',\lambda') $$ with $\lambda'(f(t_1,\ldots, t_n)) = f$ and $\gamma'(f(t_1,\ldots, t_n)) = t_1\cdots t_n$. Thus, the nodes of $\text{dag}(d)$ are the different trees represented by the unfoldings of the nodes of $d$. The internal structure of the nodes of $\text{dag}(d)$ (which are trees in our definition) has no influence on the size of $\text{dag}(d)$, which is still defined to be the number of its edges. Note that in general we cannot recover $d$ from $\text{dag}(d)$: For instance if $d$ is the disjoint union of two copies of the same tree $t$, then $\text{dag}(d) = \text{dag}(t)$, but this will not be a problem. Indeed, we use dags only for the compression of forests consisting of different trees. Such a forest can be reconstructed from its minimal dag. Note also that if $d$ is a rooted dag, then $\text{dag}(d)$ is also rooted and we have $\text{eval}(\text{dag}(d))=\text{eval}(d)$. \begin{example} Consider the tree $t_n$ defined by $t_0=a$ and $t_n= a(t_{n-1},t_{n-1})$. While $|t_n| = 2(2^n-1)$, $|\text{dag}(t_n)| = 2n$. Hence $\text{dag}(t_n)$ can be exponentially smaller than $t_n$. \end{example} The \emph{binary dag} of $t\in \mathcal{T}(\Sigma)$, denoted $\text{bdag}(t)$, is defined as $$ \text{bdag}(t) = \text{dag}(\text{fcns}(t)). $$ It is a binary dag as defined above. See Figure~\ref{fig:tn2} in the Introduction for an example (recall that we do not represent dummy nodes in binary dags). Clearly, the number of nodes of $\text{dag}(t)$ equals the number of different subtrees $t/v$ of $t$.
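Since nested tuples are hashable, the node and edge sizes of the minimal dag can be computed by collecting the distinct subtrees in a set. The following Python sketch (tuple conventions are our own) illustrates the idea; it is not the linear-time construction cited above:

```python
# Sketch: node and edge size of the minimal dag, computed by collecting
# the distinct subtrees.  A tree is (label, children-tuple), hence hashable.

def dag_sizes(t):
    """Return (number of nodes, number of edges) of dag(t)."""
    distinct, edges = set(), 0
    def walk(s):
        nonlocal edges
        if s in distinct:
            return                      # this subtree is represented only once
        distinct.add(s)
        edges += len(s[1])              # outgoing edges of the dag node
        for child in s[1]:
            walk(child)
    walk(t)
    return len(distinct), edges

# Example from the text: t_0 = a, t_n = a(t_{n-1}, t_{n-1})
def t(n):
    return ('a', ()) if n == 0 else ('a', (t(n - 1), t(n - 1)))

print(dag_sizes(t(10)))   # the dag has n+1 = 11 nodes and 2n = 20 edges
```

For $t_{10}$ the full tree has $2(2^{10}-1) = 2046$ edges, while its dag has only $20$, matching the example.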
In order to describe the number of nodes of $\text{bdag}(t)$ the following definition is useful: For a node $v$ of an unranked tree $t = (V,\gamma,\lambda)$ define $\text{sibseq}(v) \in \mathcal{T}(\Sigma)^*$ (the {\em sibling sequence} of $v$) as the sequence of subtrees rooted at $v$ and all its right siblings. More formally, if $v$ is the root of $t$ then $\text{sibseq}(v)=t$. Otherwise, let $u$ be the parent node of $v$ and let $\gamma(u) = w v v_1 \cdots v_m$, where $w \in V^*$. Then $$ \text{sibseq}(v) = (t/v) (t/v_1) \cdots (t/v_m). $$ \begin{example} The different sibling sequences of the tree $t=f(a, f(b, a), b, a)$ are: $t$, $af(b,a)ba$, $f(b,a)ba$, $ba$, and $a$. \end{example} The following lemma follows directly from the definitions of bdag and sibseq: \begin{lemma} \label{lemma-sib-sequ} The number of nodes of $\text{bdag}(t)$ is equal to the number of different sibling sequences $\text{sibseq}(v)$, for all $v \in V$. \end{lemma} \section{Straight-line tree grammars} \label{sec-SLT-grammar} Straight-line tree grammars are a formalism that in many cases gives a more compact tree representation than dags. Let $\{y_1, y_2, \ldots\}$ be an infinite fixed set of parameters (or variables).
A \emph{straight-line tree grammar} (\emph{SLT grammar} for short) is a tuple ${\mathcal G}=(N, \text{rank}, \Sigma, S, \rho)$, where \begin{itemize} \item $N$ is a finite set of so-called {\em nonterminal symbols} \item $\text{rank} : N \to \mathbb{N}$ maps every nonterminal to its rank (which may be~$0$) \item $\Sigma$ is a finite set of node labels \item $S \in N$ is the start nonterminal and \item $\rho$ is a function that maps every $X \in N$ to an ordered $\Gamma$-labeled tree $\rho(X) = (V,\gamma,\lambda)$, where $\Gamma=\Sigma \cup N \cup \{y_1, \ldots, y_{\text{rank}(X)}\}$ and the following conditions hold: \begin{itemize} \item for every $1 \leq i \leq \text{rank}(X)$ there is exactly one node $v \in V$ with $\lambda(v) = y_i$, which moreover is a leaf of $\rho(X)$ and \item for every node $v \in V$ with $\lambda(v) = Y \in N$ we have $|\gamma(v)| = \text{rank}(Y)$, i.e., $v$ has $\text{rank}(Y)$ many children. \end{itemize} Finally, the binary relation $\{ (X,Y) \in N \times N \mid Y \text{ appears in } \rho(X) \}$ must be acyclic. \end{itemize} We also write $X \to t$ for $\rho(X)=t$ and call $X \to t$ a \emph{rule} or \emph{production} of $\mathcal G$. Moreover, we also write $X(y_1, \ldots, y_{\text{rank}(X)})$ instead of $X$ in the left-hand side of the rules, to emphasize the rank of the nonterminals. The properties of an SLT grammar ${\mathcal G}=(N, \text{rank}, \Sigma, S, \rho)$ allow us to define for every nonterminal $X \in N$ a $(\Sigma \cup \{y_1, \ldots, y_{\text{rank}(X)}\})$-labeled tree $\text{eval}_{\mathcal G}(X)$ inductively as follows\footnote{We hope that no confusion will arise with the evaluation of a dag defined in the previous section; we will see in fact that the evaluation of a dag can be seen as a special case of the evaluation of an SLT grammar.}: Let $\rho(X) = (V,\gamma,\lambda)$. Assume that for every nonterminal $Y$ that appears in $\rho(X)$, the tree $t_Y = \text{eval}_{\mathcal G}(Y)$ is already defined. 
This is a tree that contains for every $1 \leq j \leq \text{rank}(Y)$ exactly one leaf node that is labeled with $y_j$. We now replace every node $v \in V$ in $\rho(X)$ that is labeled with a nonterminal by a copy of the tree $t_{\lambda(v)}$. Thereby, the $j$-th child of $v$ is identified with the $y_j$-labeled node of $t_{\lambda(v)}$ for every $1 \leq j \leq \text{rank}(\lambda(v))$, and the parent node of the root of $t_{\lambda(v)}$ becomes the parent node of $v$. The resulting tree is $\text{eval}_{\mathcal G}(X)$. For a completely formal definition, see e.g. \cite{DBLP:journals/tcs/LohreyM06,DBLP:journals/jcss/LohreyMS12}.\footnote{The formalisms in \cite{DBLP:journals/tcs/LohreyM06,DBLP:journals/jcss/LohreyMS12} slightly differ from our definition, since they assume a fixed rank for every node label in $\Sigma$. But this is not an essential difference.} Finally, let $\text{eval}({\mathcal G}) = \text{eval}_{\mathcal G}(S)$. The term ``straight-line tree grammar'' comes from the fact that an SLT can be seen as a context-free tree grammar that produces a single tree. The size of ${\mathcal G}=(N, \text{rank}, \Sigma, S, \rho)$ is defined to be $$ |{\mathcal G}| = \sum_{X \in N} |\rho(X)| . $$ \begin{example} Consider the SLT grammar $\mathcal G$ with nonterminals $S, A, B$ and rules \begin{eqnarray*} S & \to & B(a,b,B(c,d,a)), \\ B(y_1, y_2, y_3) & \to & A(y_1, A(y_2,y_3)), \\ A(y_1, y_2) & \to & f(g(y_1),y_2) . \end{eqnarray*} We have $|{\mathcal G}| = 13$ and $\text{eval}({\mathcal G}) = f(g(a),f(g(b),f(g(c),f(g(d),a))))$. The same tree is also generated by the SLT grammar with the rules \begin{eqnarray*} S & \to & A(a,A(b,A(c,A(d,a)))), \\ A(y_1, y_2) & \to & f(g(y_1),y_2) . \end{eqnarray*} Its size is only $11$. \end{example} A $k$-SLT is an SLT ${\mathcal G} = (N, \text{rank}, \Sigma, S, \rho)$ such that $\text{rank}(X) \leq k$ for every $X \in N$. A $0$-SLT grammar is also called a \emph{regular SLT} (since it is a regular tree grammar). 
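The evaluation of an SLT grammar can be sketched by top-down expansion of nonterminals. In the following Python sketch, the rule dictionary, the parameter labels `y1`, `y2`, \dots, and the assumption that no terminal label starts with `y` are our own encoding conventions, not the paper's:

```python
# Sketch: evaluating an SLT grammar by expanding nonterminals top-down.
# Trees are (label, children) tuples; a parameter y_j is a leaf labeled 'yj'.
# Assumption (ours): no terminal label starts with the letter 'y'.

def eval_slt(rules, start):
    """rules maps each nonterminal name to its right-hand side tree."""
    def expand(node, args):
        label, kids = node
        subtrees = tuple(expand(k, args) for k in kids)
        if label in rules:                 # nonterminal: expand its rule,
            return expand(rules[label], subtrees)   # children become arguments
        if label.startswith('y'):          # parameter y_j -> j-th argument
            return args[int(label[1:]) - 1]
        return (label, subtrees)           # terminal symbol from Sigma
    return expand(rules[start], ())

# The first grammar of the example above.
leaf = lambda x: (x, ())
rules = {
    'S': ('B', (leaf('a'), leaf('b'), ('B', (leaf('c'), leaf('d'), leaf('a'))))),
    'B': ('A', (leaf('y1'), ('A', (leaf('y2'), leaf('y3'))))),
    'A': ('f', (('g', (leaf('y1'),)), leaf('y2'))),
}
print(eval_slt(rules, 'S'))
```

Unfolding the sketch reproduces $f(g(a),f(g(b),f(g(c),f(g(d),a))))$ from the example.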
In such a grammar, nonterminals only occur as leaves in the right-hand sides of the rules. An ordered $\Sigma$-labeled rooted dag $d = (V, \gamma,\lambda)$ can be identified with the $0$-SLT grammar $$ \mathcal{G}_d = (V,\text{rank},\Sigma,S,\rho), $$ where $\text{rank}(v)=0$ for every $v \in V$, $S$ is the root of $d$, and $\rho(v) = f(v_1,\ldots,v_n)$ if $\lambda(v)=f$ and $\gamma(v) = v_1 \cdots v_n$. Note that all trees occurring in the right-hand sides of the rules have height $0$ or $1$. Often in this paper, it will be useful to eliminate in $\mathcal{G}_d$ nonterminals $v \in V$ such that $\gamma(v) = \varepsilon$ (that is, the leaves of the dag $d$). For this, we define the {\em reduced} $0$-SLT grammar $$ \mathcal{G}_d^{\text{red}} = (V \setminus V_0 ,\text{rank},\Sigma,S,\rho), $$ where $V_0 = \{ v \in V \mid \gamma(v) = \varepsilon\}$, $\text{rank}(v)=0$ for every $v \in V \setminus V_0$, $S$ is the root of $d$, and $\rho(v) = f(w_1,\ldots,w_n)$ if $\lambda(v)=f$, $\gamma(v) = v_1 \cdots v_n$, and $w_i = \lambda(v_i)$ if $v_i \in V_0$ and $w_i = v_i$ otherwise. Note that every right-hand side $\rho(v)$ of $\mathcal{G}_d^{\text{red}}$ is now a tree of height 1. This does not change the evaluation of the grammar, and simplifies some technical details in Section~\ref{sec-size}. Of course, we should exclude the case that $V_0 = V$. For this, we simply exclude dags consisting of a single node from further considerations. Finally, observe that the evaluation of the grammar $\mathcal G_d$ coincides with the evaluation of the dag $d$ defined in the previous section. If $\mathcal{G}_d^{\text{red}} = (V \setminus V_0 ,\text{rank},\Sigma,S,\rho)$ with $V \setminus V_0 = \{ A_1, \ldots, A_n\}$ and $\rho(A_i) = f_i(\alpha_{i,1}, \ldots, \alpha_{i,k_i})$ (where $\alpha_{i,j} \in (V \setminus V_0) \cup \Sigma$) then the words $\alpha_{i,1} \cdots \alpha_{i,k_i} \in ((V \setminus V_0) \cup \Sigma)^+$ are called the \emph{child sequences} of the dag $d$.
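The reduced $0$-SLT grammar of $\text{dag}(t)$ and its child sequences can be sketched as follows in Python; the nonterminal names `A0`, `A1`, \dots and the tuple tree representation are our own conventions:

```python
# Sketch: child sequences of dag(t), via the reduced 0-SLT grammar.
# Each distinct non-leaf subtree gets one nonterminal (one rule); leaves of
# the tree remain terminal labels in the child sequences.

def child_sequences(t):
    nts = {}                                 # distinct non-leaf subtree -> name
    def collect(s):
        if s[1] and s not in nts:            # s[1] is the children tuple
            nts[s] = 'A%d' % len(nts)
            for child in s[1]:
                collect(child)
    collect(t)
    sym = lambda s: nts[s] if s[1] else s[0]
    # one child sequence per rule of the reduced grammar
    return [tuple(sym(c) for c in s[1]) for s in nts]

# Tree from the hdag figure: t = f(f(g(a), g(a)), g(a), g(a))
a = ('a', ())
Y = ('g', (a,))
X = ('f', (Y, Y))
t = ('f', (X, Y, Y))
print(child_sequences(t))   # [('A1', 'A2', 'A2'), ('A2', 'A2'), ('a',)]
```

Up to renaming, these are the child sequences $BAA$, $AA$, and $a$ of the grammar with rules $S \to f(B,A,A)$, $B \to f(A,A)$, $A \to g(a)$.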
\begin{example} For $d = \text{dag}(t)$ in Figure~\ref{fig:hdag}, $\mathcal{G}_d^{\text{red}}$ consists of the rules $$ S \to f(B,A,A), \qquad B \to f(A,A), \quad A \to g(a). $$ Hence the child sequences of the dag are $BAA$, $AA$, and $a$. \end{example} Algorithmic problems on SLT grammar-compressed trees were considered in \cite{DBLP:journals/tcs/LohreyM06,DBLP:journals/jcss/LohreyMS12}. Of particular interest in this context is a result from \cite{DBLP:journals/jcss/LohreyMS12} stating that a given SLT $\mathcal G$ can be transformed in polynomial time into an equivalent $1$-SLT ${\mathcal G}_1$ such that $\text{eval}({\mathcal G}_1)=\text{eval}({\mathcal G})$. In combination with results from \cite{DBLP:journals/tcs/LohreyM06} it follows that for a given nondeterministic tree automaton $A$ (even with sibling constraints) and an SLT grammar $\mathcal G$ one can test in polynomial time whether $A$ accepts $\text{eval}({\mathcal G})$. Compression algorithms that produce from a given input tree a small SLT grammar are proposed in \cite{DBLP:journals/is/BusattoLM08,lohmanmen13}. \section{The hybrid dag} \label{sec-hdag} While the dag shares repeated subtrees of a tree, the binary dag shares repeated sibling sequences (see Lemma~\ref{lemma-sib-sequ}). Consider an unranked tree $t$. As we have seen in the Introduction, the size of $\text{dag}(t)$ can be smaller than the size of $\text{bdag}(t)$. On the other hand, it can also be that the size of $\text{bdag}(t)$ is smaller than the size of $\text{dag}(t)$. We now wish to define a tree representation that combines both types of sharing (trees and tree sequences) and whose size is guaranteed to be smaller than or equal to the minimum of the sizes of $\text{dag}(t)$ and $\text{bdag}(t)$. Our starting point is $d = \text{dag}(t)$. In this dag we now want to share all repeated sibling sequences. As an example, consider the tree $t=f(f(g(a),g(a)),g(a),g(a))$ shown on the top left of Figure~\ref{fig:hdag}. Its size is $9$. 
The dag of this tree consists of a unique occurrence of the subtree $g(a)$ plus two $f$-labeled nodes, shown to the right of $t$ in the figure. Thus $|d|=6$. The corresponding reduced $0$-SLT grammar $\mathcal{G}_d^{\text{red}}$ consists of the rules \begin{equation}\label{dag-example} \begin{array}{lcl} S&\to& f(B,A,A),\\ B &\to& f(A,A), \\ A &\to& g(a). \end{array} \end{equation} In order to share repeated sibling sequences in $d$ we apply the fcns encoding to the right-hand sides of $\mathcal{G}_d^{\text{red}}$. For the above example we obtain the following new ``binary tree grammar'': \begin{equation}\label{dag-example-binary-coding} \begin{array}{lcl} S&\to& f(B(\Box, A(\Box,A)),\Box)\\ B &\to& f(A(\Box,A),\Box)\\ A&\to& g(a,\Box). \end{array} \end{equation} This is not an SLT grammar, since there are occurrences of $A$ with $0$ children and occurrences with $2$ children. We view the above rules just as the binary encoding of~\eqref{dag-example}. We now build the minimal dag of the forest obtained by taking the disjoint union of all right-hand sides of \eqref{dag-example-binary-coding}. In the example, the subtree $A(\Box,A)$ appears twice and is shared. We write the resulting dag again as a grammar, using the new nonterminal $C$ for the new repeated tree $A(\Box,A)$ (corresponding to the repeated sibling sequence $AA$ in \eqref{dag-example}): \begin{equation}\label{hdag-example} \begin{array}{lcl} S&\to& f(B(\Box, C),\Box)\\ B &\to& f(C,\Box)\\ C & \to & A(\Box,A)\\ A&\to& g(a,\Box) \end{array} \end{equation} These rules make up the {\em hybrid dag} ({\em hdag} for short) of the initial tree. Its size is the total number of edges in all right-hand side trees; it is $5$ in our example (here, as usual, we do not count edges to $\Box$-labeled nodes). Compare this to $\text{dag}(t)$ and $\text{bdag}(t)$, which are both of size $6$. Again, note that \eqref{hdag-example} should not be seen as an SLT grammar but as a succinct encoding of \eqref{dag-example}.
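The construction just illustrated (reduced grammar, fcns encoding of the right-hand sides, second sharing phase) can be sketched in Python. The tuple encodings, the extra root labels, and the nonterminal numbering are our own conventions; the function returns only the edge size $|\text{hdag}(t)|$:

```python
# Sketch of the hdag construction: build the reduced 0-SLT grammar of dag(t),
# fcns-encode every right-hand side (each has height 1, so the encoding is a
# chain), and share repeated binary subtrees across all encodings.
# A binary node is (symbol, left, right); None stands for the dummy node.

def hdag_size(t):
    nts = {}                                     # distinct non-leaf subtrees
    def collect(s):
        if s[1] and s not in nts:
            nts[s] = len(nts)
            for child in s[1]:
                collect(child)
    collect(t)
    sym = lambda s: ('N', nts[s]) if s[1] else ('T', s[0])
    def chain(syms):                             # fcns of a height-1 child list
        return None if not syms else (syms[0], None, chain(syms[1:]))
    # fcns of each rule t'_i: the root keeps the nonterminal as extra label
    forest = [(('rule', i, s[0]), chain([sym(c) for c in s[1]]), None)
              for s, i in nts.items()]
    shared, edges = set(), 0
    def walk(b):                                 # minimal dag over the forest
        nonlocal edges
        if b is None or b in shared:
            return
        shared.add(b)
        edges += (b[1] is not None) + (b[2] is not None)
        walk(b[1]); walk(b[2])
    for b in forest:
        walk(b)
    return edges

# Running example: t = f(f(g(a), g(a)), g(a), g(a))
a = ('a', ()); Y = ('g', (a,)); X = ('f', (Y, Y))
t = ('f', (X, Y, Y))
print(hdag_size(t))   # 5, versus 6 edges for both dag(t) and bdag(t)
```

In the sketch, the shared binary subtree corresponding to $A(\Box,A)$ is counted only once, which reproduces the size $5$ computed above.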
In our example, the production $B \to f(A,A)$ in the $0$-SLT grammar \eqref{dag-example} does not save any edges, since the nonterminal $B$ occurs only once in a right-hand side (namely $f(B,A,A)$). Eliminating this production yields the $0$-SLT grammar \[ \begin{array}{lcl} S&\to& f(f(A,A),A,A)\\ A &\to& g(a) \end{array} \] with the fcns encoding \[ \begin{array}{lcl} S&\to& f(f(A(\Box,A),A(\Box,A)),\Box)\\ A&\to& g(a,\Box) . \end{array} \] Sharing repeated subtrees gives \begin{equation}\label{hdag2-example} \begin{array}{lcl} S&\to& f(f(C,C),\Box)\\ C&\to& A(\Box,A)\\ A&\to& g(a,\Box) , \end{array} \end{equation} which corresponds to the framed graph in Figure~\ref{fig:hdag}. The total number of edges to non-$\Box$ nodes in all right-hand sides is still 5, but it has only 3 nonterminals in contrast to 4 for the above hdag. In practice, having fewer nonterminals is preferable. In fact, our implementation avoids redundant nonterminals like $B$ in our example. On the other hand, having only trees of height 1 as right-hand sides of the dag (seen as a reduced $0$-SLT grammar) does not influence the number of edges in the final grammar. Moreover, it slightly simplifies the proofs in the next section, where we show that the size of the hdag of a tree $t$ is smaller than or equal to the minimum of the sizes of $\text{dag}(t)$ and $\text{bdag}(t)$. In general, the hybrid dag is produced by first building the minimal dag, then constructing the fcns encoding of the corresponding reduced $0$-SLT grammar, and then building a minimal dag again. More formally, consider $d = \text{dag}(t)$ and assume that the corresponding reduced $0$-SLT grammar $\mathcal{G}_d^{\text{red}}$ contains the rules $A_1\to t_1,\dots, A_n\to t_n$. Recall that every tree $t_i$ has height 1 and that the trees $t_1, \ldots, t_n$ are pairwise different. Let $t'_i$ be the tree that is obtained from $t_i$ by adding $A_i$ as an additional label to the root of $t_i$.
Then \[ \text{hdag}(t) = \text{dag}(\text{fcns}(t'_1),\dots,\text{fcns}(t'_n)), \] where the tuple $(\text{fcns}(t'_1),\dots,\text{fcns}(t'_n))$ is viewed as the dag obtained by taking the disjoint union of the binary trees $\text{fcns}(t'_i)$. Clearly $\text{hdag}(t)$ is unique up to isomorphism. In the second step when $\text{dag}(\text{fcns}(t'_1),\dots,\text{fcns}(t'_n))$ is constructed from the tuple $(\text{fcns}(t_1'),\dots,\text{fcns}(t_n'))$, only suffixes of child sequences can be shared, since the trees $t'_1, \ldots, t'_n$ are pairwise different and of height 1. The size $|\text{hdag}(t)|$ of the hdag is the number of edges (to non-$\Box$-labeled nodes) of $\text{dag}(\text{fcns}(t'_1),\dots,\text{fcns}(t'_n))$. Note that the additional label $A_i$ at the root of $t_i$ is needed in order to be able to reconstruct the initial tree $t$. In \eqref{hdag-example}, these additional labels ($S$, $A$, and $B$) are implicitly present as the left-hand sides of the rules. On the other hand, these labels have no influence on the size of the hdag. The hdag is a particular dag. It is obtained by sharing repeated suffixes of child sequences in the minimal dag (viewed as a $0$-SLT grammar). In Section~\ref{sec:dag_plus_string} we introduce a further generalization of this idea, where child sequences of the dag are compressed using a straight-line context-free string grammar. Moreover, we show that such a compressed structure can be easily transformed into an equivalent $1$-SLT grammar (Theorem~\ref{thm-construct-1-STL}). This applies in particular to hdags. Hence, all the good algorithmic properties of ($1$-)SLT grammars (e.g.\ polynomial-time evaluation of tree automata) also hold for hdags. \section{Using the reverse encoding}\label{section_reverseDag} Instead of using the fcns encoding of a tree, one may also use the \emph{last child/previous sibling encoding} (lcps).
Just like fcns, lcps is a bijection from $\mathcal{T}(\Sigma)^*$ to $\mathcal{B}(\Sigma)$ and is defined as follows. For the empty word $\varepsilon$ let $\text{lcps}(\varepsilon) = \Box $ (the empty binary tree). If $n \geq 1$, $t_1, \ldots, t_n \in \mathcal{T}(\Sigma)$ and $t_n = f(u_1,\ldots,u_m)$ with $m \geq 0$, then $$\text{lcps}(t_1 t_2\cdots t_n) = f(\text{lcps}(t_1 \cdots t_{n-1}), \text{lcps}(u_1 \cdots u_m)). $$ Again, the inverse $\text{lcps}^{-1} : \mathcal{B}(\Sigma) \to \mathcal{T}(\Sigma)^*$ is defined. \begin{example} Let $t_1=f(a_1,a_2,a_3)$ and $t_2 = g(b_1,b_2)$. Then $$\text{lcps}(t_1 t_2) = g(f(\Box,a_3(a_2(a_1,\Box),\Box)),b_2(b_1,\Box)).$$ \end{example} Let $\text{rbdag}(t)=\text{dag}(\text{lcps}(t))$ and $$\text{rhdag}(t)=\text{dag}(\text{lcps}(t'_1),\dots,\text{lcps}(t'_n)),$$ where $t'_1, \ldots, t'_n$ are obtained from $t$ as in the definition of the hdag. The reason to consider the lcps encoding is that $\text{rbdag}(t)$ and $\text{rhdag}(t)$ are smaller for trees that have repeated \emph{prefixes} of child sequences. Empirically, as we show in Section~\ref{sec:exp_results_dag}, this is quite common and for most trees $t$ in our XML corpus $|\text{rbdag}(t)| < |\text{bdag}(t)| $ and $|\text{rhdag}(t)| < |\text{hdag}(t)| $. \begin{example} Let $t = f(f(a,a,b),f(a,a,c))$. Then $|\text{rbdag}(t)| = 7$ while $|\text{dag}(t)|=|\text{bdag}(t)|=|\text{hdag}(t)|=|t|=8$. \end{example} Clearly, there are also trees $t$ where $|\text{hdag}(t)| < |\text{rhdag}(t)|$. This raises the question whether there is a scheme which combines the best of both approaches. Obviously one can construct both $\text{hdag}(t)$ and $\text{rhdag}(t)$ of a tree $t$ and discard the larger of both. Yet a scheme which combines both approaches by sharing both suffixes and prefixes of child sequences faces the problem that the resulting minimal object is not necessarily unique.
This can easily be seen by considering trees in which repeated prefixes and suffixes of child sequences overlap. Also it is not clear how a minimal such object can be constructed efficiently. A (non-optimal) approach we have considered was to first share repeated prefixes and then share repeated suffixes. Yet the results in compression achieved were not significantly better than for the $\text{rhdag}$. Moreover, this approach can be further generalized by sharing arbitrary factors of sibling sequences. This is the topic of Section~\ref{sec:dag_plus_string}. \section{Comparison of worst-case sizes of dag, bdag, and hdag} \label{sec-size} We want to compare the node size and the edge size of $\text{dag}(t)$, $\text{bdag}(t)$, and $\text{hdag}(t)$ for an unranked tree $t$. We do not include $\text{rbdag}(t)$ or $\text{rhdag}(t)$, because by symmetry the same bounds hold as for $\text{bdag}(t)$ and $\text{hdag}(t)$, respectively. \subsection{The number of nodes} In this section we consider the number of \emph{nodes} in the dag and bdag of an unranked tree $t$. We show that $\| \text{dag}(t)\| \leq \|\text{bdag}(t)\|$. \begin{example} \label{example-f(a,..a)} Consider the tree $t_n=f(a,a,\ldots,a)$ consisting of $n$ nodes, where $n \geq 2$. Then $\|\text{dag}(t_n)\|=2$ and $\|\text{bdag}(t_n)\|=n$, while $|\text{dag}(t_n)| = |\text{bdag}(t_n)| = n-1$. Note that dags with multiplicities on edges, as defined in~\cite{DBLP:conf/vldb/KochBG03}, can store a tree such as $t_n$ in size $O(\log n)$. \end{example} \begin{lemma}\label{lemma:comparing_node_size} Let $t$ be an unranked tree. Then $\| \text{dag}(t)\| \le \| \text{bdag}(t) \| \label{ineq:node_size}$. \end{lemma} \begin{proof} The lemma follows from Lemma~\ref{lemma-sib-sequ} and the obvious fact that the number of different subtrees of $t$ (i.e., $\|\text{dag}(t)\|$) is at most the number of different sibling sequences in $t$: $\text{sibseq}(u) = \text{sibseq}(v)$ implies $t/u = t/v$.
\qed \end{proof} \begin{lemma} \label{lemma:comparing_node_size2} There exists a family of trees $(t_n)_{n \geq 2}$ such that $\|\text{dag}(t_n)\| = 2$ and $\|t_n\| = \|\text{bdag}(t_n)\| = n$. \end{lemma} \begin{proof} Take the family of trees $t_n$ from Example~\ref{example-f(a,..a)}. \qed \end{proof} Let us remark that the node size of the hdag can be larger than the node size of the bdag and the node size of the dag. The reason is that in $\mathcal{G}^{\text{red}}_{\text{dag}(t)}$, there is a nonterminal for each node of the dag (and hence the height of each right-hand side is at most one). This can be done differently of course; it was chosen to simplify proofs and because our main goal is the reduction of edge size. Note that the total number of edges of $\mathcal{G}^{\text{red}}_{\text{dag}(t)}$ is equal to the number of edges of $\text{dag}(t)$. \subsection{The number of edges} \label{sec:number-edges} We have just seen that the number of nodes of the (minimal) dag is always at most the number of nodes of the bdag, and that the gap can be maximal ($O(1)$ versus $|t|$). For the number of edges, the situation is different. We show that $\frac{1}{2} |\text{bdag}(t)| \le |\text{dag}(t)| \leq \frac{1}{2} |\text{bdag}(t)|^2$ for $|t| \geq 2$ and that these bounds are sharp up to the constant factor $1/2$ in the second inequality. In fact, for $|t| \geq 2$ we show the three inequalities \begin{eqnarray*} |\text{hdag}(t)| & \le & \min( |\text{dag}(t)|, |\text{bdag}(t)|), \\ |\text{bdag}(t)| & \le & 2|\text{hdag}(t)|, \text{ and} \\ |\text{dag}(t)| & \leq & \frac{1}{2} |\text{hdag}(t)|^2 \end{eqnarray*} which imply $$\frac{1}{2} |\text{bdag}(t)| \le |\text{dag}(t)| \leq \frac{1}{2} |\text{bdag}(t)|^2. $$ Before we prove these bounds we need some definitions. Recall that the nodes of $\text{bdag}(t)$ are in one-to-one correspondence with the different sibling sequences of $t$.
In the following, let $$ \text{sib}(t) = \{\text{sibseq}(v) \mid v \text{ a node of } t\} $$ be the set of all sibling sequences of $t$. To count the size (number of edges) of $\text{bdag}(t)$ we have to count for each sibling sequence $w \in \text{sib}(t)$ the number of outgoing edges in $\text{bdag}(t)$. We denote this number with $e(w)$; it can be computed as follows, where $w = s_1 s_2 \cdots s_m$ ($m \geq 1$) and the $s_i$ are trees: \begin{itemize} \item $e(w) = 0$, if $m=1$ and $|s_1|=0$ \item $e(w) = 1$, if either $m=1$ and $|s_1| \geq 1$ (then $w$ has only a left child) or if $m \geq 2$ and $|s_1|=0$ (then $w$ has only a right child) \item $e(w) = 2$, otherwise. \end{itemize} With this definition we obtain: \begin{lemma} \label{lemma-counting-edges-bdag} For every $t \in \mathcal{T}(\Sigma)$, we have $$ |\text{bdag}(t)| = \sum_{w \in \text{sib}(t)} e(w). $$ \end{lemma} The size of the hdag can be computed similarly: Consider the reduced $0$-SLT grammar $\mathcal{G} = \mathcal{G}_{\text{dag}(t)}^{\text{red}}$. Let $N$ be the set of nonterminals of $\mathcal{G}$ and let $S$ be the start nonterminal. Recall that every right-hand side of $\mathcal{G}$ has the form $f(\alpha_1,\ldots, \alpha_n)$, where every $\alpha_i$ belongs to $\Sigma \cup N$. Let $\text{sib}(\mathcal{G})$ be the set of all sibling sequences that occur in the right-hand sides of $\mathcal{G}$. Thus, for every right-hand side $f(\alpha_1,\ldots,\alpha_n)$ of $\mathcal{G}$, the sibling sequences $f(\alpha_1,\ldots,\alpha_n)$ (a sibling sequence of length $1$) and $\alpha_i \alpha_{i+1} \cdots \alpha_n$ ($1 \leq i \leq n$) belong to $\text{sib}(\mathcal{G})$. For such a sibling sequence $w$ we define $e(w)$ as above. Here, every $\alpha_i$ is viewed as a tree with a single node, i.e., $|\alpha_i|=0$. Then we have: \begin{lemma} \label{lemma-counting-edges-hdag} For every $t \in \mathcal{T}(\Sigma)$, we have $$ |\text{hdag}(t)| = \sum_{w \in \text{sib}(\mathcal{G})} e(w).
$$ \end{lemma} For $w = s_1\cdots s_m \in \text{sib}(t)$ let $\tilde{w}$ be the string that results from $w$ by replacing every non-singleton tree $s_i \not\in\Sigma$ by the unique nonterminal of $\mathcal{G}$ that derives to $s_i$. Actually, we should write $\tilde{w}_t$ instead of $\tilde{w}$, since the latter also depends on the tree $t$. But the tree $t$ will always be clear from the context. Here are a few simple statements: \begin{itemize} \item For every $w \in \text{sib}(t)$, the sibling sequence $\tilde{w}$ belongs to $\text{sib}(\mathcal{G})$, except for the length-1 sequence $\tilde{w} = S$ that is obtained from the length-1 sequence $w = t \in \text{sib}(t)$. \item For every $w \in \text{sib}(t)$, $\tilde{w}$ is a word over $N \cup \Sigma$. \item For every $w \in \text{sib}(t)$, $e(\tilde{w}) \leq e(w)$. \item The mapping $w \mapsto \tilde{w}$ is an injective mapping from $\text{sib}(t) \setminus \{t\}$ to $\text{sib}(\mathcal{G})$. \end{itemize} Using this mapping, the sums in Lemma~\ref{lemma-counting-edges-bdag} and \ref{lemma-counting-edges-hdag} can be related as follows: \begin{lemma} \label{lemma-relating-sums} For every $t \in \mathcal{T}(\Sigma)$, we have $$ |\text{hdag}(t)| \ = \sum_{w \in \text{sib}(\mathcal{G})} \!\!\!\! e(w) \ = \ |N| + \sum_{w \in \text{sib}(t)} e(\tilde{w}) . $$ \end{lemma} \begin{proof} By Lemma~\ref{lemma-counting-edges-hdag} it remains to show the second equality. The only sibling sequences in $\text{sib}(\mathcal{G})$ that are not of the form $\tilde{w}$ for $w \in \text{sib}(t)$ are the sequences (of length 1) that consist of the whole right-hand side $f(\alpha_1,\ldots,\alpha_m)$ of a nonterminal $A \in N$. For such a sibling sequence $u$ we have $e(u) = 1$ (since it has length $1$ and $f(\alpha_1,\ldots,\alpha_m)$ is not a single symbol). Hence, we have \begin{eqnarray*} \sum_{w \in \text{sib}(\mathcal{G})} \!\!\!\! e(w) & = & |N| + \!\! \sum_{w \in \text{sib}(t) \setminus \{t\}} \!\!\!\!
e(\tilde{w}) \\ & = & |N| + \sum_{w \in \text{sib}(t)} e(\tilde{w}) , \end{eqnarray*} where the last equality follows from $e(\tilde{t}) = e(S) = 0$. \qed \end{proof} \begin{theorem} \label{hdag-kleiner-min} For every $t \in \mathcal{T}(\Sigma)$, we have $$ |\text{hdag}(t)| \le \min( |\text{dag}(t)|, |\text{bdag}(t)|). $$ \end{theorem} \begin{proof} Since $\text{hdag}(t)$ is obtained from $\text{dag}(t)$ by sharing repeated suffixes of child sequences, we immediately get $|\text{hdag}(t)| \le |\text{dag}(t)|$. It remains to show $|\text{hdag}(t)| \leq |\text{bdag}(t)|$. By Lemma~\ref{lemma-counting-edges-bdag} and \ref{lemma-relating-sums} we have to show $$|N|+\sum_{w \in \text{sib}(t)} e(\tilde{w}) \ \leq \ \sum_{w \in \text{sib}(t)} e(w),$$ where $N$ is the set of nonterminals of $\mathcal{G}_{\text{dag}(t)}^{\text{red}}$. To see this, note that: \begin{itemize} \item $e(\tilde{w}) \leq e(w)$ for all $w \in \text{sib}(t)$ and \item for every nonterminal $A \in N$ there must exist a sibling sequence $w \in \text{sib}(t)$ such that $\tilde{w}$ starts with $A$. For this sequence we have $e(w) = e(\tilde{w})+1$ (note that the right-hand side of $A$ does not belong to $\Sigma$, hence $w$ starts with a tree of size at least 1). \end{itemize} Choose for every $A \in N$ a sibling sequence $w_A \in \text{sib}(t)$ such that $\tilde{w}_A$ starts with $A$. Let $R = \text{sib}(t) \setminus \{w_A \mid A \in N\}$. We get \begin{eqnarray*} |N|+\sum_{w \in \text{sib}(t)} e(\tilde{w}) & = & |N|+ \sum_{A \in N} e(\tilde{w}_A) + \sum_{w \in R} e(\tilde{w}) \\ & = & \sum_{A \in N} (e(\tilde{w}_A)+1) + \sum_{w \in R} e(\tilde{w}) \\ & \leq & \sum_{A \in N} e(w_A) + \sum_{w \in R} e(w)\\ & = & \sum_{w \in \text{sib}(t)} e(w). \end{eqnarray*} This proves the theorem. \qed \end{proof} \begin{theorem}\label{lemma:root_lemma} For every $t \in \mathcal{T}(\Sigma)$ with $|t| \geq 2$, we have $$ |\text{dag}(t)| \leq \frac{1}{2} |\text{hdag}(t)|^2. 
$$ \end{theorem} \begin{proof} Let $f_i(\alpha_{i,1}, \ldots, \alpha_{i,n_i})$ for $1 \leq i \leq k$ be the right-hand sides of $\mathcal{G}_{\text{dag}(t)}^{\text{red}}$. W.l.o.g. assume that $1 \leq n_1 \leq n_2 \leq \cdots \leq n_k$. Every $\alpha_{i,j}$ is either from $\Sigma$ or a nonterminal. Moreover, all the trees $f_i(\alpha_{i,1}, \ldots, \alpha_{i,n_i})$ are pairwise different. We have $|\text{dag}(t)| = \sum_{i=1}^k n_i$. If $n_k = 1$, then $t$ is a linear chain. In this case, we get $$ |\text{dag}(t)| = |t| \leq \frac{1}{2} |t|^2 = \frac{1}{2} |\text{hdag}(t)|^2 $$ since $|t| \geq 2$. Let us now assume that $n_k \geq 2$. Recall that we compute $\text{hdag}(t)$ by taking the minimal dag of the forest consisting of the binary encodings of the trees $f_i(\alpha_{i,1}, \ldots, \alpha_{i,n_i})$. The binary encoding of $f_i(\alpha_{i,1}, \ldots, \alpha_{i,n_i})$ has the form $f_i(t_i,\Box)$, where $t_i$ is a chain of $n_i-1$ many right pointers. Let $d$ be the minimal dag of the forest consisting of all chains $t_i$. Since all the trees $f_i(\alpha_{i,1}, \ldots, \alpha_{i,n_i})$ are pairwise distinct, we have $|\text{hdag}(t)| = k + |d|$. Since the chain $t_i$ consists of $n_i$ many nodes, we have $|d| \geq \max\{ n_i \mid 1 \leq i \leq k\}-1 = n_k -1$. Hence, we have to show that $\sum_{i=1}^k n_i \leq \frac{1}{2} (k + n_k - 1)^2$. We have $$ \sum_{i=1}^k n_i \leq k \cdot n_k \leq (k-1) n_k + \frac{1}{2} n_k^2 = \frac{1}{2} (2 (k-1) n_k + n_k^2) \leq \frac{1}{2} (k-1 + n_k)^2, $$ which concludes the proof. For the second inequality note that $n_k \leq \frac{1}{2} n_k^2$, since $n_k \geq 2$. \qed \end{proof} Consider the tree $s_n$ from Figure~\ref{fig:tn2}. We have $|\text{dag}(s_n)| = |s_n| = n^2$ and $|\text{hdag}(s_n)| = |\text{bdag}(s_n)| = 3n-2$. Hence, we get $$ |\text{dag}(s_n)| = n^2 > \frac{1}{9} (3n-2)^2 = \frac{1}{9} |\text{hdag}(s_n)|^2 . $$ This shows that up to a constant factor, the bound in Theorem~\ref{lemma:root_lemma} is sharp. 
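The edge count of Lemma~\ref{lemma-counting-edges-bdag} is straightforward to mechanize. The following sketch is our own illustration (trees are represented as nested (label, children) tuples, an assumption of this sketch): it enumerates the sibling sequences of $t$ and sums $e(w)$ over the distinct ones.

```python
def sibling_sequences(t):
    """All sibling sequences of t, each as a tuple of subtrees."""
    seqs = {(t,)}  # the root forms a sibling sequence of length 1

    def walk(u):
        children = u[1]
        for i in range(len(children)):
            seqs.add(tuple(children[i:]))  # sibseq of the i-th child
        for c in children:
            walk(c)

    walk(t)
    return seqs

def e(w):
    """Outgoing edges of the bdag node for the sibling sequence w."""
    leading_leaf = not w[0][1]  # |s_1| = 0
    if len(w) == 1 and leading_leaf:
        return 0
    if len(w) == 1 or leading_leaf:
        return 1  # only a left child, or only a right child
    return 2

def bdag_edge_size(t):
    return sum(e(w) for w in sibling_sequences(t))

a = ("a", ())
print(bdag_edge_size(("f", (a,) * 5)))  # 5
```

For $t_n = f(a, \ldots, a)$ the distinct sibling sequences are $t_n$ itself and the suffixes $a^k$ ($1 \leq k \leq n$), so the sum evaluates to $1 + (n-1) + 0 = n$, matching the edge size of $\text{bdag}(t_n)$, in which the single $a$-chain is shared.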
The constant $1/9$ can be slightly improved: \begin{theorem}\label{lemma:1/6} There is a family of trees $(s_n)_{n \geq 1}$ such that $$|\text{dag}(s_n)| > \frac{1}{6} |\text{hdag}(s_n)|^2. $$ \end{theorem} \begin{proof} We specify $s_n$ by the reduced $0$-SLT grammar $\mathcal{G}_{\text{dag}(s_n)}^{\text{red}}$. Let $\mathcal{G}_{\text{dag}(s_n)}^{\text{red}}$ contain the following productions for $0 \leq i \leq n$: $$ A_i \to f(A_{i+1}, \ldots, A_n, \underbrace{a,\ldots,a}_{n \text{ many}}) . $$ This is indeed the grammar obtained from the minimal dag for a tree $s_n$ (of size exponential in $n$). We have $$ |\text{dag}(s_n)| = \sum_{i=n}^{2n} i = n (n+1) + \sum_{i=0}^{n} i = n (n+1) + \frac{n (n+1)}{2} = \frac{3 n (n+1)}{2}. $$ The hybrid dag of $s_n$ consists of the child sequence $A_1 A_2 \cdots A_n a^n$ together with $n+1$ many left pointers into this sequence. Hence, we have $$ |\text{hdag}(s_n)| = 2n-1 + n+1 = 3n . $$ We obtain $$ \frac{1}{6} |\text{hdag}(s_n)|^2 = \frac{1}{6} 9 n^2 = \frac{3}{2} n^2 < \frac{3 n (n+1)}{2} = |\text{dag}(s_n)| . $$ This proves the theorem. \qed \end{proof} Next let us bound $|\text{bdag}(t)|$ in terms of $|\text{hdag}(t)|$: \begin{theorem}\label{lemma:half_lemma} For every $t \in \mathcal{T}(\Sigma)$, we have $$ |\text{bdag}(t)| + n \leq 2 |\text{hdag}(t)|, $$ where $n$ is the number of non-leaf nodes of $\text{dag}(t)$. \end{theorem} \begin{proof} We use the notations introduced before Theorem~\ref{hdag-kleiner-min}. Note that $n = |N|$ is the number of nonterminals of the $0$-SLT grammar $\mathcal{G}_{\text{dag}(t)}^{\text{red}}$. By Lemma~\ref{lemma-counting-edges-bdag} we have $|\text{bdag}(t)|\ =\ \sum_{w \in \text{sib}(t)} e(w)$. By Lemma~\ref{lemma-relating-sums} we have $|\text{hdag}(t)| \ = \ |N| + \sum_{w \in \text{sib}(t)} e(\tilde{w})$. Hence, we have to show that $$ |N| + \sum_{w \in \text{sib}(t)} e(\tilde{w}) \ \geq \ \frac{1}{2} \sum_{w \in \text{sib}(t)} \!\! e(w) \ + \ \frac{1}{2} |N|.
$$ In order to prove this, we show the following for every sibling sequence $w \in \text{sib}(t)$: Either $e(\tilde{w}) \geq \frac{1}{2} e(w)$ or $e(\tilde{w}) = 0$ and $e(w) = 1$. In the latter case, the sibling sequence $w$ consists of a single tree $s$ of size at least one (i.e., $s$ does not consist of a single node), and $\tilde{w}$ consists of a single nonterminal $A \in N$. So, let $w = t_1 \cdots t_n \in \text{sib}(t)$ and let $\tilde{w} = \alpha_1 \cdots \alpha_n$ with $\alpha_i \in \Sigma \cup N$. We consider the following four cases: \smallskip \noindent {\em Case 1.} $n>1$ and $t_1 = \alpha_1 \in \Sigma$. We have $e(w) = e(\tilde{w})=1$. \smallskip \noindent {\em Case 2.} $n > 1$ and $|t_1| \geq 1$. We have $e(w)=2$ and $e(\tilde{w})=1$. \smallskip \noindent {\em Case 3.} $n=1$ and $t_1 = \alpha_1 \in \Sigma$. We have $e(w) = e(\tilde{w})=0$. \smallskip \noindent {\em Case 4.} $n = 1$ and $|t_1| \geq 1$. We have $e(w)=1$, $e(\tilde{w})=0$, and $\tilde{w}$ consists of a single nonterminal $A \in N$. \qed \end{proof} For the tree $t_m$ from Figure~\ref{fig:tn} we have $|\text{bdag}(t_m)| = |t_m| = 2m$, $|\text{hdag}(t_m)| = |\text{dag}(t_m) | = m+1$, and $n=|N|=2$. Hence, Theorem~\ref{lemma:half_lemma} is optimal. From Theorems~\ref{hdag-kleiner-min}, \ref{lemma:root_lemma}, and \ref{lemma:half_lemma} we immediately get: \begin{corollary} For every $t \in \mathcal{T}(\Sigma)$ with $|t| \geq 2$, we have $$ \frac{1}{2} |\text{bdag}(t)| \le |\text{dag}(t)| \leq \frac{1}{2} |\text{bdag}(t)|^2. $$ \end{corollary} \section{The average-case sizes of dag and bdag} Let $m\ge 1$. We will use the terminology {\em $m$-labeled tree}, instead of {\em $\{1, \ldots, m\}$-labeled tree}. In this section, we analyze the average sizes (both node size and edge size) of the dags and bdags of $m$-labeled unranked trees of size~$n$.
Currently, we are not able to make such an analysis for the hdag. While Section~\ref{sec:exact} provides exact expressions for the average sizes, Section~\ref{subsection:asymptotic} deals with their asymptotic behavior. The results are mostly an extension of~\cite{FlaSipStey1990}, where the authors treat the average node size of binary trees over a singleton alphabet ($m=1$). However, we give here complete proofs, whereas the proof was merely sketched in \cite{FlaSipStey1990}. Let $\mathcal{B}_m$ denote the set of non-empty $m$-labeled binary trees and let $\mathcal{T}_m$ denote the set of non-empty $m$-labeled unranked trees. Here, ``non-empty'' means that our trees have at least one node. For $\mathcal{U} \in \{ \mathcal{B}, \mathcal{T} \}$ and $n\ge 0$, we define $$ \mathcal{U}_{m,n} = \{ t \in \mathcal{U}_m\mid |t|=n \}. $$ We seek expressions for the accumulated quantities $$ N _{m,n}^{\mathcal{U}} = \sum_{t \in \mathcal{U}_{m,n}} \| \text{dag}(t) \| \quad \text{and}\quad E_{m,n}^{\mathcal{U}} = \sum_{t \in \mathcal{U}_{m,n}} |\text{dag}(t)| $$ as well as for the average sizes $$ \bar{N }_{m,n}^{\mathcal{U}} = \frac{N _{m,n}^{\mathcal{U}}}{|\mathcal{U}_{m,n}|} \quad \text{and}\quad \bar{E}_{m,n}^{\mathcal{U}} = \frac{E_{m,n}^{\mathcal{U}}}{|\mathcal{U}_{m,n}|}. $$ Recall that the fcns-encoding yields a bijection between $m$-labeled unranked trees of edge size $n$ and $m$-labeled binary trees of edge size $n$ whose root has only a left child. Therefore, the average node size (resp. edge size) of the bdag of $m$-labeled unranked trees of size $n$ is one plus the average node size (resp. edge size) of the minimal dag of $m$-labeled binary trees of size $n-1$. One key tool used in this section is that of {\em generating functions}. If $F_n$ is a sequence of numbers, then its (ordinary) generating function is defined as \[ \mathbf F(z)= \sum_{n \ge 0} F_n z^n \] and $[z^n] \mathbf F(z)$ denotes the coefficient of $z^n$ in $\mathbf F(z)$ (i.e., $F_n$).
If for a set $\mathcal{S}$ a size function $f:\, \mathcal{S} \to \mathbb{N}$ is defined such that for every $n$ the set $\mathcal{S}_n = \{ s \in \mathcal{S}\mid f(s)=n\}$ is finite, we can associate to the set $\mathcal{S}$ the generating function \[ \mathbf S(z) = \sum_{n \ge 0} |\mathcal{S}_n| z^n, \] which is said to \emm count the objects of $\mathcal S$ by their size., Such sets $\mathcal S$ are sometimes called {\em combinatorial classes}~\cite[p.16]{AnalyticCombinatorics}. When a class has a simple recursive structure, it is often possible to obtain an explicit expression for $\mathbf S(z)$. This will be the case for our two basic generating functions, $\mathbf B_m(z)$ and $\mathbf T_m(z)$, which count respectively the trees of $\mathcal B_m$ and $\mathcal T_m$ by their size (see Lemmas~\ref{lemma:gf_labeled_binary_trees} and~\ref{lemma:gf_labeled_unranked_trees}). Let again $\mathcal{U} \in \{ \mathcal{B}, \mathcal{T} \}$. For $u \in \mathcal{U}_m$ and $n\ge 0$, define $C_{m,n}^{\mathcal{U}}(u)$ as the number of $\mathcal{U}_m$-trees of size $n$ that {contain} $u$ as a subtree. Let $ v \in \mathcal{U}_m$ be another tree such that $|v|=|u|$. For every $n \ge 0$ there is a bijection between the set of trees of size $n$ that contain $u$ and the set of trees of size $n$ that contain $v$ (it is obtained by replacing every occurrence of $u$ by a copy of $v$, and vice versa). Therefore $C_{m,n}^{\mathcal{U}}(u) =C_{m,n}^{\mathcal{U}}(v)$ and so we also write $C_{m,n}^{\mathcal{U}}(p)$ (with $p = |u|$) instead of $C_{m,n}^{\mathcal{U}}(u)$. The corresponding generating function is $$ \mathbf C_{m,p}^{\mathcal{U}}(z) = \sum_{n \geq 0} C_{m,n}^{\mathcal{U}}(p) z^n. $$ This series will be determined in Lemma~\ref{lem:C-binary} for binary trees and in Lemma~\ref{lemma:C^U_n} for unranked trees. 
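The shape-independence of $C_{m,n}^{\mathcal{U}}(u)$ can be confirmed by brute force for small sizes. The sketch below is our own illustration for binary trees with $m=1$; the encoding of unary-left, unary-right and binary nodes (and all names) is an assumption of the sketch. It counts trees of size $n$ containing each of two different trees of equal size $p$ and finds the same number.

```python
def binary_trees(n):
    """All binary trees with n edges, where a node may have a left
    child only ('L'), a right child only ('R'), or both ('LR')."""
    if n == 0:
        return [("leaf",)]
    out = []
    for s in binary_trees(n - 1):
        out.append(("L", s))
        out.append(("R", s))
    for i in range(n - 1):  # i + (n - 2 - i) edges below a binary root
        for s1 in binary_trees(i):
            for s2 in binary_trees(n - 2 - i):
                out.append(("LR", s1, s2))
    return out

def subtrees(t):
    yield t
    for s in t[1:]:
        yield from subtrees(s)

def count_containing(n, u):
    return sum(u in set(subtrees(t)) for t in binary_trees(n))

u = ("L", ("L", ("leaf",)))       # a left chain with 2 edges
v = ("LR", ("leaf",), ("leaf",))  # a binary node, also 2 edges
print(count_containing(4, u) == count_containing(4, v))  # True
```

The equality is exactly the swap bijection from the text: occurrences of $u$ and of $v$ are pairwise disjoint (two subtree occurrences of equal size cannot properly contain one another), so exchanging them is an involution on trees of size $n$.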
Let us now explain how the accumulated sizes $N^{\mathcal U}_{m,n}$ and $E^{\mathcal U}_{m,n}$ (or, equivalently, the associated generating functions) can be expressed in terms of these series. Let $\text{sub}(t)$ denote the set of subtrees occurring in the tree $t$. Since $\| \text{dag}(t) \| $ is the number of different subtrees of $t$, we have ($\mathbbm{1}_{u\in \text{sub}(t)}$ is $1$ if $u \in \text{sub}(t)$ and $0$ otherwise) \begin{eqnarray*} N _{m,n}^{\mathcal{U}} = \sum_{t \in \mathcal{U}_{m,n}} \| \text{dag}(t) \| & = & \sum_{t \in \mathcal{U}_{m,n}} \sum_{u \in \mathcal U_m}\mathbbm{1}_{u\in \text{sub}(t)}\\ & = & \sum_{u \in \mathcal{U}_m} C_{m,n}^{\mathcal{U}}(u) = \sum_{p \geq 0} |\mathcal{U}_{m,p}| \, C_{m,n}^{\mathcal{U}}(p). \end{eqnarray*} Hence the corresponding generating function is \begin{equation}\label{formula-K(z)^U-general} \mathbf N _m^{\mathcal{U}} (z) = \sum_{n \geq 0} N _{m,n}^{\mathcal{U}} z^n = \sum_{p \geq 0} |\mathcal{U}_{m,p}| \, \mathbf C_{m,p}^{\mathcal{U}}(z). \end{equation} We now want to obtain an expression for the accumulated number of edges $E_{m,n}^{\mathcal U}$ and the associated generating function. Let us denote by ${U}_{m,n}^{(d)}$ (with $U=B$ or $T$) the number of trees from $\mathcal U_{m,n}$ that have root degree $d$ (i.e., the root has $d$ children). Then, in the same spirit as for the number of nodes, we get for the number of edges: \begin{eqnarray*} E_{m,n}^{\mathcal{U}} = \sum_{t \in \mathcal{U}_{m,n}} |\text{dag}(t)| & = & \sum_{t \in \mathcal{U}_{m,n}}\sum_{u\in \text{sub}(t)} \deg(\text{root}(u) )\\ & = & \sum_{u \in\mathcal U_m} \deg(\text{root}(u)) C^{\mathcal U}_{m,n}(u)= \sum_{p,d \geq 0} d\, {U}_{m,p}^{(d)} C_{m,n}^{\mathcal{U}}(p). \end{eqnarray*} The associated generating function is \begin{equation}\label{formula-E(z)^U-general} \mathbf E_m^{\mathcal{U}} (z) = \sum_{n \geq 0} E_{m,n}^{\mathcal{U}} z^n = \sum_{p,d \geq 1} d\, {U}_{m,p}^{(d)} \mathbf C_{m,p}^{\mathcal{U}}(z).
\end{equation} Indeed, we can ignore trees of size $p=0$ (or, equivalently, root degree $d=0$). \subsection{Exact counting}\label{sec:exact} In this section, we determine explicit expressions for the generating functions\ $\mathbf N^{\mathcal{U}}_m (z)$ and $\mathbf E_m^{\mathcal{U}} (z)$, whose coefficients record the accumulated number of nodes (resp. edges) in the dag of $m$-labeled trees of size $n$. We start with binary trees (that is, $\mathcal U=\mathcal B$). \subsubsection{Binary trees} \begin{lemma}\label{lemma:gf_labeled_binary_trees} The generating function $\mathbf B_m(z)$ of $m$-labeled binary trees, counted by their edge number, is \begin{align} \mathbf B_m(z) &= \frac{1-2mz-\sqrt{1-4mz}}{2mz^2}. \label{eq:B_m1} \end{align} Equivalently, the number of $m$-labeled binary trees of size $p$ is \begin{equation}\label{eq:B_m2} B_{m,p}= \frac 1{p+2} {2p+2\choose p+1} m^{p+1}. \end{equation} Of course the case $m=1$ recovers the (shifted) Catalan numbers. \end{lemma} \begin{proof} The proof of Lemma~\ref{lemma:gf_labeled_binary_trees} follows a general principle called the {\em symbolic method} in~\cite[Chapter 1]{AnalyticCombinatorics}: If a combinatorial class $\mathcal{H}$ is built by disjoint union from two combinatorial classes $\mathcal{F}$ and $\mathcal{G}$ with the respective generating functions $\mathbf F(z)$ and $\mathbf G(z)$, then the generating function $\mathbf H(z)$ of the combinatorial class $\mathcal{H}$ is $\mathbf H(z) = \mathbf F(z) + \mathbf G(z)$. Similarly, if $\mathcal{H}$ is built via Cartesian product from the classes $\mathcal{F}$ and $\mathcal G$, then $\mathbf H(z) = \mathbf F(z)\cdot \mathbf G(z)$. An $m$-labeled binary tree is either a single node with a label from $\{1, \ldots, m\}$, or a root node with a single subtree (left or right) or a root node with two subtrees.
The above principles give for the generating function\ $ \mathbf B_m(z)$ the equation \[ \mathbf B_{m}(z) = m + 2mz\mathbf B_{m}(z) + m \left( z\mathbf B_{m}(z) \right)^2. \] Solving this equation for $\mathbf B_{m}(z)$ proves equation~\eqref{eq:B_m1} (taking the other root for $\mathbf B_m(z)$ would give a series with negative powers of $z$). Equation~\eqref{eq:B_m2} follows from~\eqref{eq:B_m1} by a Taylor expansion. \qed \end{proof} \begin{lemma}\label{lem:C-binary} The generating function $\mathbf C_{m,p}^{\mathcal{B}}(z)$ of $m$-labeled binary trees that contain a given tree $u \in \mathcal{B}_m$ of size $p$ is $$ \mathbf C_{m,p}^{\mathcal{B}}(z) =\frac{1}{2mz^2} \left(\sqrt{1-4mz+4mz^{p+2}} - \sqrt{1-4mz} \right). $$ \end{lemma} \begin{proof} We first determine the generating function\ $\mathbf A_{m,p}^{\mathcal{B}}(z)$ counting $m$-labeled binary trees that do \emm not, contain (or {\em avoid}) $u$. A non-empty binary tree $t$ that avoids $u$ is either reduced to a single node, or a root node with a $u$-avoiding tree attached (this may be the left or the right child), or a root node to which two $u$-avoiding trees are attached. However, we must still exclude the tree $t=u$, which is included in the above recursive description. We thus get the following equation: \[ \mathbf A_{m,p}^{\mathcal{B}}(z) = m + 2mz \mathbf A_{m,p}^{\mathcal{B}}(z) + m \left(z \mathbf A_{m,p}^{\mathcal{B}}(z) \right)^2 -z^p, \] which yields \[ \mathbf A_{m,p}^{\mathcal{B}}(z) = \frac{1-2mz-\sqrt{1-4mz + 4mz^{p+2}}}{2mz^2}. \] Using $\mathbf C_{m,p}^{\mathcal{B}}(z) = \mathbf B_m(z) - \mathbf A_{m,p}^{\mathcal{B}}(z)$, this proves the lemma. \qed \end{proof} We now obtain expressions for the generating functions\ $\mathbf N_m^{\mathcal{B}}(z)$ and $\mathbf E_m^{\mathcal{B}}(z)$ given by~\eqref{formula-K(z)^U-general} and~\eqref{formula-E(z)^U-general}.
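As a quick numerical cross-check of Lemma~\ref{lemma:gf_labeled_binary_trees} (our own sketch, not part of the text), the closed form~\eqref{eq:B_m2} can be compared with the coefficients extracted directly from the functional equation $\mathbf B_m = m + 2mz\mathbf B_m + m(z\mathbf B_m)^2$:

```python
from math import comb

def B_coefficients(m, n):
    """B_{m,0}, ..., B_{m,n} via coefficient extraction from
    B = m + 2mz*B + m*(z*B)^2, i.e.
    B_p = 2m*B_{p-1} + m*[z^{p-2}](B^2) for p >= 1, B_0 = m."""
    B = [m]
    for p in range(1, n + 1):
        square = sum(B[i] * B[p - 2 - i] for i in range(p - 1))
        B.append(2 * m * B[p - 1] + m * square)
    return B

def B_closed(m, p):
    # (1/(p+2)) * binom(2p+2, p+1) * m^(p+1); shifted Catalan for m = 1
    return comb(2 * p + 2, p + 1) * m ** (p + 1) // (p + 2)

print(B_coefficients(2, 5) == [B_closed(2, p) for p in range(6)])  # True
```

For $m = 1$ this produces $1, 2, 5, 14, \ldots$, the shifted Catalan numbers mentioned in the lemma.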
\begin{theorem}\label{lemma:binary_K} The generating function of the accumulated number of nodes of minimal dags of $m$-labeled binary trees is $$ \mathbf N _m^{\mathcal{B}}(z) = \frac{1}{2mz^2} \sum_{p \ge 0} B_{m,p} \left( \sqrt{1-4mz+4mz^{p+2}} -\sqrt{1-4mz} \right), $$ where the numbers $B_{m,p}$ are given by~\eqref{eq:B_m2}. The generating function of the accumulated number of edges of dags of $m$-labeled binary trees is $$ \mathbf E_m^{\mathcal{B}}(z) = \frac{3}{2mz^2} \sum_ {p \ge 1} \frac p{2p+1} B_{m,p}\left( \sqrt{1-4mz+4mz^{p+2}} -\sqrt{1-4mz} \right). $$ \end{theorem} Equation~(3) in~\cite{FlaSipStey1990} can be obtained from the above expression for $\mathbf N _m^{\mathcal{B}}(z)$ by setting $m=1$ and by shifting the index (since the size is defined as the number of nodes in~\cite{FlaSipStey1990}). \begin{proof} The expression for $\mathbf N^{\mathcal B}_{m}(z)$ follows directly from~\eqref{formula-K(z)^U-general} and Lemma~\ref{lem:C-binary}. To express the series $\mathbf E^{\mathcal B}_{m}(z)$, we first need to determine (according to~\eqref{formula-E(z)^U-general}) the number ${B}_{m,p}^{(d)}$ of $m$-labeled binary trees of size $p \geq 1$ with root degree~$d$. Note that $d$ can only be 1 or 2. Clearly, ${B}_{m,p}^{(1)}= 2mB_{m,p-1}$, and thus ${B}_{m,p}^{(2)}= B_{m,p}-2mB_{m,p-1}$. Hence, for $p\ge 1$, \begin{eqnarray*} \sum_{d\ge 1} d \cdot {B}_{m,p}^{(d)} & = & 2mB_{m,p-1}+ 2(B_{m,p}-2mB_{m,p-1})\\ & = & 2(B_{m,p}-mB_{m,p-1}) \\ & = & \frac{3p}{2p+1} B_{m,p}, \end{eqnarray*} where the last equation follows from \eqref{eq:B_m2}. The expression for $\mathbf E^{\mathcal B}_{m}(z)$ now follows, using~\eqref{formula-E(z)^U-general} and Lemma~\ref{lem:C-binary}. \qed \end{proof} \subsubsection{Unranked trees} \begin{lemma}\label{lemma:gf_labeled_unranked_trees} The generating function $\mathbf T_m(z)$ of $m$-labeled unranked trees is \begin{align} \mathbf{T}_m(z) &= \frac{1-\sqrt{1-4mz}}{2z}.
\label{eq:T_m1} \end{align} Equivalently, the number of $m$-labeled unranked trees of size $p$ is \begin{equation} T_{m,p}= \frac{1}{p+1} \binom{2p}{p} m^{p+1}. \label{eq:T_m2} \end{equation} Again, we obtain the Catalan numbers when $m=1$. \end{lemma} \begin{proof} An $m$-labeled unranked tree is a root node to which a sequence of $m$-labeled unranked trees is attached. We can now use another construction from~\cite[Chapter 1]{AnalyticCombinatorics}: If $\mathcal{G}$ is a combinatorial class that does not contain an element of size $0$, and the class $\mathcal{F}$ is defined as \[ \mathcal{F} = \{\epsilon \} + \mathcal{G} + (\mathcal{G} \times \mathcal{G}) + (\mathcal{G} \times \mathcal{G} \times \mathcal{G}) + \cdots, \] then the generating function of $\mathcal{F}$ is \[ \mathbf F(z) = \frac{1}{1-\mathbf G(z)} \] where $\mathbf G(z)$ is the generating function of $\mathcal{G}$. In our case, $\mathbf G(z)=z \mathbf T_m(z)$ counts trees with root degree 1 and root label~1, and we thus obtain \begin{equation*} \mathbf{T}_m(z) = \frac{m}{1- z \mathbf{T}_m(z)}. \end{equation*} Solving this for $\mathbf{T}_m(z)$ yields~\eqref{eq:T_m1}. We then obtain equation~\eqref{eq:T_m2} by a Taylor expansion. \qed \end{proof} \begin{lemma}\label{lemma:C^U_n} The generating function of $m$-labeled unranked trees that contain a given tree $u$ of size $p$ is $ \mathbf C_{m,p}^{\mathcal{T}}(z) = \frac{z^{p+1}+ \sqrt{1- 4mz + 2z^{p+1} + z^{2p+2} } - \sqrt{1-4mz} }{2z}. $ \end{lemma} \begin{proof} We first determine the generating function\ $\mathbf A_{m,p}^{\mathcal{T}}(z)$ counting $m$-labeled unranked trees that do \emm not, contain (or avoid) $u$. A tree that avoids $u$ is a root node to which a sequence of $u$-avoiding trees is attached. As in the binary case, we still need to subtract $z^p$ to avoid counting $u$ itself. 
This gives \begin{align*} \mathbf A_{m,p}^{\mathcal{T}}(z) &= \frac{m}{1 - z\mathbf A_{m,p}^{\mathcal{T}}(z)} -z^p, \end{align*} which can be solved for $\mathbf A^{\mathcal{T}}_{m,p}(z)$: \begin{align*} \mathbf A_{m,p}^{\mathcal{T}}(z) &= \frac{1}{2z} \left( 1 - z^{p+1} - \sqrt{1- 4mz + 2z^{p+1} + z^{2p+2}}\right). \end{align*} Using $\mathbf C_{m,p}^{\mathcal{T}}(z) = \mathbf T_m(z) - \mathbf A_{m,p}^{\mathcal{T}}(z)$, this proves the lemma. \qed \end{proof} \begin{proposition}\label{prop:NE-unranked} The generating function of the accumulated node size of minimal dags of $m$-labeled unranked trees is $$ \mathbf N _m^{\mathcal{T}}(z) = \frac{1}{2z} \sum_{p \ge 0} T_{m,p} \left(z^{p+1}+ \sqrt{1-4mz+2z^{p+1} +z^{2p+2}} - \sqrt{1-4mz}\right), $$ where the numbers $T_{m,p}$ are given by~\eqref{eq:T_m2}. The generating function of the accumulated edge size of minimal dags of $m$-labeled unranked trees is \begin{align*} \mathbf E_m^{\mathcal{T}}(z) &= \frac{3}{2z} \sum_{p\ge 0} \frac{p T_{m,p}}{p+2} \left( z^{p+1}+ \sqrt{1-4mz+2z^{p+1} +z^{2p+2}} - \sqrt{1-4mz}\right). \end{align*} \end{proposition} \begin{proof} The expression of $\mathbf N^{\mathcal T}_m(z)$ follows directly from~\eqref{formula-K(z)^U-general} and Lemma~\ref{lemma:C^U_n}. To express the series $\mathbf E^{\mathcal T}_m(z)$, we first need to determine (according to~\eqref{formula-E(z)^U-general}) the number $T^{(d)}_{m,p}$ of $m$-labeled unranked trees of size $p$ and root degree $d$, or, more precisely, the sum $$ \sum_{d\ge 1} d \cdot T^{(d)}_{m,p} $$ for any $p\ge 1$. This is done in~\cite[Corollary~4.1]{DBLP:journals/dm/DershowitzZ80} in the case $m=1$. It suffices to multiply by $m^{p+1}$ to obtain the general case: $$ \sum_{d\ge 1} d \cdot T^{(d)}_{m,p} = \frac{3pT_{m,p} }{p+2}. $$ Combining~\eqref{formula-E(z)^U-general} and Lemma~\ref{lemma:C^U_n} now gives the expression of $\mathbf E^{\mathcal T}_m(z)$.
\qed \end{proof} \begin{comment}We need the following result to calculate $E_m^{\mathcal{T}}(z)$. \begin{lemma} The number $\mathcal{T}_{m,p}^{(d)}$ of $\mathcal{T}(\Sigma)$-trees of size $p$ with root degree $d$ is \begin{equation}\label{eq:T-u} \mathcal{T}_{m,p}^{(d)} = \frac{d}{p} \binom{2p-1-d}{p-1} m^p. \end{equation} Furthermore, \begin{equation}\label{eq:sum_of_ballot_numbers} t_{m,p}= \sum_{d=1}^p d \mathcal{T}_{m,p}^{(d)} = \sum_{ d=1}^{p} \frac{d^2}{p} \binom{2p-1-d}{p-1} m^p = T_{m,p+1} - T_{m,p} = \frac{3pT_{m,p} }{p+2}. \end{equation} \end{lemma} Equation~\eqref{eq:T-u} is proven in~\cite[Theorem 4]{DBLP:journals/dm/DershowitzZ80} for the case $m=1$. Equation~\eqref{eq:sum_of_ballot_numbers} is proven in~\cite[Corollary 4.1]{DBLP:journals/dm/DershowitzZ80} again for the case $m=1$. The case $m>1$ easily follows from the case $m=1$. \begin{lemma}\label{lemma:unranked_e} The accumulated edge size of dags of $\mathcal{T}_{m,n}$-trees is \begin{align*} E_{m,n}^{\mathcal{T}} &= \sum_{p\ge 0} \frac{3p T_{m,p}}{p+2} C_{m,n}^{\mathcal{T}}(p) \end{align*} and the corresponding generating function is \end{lemma} \begin{proof} We have \begin{align*} E_{m,n}^{\mathcal{T}} &= \sum_{p,d \geq 0} d\, \mathcal{T}_{m,p}^{(d)} C_{p,n}^{\mathcal{T}} = \sum_{p,d \geq 0} t_{m,p} C_{m,n}^{\mathcal{T}}(p) = \sum_{p\ge 0} \frac{3p T_{m,p}}{p+2} C_{m,n}^{\mathcal{T}}(p) \end{align*} according to equation~\eqref{eq:sum_of_ballot_numbers}. \end{proof} \end{comment} \subsection{Asymptotic results}\label{subsection:asymptotic} In this section we state asymptotic results for the average node and edge sizes of the dag of $m$-labeled binary trees, and of $m$-labeled unranked trees. The proofs are rather involved and assume some knowledge in analytic combinatorics \cite{AnalyticCombinatorics}. Therefore, the proofs are given in the Appendix.
\subsubsection{Binary trees} \begin{theorem}\label{theorem:asymptotic_binary_node} The average number of nodes in the minimal dag of an $m$-labeled binary tree of size $n$ satisfies \begin{equation} \bar{N }_{m,n}^{\mathcal{B}} = 2 \kappa_m\frac{n}{\sqrt{\ln n}} \left( 1+O \left( \frac{1}{\ln n} \right) \right) \quad \text{with} \quad \kappa_m = \sqrt{\frac{\ln (4m)}{\pi }}.\label{eq:asymptotic_binary_node} \end{equation} \end{theorem} The proof is an application of the \textit{singularity analysis} of Flajolet and Odlyzko, described in~\cite[Ch.~VI]{AnalyticCombinatorics}. One first determines the singular behavior of the series $\mathbf N_m ^{\mathcal{B}}(z)$ given by Theorem~\ref{lemma:binary_K} in the neighborhood of its \emm dominant, singularities (that is, singularities of minimal modulus). \begin{theorem}\label{lemme1-fl} The generating function $\mathbf N _m^{\mathcal{B}}(z)$ is analytic in the domain $D$ defined by $|z|< \frac{1}{2m}$ and $z \notin [ \frac{1}{4m},\frac{1}{2m} ]$. As $z$ tends to $ \frac{1}{4m}$ in $D$, one has \begin{equation*} \mathbf N_m ^{\mathcal{B}}(z) = \frac{8 \, m \, \kappa_m } {\sqrt{(1-4m z ) \ln ((1-4mz)^{-1})}} + O \left(\frac{1}{\sqrt{(1-4mz)\ln^3((1-4mz)^{-1})}} \right), \end{equation*} where $\kappa_m$ is defined as in Theorem~\ref{theorem:asymptotic_binary_node}. \end{theorem} Granted this proposition, one can use the Transfer Theorem~VI.4 of~\cite[p.~393]{AnalyticCombinatorics}, combined with the estimates of the coefficients of elementary series (see~\cite[Thm.~VI.2, p.~385]{AnalyticCombinatorics}) to obtain the asymptotic behavior of the accumulated node size of minimal dags of $m$-labeled binary trees of size $n$: $$ N _{m,n}^{\mathcal{B}} = [z^n]\mathbf N_m ^{\mathcal{B}}(z) = \frac{2\kappa_m}{\sqrt{\pi}} \frac {4^{n+1} m^{n+1}}{\sqrt{n \ln n}}\left(1+ O\left(\frac{1}{\ln n}\right)\right). 
$$ Since the numbers $B_{m,n}$, given by~\eqref{eq:B_m2}, satisfy $$ B_{m,n} = \frac{4^{n+1} m^{n+1}}{\sqrt \pi n^{3/2}}\left(1+ O\left(\frac{1}{n}\right)\right), $$ this gives Theorem~\ref{theorem:asymptotic_binary_node}. The proof of Theorem~\ref{lemme1-fl} can be found in the Appendix, Section~\ref{proof:fl} (for $m=1$) and Section~\ref{sec:binary-m} (for general values of $m$). \smallskip For the edge size, one obtains in a similar fashion the following result. \begin{theorem}\label{lemma:asymptotic_binary_edge} The average number of edges in the minimal dag of an $m$-labeled binary tree of size $n$ satisfies \begin{equation} \bar{E}_{m,n}^{\mathcal{B}} = {3 \kappa_m} \frac{n}{\sqrt{\ln n}} \left( 1+O\left( \frac{1}{\ln n} \right) \right) \end{equation} with $\kappa_m$ as in Theorem~\ref{theorem:asymptotic_binary_node}. \end{theorem} The proof is a simple adaptation of the proof of Theorem~\ref{theorem:asymptotic_binary_node} and can be found in Section~\ref{sec:binary-e} of the Appendix. Note the factor $3/2$ between the node and edge sizes, which could be predicted by comparing the expressions of $\mathbf N^{\mathcal B}_m(z)$ and $\mathbf E^{\mathcal B}_m(z)$ in Theorem~\ref{lemma:binary_K}. \subsubsection{Unranked trees} \begin{theorem}\label{theorem:asymtotic_node_size_unranked} The average number of nodes in the minimal dag of an $m$-labeled unranked tree of size $n$ satisfies \begin{equation} \bar{N }^{\mathcal{T}}_{m,n} = {\kappa_m } \frac{n}{\sqrt{\ln n}} \left( 1+O\left( \frac{1}{\ln n} \right) \right), \end{equation} with $\kappa_m$ as in Theorem~\ref{theorem:asymptotic_binary_node}. \end{theorem} Thus the average node size of compressed unranked trees is about half the node size of compressed binary trees of the same size. Note that the same ratio holds between the heights of these trees~\cite{debruijn,flajolet-trees,marckert}.
The proof of Theorem~\ref{theorem:asymtotic_node_size_unranked} is very similar to the proof of Theorem~\ref{theorem:asymptotic_binary_node}. The required changes are described in Section~\ref{sec:unranked-n}. \begin{theorem}\label{thm:unranked-e} The average number of edges in the minimal dag of an $m$-labeled unranked tree of size $n$ satisfies \begin{equation} \bar{E}_{m,n}^{\mathcal{T}} = 3 \kappa_m \frac{n}{\sqrt{\ln n}} \left( 1+O\left( \frac{1}{\ln n} \right) \right), \end{equation} with $\kappa_m$ as in Theorem~\ref{theorem:asymptotic_binary_node}. \end{theorem} In other words, asymptotically the edge size of compressed binary trees is equal to the edge size of compressed unranked trees. The proof of Theorem~\ref{thm:unranked-e} is given in Section~\ref{sec:unranked-e}. \begin{comment} \begin{proof} We start with \begin{align*} E_{m,n}^{\mathcal{T}} &= \sum_{p\ge 0} \frac{3p T_{m,p} }{p+2} C_{m,n}^{\mathcal{T}}(p) \\ &= 3 \sum_{p \ge 0} T_{m,p} C_{m,n}^{\mathcal{T}}(p) - 6 \sum_{p \ge 0} \frac{1}{p+2} T_{m,p} C_{m,n}^{\mathcal{T}}(p) \end{align*} The first part of the sum is $\frac{3}{2} N _{m,n}^{\mathcal{B}}$. It is left to show that \[ \frac{1}{B_{m,n}} \sum_{p \ge 0} \frac{1}{p} T_{m,p} C_{m,n}^{\mathcal{T}}(p) = \frac{n}{\sqrt{\ln n}}O\left( n^{-1} \right). \] This can be done verbatim as in Lemma~\ref{lemma:asymptotic_binary_edge}. \qed \end{proof} \end{comment} Table~\ref{table:asymtpics_overview} contains an overview of the results of this section.
\begin{table*}[t] \centering \begin{tabular}{|l|c|c|} \hline & $\mathcal{B}_m$ & $\mathcal{T}_m$ \\ \hline &&\\ $\displaystyle \bar{N}_{m,n}$ & $\displaystyle 2 \kappa_m \frac{n}{\sqrt{\ln n }} \left( 1 + O\left( \frac{1}{ \ln n }\right) \right)$ & $\displaystyle \kappa_m \frac{n}{\sqrt{\ln n}} \left( 1 + O\left( \frac{1}{ \ln n}\right) \right)$ \\ &&\\ $\displaystyle \bar{E}_{m,n}$ & $\displaystyle 3 \kappa_m \frac{n}{\sqrt{\ln n }} \left( 1 + O\left( \frac{1}{ \ln n }\right) \right)$ & $\displaystyle 3 \kappa_m \frac{n}{\sqrt{\ln n}} \left( 1 + O\left( \frac{1}{ \ln n}\right) \right)$\\ &&\\ \hline \end{tabular} \caption{\label{table:asymtpics_overview} Overview of the different asymptotics. Recall that $\kappa_m = \sqrt{\frac{\ln 4m}{\pi }}$.} \end{table*} \section{DAG and string compression}\label{sec:dag_plus_string} As for the hdag, consider the forest $\text{fcns}(t_1), \ldots, \text{fcns}(t_n)$ of the binary encodings of the right-hand sides $t_1, \ldots, t_n$ of the reduced $0$-SLT grammar $\mathcal{G}_{\text{dag}(t)}^{\text{red}}$ for an unranked tree $t$. In the construction of the hdag we build the minimal dag of this forest. Therefore we only share repeated suffixes of child sequences, i.e., ``right branching'' trees in the binary encodings. Such trees can in fact be considered as \emph{strings}. We now want to generalize the sharing of suffixes. Instead of only sharing suffixes of child sequences, we now apply an arbitrary grammar-based string compressor to (a concatenation of) the child sequences. Such a compressor infers a small straight-line context-free grammar for the given string.
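For concreteness, the first-child/next-sibling encoding $\text{fcns}$ used throughout this section can be sketched as follows. This is the standard textbook definition, not code from the paper; unranked nodes are written as pairs (label, child list) and the padding symbol $\Box$ is represented by None.

```python
def fcns(forest):
    # fcns of the empty forest is the box (None); otherwise the first root
    # keeps its label, its left child encodes the children forest of the
    # first tree, and its right child encodes the remaining sibling forest
    if not forest:
        return None
    (label, children), *rest = forest
    return (label, fcns(children), fcns(rest))
```

For example, the forest consisting of the single tree $f(a,b)$ is encoded as $f(a(\Box,b(\Box,\Box)),\Box)$.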
Formally, a \emph{straight-line context-free string grammar, SL grammar} for short, is a triple $G = (N, \Sigma,\rho)$, where \begin{itemize} \item $N$ is a finite set of nonterminals \item $\Sigma$ is a finite set of terminal symbols \item $\rho : N \to (N \cup \Sigma)^*$ is a mapping such that the binary relation $\{ (X,Y) \mid X,Y \in N, \rho(X) \in (N \cup \Sigma)^* Y (N \cup \Sigma)^* \}$ is acyclic. \end{itemize} We do not need a start nonterminal for our purpose. From every word $u \in (N \cup \Sigma)^*$ we can derive exactly one terminal string $\text{eval}_G(u)$ using the mapping $\rho$. Formally, we extend $\rho$ to a morphism $\rho : (N \cup \Sigma)^* \to (N \cup \Sigma)^*$ by $\rho(a) = a$ for $a \in \Sigma$. Due to the above acyclicity condition, for every $u \in (N \cup \Sigma)^*$, there exists an $n \geq 1$ with $\rho^n(u) \in \Sigma^*$, and $\text{eval}_G(u)$ is this string. We define the size of $G$ as $\sum_{X \in N} |\rho(X)|$. As for SLTs we also write $X \to u$ if $\rho(X) = u$. An \emph{SL grammar-compressed $\Sigma$-labeled dag} is a tuple $D=(V, \gamma, \lambda, G)$ such that the following holds: \begin{itemize} \item $G = (N, V, \rho)$ is an SL grammar with terminal alphabet $V$ \item $\gamma : V \to (V \cup N)^*$ \item $\lambda : V \to \Sigma$ and \item the triple $d=(V, \gamma', \lambda)$ with $\gamma'(v) = \text{eval}_G(\gamma(v)) \in V^*$ is an ordered $\Sigma$-labeled rooted dag. \end{itemize} We define $\text{eval}(D)=\text{eval}(d)$. We define the size $|D|$ of $D$ as $|G| + \sum_{v \in V} |\gamma(v)|$. We say that $D$ is {\em minimal} if $d$ is the minimal dag for $\text{eval}(D)$. Note that there are many minimal SL grammar-compressed dags for a given tree, since we do not make any restrictions on the SL grammar part $G$. In particular, $G$ does not have to be size minimal. 
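The definitions above can be made concrete with a small sketch (all names are ours, and since the derived string can be exponentially long in $|G|$, this naive expansion is only meant for small instances). The data below is the SL grammar-compressed dag of Example~\ref{ex-grammar-compressed-dag}, for which $|G| = 4$ and $|D| = 14$.

```python
def eval_sl(rho, word):
    # eval_G: expand every nonterminal of the word; rho maps each
    # nonterminal to its right-hand side, every symbol outside rho is a
    # terminal. Acyclicity of the SL grammar guarantees termination.
    out = []
    for sym in word:
        if sym in rho:
            out.extend(eval_sl(rho, rho[sym]))
        else:
            out.append(sym)
    return out

# SL grammar G (D -> A2 A3, E -> A A) and the gamma-mapping of the
# grammar-compressed dag from the example
rho = {"D": ["A2", "A3"], "E": ["A", "A"]}
gamma = {"A1": ["A", "D", "A4", "D", "C"], "A2": ["E", "A"],
         "A3": ["E", "B"], "A4": ["D"], "A": [], "B": [], "C": []}

size_G = sum(len(r) for r in rho.values())             # |G| = 4
size_D = size_G + sum(len(g) for g in gamma.values())  # |D| = 14
```

Here eval_sl(rho, gamma["A1"]) yields the uncompressed child sequence $A\,A_2\,A_3\,A_4\,A_2\,A_3\,C$ of the root, matching $\gamma'(A_1)$ in the example.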
\begin{example} \label{ex-grammar-compressed-dag} Here is an example of an SL grammar-compressed $\Sigma$-labeled dag $D=(V, \gamma, \lambda, G)$ with $\Sigma = \{a,b,c,f,g,h\}$ and $$V = \{A_1, A_2, A_3, A_4,A,B,C\}. $$ The mappings $\gamma$ and $\lambda$ are shown below in the left and middle columns in the form of a $0$-SLT grammar. For instance, $A_1 \to f(A,D,A_4,D,C)$ stands for $\lambda(A_1) = f$ and $\gamma(A_1) = ADA_4DC$. The SL grammar $G$ is shown in the right column; it contains the nonterminals $D$ and $E$. \begin{alignat*}{3} A_1 & \to f(A,D,A_4,D,C) & \qquad A & \to a & \qquad D & \to A_2 A_3 \\ A_2 & \to g(E,A) & \qquad B & \to b & \qquad E & \to AA \\ A_3 & \to h(E,B) & \qquad C & \to c & \qquad & \\ A_4 & \to f(D) & \qquad & \end{alignat*} The size of this SL grammar-compressed dag is $14$ and it represents the dag $d$ with the following $0$-SLT grammar $\mathcal{G}_d$: \begin{alignat*}{2} A_1 &\to f(A,A_2,A_3,A_4 ,A_2,A_3 ,C) & \qquad A & \to a \\ A_2 &\to g(A,A,A) & \qquad B & \to b \\ A_3 &\to h(A,A,B) & \qquad C & \to c \\ A_4 &\to f(A_2 ,A_3). & & \end{alignat*} Also note that $D$ is minimal. \end{example} By the following theorem, a given SL grammar-compressed dag for a tree $t$ can be efficiently transformed into a $1$-SLT grammar that produces the binary encoding of $t$. \begin{theorem} \label{thm-construct-1-STL} An SL grammar-compressed $\Sigma$-labeled dag $D = (V, \gamma, \lambda, G)$ can be transformed in time $O(|D|)$ into a $1$-SLT grammar $G_1$ such that $\text{eval}(G_1) = \text{fcns}(\text{eval}(D))$ and $|G_1| \leq |D|+2(|V|+|N|)$. \end{theorem} \begin{proof} Let $G = (N,V,\rho)$. Let $\hat{V} = \{ \hat{v} \mid v \in V\}$, $V' = \{ v' \mid v \in V\}$, $\hat{N} = \{ \hat{X} \mid X \in N\}$, and $N' = \{ X' \mid X \in N\}$ be disjoint copies of the sets $V$ and $N$, respectively. The set of nonterminals of the $1$-SLT grammar $G_1$ is $N \cup N' \cup \hat{N} \cup V \cup \hat{V} \cup V'$.
Nonterminals in $N \cup V \cup V'$ have rank $0$ and nonterminals in $\hat{N} \cup N' \cup \hat{V}$ have rank $1$. The idea is that $\hat{\alpha}$ (for $\alpha \in N \cup V$) represents a copy of $\alpha$ that appears at positions in the fcns encoding having exactly one child (a right child), whereas the original $\alpha$ will only appear in leaf positions. This distinction is necessary since in an SLT grammar every nonterminal has a fixed rank. Nonterminals in $N' \cup V'$ are used in order to keep $G_1$ small. The right-hand side mapping of $G_1$ is defined as follows: For every $v \in V$ with $\lambda(v) = f$ and $\gamma(v) = \alpha_1 \cdots \alpha_k$ ($k \geq 0$, $\alpha_1, \ldots, \alpha_k \in V \cup N$) we set: \begin{itemize} \item If $k=0$, then $v \to f$ and $\hat{v}(y) \to f(\Box,y)$; the nonterminal $v'$ is not needed in this case. \item If $k \geq 1$, then $v \to f(v', \Box)$, $\hat{v}(y) \to f(v', y)$, and $v' \to \hat{\alpha}_1( \cdots \hat{\alpha}_{k-1}( \alpha_k) \cdots )$. \end{itemize} Note that the total size of these productions is at most $k+2$ (recall that we do not count edges to $\Box$-labeled nodes). Removing the nonterminal $v'$ in the case $k \geq 1$ would result in a total size of $2k+1$. For every $X \in N$ with $\rho(X) = \beta_1 \cdots \beta_m$ ($\beta_1, \ldots, \beta_m \in N \cup V$, $m \geq 2$ without loss of generality) we set $X \to X'(\beta_m)$, $\hat{X}(y) \to X'(\hat{\beta}_m(y))$, and $X'(y) \to \hat{\beta}_1( \cdots \hat{\beta}_{m-1}(y) \cdots)$. These rules have total size $m+2$. Hence, the size of the resulting $1$-SLT grammar $G_1$ is at most $|D|+2(|V|+|N|)$ and the time needed to construct it is clearly bounded by $O(|G_1|) = O(|D|)$. It is easy to see that $G_1$ produces $\text{fcns}(\text{eval}(D))$. \qed \end{proof} Theorem~\ref{thm-construct-1-STL} implies that results for 1-SLT grammars carry over to SL grammar-compressed dags.
For instance, finite tree automata~\cite{DBLP:journals/tcs/LohreyM06} (with sibling constraints \cite{DBLP:journals/jcss/LohreyMS12}) and tree-walking automata \cite{DBLP:journals/jcss/LohreyMS12} can be evaluated in polynomial time over 1-SLT grammars and hence over SL grammar-compressed dags. To construct an SL grammar-compressed dag for a tree $t$, we first construct $\text{dag}(t)$ in linear time. Then we apply a grammar-based string compressor (e.g., RePair~\cite{DBLP:conf/dcc/LarssonM99} or Sequitur~\cite{DBLP:journals/jair/Nevill-ManningW97}) to the child sequences of the dag. In this second phase we want to derive a small SL grammar for a set of strings and not a single string. To do this, we concatenate all child sequences of the dag separated by unique symbols. For instance, for the dag at the end of Example~\ref{ex-grammar-compressed-dag} we obtain the string $$ A A_2 A_3 A_4 A_2 A_3 C \$_1 A A A \$_2 A A B \$_3 A_2 A_3 . $$ An application of RePair to this string yields the grammar $$ S \to A D A_4 D C \$_1 E A \$_2 E B \$_3 D, \qquad D \to A_2 A_3, \qquad E \to A A . $$ Then, the right-hand side $A D A_4 D C \$_1 E A \$_2 E B \$_3 D$ contains the right-hand sides of the $\gamma$-mapping of the SL grammar-compressed dag, whereas the two remaining productions $D \to A_2 A_3$ and $E \to AA$ make up the SL grammar part. The following example shows that our construction may compress $\text{dag}(t)$ exponentially. \begin{example} Consider the tree $f(a,a,\dots,a)$ with $2^n$ many $a$-leaves. Its dag has $2^n$ many edges. We apply a grammar-based string compressor to the string $a^{2^n}$. The string compressor may produce the string grammar \begin{eqnarray*} S' &\to& A_1 A_1 \\ A_i&\to& A_{i+1}A_{i+1} \text{ for }1\leq i\leq n-2\\ A_{n-1}&\to& aa \end{eqnarray*} of size $2n$. Actually, RePair would produce such a grammar. 
The construction from the proof of Theorem~\ref{thm-construct-1-STL} yields the following 1-SLT grammar, where we eliminate productions that do not reduce the total grammar size: \begin{eqnarray*} S &\to & f(\hat{A}_1(A_1),\Box) \\ A_i &\to & \hat{A}_{i+1}(A_{i+1}) \text{ for }1\leq i\leq n-2 \\ \hat{A}_i(y) & \to & \hat{A}_{i+1}(\hat{A}_{i+1}(y)) \text{ for }1\leq i\leq n-2 \\ A_{n-1} & \to & a(\Box,a) \\ \hat{A}_{n-1}(y) & \to & a(\Box,a(\Box,y)) \end{eqnarray*} The total size of this grammar is $3n-1$. Hence we obtain a 1-SLT grammar for the fcns encoding of $f(a,a,\dots,a)$ of size $O(n)$. \end{example} Both $\text{hdag}(t)$ and $\text{rhdag}(t)$ can be seen as particular minimal SL grammar-compressed dags. For instance, $\text{hdag}(t)$ can be seen as a minimal SL grammar-compressed dag $D = (V, \gamma, \lambda, G)$, where the SL grammar $G = (N, V, \rho)$ is \emph{right regular}, i.e., for every nonterminal $X \in N$ we have $\rho(X) \in V^* N \cup V^+$, and similarly, for every $v \in V$ we have $\gamma(v) \in V^* N \cup V^*$. When transforming such an SL grammar-compressed dag into a $1$-SLT grammar following the proof of Theorem~\ref{thm-construct-1-STL}, we do not need the copy sets $\hat{N}$ and $N'$, because nonterminals from $N$ always produce suffixes of child sequences in the dag. This implies the following: \begin{theorem} \label{thm-construct-1-STL-right-reg} An hdag that is represented as an SL grammar-compressed $\Sigma$-labeled dag $D = (V, \gamma, \lambda, G)$ can be transformed in time $O(|D|)$ into a $1$-SLT grammar $G_1$ such that $\text{eval}(G_1) = \text{fcns}(\text{eval}(D))$ and $|G_1| \leq |D|+2|V|$. \end{theorem} \section{Subtree equality check} In the previous sections we have discussed five different formalisms for the compact representation of unranked trees: \begin{enumerate}[(1)] \item dag \item binary dag \item hybrid dag \item SL grammar-compressed dag \item SLT grammars (e.g.
produced by BPLEX or TreeRePair) \end{enumerate} As mentioned in Section~\ref{sec-SLT-grammar}, tree automata can be evaluated in polynomial time for SLT grammars, hence the same holds for the above five formalisms. In this section we consider another important processing primitive: \emph{subtree equality check}. Consider a program which realizes two independent node traversals of an unranked tree, using one of (1)--(5) as memory representation. At a given moment we wish to check whether the subtrees at the two nodes of the traversals coincide. How expensive is this check? As it turns out, the formalisms behave quite differently for this task. The dags (1)--(3) as well as SL grammar-compressed dags (4) allow an efficient equality check. We show below (Theorem~\ref{lemma:subtr_equality}) that for an appropriate representation of the two nodes, this test can be performed in time $O(\log N)$, where $N$ is the number of tree nodes. For SLT grammars such a check is much more expensive. Note that we cannot simply unfold the subtrees and check them node by node, as this can take exponential time. For SLT grammars a polynomial time algorithm is known, based on Plandowski's result~\cite{DBLP:conf/esa/Plandowski94}. A further, subtle difference between the dags (1)--(3) on the one hand and (4) and (5) on the other hand is that for (1)--(3) we can also check equality of sibling sequences in time $O(\log N)$ (see Theorem~\ref{lemma:endseq_equality}). For (4) and (5) we are not aware of such an algorithm. Let $t = (V,\gamma,\lambda) \in \mathcal{T}(\Sigma)$ be an unranked tree. Recall that the preorder traversal of $t$ ($\text{pre}(t)$ for short) enumerates the nodes of $t$ by first enumerating the root, followed by the preorder traversals of the direct subtrees of the root. Formally, if $r$ is the root of $t$ and $\gamma(r) = v_1 \cdots v_n$, then $\text{pre}(t) = r \; \text{pre}(t/v_1) \cdots \text{pre}(t/v_n)$. The preorder number of a node $u \in V$ is its position in $\text{pre}(t)$.
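The preorder numbering, and the naive subtree test that Theorem~\ref{lemma:subtr_equality} improves upon, can be sketched as follows (our own illustrative code; unranked nodes are written as pairs (label, child list)). The naive structural comparison runs in time proportional to the subtree sizes, in contrast to the $O(\log N)$ bound obtained on the compressed representations. The tree used below is the running example $f(f(g(a),g(a)),g(a),g(a))$.

```python
def pre(t):
    # pre(t): the root followed by the preorder traversals of its subtrees;
    # position i (1-based) of the returned list is the node with preorder number i
    label, children = t
    nodes = [t]
    for c in children:
        nodes.extend(pre(c))
    return nodes

def naive_subtree_equal(t, p, q):
    # checks t/p = t/q by explicit structural comparison: the slow baseline
    nodes = pre(t)
    return nodes[p - 1] == nodes[q - 1]

t = ("f", [("f", [("g", [("a", [])]), ("g", [("a", [])])]),
           ("g", [("a", [])]), ("g", [("a", [])])])
```

For this tree the nodes with preorder numbers $3$ and $7$ both root the subtree $g(a)$, so the check succeeds for the pair $(3,7)$.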
In what follows we identify a preorder number $p$ with the node in $t$ that it represents, and simply speak of ``the node $p$''. In particular, we denote with $t/p$ ($1 \leq p \leq \|t\|$) the subtree rooted at node $p$. \begin{theorem}\label{lemma:subtr_equality} Let $t$ be an unranked tree with $N$ nodes. Given $g=\text{dag}(t)$ or a minimal SL grammar-compressed dag $g$ with $\text{eval}(g)=t$ (this includes the hdag) or $g=\text{bdag}(t)$, one can, after $O(|g|)$ time preprocessing, check for given $1\leq p,q\leq N$ whether $t/p = t/q$ in time $O(\log N)$. \end{theorem} \begin{proof} Let $t = (V,\gamma,\lambda) \in \mathcal{T}(\Sigma)$. First, consider $g=\text{dag}(t) = (U,\gamma',\lambda')$. For $1 \leq p \leq N$ let $y_p$ be the unique node of $g$ such that $\text{eval}_g(y_p) = t/p$. Then, $t/p = t/q$ if and only if $y_p = y_q$. Hence, it suffices to show that the dag-node $y_p$ can be computed from $p$ in time $O(\log N)$ (after $O(|g|)$ time preprocessing). For this, we use techniques from \cite{DBLP:conf/soda/BilleLRSSW11}. More precisely, we construct in time $O(|g|)$ an SL string grammar $G'$ for the word $y_1 y_2 \cdots y_N \in U^*$. For this, we introduce for every node $u \in U$ of the dag $g$ a nonterminal $\hat{u}$. If $\gamma'(u) = u_1 \cdots u_n$, then we set $\hat{u} \to u \hat{u}_1 \cdots \hat{u}_n$. It is straightforward to show that this SL string grammar produces the word $y_1 y_2 \cdots y_N$. Note that $|G'| = |g|$. It now suffices to show that for a given number $1 \leq p \leq N$, the $p$-th symbol of $\text{eval}(G')$ can be computed in time $O(\log N)$ after $O(|g|) = O(|G'|)$ time preprocessing. This is possible by \cite[Theorem~1.1]{DBLP:conf/soda/BilleLRSSW11}. Actually, in order to apply this result, we first have to transform $G'$ into Chomsky normal form, which is also possible in time $O(|G'|) = O(|g|)$. 
For a minimal SL grammar-compressed dag $g = (U, \gamma, \lambda, G)$ for $t$, where $G = (N,U,\rho)$, essentially the same procedure as for the dag applies. The set of nonterminals of the SL string grammar $G'$ is $\{ \hat{u} \mid u \in U\} \cup N$. For $u \in U$ with $\gamma(u) = \alpha_1 \cdots \alpha_n$ (with $\alpha_i \in U \cup N$) we set $\hat{u} \to u \hat{\alpha}_1 \cdots \hat{\alpha}_n$, where $\hat{\alpha}_i = \hat{v}$ if $\alpha_i = v \in U$ and $\hat{\alpha}_i = \alpha_i$ if $\alpha_i \in N$. The right-hand sides for the $G'$-nonterminals from $N$ are simply copied from the grammar $G$ with every occurrence of a symbol $u \in U$ replaced by $\hat{u}$. The reader finds an example of the construction in Example~\ref{ex-subtree-equality-test} below. Finally, for $g = \text{bdag}(t) = (U,\gamma,\lambda)$ we can proceed similarly. Again we construct in time $O(|g|)$ an SL string grammar $G'$. The set of nonterminals of $G'$ is $\{ \hat{u} \mid u \in U\}$. For every $u \in U$ with $\lambda(u) \neq \Box$ and $\gamma(u) = u_1 u_2$ we set $\hat{u} \to u \alpha_1 \alpha_2$, where $\alpha_i = \varepsilon$ if $\lambda(u_i) = \Box$ and $\alpha_i = \hat{u}_i$ otherwise. Note that for given preorder numbers $1 \leq p,q \leq N$, the $p$-th symbol of $\text{eval}(G')$ is equal to the $q$-th symbol of $\text{eval}(G')$ if and only if the sibling sequences at nodes $p$ and $q$ of $t$ are equal. But we want to check whether the subtrees rooted at $p$ and $q$ are equal. For this, assume that using \cite[Theorem~1.1]{DBLP:conf/soda/BilleLRSSW11} we have computed in time $O(\log N)$ the $p$-th symbol $y_p \in U$ (resp. the $q$-th symbol $y_q \in U$) of $\text{eval}(G')$. Then, $t/p = t/q$ is equivalent to the following conditions: (i) $\lambda(y_p) = \lambda(y_q)$ and (ii) either $y_p$ and $y_q$ do not have left children in $g$, or the left children coincide. Since these checks only require constant time, we obtain the desired time complexity.
\qed \end{proof} \begin{example} \label{ex-subtree-equality-test} Consider the following minimal SL grammar-compressed dag from Example~\ref{ex-grammar-compressed-dag}: \begin{alignat*}{3} A_1 & \to f(A,D,A_4,D,C) & \qquad A & \to a & \qquad D & \to A_2 A_3 \\ A_2 & \to g(E,A) & \qquad B & \to b & \qquad E & \to AA \\ A_3 & \to h(E,B) & \qquad C & \to c & \qquad & \\ A_4 & \to f(D) & \qquad & \end{alignat*} Then the construction from the proof of Theorem~\ref{lemma:subtr_equality} yields the following SL grammar $G'$: \begin{alignat*}{3} \hat{A}_1 & \to A_1 \hat{A} D \hat{A}_4 D \hat{C} & \qquad \hat{A} & \to A & \qquad D & \to \hat{A}_2 \hat{A}_3 \\ \hat{A}_2 & \to A_2 E \hat{A} & \qquad \hat{B} & \to B & \qquad E & \to \hat{A}\hat{A} \\ \hat{A}_3 & \to A_3 E \hat{B} & \qquad \hat{C} & \to C & \qquad & \\ \hat{A}_4 & \to A_4 D & \qquad & \end{alignat*} This grammar produces the string $$ \text{eval}(G') = A_1 A A_2 A^3 A_3 A^2 B A_4 A_2 A^3 A_3 A^2 B A_2 A^3 A_3 A^2 B C. $$ \end{example} We observe that for general SLT grammars, a result such as the one of Theorem~\ref{lemma:subtr_equality} is not known. To our knowledge, the fastest known way of checking $t/p = t/q$ for a given SLT grammar $G$ for $t$ works as follows: From $G$ we can again easily build an SL string grammar $G'$ for the preorder traversal of $t$, see, e.g.~\cite{DBLP:journals/is/BusattoLM08,DBLP:journals/corr/abs-1012-5696}. Assume that the subtree of $t$ rooted in $p$ (resp., $q$) consists of $m$ (resp., $n$) nodes. Then we have to check whether the substring of $\text{eval}(G')$ from position $p$ to position $p+m-1$ is equal to the substring from position $q$ to position $q+n-1$. Using Plandowski's result \cite{DBLP:conf/esa/Plandowski94}, this can be checked in time polynomial in the size of $G'$ and hence in time polynomial in the size of the SLT grammar $G$. 
Note that more efficient alternatives than Plandowski's algorithm exist, see, e.g.~\cite{l12} for a survey, but all of them require at least quadratic time in the size of the SL grammar. In the context of XML document trees, it is also interesting to check equivalence of two sibling sequences. For the dag, bdag, and hdag, this problem can be solved again very efficiently: \begin{theorem}\label{lemma:endseq_equality} Let $t$ be an unranked tree with $N$ nodes. Given $g=\text{dag}(t)$ or $g=\text{bdag}(t)$ or $g=\text{hdag}(t)$ we can, after $O(|g|)$ time preprocessing, check for given $1\leq p,q\leq N$, whether $\text{sibseq}(p)=\text{sibseq}(q)$ in time $O(\log N)$. \end{theorem} \begin{proof} The result for the dag follows from the hdag-case, since the hdag can be constructed in linear time from the dag by constructing the minimal dag for the forest consisting of the fcns encodings of the right-hand sides of $\mathcal{G}^{\text{red}}_{\text{dag}(t)}$ (recall that the minimal dag can be constructed in linear time \cite{DBLP:journals/jacm/DowneyST80}), and this linear time computation is part of the preprocessing. Furthermore, we have already dealt with the bdag in the last paragraph of the proof of Theorem~\ref{lemma:subtr_equality}. Hence, it remains to consider the hdag. We assume that $g=\text{hdag}(t)$ is given as a minimal SL grammar-compressed dag $D = (V, \gamma, \lambda, G)$, where the SL grammar $G = (N, V, \rho)$ is \emph{right regular}, i.e., for every nonterminal $X \in N$ we have $\rho(X) \in V^* N \cup V^+$, and similarly, for every $v \in V$ we have $\gamma(v) \in V^* N \cup V^*$, see the end of Section~\ref{sec:dag_plus_string}. After introducing additional nonterminals, we can assume that for every $X \in N$ we have $\rho(X) \in V N \cup V$, and for every $v \in V$ we have $\gamma(v) \in N \cup \{\varepsilon\}$ (this transformation is possible in time $O(|g|)$).
Then, the elements of $\text{sib}(t) \setminus \{t \}$ correspond to the elements of $N$. We now construct an SL string grammar $G'$ as follows, see also Example~\ref{ex-hdag-sibling-test} below: The set of nonterminals of $G'$ is $\{\hat{X} \mid X \in N \} \cup V$ and the set of terminals is $N$. The start nonterminal is the root $r \in V$ of the hdag. For every $v \in V$ we set $$ v \to \begin{cases} \varepsilon & \text{ if } \gamma(v) = \varepsilon \\ \hat{X} & \text{ if } \gamma(v) = X \in N . \end{cases} $$ For every $X \in N$ we set $$ \hat{X} \to \begin{cases} X v \hat{Y} & \text{ if } \rho(X) = v Y, v \in V, Y \in N \\ X v & \text{ if } \rho(X) = v \in V. \end{cases} $$ Then $\text{sibseq}(p)=\text{sibseq}(q)$ holds for two numbers $1\leq p,q\leq \|t\|$ if and only if $p=q=1$ or $p > 1$, $q > 1$, and the $(p-1)$-th symbol of $\text{eval}(G')$ is equal to the $(q-1)$-th symbol of $\text{eval}(G')$. We deal with the case $p=q=1$ separately because the sibling sequence $t$ corresponding to the root of $t$ is not represented in $\text{eval}(G')$ (the latter string has length $N-1$). \qed \end{proof} \begin{example} \label{ex-hdag-sibling-test} Consider the following hdag (our running example from Section~\ref{sec-hdag}), written as an SL grammar-compressed dag of the form used in the proof of Theorem~\ref{lemma:endseq_equality}. \begin{alignat*}{2} S&\to f(X_0), \qquad & X_0 & \to B X_1, \\ B &\to f(X_1), & X_1 & \to A X_2, \\ A &\to g(X_3), & X_2 & \to A, \\ C & \to a & X_3 & \to C . \end{alignat*} It produces the tree $t = f(f(g(a),g(a)),g(a),g(a))$, see Figure~\ref{fig:hdag}. The nonterminal $X_0$ represents the sibling sequence $f(g(a),g(a)) g(a) g(a)$, $X_1$ represents $g(a) g(a)$, $X_2$ represents $g(a)$, and $X_3$ represents $a$. These are all sibling sequences except for the length-$1$ sequence $t$. 
According to the construction from the proof of Theorem~\ref{lemma:endseq_equality}, we obtain the following SL string grammar $G'$: \begin{alignat*}{2} S & \to \hat{X}_0, \qquad & \hat{X}_0 & \to X_0 B \hat{X}_1, \\ B &\to \hat{X}_1, & \hat{X}_1 & \to X_1 A \hat{X}_2, \\ A &\to \hat{X}_3, & \hat{X}_2 & \to X_2 A, \\ C & \to \varepsilon & \hat{X}_3 & \to X_3 C . \end{alignat*} It produces the string $$ \text{eval}(G') = X_0 X_1 X_3 X_2 X_3 X_1 X_3 X_2 X_3. $$ For instance, at preorder positions $3$ and $7$ the same sibling sequence, namely $g(a)g(a)$, starts in the tree $t$. This sibling sequence is represented by the symbol $X_1$. Indeed, the $2$nd and $6$th symbols in $\text{eval}(G')$ are $X_1$. \end{example} For an SL grammar-compressed dag, the best solution for checking $\text{sibseq}(p)=\text{sibseq}(q)$ we are aware of uses again an equality check for SL grammar-compressed strings. \begin{table*}[ht] \centering \begin{tabular}{lrrrrrr} \toprule File & Edges & mD & aC & mC & dag & bdag \\ \midrule 1998statistics & 28305 & 5 & 22.4 & 50 & 1377 & 2403 \\ catalog-01 & 225193 & 7 & 3.1 & 2500 & 8554 & 6990 \\ catalog-02 & 2240230 & 7 & 3.1 & 25000 & 32394 & 52392 \\ dblp & 3332129 & 5 & 10.1 & 328858 & 454087 & 677389\\ dictionary-01 & 277071 & 7 & 4.4 & 733 & 58391 & 77554 \\ dictionary-02 & 2731763 & 7 & 4.4 & 7333 & 545286 & 681130 \\ EnWikiNew & 404651 & 4 & 3.9 & 34974 & 35075 & 70038 \\ EnWikiQuote & 262954 & 4 & 3.7 & 23815 & 23904 & 47710 \\ EnWikiVersity & 495838 & 4 & 3.8 & 43593 & 43693 & 87276 \\ EnWikTionary & 8385133 & 4 & 3.8 & 726091 & 726221 & 1452298 \\ EXI-Array & 226521 & 8 & 2.3 & 32448 & 95584 & 128009 \\ EXI-factbook & 55452 & 4 & 6.8 & 265 & 4477 & 5081 \\ EXI-Invoice & 15074 & 6 & 3.7 & 1002 & 1073 & 2071 \\ EXI-Telecomp & 177633 & 6 & 3.6 & 9865 & 9933 & 19808 \\ EXI-weblog & 93434 & 2 & 11.0 & 8494 & 8504 & 16997 \\ JSTgene.chr1 & 216400 & 6 & 4.8 & 6852 & 9176 & 14606 \\ JSTsnp.chr1 & 655945 & 7 & 4.6 & 18189 & 23520 & 40663 \\ medline &
2866079 & 6 & 2.9 & 30000 & 653604 & 740630 \\ NCBIgene.chr1 & 360349 & 6 & 4.8 & 3444 & 16038 & 14356 \\ NCBIsnp.chr1 & 3642224 & 3 & 9.0 & 404692 & 404704 & 809394 \\ sprot39.dat & 10903567 & 5 & 4.8 & 86593 & 1751929 & 1437445 \\ SwissProt & 2977030 & 4 & 6.7 & 50000 & 1592101 & 1453608 \\ treebank & 2447726 & 36 & 2.3 & 2596 & 1315644 & 1454520 \\ \bottomrule \end{tabular} \caption{The XML documents in Corpus I, their characteristics, and dag/bdag sizes} \label{list_1} \end{table*} \begin{table*}[ht] \centering \begin{tabular}{lrrrrr} \toprule File & rbdag & hdag & rhdag & DS & TR \\ \midrule 1998statistics & 2360 & 1292 & 1243 & 561 & 501 \\ catalog-01 & 10303 & 4555 & 6421 & 4372 & 3965\\ catalog-02 & 56341 & 27457 & 29603 & 27242 & 26746\\ dblp & 681744 & 358603 & 362571 & 149964 & 156412 \\ dictionary-01 & 75247 & 47418 & 46930 & 32139 & 22375 \\ dictionary-02 & 653982 & 414356 & 409335 & 267944 & 167927 \\ EnWikiNew & 70016 & 35074 & 35055 & 9249 & 9632 \\ EnWikiQuote & 47690 & 23903 & 23888 & 6328 & 6608 \\ EnWikiVersity & 87255 & 43691 & 43676 & 7055 & 7455 \\ EnWikTionary & 1452270 & 726219 & 726195 & 81781 & 84107 \\ EXI-Array & 128011 & 95563 & 95563 & 905 & 1000 \\ EXI-factbook & 2928 & 3847 & 2355 & 1808 & 1392 \\ EXI-Invoice & 2068 & 1072 & 1069 & 96 & 108 \\ EXI-Telecomp & 19807 & 9933 & 9932 & 110 & 140 \\ EXI-weblog & 16997 & 8504 & 8504 & 44 & 58 \\ JSTgene.chr1 & 14114 & 7901 & 7271 & 3943 & 4208 \\ JSTsnp.chr1 & 37810 & 22684 & 19532 & 9809 & 10327 \\ medline & 381295 & 466108 & 257138 & 177638 & 123817 \\ NCBIgene.chr1 & 10816 & 11466 & 7148 & 6283 & 5166 \\ NCBIsnp.chr1 & 809394 & 404704 & 404704 & 61 & 83 \\ sprot39.dat & 1579305 & 1000376 & 908761 & 335756 & 262964 \\ SwissProt & 800706 & 1304321 & 682276 & 278915 & 247511\\ treebank & 1244853 & 1250741 & 1131208 & 1121566 & 528372 \\ \bottomrule \end{tabular} \caption{The compressed sizes of the documents.} \label{list_2} \end{table*} \section{Experiments}\label{sec:exp_results} In this 
section we empirically compare the sizes of different dags of unranked trees, namely dag, bdag, rbdag, hdag, and rhdag. We also include a comparison with SL grammar-compressed dags with RePair \cite{DBLP:conf/dcc/LarssonM99} as the string compressor, as explained in Section~\ref{sec:dag_plus_string}, and with TreeRePair \cite{lohmanmen13}. We are interested in the tree structure only, hence we did not compare with XML file compressors like Xmill~\cite{DBLP:conf/sigmod/LiefkeS00} or XQueC~\cite{DBLP:journals/toit/ArionBMP07}. \subsection{Corpora} We use three corpora of XML files for our tests. For each XML document we consider the unranked tree of its element nodes; we ignore all other nodes such as texts, attributes, etc. One corpus (\emph{Corpus I}) consists of XML documents that have been collected from the web, and which have often been used in the context of XML compression research, e.g., in~\cite{DBLP:conf/vldb/KochBG03,DBLP:journals/is/BusattoLM08,lohmanmen13}. Each of these files is listed in Table~\ref{list_1} together with the following characteristics: number of edges, maximum depth (mD), average number of children of a node (aC), and maximum number of children of a node (mC). Precise references to the origin of these files can be found in~\cite{lohmanmen13}. The second corpus (\emph{Corpus II}) consists of all well-formed XML document trees with more than 10,000 edges and a depth of at least four from the \textit{University of Amsterdam XML Web Collection}\footnote{ http://data.politicalmashup.nl/xmlweb/}. We decided on fixing a minimum size because there is no necessity to compress documents of very small size, and we chose a minimum depth because our subject is tree compression rather than list compression. Note that out of the over 180,000 documents of the collection, only 1,100 fit our criteria and are part of Corpus~II (more than $27,000$ were ill-formed and more than $140,000$ had less than $10,000$ edges). 
The documents in this corpus are somewhat smaller than those in Corpus~I, but otherwise have similar characteristics (such as maximal depth and average number of children) as can be seen in Table~\ref{characteristics_average}. The third corpus (\emph{Corpus III}) consists of term rewriting systems\footnote{http://www.termination-portal.org/wiki/TPDB}. These are stored in XML files, but are rather atypical XML documents, because their tree structures are trees with small rank, i.e., there are no long sibling sequences. This can be seen in Table~\ref{characteristics_average}, which shows that the average number of children is only $1.5$ for these files. \begin{table} \centering \begin{tabular}{lrrrrr} \toprule Corpus& Edges & mD & aC & mC \\ \midrule I & 1.9 $\cdot 10^6$ & 6.6 & 5.7 & 8 $\cdot 10^4 $\\ II & 79465 & $7.9$ & $6.0$ & 2925 \\ III & 1531 & 18 & 1.5 & 13.2 \\ \bottomrule \end{tabular} \caption{\label{characteristics_average} Document characteristics, average values.} \end{table} \subsection{Experimental setup} For the dag, bdag, rbdag, and hdag we built our own implementation. It is written in C++ (g++ version 4.6.3 with O3-switch) and uses Libxml 2.6 for XML parsing. It should be mentioned that these are only rough prototypes and that our code is not optimized at all. The running times listed in Table~\ref{times} should be understood with this in mind. For the RePair-compressed dag we use Gonzalo Navarro's implementation of RePair\footnote{ http://www.dcc.uchile.cl/$\sim$gnavarro/software/}. This is called ``DS'' in our tables. For TreeRePair, called ``TR'' in the tables, we use Roy Mennicke's implementation\footnote{ http://code.google.com/p/treerepair/} and run with max\_rank=1, which produces 1-SLT grammars. Our test machine features an Intel Core i5 with 2.5GHz and 4GB of RAM.
\begin{table}[t] \centering \begin{tabular}{lrrrrrrrr} \toprule Corpus & Parse & dag & hdag & DS & TR \\ \midrule I & 35 & 43 & 46 & 48 & 175\\ II & 85 & 105 & 120 & 117 & 310\\ III & 6.9 & 8.7 & 9.2 & 10.0 & 14.8\\ \bottomrule \end{tabular} \caption{Cumulative running times (in seconds).} \label{times} \end{table} \subsection{Comparison}\label{sec:exp_results_dag} Consider first Corpus~I and the numbers shown in Tables~\ref{list_1} and~\ref{list_2}. The most interesting file, concerning the effectiveness of the hybrid dag and of the reverse binary encoding, is certainly the medline file. Just like dblp, it contains bibliographic data. In particular, it consists of MedlineCitation elements; such elements have ten children, the last of which varies greatly (it is a MeshHeadingList node with varying children lists) and thus cannot be shared in the dag. This is perfect for the reverse hybrid dag, which first eliminates repeated subtrees, thus shrinking the number of edges to 653,604, and then applies the last child/previous sibling encoding before building a dag again. This last step shrinks the number of edges to an impressive 257,138. In contrast, the reverse binary dag has a size of 381,295. Thus, for this document the combination of both ways of sharing, subtrees and reversed sibling sequences, is essential. We note that in the context of the first attempts to apply dag compression to XML~\cite{DBLP:conf/vldb/KochBG03} the medline files were particularly pathological cases where dag compression did not work well. We now have new explanations for this: using \emph{reverse} (last child/previous sibling) encoding reduces the size of the dag by almost one half. And using hybrid dags again brings an improvement of more than $30\%$. The dblp document is similar, but does not make use of optional elements at the end of long sibling lists. Thus, the reverse dags are not smaller for dblp, but the hybrid dag is indeed more than $20\%$ smaller than the dag.
The treebank document, which is a ranked tree and does not contain long lists, gives hardly any improvement of the hybrid dag over the dag, but the reverse hybrid dag is somewhat smaller than the dag (by $5\%$). For treebank, TreeRePair is unchallenged and produces a grammar that is less than half the size of DS. \begin{table*}[t] \centering \small \begin{tabular}{lrrrrrrrrrr} \toprule C. & Input & dag & bdag & rbdag & hdag & G(hd) & rhdag & G(rh) & DS & TR \\ \midrule I & 43021 & 7815 & 9292 & 8185 & 6270 & 6323 & 5220 & 5285 & 2523 & 1671 \\ II & 90036 & 13510 & 15950 & 14671 & 10884 & 11109 & 9806 & 10039 & 5162 & 3957 \\ III & 2095 & 354 & 391 & 390 & 319 & 362 & 320 & 364 & 324 & 310 \\ \bottomrule \end{tabular} \caption{\label{table:overview} Accumulated sizes (in thousand edges). {\em C} stands for {Corpus}, {\em G(hd)} for the grammar size of the hdag and {\em G(rh)} for the grammar size of the reverse hybrid dag.} \end{table*} Next, consider the accumulated numbers for the three corpora in Table~\ref{table:overview}. For Corpus~I, the reverse hdag is smaller than the dag by around $38\%$, while the hdag is only around $25\%$ smaller than the dag. As noted in Section~\ref{section_reverseDag}, the somewhat surprising outcome that the reverse binary encoding enables better compression results from the convention that in many XML documents optional elements are listed last. This means that there are more common prefixes than suffixes in child sequences. Hence the reverse schemes perform better. When we transform hdags into SLT grammars (according to Section~\ref{sec:dag_plus_string}), we get a modest size increase of about $1$--$2\%$. For the term rewriting systems of Corpus~III, the hdag improves about $10\%$ over the dag. Represented as grammars, however, this improvement disappears, and in fact we obtain an accumulated size that is slightly larger than the dag.
Note that for this corpus also TreeRePair (TR) is not much smaller than the dag, and DS is even smaller than TR. Compared to the dag, TreeRePair shares tree patterns (=connected subgraphs). Hence, the trees in Corpus~III do not contain many repeated tree patterns that are not already shared by the dag. When we compare DS with TR, we see on Corpora~I and~II that TreeRePair grammars are on average around $34\%$ smaller than DS, while on Corpus~III TR is only $23\%$ smaller. On very flat files, such as the EXI documents in Table~\ref{list_1}, DS is about as good as TreeRePair. For combined dag and string compression we also experimented with another grammar-based string compressor, Sequitur~\cite{DBLP:journals/jair/Nevill-ManningW97}, but found the combined sizes to be larger than with RePair. Concerning running times (see Table~\ref{times}), note that the dag variants stay close to the parsing time, while TreeRePair needs considerably more time. Hence, dags should be used when light-weight compression is preferred. \section{Conclusion and future work} We compare the sizes of five different formalisms for compactly representing unranked trees: \begin{enumerate}[(1)] \item dag \item binary dag \item hybrid dag \item combination of dag and SL grammar-based string compression \item SLT grammars (e.g.\ produced by BPLEX or TreeRePair) \end{enumerate} For the comparison of (1)--(3) we prove precise bounds: (i) the size of the binary dag of a tree is bounded by twice the size of the hybrid dag of the tree, and (ii) the size of the unranked dag of a tree is bounded by the square of the size of the hybrid dag of the tree. As a corollary we obtain that the size of the dag is at least half the size of the binary dag, and at most the square of the size of the binary dag. We also prove that for (1)--(4), checking equality of the subtrees rooted at two given nodes of these structures can be carried out in $O(\log N)$ time, where $N$ is the number of nodes of the tree.
One advantage of binary and hybrid dags is that they also support the efficient checking of equality of (ending) sibling sequences in $O(\log N)$ time. Our experiments over two large XML corpora and one corpus consisting of term rewriting systems show that dags and binary dags are the quickest to construct. Out of the dags (1)--(3), the reverse hdag (which uses a last child/previous sibling encoding) gives the smallest results. On our XML corpora, using the reverse binary encoding instead of the standard first child/next sibling encoding gives a compression improvement of more than 20\%. As a practical yardstick we observe: for applications where the sibling sequence check is important, or where the uniqueness of the compressed structures is important, the hybrid dag is a good choice. If strong compression is paramount, then structures (4) and (5) are appropriate. The advantage of (4) over (5) is its support of an efficient subtree equality test. We give generating functions for the exact average sizes of dags and bdags over unranked trees on $n$ nodes and $m$ labels. We show that asymptotically the expected edge sizes of dags and bdags coincide, and that the node size of bdags is twice as large as the node size of dags. In future work we would like to extend our average-case size analysis to the hybrid dag and to RePair-compressed trees. Further, we would like to apply our compression within other recent compression schemes in databases, such as for instance factorized databases~\cite{DBLP:journals/pvldb/BakibayevOZ12}. \begin{acknowledgements} The second and fourth authors were supported by the DFG grant LO 748/8. The third author was supported by the DFG grant INST 268/239 and by the Engineering and Physical Sciences Research Council project ``Enforcement of Constraints on XML streams'' (EPSRC EP/G004021/1). \end{acknowledgements}
MDU Rohtak Ph.D. Admission 2014-15 : Admission Merit List (All Subjects) Wednesday, December 17, 2014 | No comments | MD University (MDU), Rohtak Ph.D. Admission 2014 Last Updated : 17.09.2015 Maharshi Dayanand University, Rohtak (MDU), Haryana has declared the Ph.D. Admission Merit List for 2014-15 for English, History, Management, Chemistry, Computer Science, Geography and other subjects. M.D.U. also released the Merit List of selected candidates for the Pre-Ph.D. programme. See the Admission Merit List below: See Also : MDU Rohtak Ph.D. Admission 2015-16 : Revised Admission Schedule MDU Ph.D. Admission Merit List 2014-15 Provisional Merit List of Ph.D in Computer Sc. & Application Selected Candidates for Ph.D in Computer Sc. & Application Merit List of Ph.D in History Merit List of Ph.D in Geography Merit List of Ph.D in Management Merit List of Ph.D in English Merit List of Ph.D in Chemistry Merit List of selected candidates for Pre-Ph.D. Biotech. Engg., UIET Merit List for Ph.D Admission ECE, UIET Dept Commerce: Provisional Merit List For Admission to Ph.D Programme Dept Pharma: Tentative Merit List For Admission to Ph.D. Programme Revised Merit List for admission to Ph.D. (Mech. Engg.) held on 15.12.2014 See : Ph.D Entrance Exam Merit List. Earlier, M.D.U. had invited application forms for registration to the Ph.D. course and award of University Research Scholarships for the session 2014-15 for various subjects/faculties. Eligible candidates could submit application forms to the Department/Institute concerned by 01.12.2014 (upto 5.00 pm). Availability of the Prospectus at University Sale Counter : 17.11.14 Last Date for Submission of Application Form : 01.12.14 upto 5.00 pm Application/Prospectus Fee General Candidate : Rs. 500/- SC/BC (Haryana only) : Rs. 125/- The prospectus containing all information viz. eligibility, number of vacant seats, syllabi and pattern of entrance exam, admission schedule etc. will be available at the University Sale Counter, Near Gate No.
1 of the M.D.University on payment of Rs.500/- (Rs.125/- for SC/BC of Haryana only) w.e.f. 17.11.2014. ACADEMIC ELIGIBILITY For Ph.D : A candidate who wishes to be accepted as a candidate for the Ph.D. research programme must satisfy the following academic criteria: Candidates must have done M.Phil/Pre-Ph.D./Ph.D. in an allied area, subject to the following : Master's degree with at least 55% marks in aggregate in the subject concerned or in an allied subject (50% for SC/ST candidates). OR M.Phil degree or a recognised equivalent degree beyond Master's degree level with at least 55% marks (50% for SC/ST candidates) or equivalent grade in the grading system, and Master's degree with 50% marks in aggregate in the subject concerned or an allied subject. OR For Faculty of Management Sciences - Master's Degree or any other degree recognized equivalent thereto in (a) Business Administration or Economics or Commerce or in allied subjects with at least 55% marks OR (b) Post Graduate Diploma in Management recognized equivalent to MBA by AICTE/AIU with 55% marks or equivalent grade therein. Candidates with qualifications as laid down in (iii) shall also be eligible for doing Ph.D. in the Departments of Economics and Commerce. NOTE : Candidates who have done M.Phil through distance education mode by taking admission in this course after December, 2009 are not exempted from the Pre-Ph.D Course. Candidates can apply on the prescribed Application Form. The application form will be available at the University Sale Counter, Near Gate No. 1 of the M.D.University on the prescribed payment w.e.f. 17.11.2014. Candidates can also download the application form from the University website i.e. www.mdurohtak.ac.in, and such candidates shall deposit the requisite fee with the University Cashier or by Demand Draft drawn in favour of the Finance Officer, M.D.University, Rohtak and payable at State Bank of India, MDU Branch, Rohtak (04734).
Application forms complete in all respects must reach the Department/Institute concerned by 01.12.2014 (upto 5.00 pm). Download Application Form/Brochure for Registration to Ph.D. Programme Haryana, MDU Rohtak, Notification, Ph.D.
/**
 */
package archoptions.impl;

import archoptions.AllocateTogether;
import archoptions.ArchoptionsPackage;

import org.eclipse.emf.ecore.EClass;

/**
 * <!-- begin-user-doc -->
 * An implementation of the model object '<em><b>Allocate Together</b></em>'.
 * <!-- end-user-doc -->
 *
 * @generated
 */
public abstract class AllocateTogetherImpl extends DeploymentOptionImpl implements AllocateTogether {

    /**
     * <!-- begin-user-doc -->
     * <!-- end-user-doc -->
     * @generated
     */
    protected AllocateTogetherImpl() {
        super();
    }

    /**
     * <!-- begin-user-doc -->
     * <!-- end-user-doc -->
     * @generated
     */
    @Override
    protected EClass eStaticClass() {
        return ArchoptionsPackage.Literals.ALLOCATE_TOGETHER;
    }

} //AllocateTogetherImpl
Q: python udp socket from batch-file

I have a simple piece of code to send a UDP packet to a UDP server. It works on my Linux Ubuntu machine. It also works on my Windows 7 computer running Python 2.6.6 via PythonWin or PythonIDE, but when trying to run it from the command shell or a batch file it does not work. I write [full path]\python.exe [full path]\cl.py and the program runs, the print statements work, but no UDP packet is sent (checked with Wireshark on the Windows machine). No other error is indicated. Is there something special to consider for running from the command prompt or a batch file?

    #!/usr/bin/python
    import socket

    port = 12345
    ip = "10.30.5.70"
    data = "Hello World"

    UDPSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    UDPSock.sendto(data, (ip, port))
    print "done."

A: In Windows I'm pretty sure you don't need to include 'python.exe' or even 'python'; you just enter the name of the python script, like this: '[full path]\cl.py'. This works for me, but I have Python 2.7.5. The only thing I find special about running cmd (command shell) is that errors are not echoed to the console like you would expect. It's hard to explain, but if you were to type:

    cl.py>log.txt

and an error occurred, Python would normally return something like (any random old error):

    Traceback (most recent call last):
    File "", line 1, in s.s
    NameError: name 's' is not defined

but log.txt will not have the error appended to it. Anything that is printed normally in Python will, however, be appended to the file. But this information is not relevant. :\ Hope you get it working

A: I noticed that when running python -v cl.py the script worked. So I have introduced a delay at the end of the script:

    import time # in beginning of script

    # my socket stuff

    time.sleep(1.0) # in the end of script

and the script works fine.
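The accepted workaround sleeps for a second before exit. A more deterministic alternative (my own suggestion, not from the thread, and written for Python 3 rather than the asker's 2.6) is to close the socket explicitly before the interpreter exits, e.g. via a context manager; the loopback receiver below is only there so the demo can observe its own datagram without Wireshark:

```python
import socket

def send_udp(message: bytes, host: str, port: int) -> int:
    """Send one UDP datagram; the with-block closes the socket on exit."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(message, (host, port))

# Loopback demo: bind a receiver so the datagram can be observed locally.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
port = receiver.getsockname()[1]
send_udp(b"Hello World", "127.0.0.1", port)
data, _ = receiver.recvfrom(1024)
print(data.decode())                   # -> Hello World
receiver.close()
```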
.class public interface abstract Lcom/sina/weibo/sdk/api/share/IWeiboHandler$Request;
.super Ljava/lang/Object;


# virtual methods
.method public abstract onRequest(Lcom/sina/weibo/sdk/api/share/BaseRequest;)V
.end method
Company registration in South Africa

Registration certificate: + / + / + / + / +
Registration of directors and shareholders: + / + / + / + / +
Registered office and agent: + / + / + / + / +
Nominee director for 1 year: – / – / + / + / +
Nominee shareholder for 1 year: – / – / + / + / +
Corporate account in Rietumu bank: – / – / – / + / +
Personal account in Latvian bank: – / – / – / – / +
Total cost: 1628 EUR / 1893 EUR / 2335 EUR / 2335 EUR / 2335 EUR
Annual service fee: 1628 EUR / 1249 EUR / 1673 EUR / 1673 EUR / 1673 EUR

The Republic of South Africa is the southernmost African country. A special feature of the state is that it has the largest proportion of white and Asian population in Africa. The republic is rich in natural resources, has a fairly stable and growing economy, and is the only African country represented in the G20.

The territory of the present Republic of South Africa was first colonized in the mid-17th century by the Dutch. In the 18th century, as a result of colonial wars, the territory came under the control of the British, by whom it was formally controlled until 1961, when the republic declared its independence and left the British Commonwealth. One of the major turning points in the history of the Republic of South Africa was the coming to power of the National Party in 1948, which became a symbol of apartheid and the oppression of black people in the country. The end of that era is associated with the names of Nelson Mandela and President Frederik de Klerk. Despite the overall situation of oppression, the standard of living of black people remained relatively high compared to other African countries, and the measures undertaken in the last 20 years have begun a gradual correction of the situation and have given impetus to economic growth.

The most convenient forms of company for international business are: public company (PLC), private company (PC) and, in some cases, an international holding company (IHC). The authorized capital for a public company is set at 100 thousand rand. The company may begin commercial activities once the capital is paid up.
The main requirements for directors and shareholders

There are no restrictions on the legal form and/or the nationality of directors. A PLC must have at least two directors. Shareholders do not have any restrictions. If any of the directors are non-residents, a "public official" must be assigned who is a South African resident. Information on the beneficiaries is not disclosed.

The standard rate of income tax for companies in the Republic of South Africa is 29%. An audit is required in case of excess of income over a certain amount. Even if the company does business outside South Africa, filing a tax return is obligatory. The company must keep accounting records and store them in the registered office of the company. Reports must be filed annually.
Shares in car manual publisher Haynes motored on Thursday as the company reported a strong rise in first half profits. Pre-tax profit was up to £0.97m from £0.49m a year earlier as revenue increased 21% to £16.9m. There was a big rise in digital revenues of 51% to £5.5m as customers moved online in greater numbers. "I am pleased to report that this is the third consecutive set of results where Haynes has demonstrated strong underlying revenue and profit growth since we implemented our global operational, cost and structure review in 2015/16", said chairman Eddie Bell. "These interim results confirm that Haynes is making clear progress in its turnaround and is on its way to becoming an integrated multi-media content provider." "The re-positioning of Haynes requires significant investment in new products and platforms. We will continue to carefully manage our costs and cash flows during this turnaround to ensure we maintain focus on our end goal: the transition of the Group to being an integrated multi-media content and data solutions provider."
# Find the value of: (11 + 4) − 9 × 1 ÷ 3 of 4

This question was previously asked in SSC GD Previous Paper 36 (Held On: 11 March 2019 Shift 2), English. View all SSC GD Constable Papers.

1. $$\frac{75}{4}$$
2. $$\frac{57}{4}$$
3. 5
4. 3

Answer — Option 2 : $$\frac{57}{4}$$

## Detailed Solution

Given:

(11 + 4) − 9 × 1 ÷ 3 of 4

Concept used:

Follow the BODMAS rule to solve this question.

Calculation:

(11 + 4) − 9 × 1 ÷ 3 of 4

⇒ 15 − 9 ÷ 12 = ?

⇒ 15 − 3/4 = ?

⇒ ? = 57/4

∴ The value of ? is 57/4.
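The BODMAS evaluation above can be sanity-checked mechanically. This is a small illustrative script of my own; `Fraction` is used so no float rounding creeps in:

```python
from fractions import Fraction

# Under BODMAS, "of" binds before division, so 3 of 4 = 12,
# and 9 × 1 ÷ 12 = 3/4.
result = Fraction(11 + 4) - Fraction(9 * 1, 3 * 4)
assert result == Fraction(57, 4)
print(result)  # -> 57/4
```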
kidzsearch.com > wiki

Aeschylus

Born: c. 525 BC, Eleusis, Greece
Died: c. 456 BC, Gela, Sicily, Italy
Occupation: Playwright; soldier
Nationality: Greek
Period: Ancient Greece
Genres: Tragedy
Subjects: Greek life and history
Notable work(s): The Persians
Notable award(s): Won at the Great Dionysia 13 times.
Children: Euphorion and Euæon
Relative(s): Philocles (nephew)

The funeral mask known as the "Agamemnon Mask". Gold, found in Tomb V in Mycenae by Heinrich Schliemann (1876), 16th century BC. National Archeological Museum, Athens.

Aeschylus (525 BC – 456 BC) was an Ancient Greek poet and writer. He wrote about 70–90 plays.[1][2] Only six of his tragedies have survived complete. Aeschylus was the earliest of the three greatest Greek writers of tragedy. The two others were Sophocles and Euripides.[1][3]

Aristotle said that Aeschylus added more characters into his plays. His characters spoke to each other and not just to the chorus. This made it easier to create drama between the characters.

One of his plays, The Persians, was about the Persian invasion of Greece. Aeschylus had fought in this war. People studying Greek history use his play as an important source of information. The war was so important to the Greeks and to Aeschylus that the writing on his grave only talks about his part in the Greek victory at the Battle of Marathon. There is nothing about the plays he wrote.

Early life

Aeschylus was born about the year 525 BC in a small town called Eleusis, which is about 27 kilometers northwest of Athens.[4] The date is based on counting back forty years from his first victory in the Great Dionysia. His family was rich, and his father, Euphorion, was a member of the Eupatridae, the ancient nobility of Attica.[5] Pausanias wrote that Aeschylus worked in a vineyard until the god Dionysus visited him in his sleep. The god ordered him to write the first tragedies.[5] His first play was performed in 499 BC, when he was only 26 years old.[4][5]

The Persian wars

In 490 BC the Persian army, led by Darius, landed in Greece and tried to take it over. Aeschylus and his brother Cynegeirus joined the army from Athens and fought against the Persians at the Battle of Marathon.[4] The Athenians were able to defeat the much bigger Persian army. This battle, which stopped Darius, was celebrated across the city-states of Greece.[4] Cynegeirus died in the battle.[4] In 480 BC, Xerxes I of Persia tried to capture Greece. Aeschylus fought against them at the Battle of Salamis and at the Battle of Plataea in 479 BC.[4] His oldest surviving play, The Persians, performed in 472 BC, is set during the Battle of Salamis. This play won first prize at the Dionysia.[6]

The Eleusinian Mysteries

Aeschylus was one of many Greeks who joined the Eleusinian Mysteries. This was the religious cult of Demeter, based in his home town of Eleusis.[7] Members of the group learned mystical and secret knowledge. Members were sworn under the penalty of death not to say anything about the Mysteries to anyone. Aristotle wrote that some people thought that Aeschylus had shown some of the cult's secrets on stage.[8] Other writers said that an angry mob tried to kill Aeschylus on the spot, but he ran away. Later, Aeschylus said he did not know that he had shown any of the secrets. He was saved from death only because of his brave service in the Persian Wars.

Later life

Aeschylus made two trips to Sicily in the 470s BC. He was invited by Hieron, tyrant of Syracuse, a big Greek city on the east side of the island. On one of these trips he wrote The Women of Aetna, in honor of the city founded by Hieron. He also restaged his Persians.[4] By 473 BC, Aeschylus was the yearly favorite in the Dionysia, winning first prize in nearly every competition.[4] In 458 BC, he returned to Sicily for the last time, visiting the city of Gela, where he died in 456 or 455 BC. It is said that he was killed by a tortoise which fell out of the sky after it was dropped by an eagle. This story is probably only a legend.[9] Aeschylus' work was so respected by the Athenians that after his death, his were the only tragedies allowed to be restaged in future competitions.[4] His sons Euphorion and Euæon, and his nephew Philocles, all wrote plays as well.[4]

The plays

Modern picture of the Theatre of Dionysus in Athens, where many of Aeschylus' plays were performed.

Greek drama began with festivals for the gods, mainly Dionysus, the god of wine.[10] During Aeschylus' lifetime, dramatic competitions became part of the city's Dionysia in the spring.[10] The festival began with an opening procession, then a competition of boys singing dithyrambs, and finally two dramatic competitions.[11] The first competition was for three playwrights each presenting three tragic plays, followed by a shorter comedy.[11] A second competition of five comedic playwrights followed, and winners of both competitions were chosen by a group of judges.[11]

Aeschylus took part in many of these competitions in his lifetime. Only six tragedies have survived intact: The Persians, Seven against Thebes, The Suppliants, and the trilogy known as The Oresteia, consisting of the three tragedies Agamemnon, The Libation Bearers and The Eumenides. There is also the play Prometheus Bound, but this was probably written by someone else. All of the surviving plays won first prize at the City Dionysia. One book, the Alexandrian Life of Aeschylus, said that he won first prize at the City Dionysia 13 times. Sophocles won 18 times out of his 120 plays, and Euripides only had five wins out of about 90 plays.

• The Persians (Persai) (472 BC)
• Seven Against Thebes (Hepta epi Thebas) (467 BC)
• The Suppliants (Hiketides) (463 BC?)
• Oresteia, a series of three plays (458 BC)
  • Agamemnon
  • The Libation Bearers (Choephoroi)
  • The Eumenides

Influence on Greek drama and culture

Mosaic of Orestes, main character in Aeschylus' trilogy, The Oresteia.

When Aeschylus first began writing, the theatre was new. Some playwrights like Thespis had made the cast bigger to include an actor who was able to talk with the chorus.[2] Aeschylus added a second actor, allowing for more drama; and the chorus became less important.[2] He is said to have been the first to use skenographia, or scene-decoration,[12] though Aristotle said the first person was Sophocles. Aeschylus also added more details to the costumes, and had his actors wear platform boots, called cothurni, to help the audience see them better. When they walked on stage in the first performance of the Eumenides, the chorus of Furies were so frightening in looks that they made young children faint, old men urinate, and pregnant women go into labor.[13]

His plays were written in the strict style of Greek drama. They were in verse, and no violence could be performed on stage. The plays had to be set away from normal life in Athens, either by telling stories about the gods or by being set, like The Persians, in a far-away place.[14] Aeschylus' work has a strong moral and religious emphasis.[14] The Oresteia plays were about man's position in the universe in relation to the gods, the laws of the gods, and punishment from the gods.[15]

Fifty years after Aeschylus' death, the comic playwright Aristophanes praised him in The Frogs. Aeschylus is a character in the play and says that his Seven against Thebes "made everyone watching it to love being warlike" (line 1022); with his Persians, he says he "taught the Athenians to desire always to defeat their enemies" (line 1026–7). He says that his plays helped the Athenians to be brave and virtuous (line 1039ff).

Notes

1. Freeman 1999, p. 243
2. Pomeroy 1999, p. 222
3. Schlegel, August Wilhelm von. Lectures on dramatic art and literature. p. 121.
4. Sommerstein 1996, p. 33
5. Bates 1906, pp. 53–59
6. Sommerstein 1996, p. 34
7. Martin 2000, § 10.1
8. Nicomachean Ethics 1111a8–10.
9. See (e.g.) Lefkowitz 1981, 67ff. Cf. Sommerstein 2002, 33, who does not tell this story when giving a biographical sketch of the poet.
10. Freeman 1999, p. 241
11. Freeman 1999, p. 242
12. According to Vitruvius. See Summers 2007, 23.
13. Life of Aeschylus.
14. Pomeroy 1999, p. 223
15. Pomeroy 1999, pp. 224–225

References

• https://archive.org/details/greekachievement0000free
• https://archive.org/details/ancientgreecepol00sara
• Sommerstein, Alan H. (1996). Aeschylean Tragedy. Bari.
• Sommerstein, Alan H. (2002). Greek Drama and Dramatists. London: Routledge Press. ISBN 0-415-26027-2.
• Æschylus. Aeschylus I: Oresteia. Transl. Richmond Lattimore. 8th ed. Chicago, IL: The University of Chicago Press, 1960. 1-31.
Q: Fastest way to search through lists in python

I'm sure there's a standard answer for this. What's the fastest method to loop over one list, forming a new list of the elements that are members of a different list? I usually just do list comprehensions, sometimes turning the target list into a set. Hashing the target list usually improves performance when the target list is big, but this is the only improvement I know of. Any recommendations would be greatly appreciated.

    list_A = list(range(0, 100))
    list_B = list(range(50, 60))

    # List comprehension lookup
    listC = [x for x in list_A if x in list_B]

    # Using a set
    set_B = set(list_B)
    listD = [x for x in list_A if x in set_B]

A: If you don't care about order, you could hash both: set(listA) & set(listB). Otherwise, not much you can do.
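To see the difference concretely, here is a small, illustrative benchmark (the sizes are made up, and the absolute times will vary by machine; the point is the growth rate):

```python
import timeit

list_A = list(range(10_000))
list_B = list(range(5_000, 5_100))
set_B = set(list_B)

# Both comprehensions build the same list; only the membership test differs.
assert [x for x in list_A if x in list_B] == [x for x in list_A if x in set_B]

# O(len(list_A) * len(list_B)): each `in` scans list_B from the front.
t_list = timeit.timeit("[x for x in list_A if x in list_B]",
                       globals=globals(), number=20)
# O(len(list_A)) on average: each `in` is a constant-time hash lookup.
t_set = timeit.timeit("[x for x in list_A if x in set_B]",
                      globals=globals(), number=20)
print(f"list membership: {t_list:.4f}s, set membership: {t_set:.4f}s")
```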
Richard E. Sorensen is the former dean of both the Pamplin College of Business at Virginia Tech and Appalachian State University's business school. He was a dean for over 40 years. He is now a professor emeritus at Virginia Tech. He graduated from Brooklyn Polytechnic Institute with a BS and did his MBA and PhD at NYU Stern. Several faculty fellowships and chair endowments are named after him. He was formally honored in the Virginia State Capitol by the Virginia General Assembly. AACSB International appointed him as a special advisor.
Rack::XServedBy is a Rack middleware that adds an `X-Served-By` HTTP header to your responses. That is useful if you load-balance between many servers and want to know which one served the request.

## Installation

Add this line to your application's Gemfile:

```ruby
gem 'rack-x_served_by'
```

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install rack-x_served_by

## Usage

In config.ru

```ruby
use Rack::XServedBy
run YourApp
```

Or in Rails `config/application.rb`

```ruby
module YourApp
  class Application < Rails::Application
    config.middleware.use Rack::XServedBy
  end
end
```

You can configure a custom hostname as:

```ruby
use Rack::XServedBy, 'custom-hostname'
```

With example `config.ru`:

```
$ curl -v localhost:9292
* Rebuilt URL to: localhost:9292/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9292 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:9292
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Served-By: Michals-MacBook-Pro.local
< Transfer-Encoding: chunked
* Server WEBrick/1.3.1 (Ruby/2.2.2/2015-04-13) is not blacklisted
< Server: WEBrick/1.3.1 (Ruby/2.2.2/2015-04-13)
< Date: Fri, 22 May 2015 13:37:22 GMT
< Connection: Keep-Alive
<
* Connection #0 to host localhost left intact
Hello, world%
```

## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release` to create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).

## Contributing

1. Fork it ( https://github.com/3scale/rack-x_served_by/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4.
Push to the branch (`git push origin my-new-feature`) 5. Create a new Pull Request
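For readers curious what the header injection involves, here is a minimal sketch of the pattern such a middleware follows (a hypothetical reimplementation for illustration only, not the gem's actual source; the class name `XServedBySketch` is invented):

```ruby
require 'socket'

# Minimal sketch of the X-Served-By pattern: wrap the downstream app,
# let it build the response, then tag the response headers with this
# machine's hostname (or a custom one) before returning up the stack.
class XServedBySketch
  def initialize(app, hostname = Socket.gethostname)
    @app = app
    @hostname = hostname
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers['X-Served-By'] = @hostname
    [status, headers, body]
  end
end
```

Because the header is added after the inner app has produced its response, the same sketch works whether it wraps a bare Rack app in `config.ru` or sits in a Rails middleware stack.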
{ "redpajama_set_name": "RedPajamaGithub" }
9,910
- FREE publicity on our website, newsletter and at our Community Program Office – get some free advertising and referrals through us!
- FREE one-time trial of HEPA tools and vacuum supplies – try before you buy! See how HEPA tools can help you work quickly and keep dust down as you go!
- FREE HEPA filter and shop vac bags for your existing shop vac – keep dust down using a shop vac instead of sweeping up!
{ "redpajama_set_name": "RedPajamaC4" }
2,335
{"url":"http:\/\/mathhelpforum.com\/calculus\/174690-area-between-curves.html","text":"# Math Help - Area between Curves\n\n1. ## Area between Curves\n\nHey I have a calc exam Thursday and have been practicing for it. I came across this area problem that has me stumped, usually area is no problem for me but I can't seem to figure this one out.\n\nI figured that you would break the area into two from 0 to 1 and 1 to 3 since the function and y=127 intersect at x=1 but when I went to integrate everything the computer told me that my answer was wrong. Please help\n\n2. The question, as posted, doesn't make sense to me. The graph of the cubic, the x-axis, and x= 1 form a single region. I don't see what \"y= 127\" has to do with that.\n\n3. Hello, rawkstar!\n\nSince you didn't show your work, we have no idea where your error is.\n\n$\\text{Consider the region bounded by the graph of:}$\n$y\\:=\\:x^3 + 18x^2 + 108x,\\;(0 \\le x \\le 1)\\,\\text{ on the left,}$\n$x\\text{-axis below, }\\,y = 127\\text{ above, and }x = 3\\text{ on the right.}$\n\n$\\text{(1) Find the area of the region.}$\n\nThe graph looks like this:\n\nCode:\n\n|\n127+ * - - - - - - - *\n| .|:::::::::::::::|\n| .*|:::::::::::::::|\n| .*B:|::::::A::::::::|\n| .*::::|:::::::::::::::|\n- - * - - - + - - - + - - - + -\n| 1 2 3\n|\n\nArea $\\,A$ is a 2-by-127 rectangle: . $A = 254$\n\nArea $\\,B$ requires an integral: . $\\displaystyle B \\;=\\;\\int^1_0(x^3 + 18x^2 + 108x)\\,dx$\n\nWe have: . $B \\;=\\;\\frac{1}{4}x^4 + 6x^3 + 54x^2\\,\\bigg]^1_0 \\;=\\;\\frac{1}{4} + 6 + 54$\n. . . . . . . . $B \\;=\\;60\\frac{1}{4}$\n\nTherefore, the total area is: . 
$254 + 60\\frac{1}{4} \\;=\\;314\\frac{1}{4}\\text{ units}^2.$","date":"2014-12-26 08:27:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 11, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8626617193222046, \"perplexity\": 503.23995595756884}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-52\/segments\/1419447548655.55\/warc\/CC-MAIN-20141224185908-00050-ip-10-231-17-201.ec2.internal.warc.gz\"}"}
\section{Introduction} The objective of this paper is to analyze a spatial vector-host epidemic model. The model accounts for the random movement of the vector and host populations in geographic regions, and for the infection age-structure of the infected host population. Many diseases are transmitted to humans by vectors, such as the mosquito-borne diseases malaria, dengue and Zika, and the bug-borne Chagas disease. Such diseases are transmitted in a crisscross fashion: infected vectors transmit the disease to susceptible hosts, while susceptible vectors become infected through interactions with infected hosts. Crisscross models for the circulation of diseases between vectors and hosts have been proposed and studied by many researchers in the past. For example, in \cite{Bailey1957, Dietz1988} the authors studied the spread of malaria, and in \cite{Busenberg1988, inaba2004mathematical, Velasco1991} the authors studied the spread of Chagas disease by crisscross models. The vector and host populations are assumed to be confined in non-coincident geographic regions. In particular, we assume that the region of the vector population is contained in the region of the host population. The dispersal of individuals inside the regions is described by spatial diffusion terms with different diffusion rates for the vector and host populations. We note that diffusion has been used to model the spatio-temporal spread of disease by a variety of authors \cite{Allen2008, Capasso1978, Fitzgibbon1994, Fitzgibbon2008, Webb1981}. After the susceptible hosts are infected with the disease, they are usually asymptomatic for a certain amount of time before becoming symptomatic and infectious. The incubation or non-infectious period of many vector-host diseases is appreciably longer than the time it takes for an individual to travel from one place to another. Indeed, an outbreak in one locale could spread silently and globally via infected travelers, only to be recognized days or even weeks later \cite{Knobler2006}.
In order to incorporate the incubation period of the disease into the model, the host population is assumed to be structured by disease age. We note that the theory of age-structured population models is by now well developed, e.g. see \cite{Webb1985theory}. Epidemic models with diffusion and age-structure are studied in \cite{Fitzgibbon1995, Fitzgibbon1996, Langlais1988, Webb1980}. For a review of diffusive age-structured models, we refer the readers to \cite{Webb2008population}. Our goal is to understand how an infectious disease arises and spreads through vector-host populations in a geographical setting. Our model formulation assumes that diffusion describes the movement of individuals, both vectors and hosts, within this geographical region. This assumption is an idealization, since the movement of both vectors and hosts, particularly hosts, may be extremely complex. We argue, however, that the geographical spread of an epidemic, particularly from an initial small local outbreak, can be modeled by random diffusive processes. In this context diffusion indirectly models the average spatial spread of the underlying micro-biologic infectious agent (viral, bacterial, parasitic), rather than the local-time movement of hosts and vectors. The infectious agent exists within the host and vector populations, and is not modeled directly. Reaction-diffusion mechanisms indirectly describe the way this infectious agent spreads in space and time within these populations.
Our paper is organized as follows: we propose the vector-host model in the next section, which includes a system of reaction-diffusion equations for the vector population and a system with diffusion and age-structure for the host population; in section 3, we study the global existence of solutions of the model, using an analytic semigroup to represent solutions of the diffusive age-structured equation; in section 4, we investigate the asymptotic behavior of the solution and prove that the solution always converges to the steady state. \section{The Model} We assume that infected hosts are initially located in a small area of a much larger host habitat. Essentially the infected hosts act as vectors introducing the disease to the region. This input corresponds to the disembarkation of infected travelers from a ship, plane or other means of conveyance. We also assume that the vector and host habitats are non-coincident, with the vector habitat being a smaller sub-region of the larger host habitat. Recent works on the transmission of disease between species with non-coincident habitats include \cite{Fitzgibbon2004, Fitzgibbon2005, Anita2009}. A salient feature of our consideration will be a noninfectious period of asymptomatic incubation of the virus in the host. The incubation period complicates any effort to prophylactically screen for the infection at points of embarkation and disembarkation. We assume that the virus has no deleterious effect on the vectors and that the vectors become immediately infective upon contact with infected infectious hosts, with no period of incubation. The virus is assumed to be non-lethal and of relatively short duration in the host, and for this reason demographic considerations for the hosts will not be included in the model. We assume that our host population remains confined to a geographic region $\Omega\subset \mathbb{R}^2$. In particular we assume that $\Omega$ is a bounded domain in $\mathbb{R}^2$ with smooth boundary $\partial\Omega$.
The vector population is assumed to inhabit and remain confined to a bounded subdomain $\Omega_*\subset\Omega$, where $\partial\Omega_*$ is smooth with $\partial\Omega\cap\partial\Omega_*=\varnothing$. We assume that the vector population disperses by means of Fickian diffusion with flux term $-d_1(x)\triangledown\rho$, and further require that $d_1(x)\ge d_*$ for a positive number $d_*$. The time-evolving, spatially dependent density of the vector population is denoted by $\rho(x, t)$. The confinement of the vector population to $\Omega_*$ translates as the Neumann boundary condition $d_1(x)\partial\rho/\partial\eta=0$ for $x\in\partial\Omega_*$, where $\eta$ denotes the unit outward normal on $\partial\Omega_*$. If $\beta(x)\ge \beta_*>0$ and $m(x)\ge m_*>0$ denote spatially dependent growth and logistic control coefficients respectively, the spatio-temporal evolution of the vector population is modeled by the diffusive logistic equation \begin{equation}\label{rho} \left\{ \begin{array} {lll} \frac{\partial \rho}{\partial t}-\triangledown\cdot d_1(x)\triangledown\rho=\beta(x)\rho-m(x)\rho^2, &\ \ \ x\in\Omega_*,\ t> 0,\medskip\\ \frac{\partial \rho}{\partial \eta}=0, &\ \ \ x\in\partial\Omega_*,\ t> 0, \medskip\\ \rho(x, 0)=\rho_0(x)>0, &\ \ \ x\in\Omega_*. \end{array} \right. \end{equation} The following result appears in \cite{Fitzgibbon2008, Cantrell2004}. \begin{theorem}\label{theorem_rho} Assume that the functions $\beta$ and $m$ are strictly positive and continuous, $d_1$ is strictly positive and continuously differentiable, and $\rho_0$ is nontrivial, nonnegative and continuous on $\bar\Omega_*$.
Then there exists a unique classical solution of \eqref{rho} on $\Omega_*\times (0, \infty)$ such that $$0<\rho(x, t)\le \max\{\|\rho_0\|_{\Omega_*, \infty}, \|\beta\|_{\Omega_*, \infty}/m_*\}.$$ Moreover, there exists a unique positive classical solution $\rho_*$ of \begin{equation}\label{rho_str} \left\{ \begin{array} {lll} -\triangledown\cdot d_1(x)\triangledown\rho=\beta(x)\rho-m(x)\rho^2, &\ \ \ x\in\Omega_*,\medskip\\ \frac{\partial \rho}{\partial \eta}=0, &\ \ \ x\in\partial\Omega_*, \medskip \end{array} \right. \end{equation} such that \begin{equation} \lim_{t\rightarrow\infty} \|\rho(\cdot, t)-\rho_*\|_{\Omega_*, \infty}=0. \end{equation} \end{theorem} Our model consists of four compartments: \begin{itemize} \item $\phi(x, t)$, $x\in\bar\Omega_*, t\ge 0$, denotes the time dependent spatial density of the uninfected vector population, who have not contracted the disease; \item $\psi(x, t)$, $x\in\bar\Omega_*, t\ge 0$, denotes the time dependent spatial density of the infected vector population, who are infected with the disease and capable of transmitting it to the hosts; \item $u(x, t)$, $x\in\bar\Omega, t\ge 0$, denotes the time dependent spatial density of the uninfected host population; \item $i(x, a, t)$, $x\in\bar\Omega, a\ge0, t\ge 0$, denotes the time and age dependent spatial density of the infected host population. \end{itemize} Integrating $i(x, a, t)$ with respect to the age variable over $[0, \infty)$ gives $v(x, t)$, the time dependent spatial density of the infected host population: \begin{equation} v(x, t)=\int_0^\infty i(x, a, t) da. \end{equation} Infected hosts are assumed to go through an incubation period of length $\tau>0$ during which they are neither symptomatic nor infectious. After passing through the incubation period, the infected hosts become infectious. The time dependent spatial density of the infected and infectious host population is computed by \begin{equation} v_\tau(x, t)=\int_\tau^\infty i(x, a, t) da.
\end{equation} The two critical issues in understanding the transmission of the disease are the recruitment of infected vectors by means of direct contact with infected hosts and the recruitment of infected hosts by means of direct contact with infected vectors. The recruitment of infected vectors occurs via direct contact with infectious infected hosts, which is modeled by the incidence term \begin{equation} f_1(x, t, \phi(x, t), v_\tau(x, t))= \sigma_1(x)\phi(x, t)v_\tau(x, t). \end{equation} The virus is assumed to have no deleterious effect on the underlying demographics of the vector population, and we assume that there is no vertical transmission of the virus among the host population. These considerations produce the following pair of reaction-diffusion type equations: \begin{equation}\label{vector} \left\{ \begin{array} {lll} \frac{\partial \phi}{\partial t}-\triangledown\cdot d_1(x)\triangledown\phi=\beta(x)\rho-\sigma_1(x)\phi v_\tau-m(x)\rho\phi, &\ \ \ x\in\Omega_*,\ t> 0,\medskip\\ \frac{\partial \psi}{\partial t}-\triangledown\cdot d_1(x)\triangledown\psi=\sigma_1(x)\phi v_\tau-m(x)\rho\psi, &\ \ \ x\in\Omega_*,\ t> 0,\medskip\\ \frac{\partial \phi}{\partial \eta}=\frac{\partial \psi}{\partial \eta}=0, &\ \ \ x\in\partial\Omega_*,\ t> 0, \medskip\\ \phi(x, 0)=\phi_0 , &\ \ \ x\in\Omega_*,\\ \psi(x, 0)=0, &\ \ \ x\in\Omega_*. \end{array} \right. \end{equation} With the introduction of the virus into the vectors, the vector population is divided into two subclasses of population density, the susceptible class $\phi(x, t)$ and the infective class $\psi(x, t)$, where $\phi(x, t)+\psi(x, t)=\rho(x, t)$. Adding the two partial differential equations, we obtain the diffusive logistic equation \eqref{rho} modeling the underlying vector demographics. We have assumed that $\psi(x, 0)=0$, consistent with our focus on the outbreak of the disease in a geographic region that was previously free of the virus.
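For the record, the consistency check mentioned above can be displayed explicitly. Adding the $\phi$ and $\psi$ equations, the crisscross incidence terms $\pm\sigma_1(x)\phi v_\tau$ cancel, and since $\phi+\psi=\rho$,
\begin{equation*}
\frac{\partial (\phi+\psi)}{\partial t}-\triangledown\cdot d_1(x)\triangledown(\phi+\psi)
=\beta(x)\rho-m(x)\rho(\phi+\psi)
=\beta(x)\rho-m(x)\rho^2,
\end{equation*}
which is precisely the diffusive logistic equation \eqref{rho}.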
Mathematically we could assume any nonnegative initial data for $\psi(x, 0)$. The mechanism for host dispersion across the larger habitat $\Omega$ will also be Fickian diffusion, with flux term $-d_2(x)\triangledown u$ having strictly positive diffusivity $d_2(x)\ge d_*>0$ for all $x\in\bar\Omega$. The evolution of $i(x, a, t)$, as we shall see, will be governed by a diffusive age transport equation of the form \begin{equation} \frac{\partial i}{\partial t}+\frac{\partial i}{\partial a}-\triangledown\cdot d_2(x)\triangledown i=-\lambda(a) i, \ \ \ x\in\Omega, a\ge 0, t\ge0. \end{equation} We will subsequently specify an initial condition $i(x, a, 0)$, and an age boundary condition or birth function $i(x, 0, t)=B(x, t)$. The confinement of the host population to its habitat is prescribed by a homogeneous Neumann boundary condition on $\partial\Omega$. Recruitment of infected hosts (represented by $B(x, t)$) occurs as a result of the contact of susceptible hosts with infected vectors in $\Omega_*$. This is modeled by the incidence term \begin{equation} f_2(x, t, u(x, t), \psi(x, t))= \sigma_2(x)u(x, t)\psi(x, t), \end{equation} which becomes a loss term for the class of susceptible hosts. Contact between hosts and vectors only occurs in the region inhabited by the vector population, and thus we assume $B(x, t)\equiv 0$, $x\in\Omega-\bar\Omega_*$, and make note of the fact that this assumption makes $B(x, t)$ discontinuous on $\Omega$. Recruitment into the infective class occurs at the age boundary $a=0$ and we have \begin{equation}\label{Birth} i(x, 0, t)=B(x, t)=\left\{ \begin{array} {lll} f_2(x, t, u(x, t), \psi(x, t))=\sigma_2(x)u(x, t)\psi(x, t), &\ \ \ x\in\bar\Omega_*,\ t> 0,\medskip\\ 0, &\ \ \ x\in\Omega-\bar\Omega_*, t>0. \end{array} \right. \end{equation} The virus is assumed to be non-fatal to the host population, and removal from the infective class is due to recovery.
We also assume that the recovered hosts gain permanent immunity and hence are removed from the ongoing dynamics of the system. The recovery rate is assumed to vary with the age of infection and is given by $\lambda(a)$ with $\lambda(a)\ge \lambda_*>0$ for all $a\in [0, \infty)$. We remark that it would be natural to assume that $\lambda(a)$ eventually becomes sharply increasing in $a$. We model the introduction of infected hosts into the region by specifying an age dependent spatial density for $i(x, a, 0)$. Here we envision the introduction of an extremely small number of infected hosts distributed over an extremely small subarea $\Omega_{**}$ of $\Omega$, i.e. $|\Omega_{**}|\ll|\Omega|$ and $\int_{\Omega_{**}}\int_0^\infty i(x, a, 0)dadx\ll\int_\Omega u(x, 0)dx$. The infected host population is assumed to have age density $z_0(a)$ initially distributed over $\Omega_{**}$. The distribution is modeled by a probability density function, $k(x)$, defined on $\Omega$, which vanishes identically outside of $\Omega_{**}$. The spatial density of the initial infected population is $D(x)=k(x)\int_0^\infty z_0(a)da$. These considerations lead to the following partial differential equations which describe the temporal and spatial circulation of the virus in the host population: \begin{equation}\label{host} \left\{ \begin{array} {lll} \frac{\partial u}{\partial t}-\triangledown\cdot d_2(x)\triangledown u=-B(x, t), &\ \ \ x\in\Omega,\ t> 0,\medskip\\ \frac{\partial i}{\partial t}+\frac{\partial i}{\partial a}-\triangledown\cdot d_2(x)\triangledown i=-\lambda(a) i, &\ \ \ x\in\Omega, \ a>0,\ t> 0,\medskip\\ i(x, 0, t)=B(x, t), &\ \ \ x\in\Omega, \ t> 0,\medskip\\ \frac{\partial u}{\partial \eta}=\frac{\partial i}{\partial \eta}=0, &\ \ \ x\in\partial\Omega,\ a>0, \ t> 0, \medskip\\ i(x, a, 0)=i_0(x, a)=z_0(a)k(x), &\ \ \ x\in\Omega,\ a>0,\\ u(x, 0)= u_0(x), &\ \ \ x\in\Omega. \end{array} \right. \end{equation} We make the following assumptions: \begin{enumerate} \item[A0.]
$d_1\in C^1(\bar\Omega_*)$ with $d_1(x)\ge d_*$ for all $x\in\bar\Omega_*$; $d_2\in C^1(\bar\Omega)$ with $d_2(x)\ge d_*$ for all $x\in\bar\Omega$; \item[A1.] $z_0\in C^1(\mathbb{R}^+)\cap L_\infty(\mathbb{R}^+)\cap L_1(\mathbb{R}^+)$ is nontrivial nonnegative with $z_0(0)=0$; \item[A2.] $k$ is a nonnegative continuous function on $\bar\Omega$ such that $k=0$ on $\Omega-\Omega_{**}$ and \begin{equation*} \int_{\Omega_{**}} k(x) dx=1; \end{equation*} \item[A3.] $\sigma_1$ and $\sigma_2$ are strictly positive continuous functions on $\bar\Omega_*$; \item[A4.] $m$ and $\beta$ are bounded continuous functions on $\bar\Omega$ with $m(x)\ge m_*>0$ and $\beta(x)\ge \beta_*>0$ for all $x\in\bar\Omega$; \item[A5.] $\lambda\in C^1(\mathbb{R}^+)$ with $\lambda(a)\ge \lambda_*>0$ for all $a\ge 0$; \item[A6.] $u_0\in C(\bar\Omega)$ with $u_0(x)\ge (\not\equiv) 0$ for $x\in\bar\Omega$ and $\phi_0\in C(\bar\Omega_*)$ with $\phi_0(x)\ge (\not\equiv) 0$ for all $x\in\bar\Omega_*$. \end{enumerate} \section{Existence of solutions} We will use semigroup theory to represent solutions of the diffusive age transport equation, e.g. see \cite{Pazy2012}. We let $T(t), t\ge 0$, be the analytic semigroup on $C(\bar\Omega)$ with infinitesimal generator $A$, which is defined by \begin{equation} (Aw)(x)=\triangledown\cdot d_2(x)\triangledown w(x), \ \ w\in D(A) \text{ and } x\in\bar\Omega , \end{equation} with \begin{equation} D(A)=\left\{w\in C^2(\bar\Omega): \frac{\partial w}{\partial \eta}=0 \ \text{ on } \ \partial\Omega\right\}. \end{equation} Let $c\in\mathbb{R}$ and $t_0\ge 0$ be such that $t_0+c>0$. If $w_0\in C(\bar\Omega)$, then the classical solution of \begin{equation}\label{w} \left\{ \begin{array} {lll} \frac{\partial w}{\partial t}-\triangledown\cdot d_2(x)\triangledown w=-\lambda(t+c)w, &\ \ \ x\in\Omega,\ t>t_0,\medskip\\ \frac{\partial w}{\partial \eta}=0, &\ \ \ x\in\partial\Omega,\ t> t_0, \medskip\\ w(x, t_0)=w_0(x), &\ \ \ x\in\Omega. \end{array} \right.
\end{equation} has the representation \begin{equation} w(\cdot, t)= e^{-\int_{t_0}^t\lambda(s+c)ds}T(t-t_0)w_0. \end{equation} The maximum principle guarantees that $$\|w(\cdot, t)\|_{\Omega, \infty}\le e^{-\lambda_*(t-t_0)}\|w_0\|_{\Omega, \infty} \ \ \text{ for } t\ge t_0.$$ We are now in a position to establish our well-posedness result. As previously observed in \cite{Fitzgibbon19952}, the existence of an incubation period $[0, \tau]$ allows us to effectively decouple the nonlinearity and obtain existence results via linear theory. \begin{theorem} If assumptions A0-A6 are satisfied, then there exist unique coupled positive solution pairs $\{\phi(x, t), \psi(x, t)\}$ and $\{u(x, t), i(x, a, t)\}$ of continuous functions on $\Omega_*\times (0, \infty)$ and $\Omega\times (0, \infty)\times (0, \infty)$ respectively, which satisfy the system \eqref{vector} and \eqref{host}. \end{theorem} \begin{proof} Adding the equations for the susceptible and infected vectors produces the diffusive logistic equation \begin{equation}\label{rho1} \left\{ \begin{array} {lll} \frac{\partial \rho}{\partial t}-\triangledown\cdot d_1(x)\triangledown\rho=\beta(x)\rho-m(x)\rho^2, &\ \ \ x\in\Omega_*,\ t\in (0, T],\medskip\\ \frac{\partial \rho}{\partial \eta}=0, &\ \ \ x\in\partial\Omega_*,\ t\in (0, T], \medskip\\ \rho(x, 0)=\rho_0(x)=\phi_0\ge (\not\equiv)0, &\ \ \ x\in\Omega_*. \end{array} \right. \end{equation} Theorem \ref{theorem_rho} guarantees a classical, uniformly bounded, positive solution and allows us to assume that $\rho(x, t)$ is a known quantity. We initiate a method of steps argument, look for solutions on the time interval $[0, \tau]$, and use a characteristic argument to find a representation of the diffusive age transport equation. Adapting arguments appearing in \cite{Fitzgibbon19952}, we let $c\in\mathbb{R}$ and define the cohort function $w_c(x, t)=i(x, t+c, t)$.
Direct computation yields \begin{equation} \frac{\partial w_c(x, t)}{\partial t}=\mathcal{L}\ i(x, t+c, t), \end{equation} where $\mathcal{L}$ denotes the formal diffusive age transport operator $\mathcal{L}=\partial/\partial t+\partial/\partial a-\triangledown\cdot d_2(x)\triangledown$. We now introduce $t_c=\max\{0, -c\}$ and observe that the solution of \begin{equation*} \left\{ \begin{array} {lll} \frac{\partial w_c}{\partial t}-\triangledown\cdot d_2(x)\triangledown w_c=-\lambda(t+c) w_c, &\ \ \ x\in\Omega,\ t> t_c,\medskip\\ \frac{\partial w_c}{\partial \eta}=0, &\ \ \ x\in\partial\Omega,\ t> t_c. \end{array} \right. \end{equation*} is \begin{equation*} w_c(\cdot, t)= e^{-\int_{t_c}^t\lambda(s+c)ds}T(t-t_c)w_c(\cdot, t_c). \end{equation*} If $c=a-t\ge 0$, then $t_c=0$, and we have for $a\ge t$ \begin{eqnarray*} i(\cdot, a, t)=i(\cdot, a-t+t, t)&=&w_c(\cdot, t)\\ &=&e^{-\int_{0}^t\lambda(s+c)ds}T(t)w_c(\cdot, 0)\\ &=&e^{-\int_{0}^t\lambda(s+a-t)ds}T(t)i(\cdot, a-t, 0)\\ &=&e^{-\int_{0}^t\lambda(s+a-t)ds}T(t)z_0(a-t)k(\cdot). \end{eqnarray*} If $c=a-t<0$, then $t_c=t-a$, and we have for $t> a$ \begin{eqnarray*} i(\cdot, a, t)&=&w_c(\cdot, t)\\ &=&e^{-\int_{t-a}^t\lambda(s+c)ds}T(a)w_c(\cdot, t-a)\\ &=&e^{-\int_{t-a}^t\lambda(s+a-t)ds}T(a)i(\cdot, 0, t-a)\\ &=&e^{-\int_{0}^a\lambda(s)ds}T(a)B(\cdot, t-a). \end{eqnarray*} Hence we have \begin{equation}\label{irep} i(x, a, t)=\left\{ \begin{array}{lll} e^{-\int_{0}^t\lambda(s+a-t)ds}T(t)z_0(a-t)k(x), \ \ \ &x\in\Omega, t\le a, \\ e^{-\int_{0}^a\lambda(s)ds}T(a)B(x, t-a), \ \ \ &x\in\Omega, t>a. \end{array} \right. \end{equation} Infected hosts become infective for $a \ge \tau$. Thus if $t\in [0, \tau]$, we can determine the time dependent spatial density of the infected infective population \begin{equation*} v_\tau(x, t)=\int_\tau^\infty i(x, a, t)da=\int_\tau^\infty e^{-\int_0^t\lambda(s+a-t)ds}T(t)z_0(a-t)k(x)da.
\end{equation*} We observe that the assumptions on $z_0(a)$ and $k(x)$ ensure that $v_\tau(x, t)> 0$ for $x\in\Omega$ and $t\in(0, \tau]$. Since $v_\tau(x, t)$ has been calculated, and we have seen that we are guaranteed a solution to the diffusive logistic equation, we can view \eqref{vector} as a coupled linear parabolic system. Standard parabolic theory guarantees a classical solution on $\Omega_*\times [0, \tau]$. Maximum principle arguments now ensure that the solution $(\phi(x, t), \psi(x, t))$ is positive on $\bar\Omega_*\times (0, \tau]$. Since the existence of $\psi(x, t)$ is now assured, the partial differential equation describing the depletion of the susceptible class, \begin{equation*} \frac{\partial u}{\partial t}-\triangledown\cdot d_2(x)\triangledown u=-B(x, t)=\left\{ \begin{array}{lll} -\sigma_2(x)u(x, t)\psi(x, t), \ \ \ &x\in\Omega_*, t\in(0, \tau],\\ 0, \ \ \ &x\in\Omega-\bar\Omega_*, t\in(0, \tau] \end{array} \right. \end{equation*} can be viewed as linear, and therefore there exists a positive solution on $\bar\Omega\times (0, \tau]$. Knowing $u(x, t)$ and $\psi(x, t)$ permits calculation of the birth function that provides the mechanism for entry into the infective class at the age boundary $a=0$ \begin{equation*} i(x, 0, t)=B(x, t)=\left\{ \begin{array}{lll} \sigma_2(x)u(x, t)\psi(x, t), \ \ \ &x\in\bar\Omega_*, t\in(0, \tau],\\ 0, \ \ \ &x\in\Omega-\bar\Omega_*, t\in(0, \tau]. \end{array} \right. \end{equation*} We complete the cycle of this argument with the observation that our knowledge of $B(x, t)$ and $i_0(x, a)=z_0(a)k(x)$ facilitates direct computation of the solution $i(x, a, t)$ to the diffusive age transport equation on $\bar\Omega\times (0, \infty)\times (0, \tau]$ by \eqref{irep}. By our assumptions $z_0(0)=0$ and $\lambda(a)\ge \lambda_*>0$, $i(x, a, t)$ is uniformly continuous on $\Omega\times (0, \infty)\times (0, \tau]$.
By the positivity of the semigroup $T(t)$, $i(x, a, t)$ is positive on $\Omega\times (0, \infty)\times (0, \tau]$. We then look for a solution for $t\in[\tau, 2\tau]$. Again we find the spatial density of the infective hosts by integrating $i(x, a, t)$ with respect to $a$ on $[\tau, \infty)$. However, in this case $t\ge\tau$, and we have \begin{equation*} v_\tau(x, t)=\int_\tau^\infty i(x, a, t)da=\int_\tau^t i(x, a, t)da+\int_t^\infty i(x, a, t)da. \end{equation*} Each of these integrals on the right hand side can be evaluated using previous knowledge. In the case of the second integral, we observe that \begin{equation}\label{in1} \int_t^\infty i(x, a, t)da=\int_t^\infty e^{-\int_0^t\lambda(s+a-t)ds}T(t)z_0(a-t)k(x)da. \end{equation} In the case of the first integral, we have $\tau\le a<t\le 2\tau$, which implies $t-a<\tau$. Hence \begin{equation}\label{in2} \int_\tau^t i(x, a, t)da=\int_\tau^t e^{-\int_0^a\lambda(s)ds}T(a)B(x, t-a)da. \end{equation} In the last integral, \begin{equation*} B(x, t-a)=\left\{ \begin{array}{lll} \sigma_2(x)u(x,t-a)\psi(x, t-a), \ \ \ &t-a \in[0, \tau], x\in\bar\Omega_*,\\ 0 , \ \ \ &t-a \in[0, \tau], x\in\Omega-\bar\Omega_*, \end{array} \right. \end{equation*} which has been obtained from the previous step. Thus, we can determine both integrals \eqref{in1}-\eqref{in2} that define $v_\tau(x, t)$ for $t\in [\tau, 2\tau]$. We can insert the pre-determined functions $\rho(x, t)$ and $v_\tau(x, t)$ into the system describing the evolution of $\psi$ and $\phi$, and again obtain a linear system. Hence we get a positive solution on $[\tau, 2\tau]$. Now since $\psi(x, t)$ is determined for $t\in[\tau,2\tau]$, we can reduce the equation for $u$ to a linear equation for which we can readily obtain a positive solution. Knowing $u(x, t)$ and $\psi(x, t)$ on $[\tau, 2\tau]$ permits calculation of $B(x, t)$ and hence $i(x, a, t)$ on $[\tau, 2\tau]$.
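The pattern of this recursion is worth displaying once in general form. For $t\in[n\tau, (n+1)\tau]$ with $n\ge 1$, splitting at $a=t$ as above gives
\begin{equation*}
v_\tau(x, t)=\int_t^\infty e^{-\int_0^t\lambda(s+a-t)ds}T(t)z_0(a-t)k(x)da
+\int_\tau^t e^{-\int_0^a\lambda(s)ds}T(a)B(x, t-a)da,
\end{equation*}
and in the second integral $a\ge\tau$ forces $t-a\le t-\tau\le n\tau$, so only values of $B$ already computed on earlier intervals appear. Hence the system for $(\phi, \psi)$ on $[n\tau, (n+1)\tau]$ is again linear.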
Our analysis so far shows that we can determine the solutions $(\phi(x, t), \psi(x, t))$ and $(u(x, t), i(x, a, t))$ of positive uniformly continuous functions on $\Omega_*\times [\tau, 2\tau]$ and $\Omega\times [0, \infty)\times [\tau, 2\tau]$ respectively. It is evident that proceeding with this argument in this manner steps our way across $t\in [0, \infty)$ and guarantees the unique positive solutions $(\phi(x, t), \psi(x, t))$ and $(u(x, t), i(x, a, t))$ on $\Omega_*\times [0, \infty)$ and $\Omega\times [0, \infty)\times [0, \infty)$ respectively. \end{proof} \section{Asymptotic behavior} Analysis of the long term behavior of solutions will require the following a priori bounds. \begin{proposition}\label{prop_bound} Let A0-A6 hold and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then there exists $M>0$ such that the following hold: \begin{eqnarray*} &&\sup\{\|\phi(\cdot, t)\|_{\Omega_*, \infty}, \|\psi(\cdot, t)\|_{\Omega_*, \infty}, \|u(\cdot, t)\|_{\Omega, \infty}, \|v(\cdot, t)\|_{\Omega, \infty}, \|v_\tau(\cdot, t)\|_{\Omega, \infty}: \ t> 0\}<M,\\ &&\sup\{\|i(\cdot, a, t)\|_{\Omega, \infty}: \ a\ge0, t>0\}<M. \end{eqnarray*} For any $p>1$, there exists $M_p>0$ such that \begin{eqnarray*} &&\sup\{\|\partial\phi(\cdot, t)/\partial t\|_{\Omega_*, p}, \|\partial\psi(\cdot, t)/\partial t\|_{\Omega_*, p}, \|\partial u(\cdot, t)/\partial t\|_{\Omega, p}, \|\partial v(\cdot, t)/\partial t\|_{\Omega, p}: \ t> 0\}<M_p,\\ &&\sup\{\|\triangledown\phi(\cdot, t)\|_{\Omega_*, p}, \|\triangledown\psi(\cdot, t)\|_{\Omega_*, p}, \|\triangledown u(\cdot, t)\|_{\Omega, p}, \|\triangledown v(\cdot, t)\|_{\Omega, p}: \ t> 0\}<M_p,\\ &&\sup\{\|\triangledown^2\phi(\cdot, t)\|_{\Omega_*, p}, \|\triangledown^2\psi(\cdot, t)\|_{\Omega_*, p}, \|\triangledown^2 u(\cdot, t)\|_{\Omega, p}, \|\triangledown^2 v(\cdot, t)\|_{\Omega, p}: \ t> 0\}<M_p.
\end{eqnarray*} \end{proposition} \begin{proof} By the comparison principle, we have $$0\le \rho(x, t)\le \max\{\|\beta\|_{\Omega_*, \infty}/m_*, \|\rho_0\|_{\Omega_*, \infty}\}.$$ This, together with the non-negativity of $\phi(x, t)$ and $\psi(x, t)$, implies that \begin{equation*} \sup\{\|\phi(\cdot, t)\|_{\Omega_*, \infty}, \|\psi(\cdot, t)\|_{\Omega_*, \infty}: \ t\ge 0\}\le \max\{\|\beta\|_{\Omega_*, \infty}/m_*, \|\phi_0\|_{\Omega_*, \infty}\}. \end{equation*} We have established that $u(x, t)$ is nonnegative. Therefore, $\partial u/\partial t-\triangledown\cdot d_2(x)\triangledown u\le 0$ and the maximum principle implies \begin{equation*} \|u(\cdot, t)\|_{\Omega, \infty} \le \|u_0\|_{\Omega, \infty}. \end{equation*} We can observe that \begin{equation*} \|B(\cdot, t)\|_{\Omega, \infty} \le \|\sigma_2\|_{\Omega_*, \infty}\|u_0\|_{\Omega, \infty}\max\{ \|\beta\|_{\Omega_*, \infty}/m_*, \|\phi_0\|_{\Omega_*, \infty}\}\equiv N. \end{equation*} We now use the representation of $i(x, a, t)$ to observe that \begin{equation*} \|i(\cdot, a, t)\|_{\Omega, \infty}\le \left\{ \begin{array} {lll} \|k\|_{\Omega, \infty}\|z_0\|_{[0, \infty), \infty}, &\ \ \ t\le a,\medskip\\ N, &\ \ \ t>a. \end{array} \right. \end{equation*} If we integrate the age transport equation over $[0, \infty)$, we get \begin{equation*} \frac{\partial v}{\partial t}-\triangledown\cdot d_2(x)\triangledown v= B(x, t)-\int_0^\infty\lambda(a)i(x, a, t)da, \ \ \ x\in\Omega, t\ge0. \end{equation*} Let $w(x,t)=u(x,t)+v(x,t)$. We observe that \begin{equation*} \frac{\partial w}{\partial t}-\triangledown\cdot d_2(x)\triangledown w\le0. \end{equation*} Invoking the maximum principle, we have \begin{equation*} \|v(\cdot, t)\|_{\Omega, \infty}\le \|w(\cdot, t)\|_{\Omega, \infty}\le \|w(\cdot, 0)\|_{\Omega, \infty}\le\|u_0\|_{\Omega, \infty}+\|k\|_{\Omega, \infty}\|z_0\|_{[0, \infty), 1}.
\end{equation*} By $\int_\tau^\infty i(x, a, t) da\le\int_0^\infty i(x, a, t)da$, we see that $\|v_\tau(\cdot, t)\|_{\Omega, \infty}\le \|v(\cdot, t)\|_{\Omega, \infty}$, and we have obtained a uniform a priori bound for $\|\phi(\cdot, t)\|_{\Omega_*, \infty}$, $\|\psi(\cdot, t)\|_{\Omega_*, \infty}$, $\|u(\cdot, t)\|_{\Omega, \infty}$, $\|v(\cdot, t)\|_{\Omega, \infty}$, and $\|v_\tau(\cdot, t)\|_{\Omega, \infty}$. The uniform estimate on the spatial derivatives follows from a common semigroup calculation in the fractional power spaces, and the estimate for the time derivative then follows from \eqref{vector}, \eqref{host}, and the other estimates. \end{proof} We are now in a position to provide a complete description of the asymptotic behavior of the solution quadruple $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$. \begin{lemma}\label{lemma_v} Suppose that assumptions A0-A6 hold, and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then we have \begin{equation*} \lim_{t\rightarrow\infty} \|v(\cdot, t)\|_{\Omega, 1}=0. \end{equation*} \end{lemma} \begin{proof} Setting $w(x, t)=u(x, t)+v(x, t)$ and adding the differential equations for $u(x, t)$ and $v(x, t)$, we find \begin{equation*} \frac{\partial w}{\partial t}-\triangledown\cdot d_2(x)\triangledown w+\int_0^\infty \lambda(a)i(x, a, t)da=0, \ \ \ x\in\Omega, t\ge0. \end{equation*} Integrating both sides of the equation over $\Omega\times(0, t)$ and noticing that $\lambda(a)\ge \lambda_*>0$, we obtain \begin{equation*} \|w(\cdot, t)\|_{\Omega, 1}+\lambda_*\int_0^t\|v(\cdot, s)\|_{\Omega, 1} ds\le \|w(\cdot, 0)\|_{\Omega, 1}. \end{equation*} Hence, \begin{equation}\label{vbound} \int_0^\infty \|v(\cdot, s)\|_{\Omega, 1} ds \le \frac{1}{\lambda_*}\|w(\cdot, 0)\|_{\Omega, 1}.
\end{equation} This fact together with the uniform a priori bound on $\|\partial v/\partial t\|_{\Omega, 1}$ guaranteed by Proposition \ref{prop_bound} ensures that $$\lim_{t\rightarrow\infty} \|v(\cdot, t)\|_{\Omega, 1}=0.$$ \end{proof} \begin{lemma}\label{lemma_psi} Suppose that assumptions A1-A6 hold, and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then we have \begin{equation*} \lim_{t\rightarrow\infty} \|\psi(\cdot, t)\|_{\Omega_*, 2}=0. \end{equation*} \end{lemma} \begin{proof} Integrating the differential equation for $\psi$ over $\Omega_*$, and then with respect to $t$, we have \begin{equation*} \int_{\Omega_*} \psi(x, t) dx = \int_{\Omega_*} \psi_0(x) dx + \int_0^t\int_{\Omega_*} \sigma_1(x)\phi(x,s)v_\tau(x,s)dxds-\int_0^t\int_{\Omega_*}m(x)\rho(x,s)\psi(x,s)dxds. \end{equation*} Recalling $0\le\psi(x, t)\le \psi(x,t)+\phi(x,t)=\rho(x,t)$ and $m(x)\ge m_*>0$, we obtain \begin{equation*} \|\psi(\cdot, t)\|_{\Omega_*, 1}+m_*\int_0^t \|\psi(\cdot, s)\|^2_{\Omega_*, 2}ds \le \|\psi_0\|_{\Omega_*, 1}+\|\sigma_1\|_{\Omega_*, \infty}\|\phi\|_{\Omega_*, \infty}\int_0^t\|v(\cdot, s)\|_{\Omega, 1}ds. \end{equation*} So by \eqref{vbound} and Proposition \ref{prop_bound}, we have \begin{equation}\label{psibound} \int_0^\infty \|\psi(\cdot, s)\|^2_{\Omega_*, 2} ds <\infty. \end{equation} Multiplying both sides of the equation for $\psi$ by $\psi$ and integrating over $\Omega_*$, we obtain \begin{equation}\label{psi} \frac{1}{2}\frac{d}{dt} \|\psi(\cdot, t)\|^2_{\Omega_*, 2} +\int_{\Omega_*} d_1(x)|\triangledown \psi(\cdot, t)|^2dx = \int_{\Omega_*}\sigma_1\phi v_\tau\psi dx-\int_{\Omega_*}m\rho\psi^2dx.
\end{equation} By Proposition \ref{prop_bound}, $d\|\psi(\cdot, t)\|_{\Omega_*, 2}^2/dt$ is uniformly bounded for $t>0$, and this together with \eqref{psibound} implies that $$\lim_{t\rightarrow\infty}\|\psi(\cdot, t)\|_{\Omega_*, 2}=0.$$ \end{proof} \begin{lemma}\label{lemma_u} Suppose that assumptions A1-A6 hold, and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then there exists a constant $u_*\ge 0$ such that \begin{equation}\label{ueql} \lim_{t\rightarrow\infty} \|u(\cdot, t)-u_*\|_{\Omega, 2}=0. \end{equation} \end{lemma} \begin{proof} Let $U(t)=\int_\Omega u(x, t)dx$ and $\bar u(t)=U(t)/|\Omega|$. Integrating both sides of \begin{equation}\label{ueq} \frac{\partial u}{\partial t}-\triangledown\cdot d_2(x)\triangledown u=-B(x, t) \end{equation} over $\Omega$, we get $$\frac{d}{dt}U(t)=- \int_\Omega B(x, t)dx\le 0.$$ So $U(t)$ is decreasing and there exists $u_*\ge 0$ such that $$\lim_{t\rightarrow \infty}\bar u(t)=u_*.$$ By the Poincar\'e inequality, there exists $C>0$ such that for all $t> 0$ \begin{equation}\label{poincare} C\|u(\cdot, t)-\bar u(t)\|^2_{\Omega, 2}\le \|\triangledown u(\cdot, t)\|^2_{\Omega, 2}. \end{equation} Noticing \eqref{ueq} and \eqref{poincare}, we compute \begin{equation*} \begin{array} {lll} \frac{1}{2}\frac{d}{dt} \int_\Omega (u-\bar u)^2dx&=\int_\Omega (u-\bar u)(\triangledown\cdot d_2\triangledown u- B(x, t))dx\\ &=-\int_\Omega d_2 |\triangledown u|^2 dx-\int_\Omega (u-\bar u)B(x, t) dx\\ &\le -d_*C\int_\Omega (u-\bar u)^2dx+\frac{1}{2}d_*C\int_\Omega (u-\bar u)^2dx +\frac{ K}{2}\int_{\Omega_*}|\psi(\cdot, t)|^2dx \end{array} \end{equation*} for some $K>0$, where we used \eqref{poincare} and Cauchy's inequality in the last step. Then by Gronwall's inequality, we have \begin{equation*} \|u(\cdot, t)-\bar u(t)\|^2_{\Omega, 2}\le e^{-d_*Ct}\|u_0-\bar u_0\|^2_{\Omega, 2} + K e^{-d_*Ct}\int_0^t e^{d_*Cs}\|\psi(\cdot, s)\|^2_{\Omega_*, 2}ds. \end{equation*} By Lemma \ref{lemma_psi} and L'H\^opital's rule, we have \begin{equation*} \lim_{t\rightarrow\infty} e^{-d_*Ct}\int_0^t e^{d_*Cs}\|\psi(\cdot, s)\|^2_{\Omega_*, 2}ds = \lim_{t\rightarrow \infty} \frac{e^{d_*Ct} \|\psi(\cdot, t)\|^2_{\Omega_*, 2}}{ d_*C e^{d_*Ct}}=0. \end{equation*} It then follows that \begin{equation*} \lim_{t\rightarrow\infty} \|u(\cdot, t)-\bar u(t)\|_{\Omega, 2}=0. \end{equation*} So we have \begin{equation*} \lim_{t\rightarrow\infty} \|u(\cdot, t)-u_*\|_{\Omega, 2}\le \lim_{t\rightarrow\infty} \|u(\cdot, t)-\bar u(t)\|_{\Omega, 2} + \lim_{t\rightarrow\infty} \|\bar u(t)-u_*\|_{\Omega, 2}=0. \end{equation*} \end{proof} \begin{theorem}\label{theorem_con} Let A1-A6 hold and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then we have \begin{equation*} \lim_{t\rightarrow\infty} \|v(\cdot, t)\|_{\Omega, \infty}=0 \ \text{ and }\ \lim_{t\rightarrow\infty} \|\psi(\cdot, t)\|_{\Omega_*, \infty}=0, \end{equation*} and there exists a constant $u_*>0$ such that \begin{equation*} \lim_{t\rightarrow\infty} \|u(\cdot, t)-u_*\|_{\Omega, \infty}=0. \end{equation*} Moreover, \begin{equation}\label{con_phi} \lim_{t\rightarrow\infty} \|\phi(\cdot, t)-\rho_*\|_{\Omega_*, \infty}=0, \end{equation} where $\rho_*$ is the unique positive solution of \begin{equation*} \left\{ \begin{array} {lll} -\triangledown\cdot d_1(x)\triangledown\rho=\beta(x)\rho-m(x)\rho^2, &\ \ \ x\in\Omega_*,\medskip\\ \frac{\partial \rho}{\partial \eta}=0, &\ \ \ x\in\partial\Omega_*. \medskip \end{array} \right. \end{equation*} \end{theorem} \begin{proof} By the Sobolev imbedding theorem, the imbeddings $W^{1, p}(\Omega)\subseteq C(\bar\Omega)$ and $W^{1, p}(\Omega_*)\subseteq C(\bar\Omega_*)$ are compact for $p>n$.
So by Proposition \ref{prop_bound}, the orbits $\{\phi(\cdot, t), t\ge 1\}$ and $\{\psi(\cdot, t), t\ge 1\}$ are precompact in $C(\bar\Omega_*)$, and $\{u(\cdot, t), t\ge 1\}$ and $\{v(\cdot, t), t\ge 1\}$ are precompact in $C(\bar\Omega)$. The uniform convergence of $v$, $\psi$, and $u$ then follows from Lemmas \ref{lemma_v}-\ref{lemma_u}. Theorem \ref{theorem_rho} states that $$\lim_{t\rightarrow\infty} \|\rho(\cdot, t)-\rho_*\|_{\Omega_*, \infty}=0.$$ Then \eqref{con_phi} follows from the uniform convergence of $\psi$ to zero. To complete the proof, we still need to show $u_*>0$; this will be done in the following two lemmas. \end{proof} \begin{lemma}\label{lemma_psibound} Suppose that assumptions A1-A6 hold, and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. If $u_*=0$ in Theorem \ref{theorem_con}, then \begin{equation}\label{psi_uni} \int_0^\infty \|\psi(\cdot, s)\|_{\Omega_*, \infty} ds<\infty. \end{equation} \end{lemma} \begin{proof} By the assumption $u_*=0$, for a given $\epsilon>0$ (to be specified later) we may assume without loss of generality that $\|u(\cdot, t)\|_{\Omega, \infty}<\epsilon$ for all $t\ge 0$. Since $\rho(\cdot, t)\rightarrow \rho_*>0$ in $C(\bar\Omega_*)$ as $t\rightarrow \infty$ and $m(x)\ge m_*>0$, we may also assume without loss of generality that $m(x)\rho(x, t)\ge \lambda_1$ for some positive number $\lambda_1$, for all $x\in\bar\Omega_*$ and $t\ge 0$. For convenience, we choose $\lambda_1$ small enough that $\lambda_*>\lambda_1$, where $\lambda_*$ is the constant in assumption A5.
Let $A_1$ be an operator in $C(\bar \Omega_*)$ defined as $$A_1 w=\triangledown\cdot d_1\triangledown w-\lambda_1 w, \ \ \ w\in D(A_1),$$ $$D(A_1)=\{w\in C(\bar\Omega_*): \ \ w\in C^2(\bar\Omega_*) \ \text{ and } \ \frac{\partial w}{\partial \eta}=0 \text{ on } \partial\Omega_*\}.$$ Let $A_2$ be an operator in $C(\bar \Omega)$ defined as $$A_2 w=\triangledown\cdot d_2\triangledown w-\lambda_* w, \ \ \ w\in D(A_2),$$ $$D(A_2)=\{w\in C(\bar\Omega): \ \ w\in C^2(\bar\Omega) \ \text{ and } \ \frac{\partial w}{\partial \eta}=0 \text{ on } \partial\Omega\}.$$ Let $\{T_1(t): t\ge 0\}$ be the semigroup generated by $A_1$ in $C(\bar\Omega_*)$ and $\{T_2(t): t\ge 0\}$ be the semigroup generated by $A_2$ in $C(\bar\Omega)$. There exists $M_1>0$ such that $$\|T_1(t)\|\le M_1e^{-\lambda_1 t} \ \ \ \text{ and } \ \ \ \|T_2(t)\|\le M_1e^{-\lambda_* t}.$$ By the second equation of \eqref{vector}, we have \begin{eqnarray*} \psi(\cdot, t) &=&\int_0^t T_1(t-s)(\sigma_1\phi(\cdot, s)v_\tau(\cdot, s)- (m\rho(\cdot, s)-\lambda_1)\psi(\cdot, s))ds\\ &\le & \int_0^t T_1(t-s)(\sigma_1\phi(\cdot, s)v(\cdot, s))ds. \end{eqnarray*} It then follows that \begin{eqnarray} \|\psi(\cdot, t)\|_{\Omega_*, \infty} &\le & \int_0^t \| T_1(t-s)(\sigma_1\phi(\cdot, s)v(\cdot, s)) \|_{\Omega_*, \infty}ds \nonumber\\ &\le& M_2 \int_0^t e^{-\lambda_1(t-s)}\|v(\cdot, s)\|_{\Omega, \infty}ds,\label{conv1} \end{eqnarray} where $M_2=M_1M \|\sigma_1\|_{\Omega_*, \infty}$ with $M$ specified in Proposition \ref{prop_bound}. By the equation for $v$, we have \begin{eqnarray*} v(\cdot, t) &=&T_2(t)v_0+\int_0^t T_2(t-s)(B(\cdot, s)- \int_0^\infty(\lambda(a)-\lambda_*)i(\cdot, a, s)da)ds\\ &\le & T_2(t)v_0+\int_0^t T_2(t-s)B(\cdot, s)ds.
\end{eqnarray*} It then follows that \begin{eqnarray} \|v(\cdot, t)\|_{\Omega, \infty} &\le & \|T_2(t)v_0\|_{\Omega, \infty}+\int_0^t \|T_2(t-s)B(\cdot, s)\|_{\Omega, \infty}ds \nonumber\\ &\le& M_1e^{-\lambda_* t}\|v_0\|_{\Omega, \infty} + \epsilon M_1 \|\sigma_2\|_{\Omega_*, \infty} \int_0^t e^{-\lambda_*(t-s)}\|\psi(\cdot, s)\|_{\Omega_*, \infty}ds.\label{conv2} \end{eqnarray} Combining \eqref{conv1} and \eqref{conv2}, we have \begin{equation*} \|\psi(\cdot, t)\|_{\Omega_*, \infty} \le M_3\int_0^t e^{-\lambda_1(t-s)} e^{-\lambda_* s}ds+ \epsilon M_4 \int_0^t e^{-\lambda_1(t-s)} \int_0^s e^{-\lambda_*(s-r)}\|\psi(\cdot, r)\|_{\Omega_*, \infty}drds, \end{equation*} where $M_3=M_1M_2\|v_0\|_{\Omega, \infty}$ and $M_4=M_1M_2\|\sigma_2\|_{\Omega_*, \infty}$. Notice that \begin{eqnarray*} \int_0^t e^{-\lambda_1(t-s)} \int_0^s e^{-\lambda_*(s-r)}\|\psi(\cdot, r)\|_{\Omega_*, \infty}drds&=&e^{-\lambda_1t}\int_0^t e^{\lambda_* r} \|\psi(\cdot, r)\|_{\Omega_*, \infty} \int_r^t e^{(\lambda_1-\lambda_*)s}dsdr\\ &\le& \frac{1}{\lambda_*-\lambda_1}\int_0^t e^{-\lambda_1(t-r)} \|\psi(\cdot, r)\|_{\Omega_*, \infty}dr. \end{eqnarray*} It then follows that \begin{equation*} \|\psi(\cdot, t)\|_{\Omega_*, \infty} \le \frac{M_3}{\lambda_*-\lambda_1}e^{-\lambda_1 t}+ \frac{\epsilon M_4}{\lambda_*-\lambda_1}\int_0^t e^{-\lambda_1(t-r)} \|\psi(\cdot, r)\|_{\Omega_*, \infty}dr. \end{equation*} Choose $\epsilon=(\lambda_*-\lambda_1)\lambda_1/(2M_4)$. Then by Gronwall's inequality, there exists $M_5>0$ such that for all $t\ge 0$ $$\|\psi(\cdot, t)\|_{\Omega_*, \infty} \le M_5 e^{-\frac{\lambda_1 t}{2}}.$$ Therefore, \eqref{psi_uni} holds. \end{proof} \begin{lemma} Suppose that assumptions A1-A6 hold, and let $(\phi(x, t), \psi(x, t), u(x, t), i(x, a, t))$ be the solution of system \eqref{vector} and \eqref{host}. Then $u_*>0$, where $u_*$ is specified in Theorem \ref{theorem_con}. \end{lemma} \begin{proof} Assume to the contrary that $u_*=0$.
By the first equation of \eqref{host}, we have \begin{equation*} \frac{dU}{dt}\ge - \|\sigma_2\|_{\Omega_*, \infty} \|\psi(\cdot, t)\|_{\Omega_*, \infty}U(t), \end{equation*} which implies that \begin{equation*} U(t)\ge U(0) \exp\left( -\|\sigma_2\|_{\Omega_*, \infty} \int_0^t \|\psi(\cdot, s)\|_{\Omega_*, \infty} ds\right) >0. \end{equation*} Hence $$u_*\ge \bar u_0\exp\left( -\|\sigma_2\|_{\Omega_*, \infty} \int_0^\infty \|\psi(\cdot, s)\|_{\Omega_*, \infty} ds\right) >0,$$ where the integral is finite by Lemma \ref{lemma_psibound}. This contradicts the assumption $u_*=0$, and the proof is complete. \end{proof}
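For the reader's convenience, we record the Gronwall step used in the proof of Lemma \ref{lemma_u} as an explicit estimate (a standard variation-of-constants argument, stated here in general form): if $y(t)\ge 0$ satisfies $y'(t)\le -a\,y(t)+f(t)$ with $a>0$ and $f\ge 0$, then multiplying by $e^{at}$ and integrating over $(0, t)$ gives \begin{equation*} y(t)\le e^{-at}y(0)+\int_0^t e^{-a(t-s)}f(s)ds, \end{equation*} which is applied with $y(t)=\|u(\cdot, t)-\bar u(t)\|^2_{\Omega, 2}$, $a=d_*C$, and $f(t)=K\|\psi(\cdot, t)\|^2_{\Omega_*, 2}$.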
# Zerosumfree monoid

In abstract algebra, an additive monoid $$(M, 0, +)$$ is said to be zerosumfree, conical, centerless or positive if nonzero elements do not sum to zero. Formally:

$$(\forall a,b\in M)\ a + b = 0 \implies a = b = 0 \!$$

This means that the only way zero can be expressed as a sum is as 0 + 0.
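To make the definition concrete, here is a small illustrative sketch (not part of the article) that brute-force checks the zerosumfree condition for a finite additive monoid given by its operation; the element names and monoids below are made up for the demonstration:

```javascript
// Check whether a finite additive monoid is zerosumfree:
// no pair of elements other than (0, 0) may sum to the identity 0.
// `elements` lists all elements; `add` is the monoid operation.
function isZerosumfree(elements, add) {
  for (const a of elements) {
    for (const b of elements) {
      if (add(a, b) === 0 && (a !== 0 || b !== 0)) {
        return false; // nonzero elements summing to zero
      }
    }
  }
  return true;
}

// Z/4Z under addition mod 4 is NOT zerosumfree: 1 + 3 = 0.
const z4 = [0, 1, 2, 3];
const modAdd = (a, b) => (a + b) % 4;

// {0,...,9} under saturating addition min(a+b, 9) IS zerosumfree,
// since min(a+b, 9) = 0 forces a = b = 0.
const sat = [...Array(10).keys()];
const satAdd = (a, b) => Math.min(a + b, 9);
```

The second example also illustrates why the natural numbers themselves are zerosumfree: with no negative elements available, a sum can only reach 0 when both summands are 0.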
package wang.junqin.chaexpress.utils;

import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.graphics.drawable.Drawable;
import android.os.Handler;
import android.os.Looper;
import android.widget.Toast;

import com.bumptech.glide.Glide;
import com.bumptech.glide.RequestBuilder;

import wang.junqin.chaexpress.data.ComCodeIcoMap;

/**
 * Created by KN on 2017/5/29.
 */
public class MyUtils {

    // Note: holding a static Context can leak an Activity;
    // pass the application context to initMyUtils to avoid this.
    public static Context context;

    // Bind the handler to the main looper so toasts can be posted
    // safely from background threads.
    static Handler handler = new Handler(Looper.getMainLooper());

    public static void initMyUtils(Context context) {
        MyUtils.context = context;
    }

    public static void showToast(final String str) {
        handler.post(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(context, str, Toast.LENGTH_SHORT).show();
            }
        });
    }

    public static Intent shareInfoToFriends(String str) {
        Intent intent = new Intent();
        intent.setType("text/plain");
        intent.setAction(Intent.ACTION_SEND);
        intent.putExtra(Intent.EXTRA_SUBJECT, "share");
        intent.putExtra(Intent.EXTRA_TEXT, str);
        intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        return Intent.createChooser(intent, "分享物流信息"); // "Share shipment info"
    }

    public static RequestBuilder<Drawable> loadImage(String comCode) {
        int comIcoRes = ComCodeIcoMap.getComIcoByCode(comCode);
        return Glide.with(context).load(comIcoRes);
    }

    public static String getVersionInfo(Context context) throws Exception {
        PackageInfo info = context.getPackageManager()
                .getPackageInfo(context.getPackageName(), PackageManager.GET_CONFIGURATIONS);
        return info.versionName;
    }
}
Admit it… you've felt it. In the time between rolling out of bed and turning into a business ninja you lost your drive, your excitement, and your powers of productivity. You may say that you aren't awake yet because you haven't had your morning dose of caffeine or an interruption caused chaos to break out in your brain. But the truth is your day started by staring into the abyss that is your closet and you had to decide what to wear this morning. Your brain was challenged when you had to individually choose a top, a bottom, socks, shoes, and undergarments. Then you had to decide how to wear your hair, what makeup to wear, whether or not to shower, and in addition to figuring out what's clean, what's presentable, and what the heck is going on today. Throw in a little self-doubt, a few interruptions, your lack of caffeine, and your morning can start off bad before you even open your computer! The hard truth: Online Entrepreneurs are missing HUGE opportunities to take back their morning because they aren't taking their closets seriously. I've gathered 6 Simple Ways You Can Calm Your Closet, Be More Productive, and Save Your Energy for Your Business. The most successful people in the world wear the same type of outfit every day. Two of the most popular examples are the men who run the world – well one is the President of the United States and the other is the CEO of Facebook. I'd say pretty darn close to running the world! President Obama wears a blue or gray suit every day and Mark Zuckerberg wears a gray t-shirt and jeans. They don't have to decide anything in the morning when it comes to their attire. They simply pick the outfit in front of them and they go run their part of the universe. Make sure your outfit can go to more than one place with you. Note: there are a few exceptions like wedding or funeral attire. For example: I can wear my sweater dress with tights, leggings, or jeans. 
It can be worn for my workday, to church, running errands, speaking engagements, ladies events like bridal/baby showers, and date night. I can wear it anytime the weather is below 60°. My long sleeved black shirts or short sleeved plain t-shirts go great underneath it. The sweater dress goes with my tennis shoes, cute ankle boots, and my flats. I can wear most of my necklaces and earrings with it. This is how I would use my sweater dress in my everyday life. One piece of clothing gives me many options; it can be worn many places and in many situations. With your next load of laundry, think about each piece of clothing and see if you can pair a few outfits for each top and each bottom you put away. Can you hear me groaning as I type this? I hate putting away clothes; I despise putting them away nicely, and I HATE having to go on a scavenger hunt to put one stinking outfit together. You KNOW that looking for ONE sock can be frazzling, it can throw your day off, and it definitely makes it hard to keep your calm. Keeping up with laundry is by far the most effective way to have your closet looking great because having a "grab and go" closet is awesome! 3. Know where you are going today, or better yet where you're going this week. This sounds SO SIMPLE and it is. What's on your calendar for the next few days? Can you plan your outfits a few days in advance? Do you need a dress for an event on Saturday and speaking attire on Thursday? Be sure to dress for the occasion. Dressing up is good, but try not to be underdressed. When you're meeting someone, like a client, for the first time, whether one-on-one or at a conference, it's always good to wear something that makes you look like your best self. It could be a dress and flats, nice pants and a button up shirt, or even a skirt, sweater, and scarf combo. Be sure to finish your look off with makeup, simple jewelry, and a nice laptop/work bag.
For less formal meetings, dark wash jeans are great with sweaters, button up shirts, or even cute flowy blouses. Avoid tight fitting pants unless wearing a sweater the length of a dress (aka your shirt should cover your bum if you are wearing leggings). Above all BE COMFORTABLE! You don't want to fidget with bra straps, skirts that are too short, shoes that are unbearably uncomfortable, or pants that seem to ride up with each movement. Think about the next few events coming up and peruse your closet to see if you have enough outfits to cover those meetings. Being on camera can be a great way to expand your audience and it's a great way to share information with your tribe. You will want to lean toward colors that look best on you. You can also start with teals and blues near your face because they make everyone look great on camera, especially gem tone shades of teal and navy. Most TV stylists recommend not wearing red, white, or black as the color balance gets thrown off, so be aware of how these colors look on camera. Wear solid colors because prints can look like they are pulsing on screen, so avoid them to make sure your viewers don't get dizzy. Overall don't let the full picture be too busy with prints and colors; simplicity can be your friend. Be careful of the angle of the camera. It is better to have the camera slightly above you for the "head shots", and for full body shots, test out angles with your camera. Typically the full body videos look better if you are slightly angled instead of straight on. Be aware of low shirts and oversized scarves – you'll want to look your best! Style your hair away from your face because it can cast shadows. Regular makeup and simple jewelry are great choices because they will look good on camera and the jewelry shouldn't create extra noises that could compete with your voice. Be sure to SMILE and inject your personality into your clothing choices.
Personal style is a lot like branding for businesses. While website colors, fonts, and logos give the reader an idea of the business you created, your personal style gives the person you're speaking with a sense of who you are on the inside. But what if you won't see anyone today? You WILL see yourself in the mirror, and finding your personal style helps build confidence and it makes you feel at home in your own skin! How do you go about finding your personal style? What colors are you drawn to that make you feel happy? Wear those! Have a favorite style of clothing that makes you feel your best? Do you have a signature scarf or a wicked cat-eye? Is your overall vibe trendy-chic or vintage-bohemian? Pick unique elements that make you feel like yourself. Remember, you're not running a corporate law firm (kudos if you are!); you're running an online business that reflects your style, your personality, and your drive. The common denominator is YOU! If you find yourself going through your closet wanting to know more about finding your personal style, take a look at the signature block below. In my challenge "30 Days to Your Best Closet Ever" I share my best tips on personal style and guide you through getting your closet in tip-top shape. Because you are a Nora Conrad reader you will be getting actionable emails that can help you re-assess your wardrobe so you can create a calm closet and be more productive so you can rock in business. I'd love to tell you that creating a wardrobe you love will be easy. I'd love to tell you that there are shortcuts. I'd love to tell you that getting dressed in the morning will be effortless from here on out. But I won't, because we're being honest here, right? Being more productive in the morning requires finding the clothes that work for you. It requires keeping up with laundry, and having a plan for the upcoming day.
Having a great wardrobe is the same as having a great anything else… it takes a bit of work and focused effort to make it happen. If you're willing to put in the time, then you will benefit from a calmer closet and you can look forward to more productive mornings. It will be tough, but it's worth it. So what are you waiting for? You have work to do! Your closet is calling. Hi! I'm Laura Gutknecht (goot-nick). I am a pony-tail wearing, jeans loving girl who knows a thing or two about fashion. I started Style Pep Talks as a way to help women find their spark, their confidence, and their personal sense of style. I'm looking for the DIY girl who can take a good look at herself in the mirror, take my knowledge, and rock it in her wardrobe. Join my free e-course "30 Days to Your Best Closet Ever"
Planning to spend some time overseas this summer studying, working or traveling? Congratulations! You're about to have the experience of a lifetime. Since you're the type who'll cross an ocean in search of adventure, you should know that there are tried-and-true ways to ensure you find it. Read on for tips to help you deep-dive into the country and culture around you, and make the very most of your time abroad. While there are major benefits to not over-scheduling your time, it's smart to do some targeted research beforehand. After traveling this far, it'd be a shame to miss something you're really interested in simply because it's closed or, worse yet, you didn't even know it existed. No one will begrudge you a stop at that famous tourist restaurant, or even the Golden Arches for homesickness' sake. But generally speaking, for the most authentic experience, find out where the locals like to dine. Also, remember to ask your server to point out the popular dishes. Of course, there's a ton to see. But consider the benefits of lingering and more deeply experiencing a place, versus checking off a list of sights, cities or countries that you merely whizzed through. The stuff of daily life is likely somewhat different here than back at home, so browse grocery store aisles and try out the local toothpaste, lip balm, snacks and more. You can learn a lot about a culture from its most popular conveniences, and maybe even find interesting and inexpensive presents for loved ones back home. Whether you're ambling to the store or sightseeing, alone or with a friend or two, it's a great way to really soak everything in. Once you get your bearings, try different routes—keeping in mind that getting lost isn't always such a bad thing. You've heard it before, but it bears repeating: Attempt to speak the native tongue, even if "everyone" speaks English. Your effort is generally appreciated. Plus, the more conversational phrases you pick up, the more confident you become. 
Although it's important to capture all you're experiencing through photos and to connect with the folks back home, make a point to tuck your phone safely out of reach from time to time. This will give you a chance to look around, interact with your surroundings, and simply be. Your host family, other kids in your program, the locals you come into contact with daily—all are people you could learn from. Don't be too shy to strike up conversations. Ask questions, and listen to the answers. The interesting people you meet generally wind up on your list of trip highlights. Take advantage of every opportunity to step outside your comfort zone (while still paying attention to safety and your gut). Go to the festival. Accept that invitation to dinner. Try that strange-looking food or avant-garde exhibit. It's always better to give something a shot, even if it ends up not being for you, rather than spending a second thinking "I wish I would've" once you're back home. Need a place for your things while you're off adventuring abroad? Contact EZ Storage for student special options to house your belongings until you return.
Q: How to build a string to dynamically access a .resx resource I am wondering how I can dynamically build a string to reference a string in my Resources.resx file? I basically want to do the equivalent of the following in XAML: This is so I can get the resource titled ToyBlockName or ToyBallName, which should come from the resx so it can be translated if necessary. I then hope to plug those individual strings into formats of other strings. There are many strings that use these names, so it would be best if I could replace the single words rather than having a version of each string for each kind of toy I have. Some example strings are The {0} comes in a box., The {0} costs only a few dollars. Essentially trying to do this in XAML: String.Format(Properties.Resources.Default["ToyComesInBox"], Properties.Resources.Default["Toy" + Constants.ToyName + "Name"]); (Where ToyName = "Block" or "Ball", etc.) Is there a way to accomplish this in XAML, or is there some other method I am not thinking of? A: I don't think it's possible to do this with XAML only, but we're doing it with converter/s. It can become quite messy; however, if you design it better than I did, it has even more potential, and it makes the code better IMO.
public class LocalizationConverter : IValueConverter
{
    public object Convert(
        object value,
        Type targetType,
        object parameter,
        string language)
    {
        string valueString = value as string;
        var paramString = parameter as string;

        //so here you have the value of your binding in the value string
        //and if it's empty (because you don't want to use the bound value)
        //you're setting the value string to be the param string
        if (string.IsNullOrWhiteSpace(valueString))
        {
            valueString = paramString;
        }

        //if value string (formerly your param string) is empty just return,
        //there is no value to be found
        if (string.IsNullOrWhiteSpace(valueString))
        {
            return null;
        }

        //now the fun starts :) I pass values with a small command and
        //separator symbol to be able to parse my parameters
        //something like this:
        //if (paramString.StartsWith("AddAfter|"))
        //{
        //    var valToAppend = paramString.Substring("AddAfter|".Length);
        //    return Strings.Get(Strings.Get(valToAppend + valueString));
        //}
        //and this basically does -> append the given parameter string after
        //the part with the command to the value that comes from the binding
        //and then uses the resulting string from res dictionary
        //So you could do something like passing "ToyType|Block" or you can
        //pass something in the value like Block and then in the parameters
        //have description ToyType or even pass not string object and get
        //what you want from it like
        //if (value is ToyType)
        //{
        //    return Strings.Get((value as ToyType).Name);
        //}

        //Your parsing logic HERE

        //This is how we get strings from resources in our project
        //that you already know how to do
        return Strings.Get(valueString);
    }

    public object ConvertBack(
        object value,
        Type targetType,
        object parameter,
        string language)
    {
        throw new NotImplementedException();
    }
}

Usage

Define in resources (page or global):

<converters:LocalizationConverter x:Key="Loc" />

Use in XAML. Value only from parameters (string in this case):

<TextBlock Text="{Binding Converter={StaticResource Loc},
                          ConverterParameter=ToyBlockName}" />

Or value only from bound variable (could be any kind of object):

<TextBlock Text="{Binding ToyBlockNameBoundValue, Converter={StaticResource Loc}}" />

Or value from bound variable + complex parameter that could be parsed:

<TextBlock Text="{Binding SomeBoundValue, Converter={StaticResource Loc}, ConverterParameter=SomeMoreComplex|Parameter}" />
Q: Hiding a Panel Grid row with javascript Since I'm not able to define a known count variable for a row id inside a <h:panelGrid> table, is there any way to hide a <tr> via javascript? I need to do something like document.getElementById("rowId") on an onclick button attribute, but there's one button for each row. And each button has to know its row id.

A: Of course, if you just want to hide a tr:

html:

<button onclick="hideRow(this)">

js:

function hideRow(element){
    element.parentNode.parentNode.style.display = 'none';
}

(This assumes the button is a direct child of a td, so parentNode.parentNode walks button → td → tr.)

A: You can use rowClasses like this:

<h:panelGrid rowClasses="row1 row2 row3 row4...">
    <h:outputText value="row1" />
    <h:outputText value="row2" />
    <h:outputText value="row3" />
    <h:outputText value="row4" />
</h:panelGrid>

It will render like this:

<table>
    <tr class="row1"><td>row1</td></tr>
    <tr class="row2"><td>row2</td></tr>
    <tr class="row3"><td>row3</td></tr>
    <tr class="row4"><td>row4</td></tr>
</table>

Now you will be able to access them easily with jQuery, for example $("table tr.row1").hide();, and do whatever you want.

OR

You can also access by nth like this: $("table tr:nth-child(4)").hide();

A: html:

<button class="button-class">

jQuery (bind a click handler, so the row is hidden when its button is clicked rather than on page load):

$(document).ready(function(){
    $('.button-class').click(function(){
        $(this).closest('tr').hide();
    });
});
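As a framework-free variant of the same idea, the sketch below walks up parentNode until it finds the enclosing row, so each button needs no row id and the nesting depth (button inside a span inside a td, etc.) no longer matters. The stub objects in the usage note stand in for DOM nodes purely for illustration:

```javascript
// Hide the closest ancestor <tr> of the given element.
// Works no matter how deeply the button is nested inside the row.
function hideClosestRow(element) {
  let node = element;
  while (node && node.tagName !== "TR") {
    node = node.parentNode; // walk upward until we hit the row
  }
  if (node) {
    node.style.display = "none"; // hide the whole row
  }
  return node; // the hidden row, or null if no <tr> ancestor exists
}

// Wire it up with an inline handler: <button onclick="hideClosestRow(this)">
```

In a real page `element.closest('tr')` does the same walk in one call, but the manual loop also works in very old browsers and shows exactly what `closest` does.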
From: https://www.tug.org/pipermail/texhax/2013-June/020338.html
Sam Brown s_d_j_brown at hotmail.com
Tue Jun 4 00:40:42 CEST 2013

Dear all texhaxers

I want to write a document that displays all the fonts available on my system. To do this I've enlisted a batch script that returns all .tfm files in my texmf tree, which is then processed with R to create a LaTeX file with every font available. Unfortunately, a couple of errors get in the way of successful compilation. The first is that a number of .tfm files produce the error: "Font xxxx not loadable: Bad metric (TFM) file". The second is that some files that pass this first test throw a second error when compiled by themselves: "mktexpk: don't know how to create bitmap font for xxxx".

What I would like is some sort of condition that tests whether the font can be loaded and created. If it can, the test text is written in the font. If it can't, a message saying so is printed to the document. Any advice towards this end would be much appreciated.

A minimal working example follows. The code should run properly as is. Uncomment the first line (fcsstt20) to get the first error, and the second (gbklisl25) to get the second.

Thank you very much!

Samuel Brown
Bio-Protection Research Centre
PO Box 84
Lincoln University
Lincoln 7647
Canterbury
New Zealand
sam.brown at lincoln.ac.nz
http://www.the-praise-of-insects.blogspot.com

%---------------------------------------------------------------------------------------------
% Minimal working example

\documentclass{article}

\def\defaultfont{\font\supertinyfont = cmss10 at 10pt \relax \supertinyfont}

\newcommand{\fontTest}[1]{
\def\tempfont{\font\supertinyfont = #1 at 10pt \relax \supertinyfont}

\defaultfont
#1

\tempfont
The quick brown fox jumps over the lazy dog
1234567890
HOLOTYPE PARATYPE
\\\\
}

\setlength{\parindent}{0pt}

\begin{document}

\fontTest{favri8v}
\fontTest{zavmr7y}

% \fontTest{fcsstt20}
% \fontTest{gbklisl25}

\end{document}
Josh McDermitt (Phoenix, Arizona, June 4, 1978) is an American actor, best known for his role as Eugene Porter in the series The Walking Dead. Career McDermitt began his career in show business by chance, repeatedly calling in to a local Phoenix radio show, Tim & Willy, to make prank phone calls as different characters. He later went to work as a radio producer alongside the show's creators, moving on to major stations such as KNIX-FM and KMLE. In 2006, he appeared as a contestant on the comedy talent show Last Comic Standing, where he reached the semifinals, before moving into acting proper with the 2009 television film Rehab for Rejects. One of his notable roles was as Brandon in the TV Land sitcom Retired at 35, which aired for two seasons in 2011 and 2012 before being cancelled. In October 2013, the producers of the AMC television series The Walking Dead announced McDermitt's addition to the cast for the fourth season, in the role of Eugene Porter. His character was promoted to a series regular in the fifth season.
'Cordial' ties with Centre yet to pay off From NEET waiver to release of Central funds, the State's wish list remains just that The AIADMK government under Chief Minister Edappadi K. Palaniswami has always insisted that it is maintaining "cordial" ties with the BJP government at the Centre only in the interest of the State. However, several pleas by the State government for financial aid and new projects have yet to elicit a positive response from the Centre. In the last one year, on several occasions, two or more State Ministers were in the national capital on the same day to call on various Union Ministers. Despite their numerous visits, funds for various government schemes were either not granted or were sanctioned only partially. On the other hand, the State government has been implementing various schemes that the Centre is very keen on. Exemption for Tamil Nadu from NEET and finalisation of a location for the All India Institute of Medical Sciences (AIIMS) are some of the other major pleas pending before the Centre. Governor Banwarilal Purohit too, while assuming office in October last year, said he would use his "influence in Delhi" to ensure that the State got more funds and more developmental work was taken up. However, it was evident from the speech of Deputy Chief Minister O. Panneerselvam in the pre-budget meeting in Delhi that the State was still awaiting funds for jointly implemented schemes. 'An ongoing process' When The Hindu asked why the 'cordial' ties with the Centre haven't helped the State much, Fisheries Minister D. Jayakumar defended the State government, contending that it had been taking up issues with the Centre every now and then, and had indeed been sanctioned projects and funds. "Seeking projects and funds from the Centre is an ongoing process. We have been smart enough to get what is good for the State," he maintained.
\section{Introduction} The most popular model for the generation of primordial density fluctuations is the inflationary scenario \citep{guth 1981, linde 1982, albrecht steinhardt, linde 1983}. This model assumes primordial density perturbations of Gaussian random phase, and it has been shown that such initial conditions produce a sponge-like topology on large scales \citep[1987]{gott 1986}. At such scales, where the power spectrum has not been transformed by nonlinear growth, the topology of structure in the early universe is well preserved, and small deviations from random phase predictions give important information about primordial non-gaussianity, biased galaxy formation, and non-linear clustering \citep{matsubara 1994,park 2005a, park 2012, park gott 1991,park 1998}. The genus statistic is central to these studies, and is now a well-tested quantitative measure \citep{gott 1986,gott 1987,hamilton 1986,gott 1989, vogeley 1994, hikage 2002, hikage 2003, choi 2010, park 2005a, park 2005b}, having been applied to both the SDSS LRG sample \citep{gott 2009, strauss 2002, SDSS} and the CMB \citep{park 1998}. Its utility lies in the existence of the ``genus curve'', an analytical expression for the genus as a function of density threshold, which allows comparison of the observed topology with that expected from a standard big bang inflationary model \citep{hamilton 1986}. So far, fitting the Gaussian random phase (hereafter GRP) genus curve to mock surveys in a $\Lambda$CDM cosmology has been remarkably successful. The genus has now been suggested as a cosmic standard ruler \citep{park & kim 2010} and a means for probing dark energy \citep{park & kim 2010, zunckel,slepian 2013}.
The Baryon Acoustic Oscillation (BAO) feature, detectable in the power spectrum and the galaxy two-point correlation function, is the established ``standard ruler'' \citep{Anderson}, with a reported fractional uncertainty in angular diameter distance to $z=0.6$ of 1.1\% expected for the SDSS survey when completed. Now, with the introduction of ever larger galaxy samples, such as the CMASS Data Release 10 sample of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), topology is becoming another attractive technique for probing the expansion of the universe and constraining the equation of state of dark energy. We apply the genus to 108 LRG mock surveys, derived from the Horizon Run 3 $N$-body simulations \citep{horizonrun}, in order to ascertain the statistical accuracy of said ``topological distance measure''. \section{The Genus Statistic} \citet{gott 1986} presented the genus as a reliable description of topology. Traditionally, the genus comes from the Gauss-Bonnet theorem, which states that the integral of the Gaussian curvature $K=1/({r_1r_2})$ (where $r_1$ and $r_2$ are the principal radii) over a compact two-dimensional surface is given by \begin{equation} \int KdA = 4\pi(1-G_b). \end{equation} We use a slightly altered form of the Gauss-Bonnet genus, $G=G_b-1$, so that it has a more intuitive meaning for cosmology \begin{equation} G = (\rm{\# of\ doughnut\ holes})-(\rm{\# of\ isolated\ regions}). \end{equation} See \citet{park 2013} for the relation to the Euler characteristic and the Betti numbers. With this definition, the genus of a sphere is $G=-1$; a toroid, $G=0$; three isolated spheres, $G=-3$; a figure-8 pretzel, $G=1$ (two holes, one isolated body). Essentially, the genus is a measure of connectivity. A highly connected structure -- such as a sponge -- will have many holes, a single body, and therefore a large, positive genus.
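The sphere, toroid, and figure-8 examples above can be checked numerically. The following Python sketch (an illustration, not the Contour 3D code used later in this paper) computes the Euler characteristic $\chi = V - E + F - C$ of the cubical complex built from the occupied voxels of a 3-D boolean mask; for a solid body without internal cavities, the genus as defined here is $G = -\chi$.

```python
import numpy as np
from itertools import product

def euler_characteristic(mask):
    """chi = V - E + F - C for the cubical complex of a 3-D boolean voxel mask.
    Shared vertices, edges, and faces are counted once via canonical set keys."""
    verts, edges, faces = set(), set(), set()
    n_cubes = 0
    for i, j, k in map(tuple, np.argwhere(mask)):
        n_cubes += 1
        # 8 vertices of the unit cube [i,i+1]x[j,j+1]x[k,k+1]
        for a, b, d in product((0, 1), repeat=3):
            verts.add((i + a, j + b, k + d))
        # 12 edges, keyed by their ordered endpoint pair
        for b, d in product((0, 1), repeat=2):
            edges.add(((i, j + b, k + d), (i + 1, j + b, k + d)))
            edges.add(((i + b, j, k + d), (i + b, j + 1, k + d)))
            edges.add(((i + b, j + d, k), (i + b, j + d, k + 1)))
        # 6 faces, keyed by their 4-vertex sets
        for a in (0, 1):
            faces.add(frozenset((i + a, j + b, k + d) for b, d in product((0, 1), repeat=2)))
            faces.add(frozenset((i + b, j + a, k + d) for b, d in product((0, 1), repeat=2)))
            faces.add(frozenset((i + b, j + d, k + a) for b, d in product((0, 1), repeat=2)))
    return len(verts) - len(edges) + len(faces) - n_cubes

def genus(mask):
    """G = (# doughnut holes) - (# isolated regions) = -chi for cavity-free bodies."""
    return -euler_characteristic(mask)

x, y, z = np.mgrid[-13:14, -13:14, -13:14]
ball = x**2 + y**2 + z**2 <= 6**2                      # solid sphere: G = -1
torus = (np.sqrt(x**2 + y**2) - 8)**2 + z**2 <= 3**2   # solid torus:  G = 0
```

Running `genus` on these masks reproduces the values quoted above: the digitized ball gives $G=-1$, the digitized torus $G=0$, and two disjoint balls give $G=-2$.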
A sparse array of objects -- a meatball topology \citep{soneira peebles,press} -- will have many isolated regions, relatively few holes, and therefore a negative genus. An array of isolated voids will also produce a negative genus. To calculate the genus we smooth the Horizon Run 3 Physically Self-Bound subhalo distribution \cite{horizonrun} with a Gaussian smoothing ball of radius $\lambda$ (Eq. \ref{smoothingball}). We picked the most massive physically self-bound subhalos to match the number density of LRG galaxies projected for the SDSS-III survey when completed. The Horizon Run 3 is a cold dark matter simulation. We make the simple assumption that the most luminous red galaxies will form in the centers of the most massive cold dark matter halos. In the simulation, the most massive subhalos ($>$ 30 CDM particles) are identified that are gravitationally self-bound and not tidally disruptable by larger structures -- these we associate with LRGs. We then create iso-density contour surfaces of the smoothed density distribution, labeling them by $\nu$, which is related to the volume fraction $f$ on the high density side of the contour by \begin{equation} f = \frac{1}{\sqrt{2\pi}}\int_\nu^{\infty}e^{-x^2/2}dx, \end{equation} where $x$ is the density contrast in units of its standard deviation. The value $f=50\%$ corresponds to the median volume fraction contour ($\nu=0$). For GRP initial conditions the genus curve is \begin{equation}\label{genuscurve} g_{rf}(\nu)=A(1-\nu^2)e^{-\nu^2/2}, \end{equation} where the amplitude $A=(1/2\pi^2) (\left<k^2\right>/3)^{3/2}$, and $\left<k^2 \right>$ is the average value of the squared wave vector $k^2$ in the smoothed power spectrum \citep{gott 1986}, related to the slope of the two-point correlation function. The shape of a genus curve, and its deviation from the random phase prediction, can be parametrized by several variables. First, there is the $\chi^2$ best-fit amplitude, which is measured by fitting the GRP curve (Eq. \ref{genuscurve}) to the observed curve.
This gives information about the power spectrum and phase correlation of the density fluctuations. Secondly, there are three variables which characterize deviations from a GRF \citep{park 1992}: \begin{eqnarray} \Delta \nu &=& \frac{\int_{-1}^{1}g(\nu)\nu d\nu}{\int_{-1}^{1}g_{\rm{rf}}(\nu) d\nu},\\ A_V &=& \frac{\int_{-2.2}^{-1.2}g(\nu)d\nu}{\int_{-2.2}^{-1.2}g_{\rm{rf}}(\nu) d\nu}, \\ A_C &=& \frac{\int_{1.2}^{2.2}g(\nu)d\nu}{\int_{1.2}^{2.2}g_{\rm{rf}}(\nu) d\nu}, \end{eqnarray} where $g_{\rm{rf}}$ is the best-fit random phase curve (Eq. \ref{genuscurve}). $\Delta \nu$ measures any shift in the central part of the genus curve. The GRP curve has $\Delta \nu = 0$. A negative value of $\Delta \nu$ is called a meatball shift, caused by a greater prominence of isolated high-density structures, pushing the genus curve to the left. $A_V$ and $A_C$ measure the relative number of voids and clusters with respect to GRP expectations. \section{The $N$-body Simulations} The Horizon Runs, provided by the Korea Institute for Advanced Study (KIAS), provide some of the best raw material for calibrating the topological study of LRG surveys \citep{park 2005a,horizonrun}. These $N$-body simulations replicate the topology of the SDSS LRGs exquisitely \citep{gott 2009,lrg sample}. We use the Horizon Run 3 (HR3) dataset exclusively, which adopts a pressureless cold dark matter cosmology with a pure cosmological constant $w_\Lambda=-1$. The basic HR3 cosmological parameters were fixed by the WMAP5 data \citep{spergel 2003, komatsu 2011, hinshaw 2013} and the initial linear power spectrum was calculated with the CAMB source code \citep{CAMB}. The entire simulation is a cube of 374 billion particles, spanning a volume of $(10.815 {~ h^{-1}} {\rm{Gpc}})^3$.\footnote{For comparison, this volume is 8800 times larger than the Millennium Run \citep{millenium}.} The initial redshift was $z=27$ and $N_{\rm step}=600$ discrete timesteps were taken.
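The GRP genus curve and the deviation statistics defined above are straightforward to evaluate numerically. The following Python sketch (numpy assumed; the normalization in each denominator is taken as $\int g_{\rm rf}(\nu)\,d\nu$ over the same interval) computes $\Delta\nu$, $A_V$, and $A_C$ for an observed genus curve supplied as a callable:

```python
import numpy as np

def genus_rf(nu, A=1.0):
    """Gaussian-random-phase genus curve: g_rf(nu) = A (1 - nu^2) exp(-nu^2 / 2)."""
    nu = np.asarray(nu, dtype=float)
    return A * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

def _integrate(f, a, b, n=4001):
    """Simple trapezoidal quadrature of f over [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def deviation_stats(g_obs, A_fit):
    """Shift (Delta nu) and void/cluster abundance (A_V, A_C) statistics of an
    observed genus curve g_obs(nu), relative to the best-fit GRP curve."""
    g_fit = lambda nu: genus_rf(nu, A_fit)
    delta_nu = _integrate(lambda v: g_obs(v) * v, -1.0, 1.0) / _integrate(g_fit, -1.0, 1.0)
    A_V = _integrate(g_obs, -2.2, -1.2) / _integrate(g_fit, -2.2, -1.2)
    A_C = _integrate(g_obs, 1.2, 2.2) / _integrate(g_fit, 1.2, 2.2)
    return delta_nu, A_V, A_C
```

For a perfectly Gaussian field, `deviation_stats(genus_rf, 1.0)` returns $(0, 1, 1)$; a genus curve pushed to the left (a meatball shift) yields $\Delta\nu < 0$.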
\subsection{Mock LRG Survey Construction} The selection of cold dark matter halos uses the Friends-of-Friends algorithm, with a linking length of 20\% of the mean particle separation. To improve cluster identification, HR3 searches for Physically Self-Bound (PSB) subhalos that are gravitationally self-bound and not tidally disruptable \citep{kim and park 2006}. This provides a substantial increase in the similarity between simulation and observational data, as these dark matter subhalos are the sites of LRG formation. To simulate the SDSS survey dimensions, HR3 places 27 observers evenly within its cubical volume and allows each observer to see out to a redshift of $z < 0.7$. This creates 27 independent, non-overlapping spherical regions. The co-moving positions and velocities of all CDM particles are saved as they cross their past light cone, and PSB subhalos are identified from these data. In preparation for the SDSS-III LRG catalogue, it was assumed that a volume-limited sample would yield a constant number density of $3 \times 10^{-4} (h^{-1}{\rm Mpc})^{-3}$. In order to match this prediction, the minimum mass limit of the PSB subhalos was varied with redshift, and the absolute minimum was set to $9.75 \times 10^{12}~{h^{-1}}{\rm M_{\odot}}$. Given these parameters, the physical properties of the HR3 mock surveys agree very well with the most recent LRG surveys \citep{choi 2010,gott 2009, gott 2008}. \begin{figure}[tpb] \epsscale{1.0} \plotone{f01.eps} \caption{A spherical Horizon Run 3 mock survey out to redshift $z=0.7$. The PSB subhalo counts have been smoothed with a Gaussian smoothing ball of $\lambda=34~h^{-1} \rm{Mpc}$.
See \ref{3Dappendix} for 3D plots of the Horizon Run 3 data.}\label{HR3mock} \end{figure} \section{Methods} \subsection{Smoothing and Discretization} We smooth the 27 past-lightcone PSB subhalo distributions with a Gaussian smoothing ball \begin{equation}\label{smoothingball} W(\vec{r}) = \frac{1}{(2\pi)^{3/2}\lambda^3}e^{-\frac{r^2}{2\lambda^2}}, \end{equation} smearing structure on scales smaller than $\lambda$. The mock survey data are placed into a three-dimensional pixel grid of density values, and we choose $\lambda$ to always be greater than $2.5$ pixel sidelengths $s$. For cold dark matter models, smoothing with a Gaussian recovers the topology of the initial density field, provided that the smoothing length $\lambda$ is sufficiently greater than the correlation length $R_0$ and non-linear effects are avoided\footnote{$R_0$ is approximately $5 ~h^{-1}\rm{Mpc}$ for LRGs}. \subsection{Conversion and Trimming} With these smoothed mock surveys in hand, we convert from co-moving spherical coordinates to redshift coordinates, using a comoving line-of-sight distance formula \citep{hogg}. PSB subhalo peculiar velocities are converted into redshift distortions by \begin{equation} \Delta z = \frac{v_{\rm{r}}}{c} = \frac{\hat{r}\cdot \vec{v}_{\rm{pec}}}{c}, \end{equation} where $v_r$ is the radial velocity, $\hat{r}$ is the unit radial vector, and $\vec{v}_{\rm pec}$ is the cartesian peculiar velocity of the subhalo. After redshift converting and correcting, we save PSB subhalo counts within a grid of dimensions $650^3$, with cubical pixel volume $s^3=(6 {~ h^{-1}}{\rm{Mpc}})^3$. The entire grid spans a volume of $(1950 {~ h^{-1}}{\rm{Mpc}})^3$. We then apply an angular mask, splitting the 27 perfectly spherical mock surveys into four quadrants, each of $\pi$ steradians and radius $z=0.6$, to approximate the area of sky coverage and depth of the SDSS-III survey.
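The smoothing and contouring steps above can be made concrete with a short Python sketch (numpy and scipy assumed; the grid size and the toy Poisson counts are illustrative stand-ins, not the HR3 values). Structure below $\lambda$ is smeared by a Gaussian kernel whose width is specified in pixel units, and the $\nu=0$ contour encloses the densest half of the volume:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

s = 6.0      # pixel side length in Mpc/h, as in the 650^3 grid described above
lam = 21.0   # Gaussian smoothing length lambda in Mpc/h (> 2.5 pixels)

# Toy stand-in for the gridded PSB subhalo counts
rng = np.random.default_rng(42)
counts = rng.poisson(0.065, size=(64, 64, 64)).astype(float)

# Smooth with the Gaussian ball defined above; sigma is given in pixel units
density = gaussian_filter(counts, sigma=lam / s, mode="wrap")

# Iso-density excursion set at high-density volume fraction f;
# f = 0.5 is the median-volume contour (nu = 0)
f = 0.5
threshold = np.quantile(density, 1.0 - f)
excursion = density >= threshold
```

The boolean `excursion` field is the kind of input a genus-measuring routine (such as the polygonal scheme discussed below) consumes; by construction roughly a fraction $f$ of the voxels lie on the high-density side.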
With these $4\times 27=108$ smoothed mock surveys in hand, we calculate the genus using a polygonal approximation scheme developed by \citet{weinberg1988,hamilton 1986} called ``Contour 3D'', which adds up angle deficits at pixel vertices. \section{Using Topology as a Standard Ruler} \label{protocol} One application of quantitative topology to the SDSS LRG sample -- beyond testing the Gaussianity of the initial density fluctuations -- is measuring cosmological parameters, such as those governing the expansion history of the universe. This can be done by measuring the genus statistic within a fixed volume at different redshifts. In the case of $N$-body simulations, one knows the correct cosmological model and therefore the correct transformation $r \to z$. One smooths the density field with a known smoothing length $\lambda$ and then measures the median-density genus within a volume $V$. This yields $g=G/V$, the genus per unit volume, which one can use to indirectly measure any physical volume by counting structures. In order to state the smoothing length dependence more explicitly, the dimensionless quantity $g \lambda^3$ is often used, which is simply the genus per cubic smoothing length. This quantity can be analytically calculated from a full set of cosmological parameters and a linear power spectrum. Such a function $g \lambda^3 (\lambda)$ has been examined closely for the WMAP3 and WMAP5 parameters (see Fig. 1 of \citealt{park & kim 2010}, Fig. 1 of \citealt{zunckel}, drawn by Y.R. Kim, and Fig. \ref{glambda3} here). In practice, we do not know the true cosmological model. Let us illustrate the effects of applying an incorrect cosmological model to a survey sample. If we underestimate the expansion rate of the universe $H_0$, then our conversion from redshift to comoving space will put celestial objects too far from the Earth. This causes an overestimation of the survey volume.
For a homogeneous and isotropic survey, the genus is linearly proportional to volume, and therefore an overestimation of $V$ will drive the genus at a given smoothing length up ($G(\lambda) \uparrow$). At the same time, however, we have also adopted a co-moving smoothing length $\lambda$ that is larger than intended. This will change the actual scale of study and erase all structure beneath that scale by convolution, decreasing the genus ($G(\lambda) \downarrow$). Luckily, the net effect is detectable, since the amplitude $G$ of the genus curve effectively measures the slope of the power spectrum at the scale $\lambda$, which is not scale invariant \citep{park & kim 2010}. Our procedure for measuring the angular diameter co-moving distance to $z=0.6$ is straightforward. We assume a flat $\Lambda$CDM cosmological model. $\Omega_m$, $h$, and $\Omega_\Lambda$ come from CMB fits with $l > 210$, which are insensitive to $w_\Lambda$ because dark energy has negligible influence at recombination. These values are used to construct the power spectrum and, from that, $g\lambda^3(\lambda)$ (see Fig. \ref{glambda3}). Now we measure $g\lambda^3$ and get a value; we look at our analytical plot -- Fig. \ref{glambda3} -- and find the true value of $\lambda$, which we call $\lambda_{\rm{true}}$. If this is 1\% smaller than the initial value of $\lambda$ that one used, it means that the co-moving distance out to $z=0.6$ is also 1\% smaller than previously thought. In this way one can measure the co-moving distance out to $z=0.6$. With this as one data point, one can fit a cosmological model, leaving $w_\Lambda$ as a parameter \citep{park & kim 2010}. If the initial cosmological model is slightly wrong ({\it i.e.} $w_\Lambda$ may not be exactly $-1$, or may vary with time; \citealt{slepian 2013}), this is inconsequential, because we are just measuring the topology -- counting the total number of structures inside $z=0.6$.
If the radial co-moving distance inside this volume is proportionately in error, it will make no difference, as that will just distort shapes and structures slightly without altering their count \citep[see][]{zunckel}. An rms cosmic variance $\sigma_g$ in the total genus out to $z=0.6$ in a survey sample will cause a fractional rms error of ${\sigma_g}/{g}$ in $g\lambda^3$; and given the slope of the curve, $(g\lambda^3)'$, at the applied $\lambda$, this will introduce an rms error in $\lambda$, and therefore in the co-moving distance at $z=0.6$, of: \begin{equation} (g\lambda^3)'\frac{\sigma_\lambda}{\lambda} = \frac{\sigma_{g}}{g}. \end{equation} \begin{figure}[tpb] \centering \plotone{f02.eps} \caption{Genus per cubic smoothing length $g\lambda^3$ for the WMAP5 parameters ($\Omega_m=0.26$, $H_0=74$), assuming a flat $\Lambda$CDM cosmology \citep[taken from Fig. 1 of][calculated by Young Rae Kim]{zunckel}.} \label{glambda3} \end{figure} \subsection{Uncertainties in such a ruler} We examine the statistical variance of the genus per unit volume $g$ in the Horizon Run 3 mock surveys, which is far from an ``ideal'' measurement. An ``ideal'' measurement of $g$ would examine the initial density field in comoving space. If the initial conditions were of GRP, one would expect excellent agreement between the observed genus curve and the theoretical GRP curve; however, even at this level, finite sample size introduces an error, because there is no power at scales larger than the simulation box size. The next best measurement of $g$ would examine the final conditions of the entire $N$-body simulation in comoving space, which erases a portion of the cosmic variance associated with small survey size, but is subject to the effects of non-linear gravitational infall and galaxy formation bias. An unavoidable source of error, ``ideal'' or otherwise, is finite pixel resolution, which applies a smoothing scale to the data and destroys structure smaller than the pixel size $s$.
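The inversion and error propagation described above are simple to sketch in Python (numpy assumed). The tabulated $g\lambda^3(\lambda)$ curve below is a hypothetical, monotonically rising stand-in; in reality it would be computed from the linear power spectrum. The error relation is read here with the logarithmic slope $d\ln(g\lambda^3)/d\ln\lambda$, which is one dimensionally consistent interpretation of $(g\lambda^3)'$:

```python
import numpy as np

# Hypothetical tabulated standard-ruler curve g*lambda^3(lambda);
# lambda in Mpc/h, g*lambda^3 dimensionless.
lam_grid = np.linspace(10.0, 40.0, 301)
glam3_grid = 1.0e-3 * (3.0 + 0.10 * lam_grid)

def lambda_true(glam3_measured):
    """Invert g*lambda^3(lambda) by linear interpolation.
    (np.interp requires the tabulated curve to be monotonically increasing.)"""
    return float(np.interp(glam3_measured, glam3_grid, lam_grid))

def frac_distance_error(lam, frac_genus_error):
    """sigma_lambda / lambda = (sigma_g / g) / (d ln g*lambda^3 / d ln lambda)."""
    dglam3 = np.gradient(glam3_grid, lam_grid)
    log_slope = float(np.interp(lam, lam_grid, dglam3)) * lam / float(np.interp(lam, lam_grid, glam3_grid))
    return frac_genus_error / log_slope
```

For instance, if $g\lambda^3$ is measured to 1\% at $\lambda = 20~h^{-1}$Mpc, where this toy curve has logarithmic slope 0.4, the inferred fractional distance error is 2.5\%.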
Observation of $g$ in comoving space has obvious advantages over observation in redshift space, since one has complete knowledge of all PSB subhalo positions and velocities. It has been found that the redshift correction for peculiar velocity presents the worst source of error for the $\chi^2$ best-fit amplitude of the genus curve \citep{choi 2010}. The application of peculiar-velocity redshift corrections is in essence a smoothing routine of its own, in that real-space structures are radially smeared due to ``fingers of god'' effects. This effectively raises the observed smoothing parameter $\lambda$ slightly and yields a lower $\chi^2$ best-fit genus amplitude. The choice of survey volume, specifically the volume-to-surface-area ratio, also creates error because of data being ``smoothed out'' of the survey region. The complicated boundaries of the SDSS present a cause for concern; particularly the three thin stripes along the southern Galactic cap, which are ignored altogether during genus analysis. An SDSS measurement of $g$ uses a finite, redshift-space sample, where the aforementioned sources of error apply: the cosmic variance associated with small survey size; non-linear clustering; boundary effects; and redshift-space distortion. The situation sounds daunting, but because of its size, the Horizon Run 3 provides an ensemble of tests. We split the 27 HR3 spherical mock surveys into four quadrants, thereby acquiring 108 ``genus experiments'' for a chosen smoothing length $\lambda$. \citet{gott 2009} have reported the genus amplitude of the SDSS LRGs to within 5\% accuracy. Based upon our results (see Table \ref{statresults}), we believe that this fractional uncertainty can be reduced to about 1\%. \section{Results} We measured the genus per cubic smoothing length for $\lambda=15$, 21, and 34 $~h^{-1} \rm{Mpc}$, studying the random and systematic error over 108 HR3 mock surveys.
For $\lambda=15 ~ h^{-1}{\rm{Mpc}}$, the fractional uncertainty in genus per cubic smoothing length was less than one percent, which translates to a fractional uncertainty in smoothing length -- and angular diameter distance -- of approximately $2.1$\% (Table \ref{statresults}). Treating the variance at $\lambda=15$, 21, and 34 $h^{-1}{\rm{Mpc}}$ as statistically independent -- since HR3 adopts a random phase model and the smoothing volumes are significantly different -- we combine the three smoothing-length rms errors by inverse-variance weighting, \begin{equation} \frac{1}{\sigma_{\rm{eq}}^2} = \frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}+\frac{1}{\sigma_3^2}, \end{equation} yielding a $1.69$\% fractional uncertainty in smoothing length and angular diameter distance out to $z=0.6$. Combining only the $21$ and $34~h^{-1}$Mpc samples, we get a $2.97$\% fractional uncertainty in smoothing length. \begin{table*}[htb] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|l|l|l|l|}\hline & $\lambda=15~{\rm{Mpc}}/h$ & $\lambda=21~{ \rm{Mpc}}/h$ & $\lambda=34 ~{\rm{Mpc}}/h$ \\ \hline $g \lambda^3\times 10^3$ & 4.762 & 5.403 & 6.271 \\ $\sigma_{g \lambda^3}\times 10^3$ & 0.04380 & 0.06732 & 0.1358 \\ $\frac{\sigma_{g \lambda^3}}{g \lambda^3}$ & 0.919\% & 1.245\% & 2.166\% \\ $\lambda_{\rm{t}}$ & 15.448 & 20.823 & 32.993\\ $\frac{\lambda_{\rm{t}}-\lambda}{\lambda}$ & 2.99\% & -0.84\% & -2.96\% \\ $\frac{\sigma_{\lambda}}{\lambda}$ & 2.096\% & 3.215\% & 6.742\%\\ \hline \end{tabular} \end{center} \caption{$g\lambda^3$ is the averaged $\chi^2$ best-fit genus per cubic smoothing length over all 108 mock surveys, multiplied by $10^3$ for readability. $\lambda_{\rm{t}}$ is the corresponding ``true'' smoothing length for the observed genus per cubic smoothing length, as discussed in Section \ref{protocol}. $\sigma_{g\lambda^3}$ and $\sigma_\lambda$ represent the rms uncertainty in the genus per cubic smoothing length and in the smoothing length.
$({\lambda_t-\lambda})/{\lambda}$ is the fractional, systematic error in smoothing length.} \label{statresults} \end{table*} \begin{figure}[tpb] \centering \plotone{f03_1.eps} \plotone{f03_2.eps} \caption{Above: genus per cubic smoothing length $g\lambda^3$ for the WMAP5 parameters, with $\chi^2$ best-fit data points and $1\sigma$ error bars. Below: the ensemble averaged genus curves $G(\nu)$ for $\lambda=15$, 21, and 34 $h^{-1} {\rm{Mpc}}$ } \label{glambda3withdata} \end{figure} With 108 samples in hand, our fractional ``uncertainty of the uncertainty'' is $1/\sqrt{2(N-1)}=6.8\%$. It is notable that the systematic effect for the $21 ~ h^{-1} $Mpc sample was very small, $-0.84$\%, and that the $\chi^2$ best-fit genus amplitudes modeled the $g \lambda^3$ curve extraordinarily well (Fig. \ref{glambda3withdata}). \section{Discussion} With these results in hand, it is important to continue refining topological study of the SDSS LRG sample with $N$-body simulations. Extremely large cubes like HR3 allow for tight description of the cosmic variance in genus per unit volume $g$ and smoothing length $\lambda$. This statistical knowledge translates directly to the measurement of cosmological parameters such as $w$. A possible extension of this work is to more accurately model the SDSS survey with 108 less ``ideal'' masks. Another possible extension is to measure the variance in genus per cubic smoothing length $g\lambda^3$ for a large number of $\lambda$'s, perhaps iterating from $15~h^{-1}{\rm Mpc}$ to $34~ h^{-1}{\rm Mpc}$ in small increments $\Delta \lambda < 0.2~h^{-1}{\rm Mpc}$. Smooth plots of $\sigma_{g\lambda^3}$ and $\lambda - \lambda_{\rm true}$ as a function of $\lambda$ could yield useful information about the evolution of random and systematic error with scale. \acknowledgments We thank the Princeton Department of Astrophysical Sciences, Princeton NJ, where this work was completed. 
We thank the support of the Supercomputing Center/Korea Institute of Science and Technology Information with supercomputing resources, including technical support (KSC–2011–G2–02) for Horizon Run 3. We also thank Korea Institute for Advanced Study for providing computing resources (KIAS Center for Advanced Computation Linux Cluster System).
I have to admit I really like the crackled bumpy top...they look just like brownies!!! I am so glad you ventured into the whoopie pie making because these look out of this world good!!! Great recipe! I really like this post, I'll maybe give it a try! Best regards! These are so cute and looks so delicious! These are so cute! I saw something similar in a box mix at the store and couldn't bear to pay the price for it. This looks just as yummy at a fraction of the cost!! These look really yummy! Going to have to try my hand at them!! Thanks!! These are so cute and delicious,great Halloween treat! Yum! These look amazing. I'd love for you to join my weekly link party (and share this yummy recipe) at www.michellestastycreations.blogspot.com - Every Thursday - Monday. Have a great week. Stopping by from the Marvelous Mondays link up - love your post! I'd love for you to add this to the Pumpkin Patch Hop, too! These look so fantastic! I love the combination of chocolate + pumpkin.. plus, your whoopie pies alone look like brownies (mmm). Wow, these sound amazing. I am going to have to make these soon. I hope you will stop by and link this post up to my Wickedly Creative Halloween Ideas Party if you haven't already. Thank you so much! Yum! They came out perfect! I'll have to print this recipe to try out this weekend! Thank you. These are stunning! I loooove chocolate and pumpkin.
The meeting was attended by a total of 18 participants from 12 EUSDR member states, as well as interested stakeholders (Europol, SELEC). The following topics were discussed: the outcome of the first call under the Seed Money Facility; a review of the 6th Annual Forum of the EUSDR; the cooperation between PA 1a and PA 11; and the joint operation following the DARIF projects. A project idea from Baden-Württemberg on outlaw motorcycle groups in the Danube region was presented. The main activities of the Bulgarian Presidency of the EUSDR were also presented, including the 7th Annual Forum to be held in October 2018.
Red Rock Steakhouse & Saloon 101 S. Lentz St. Red Rock TX 78662 MTW 11am - 10pm Th-F 11am - 12am Saturday 11am - 1am Sunday 8am - 10pm About the Steakhouse Our own steakhouse building was the original General Store in neighboring Elroy, TX. We carefully moved it and restored it. With the opening of this Steakhouse and Saloon, long-ago traditions from the founding of the village are being brought back to life, along with the old-world charm of the Bed and Breakfast and the quaint shops nearby. About Red Rock Red Rock, TX was originally established in June of 1870 when the Post Office was built, with settlers there as early as 1850, and it had a population of over 150 by 1884. This area is now referred to as Old Red Rock because in 1890 a railroad depot was built a little over one mile away; the citizens, not wanting to miss the boom of rail profits, moved their places of business, and the town thrived, growing to a population of over 500. During the Great Depression, however, means were scarce and the town began to fall to ruin, with some of the buildings collapsing never to be rebuilt. This town has nonetheless managed to survive over time and is now being reborn. A historic home is currently being refurbished into an antiques store, the Post Office and original General Store still stand, and new historic buildings from nearby towns have been brought in. General Email - info@redrocksteakhouse.com Music Booking - music@redrocksteakhouse.com Events Booking (for private events) - events@redrocksteakhouse.com
The Herbert screw is a specialized implant used in medicine for the osteosynthetic fixation of the scaphoid bone in the treatment of a scaphoid fracture or a scaphoid nonunion (pseudarthrosis). It was developed by Timothy James Herbert together with W. E. Fisher at the end of the 1970s. Principle: The Herbert screw consists of a shaft with a short thread at each end (a double-threaded screw), both of slightly larger diameter than the shaft; it has no head, so that it can more easily be buried completely in the bone. The screw is hollow ("cannulated") and fitted with a hexagonal socket on its distal side, so that it can be driven in over a guide wire (Kirschner wire) with a special tool. To generate an axial tension force on the fragments, the thread pitch of the proximal thread is smaller than that of the distal one, so that the distal fragment is mechanically drawn toward the proximal one as the screw is driven in (lag screw). The result is a compressive force across the fracture gap (interfragmentary compression) that promotes healing of the fracture. Use: During the operation, the fractured scaphoid is first reduced and, if necessary, fixed; the screw is then inserted. In the case of a nonunion, a supplementary bone graft is usually also required to allow stable healing. The Herbert screw can, but in most cases need not, be removed again later. References: Therapeutic procedure in orthopedics and trauma surgery
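The differential thread pitch described above determines the interfragmentary compression in a directly computable way: each full turn of the screw closes the fracture gap by the difference between the two pitches. A small sketch of that arithmetic, with purely illustrative pitch values (neither the numbers nor the function name come from the article):

```python
def compression_per_turn_mm(leading_pitch_mm: float, trailing_pitch_mm: float) -> float:
    """Axial approach of the two bone fragments per full revolution of a
    headless double-threaded lag screw.

    Each revolution advances the leading thread by its pitch and the
    trailing thread by its (smaller) pitch, so the gap between the
    fragments closes by the pitch difference.
    """
    if leading_pitch_mm <= trailing_pitch_mm:
        raise ValueError("no compression: leading pitch must exceed trailing pitch")
    return leading_pitch_mm - trailing_pitch_mm


# Illustrative values only (not a real screw specification):
# a 1.25 mm leading / 1.00 mm trailing pitch closes 0.25 mm per turn,
# so ten full turns take up a 2.5 mm fracture gap.
```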
<?php

namespace tests\unit\TomPHP\ContainerConfigurator;

use PHPUnit_Framework_TestCase;
use TomPHP\ContainerConfigurator\InflectorConfig;
use TomPHP\ContainerConfigurator\InflectorDefinition;

final class InflectorConfigTest extends PHPUnit_Framework_TestCase
{
    public function testItMapsTheConfigArrayToInflectorDefinitions()
    {
        // Given a raw config array keyed by interface name,
        // InflectorConfig should expose it as InflectorDefinition objects.
        $interface = 'example_interface';
        $methods   = ['method1' => ['arg1', 'arg2']];

        $subject = new InflectorConfig([$interface => $methods]);

        assertEquals(
            [new InflectorDefinition($interface, $methods)],
            iterator_to_array($subject)
        );
    }
}
2019 Stock Watch – OLB Olasunkanmi Adeniyi – Stock Up Now that training camp is underway, and the roster for the offseason is close to finalized—though always fluid—it's time to take stock of where the Pittsburgh Steelers stand. Specifically, where Steelers players stand individually based on what we have seen happen over the course of the past few months. A stock evaluation can take a couple of different approaches, and I'll try to make my reasoning clear. In some cases it will be based on longer-term trends, such as an accumulation of offseason activity. In other instances it will be a direct response to something that just happened. So we may see a player more than once over the course of the summer as we move forward. Player: OLB Olasunkanmi Adeniyi Stock Value: Up He practiced yesterday, at least in some capacity. That's about as good an outcome as one could have expected to hear on the Monday afternoon prior to the Steelers' regular season opener. Adeniyi suffered a knee injury that resulted in surgery, likely to trim his meniscus, an operation that has sidelined him since he went down in the first preseason game. There was a belief that the injury could cause him to miss the first couple of games of the regular season. Some speculation even existed that he might be a candidate to be placed on the Reserve/Injured List for the second consecutive season, with an eye toward him returning in the second half of the year. That he was able to practice in some capacity yesterday obviously renders that a highly unlikely proposition at best, and puts him in play to possibly be available for the Steelers this weekend, in which case he would serve as the number four outside linebacker and would be asked to participate on special teams, a role he did not have in his arsenal a year ago. 
Many felt that Adeniyi would be primed for a breakout season, and his performance in the early stages of training camp was very encouraging, but he will be working to get on a moving train at this point, having missed most of August. He was virtually a non-participant during his rookie season, having only been promoted to the 53-man roster from the Reserve/Injured List for the final four games, and he only dressed for one, playing a handful of snaps while subbing for Bud Dupree, who was nursing an injury at the time. The other two backups in addition to Adeniyi this season are fifth-year veteran Anthony Chickillo, who was the number three last season and will remain so to start this year, and rookie undrafted free agent Tuzar Skipper, who earned his spot through a great preseason performance.
How Journalists Have Spiked NATO's Secrecy Guns Essay by Aidan White, IFJ General Secretary OVER the past three months a fierce battle has been fought within the European Union as military chiefs on both sides of the Atlantic try to stem the movement towards greater open government. Next year European Union leaders face a deadline set by the Treaty of Amsterdam in 1997 to put in place a procedure and policy to guarantee citizens' rights of access to documents of the European Parliament, the Council of Ministers and the Commission. But the co-decision process to agree a new code strengthening people's right to know is in chaos. There have been allegations of skullduggery, threats of court action are in the air, and the range of proposals now before the Parliament reflects a failure to reach any sensible consensus on how to break the culture of secrecy that still rules in Brussels. The security chiefs of Europe (and NATO) have woken up to the fact that freedom of information is on the European Union agenda and, belatedly, they have plunged into the debate with an uncompromising approach that threatens to halt the march towards open government and may even signal a retreat from an openness policy first agreed seven years ago. But NATO's attempts to shut the door on the people's right to know, while increasingly desperate and comic, are still likely to fail. The security establishment began their campaign with a "summertime coup" on 14 August, while parliaments and journalists were on holiday, when the Council of Ministers unilaterally amended its own rules of procedure to deny access to certain documents under a new system of classification. For good measure they also excluded access to any category of other documents that might allow someone to deduce that a classified document exists. 
This approach not only torpedoes the traditions of a number of Member States, it undermines the core principle of freedom of information and makes a mockery of efforts to agree, by May 2001, a new procedure which is meant to "enshrine" the citizen's right of access to documents under Article 255 of the Amsterdam Treaty. The arrogance of the Council, led by Foreign Policy Chief and former NATO Secretary General Javier Solana, is touched with farce given the response to a request by Statewatch, which asked for the papers upon which the decision was taken. They were told that access to a document "could fuel public discussion". Another request for documents, by the European Citizens Advice Service, received a blanket refusal, even though the papers concerned were already in the public domain. But the reality is that NATO's actions are almost certain to founder, following the action taken by journalists in Sweden a few years ago, who demonstrated that national laws guaranteeing access to documents take precedence over the charmed circle of privileged access to information in Brussels. In May 1995 the Journalists Union of Sweden challenged the Council of Ministers over access to Council documents relating to Europol activities. At that time the Swedish union asked for 20 documents from the Council and, under Swedish law, asked for the same documents from the Swedish Government. The Council handed over just two documents, but in Sweden some 18 documents were released in line with the country's long-standing legal commitment to make access the rule of government rather than the exception. The Swedish union mounted a legal challenge to the Council's refusal and won its case at the Court of First Instance in Luxembourg. 
In its judgement of June 17th 1998 the Court set out the important principles: first, that according to the 1993 European Union code, access to documents must be the rule; second, any restrictions on access must be narrowly interpreted; third, every document should be examined on its own merits when deciding if it should be released; fourth, a document may be refused only where its release would cause real harm to the interests concerned. All of these principles are, under NATO's guiding hand, being challenged by the European Union Council of Ministers. Meanwhile, in the United States security chiefs have put before the Senate a proposal to enact an "official secrets act" that would make it a criminal offence to leak classified information to the press. Although Congress has struck down such proposals in the past as unconstitutional, the latest effort, like the action by the Council of Ministers, has been taken without any public debate or review of the proposal. Any security service worthy of the name knows that secrecy rules within the European Union are constantly under threat from ambush at national level. As the Swedish case proves, national legal traditions can subvert codes drawn up in Brussels. The benchmark for openness in Europe is not what Brussels can enforce, but the limits of transparency as defined by those countries with the highest levels of access to documents. The Council of Ministers, and NATO, will have to recognise, sooner or later, that there are different traditions at work here and, in line with the Amsterdam Treaty commitments, it only makes sense to harmonise openness rules up to the levels of access that operate at the highest national level. The alternative will be to attack the openness rules that apply in a number of national states - the Netherlands and the Nordic countries, in particular. 
That may happen, but if it does, journalists - like those in Sweden, or John Carvel at The Guardian or Steve Peers at Statewatch, who have also challenged secrecy in Europe - will be among the first to take to the barricades.
Time Travel Tuesday #timetravel a look back at the Adafruit, maker, science, technology and engineering world 1908 – American chemist and Women's Army Corps officer Myrtle Bachelder is born. During World War II, Bachelder enlisted in the Women's Army Corps (WAC) in November 1942, at the Springfield, Massachusetts headquarters. After spending time in training at military bases in several U.S. states, she received orders assigning her to the Company 'D' WAC Detachment of the Manhattan District, United States Army Corps of Engineers. Her secret assignment was to lead a group of 15 to 20 women from the WAC, stationed in Des Moines, Iowa, to Fort Sill, Oklahoma, and from there to Santa Fe, New Mexico. She and the women under her command arrived at Los Alamos, New Mexico on October 21, 1943. "Manhattan" was the code name for the special military division dedicated to developing an atomic weapon. In the clandestine laboratory at the remote Los Alamos desert site, Bachelder was responsible for the spectroscopic analysis of uranium isotopes. Since the uranium-235 isotope is fissile whereas the uranium-238 isotope is not, Bachelder's role in the project was a crucial one: ensuring the purity of the fissile material, and therefore the nuclear explosion, of the world's first atomic bombs. 1930 – After confirming its finding with multiple photographs, news of the discovery of Pluto is telegraphed to the Harvard College Observatory. Pluto was discovered by Clyde Tombaugh in 1930 and was originally considered to be the ninth planet from the Sun. After 1992, its status as a planet was questioned following the discovery of several objects of similar size in the Kuiper belt. In 2005, Eris, a dwarf planet in the scattered disc which is 27% more massive than Pluto, was discovered. This led the International Astronomical Union (IAU) to define the term "planet" formally in 2006, during their 26th General Assembly. 
That definition excluded Pluto and reclassified it as a dwarf planet. 1969 – After testing the Lunar Module, Apollo 9 returns to Earth. After launching on March 3, 1969, the crewmen performed the first manned flight of a LM, the first docking and extraction of a LM, two spacewalks (EVA), and the second docking of two manned spacecraft—two months after the Soviets performed a spacewalk crew transfer between Soyuz 4 and Soyuz 5. The mission proved the LM worthy of manned spaceflight. Further tests on the Apollo 10 mission would prepare the LM for its ultimate goal, landing on the Moon. They returned to Earth on March 13, 1969. 2016 – Hilary Putnam, American philosopher, mathematician, and computer scientist, dies. Putnam has contributed to scientific fields not directly related to his work in philosophy. As a mathematician, Putnam contributed to the resolution of Hilbert's tenth problem in mathematics. This problem was settled by Yuri Matiyasevich in 1970, with a proof that relied heavily on previous research by Putnam, Julia Robinson and Martin Davis. In computability theory, Putnam investigated the structure of the ramified analytical hierarchy, its connection with the constructible hierarchy and its Turing degrees. […] In computer science, Putnam is known for the Davis–Putnam algorithm for the Boolean satisfiability problem (SAT), developed with Martin Davis in 1960. The algorithm finds if there is a set of true or false values that satisfies a given Boolean expression so that the entire expression becomes true. In 1962, they further refined the algorithm with the help of George Logemann and Donald W. Loveland. It became known as the DPLL algorithm. This algorithm is efficient and still forms the basis of most complete SAT solvers. 2016 – Adafruit surpasses 100K Followers on Twitter! …And now, two years later, we're still going strong. Follow Adafruit on Twitter here! 
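A minimal sketch of the DPLL idea in Python can make the procedure concrete. The clause encoding (signed integers, positive for a variable and negative for its negation) and the function name are illustrative choices, not from any particular solver:

```python
def dpll(clauses, assignment=None):
    """Decide satisfiability of a CNF formula.

    `clauses` is a list of clauses, each a list of nonzero ints:
    a positive int is a variable, a negative int its negation.
    Returns a satisfying assignment dict {var: bool} or None.
    """
    if assignment is None:
        assignment = {}
    # Simplify under the current assignment: drop satisfied clauses,
    # remove literals that are already false.
    simplified = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            var, val = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == val:
                    satisfied = True
                    break
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:
            return None  # empty clause: contradiction on this branch
        simplified.append(new_clause)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces its assignment.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # Branch on the first unassigned variable, trying both truth values.
    var = abs(simplified[0][0])
    for val in (True, False):
        result = dpll(simplified, {**assignment, var: val})
        if result is not None:
            return result
    return None
```

Modern complete solvers add pure-literal elimination, branching heuristics, and clause learning on top of this skeleton, but the recursive split-and-propagate core is the same.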
Filed under: history, science, STEM, time travel, Women In STEM — by Kelly
# Revision history

### Why can't I use the sagesilent environment in AtBeginDocument?

I'm trying to write a LaTeX sty file to change the default behavior of sagetex.

My file is

    \NeedsTeXFormat{LaTeX2e}[1994/06/01]
    \ProvidesPackage{examplepackage}[2018/12/24 examplepackage]

    \RequirePackage{sagetex}
    \RequirePackage{etoolbox}

    \AtBeginDocument{%
    \begin{sagesilent}
    A = 3
    \end{sagesilent}
    }

    \endinput

However, attempting to use this sty file in the following LaTeX file produces the error `! File ended while scanning use of \next.`

    \documentclass[12pt]{article}

    \usepackage{stupid}
    \usepackage{examplepackage}

    \begin{document}
    hello world!
    \end{document}

What's wrong with my sty file?
Breaking: Tibet burns with another self-immolation, toll reaches 118 Wednesday, 29 May 2013 14:18, Phayul.com Tenzin Sherab in an undated photo. DHARAMSHALA, May 29: In reports just coming in, a Tibetan man set himself on fire in the Adril region of eastern Tibet, protesting China's occupation and hard-line policies in Tibet. Tenzin Sherab, 31, carried out his self-immolation protest on May 27. He succumbed to his injuries at the site of his fiery protest. According to Jampa Younten, a monk living in south India, Tenzin Sherab's family members and friends came to know about his self-immolation protest only after he had passed away. "Soon after the protest, Chinese security personnel from Chumar arrived at the site and confiscated Tenzin Sherab's body," Younten said. "However, the next day, on May 28, his body was handed over to his family members." In the days preceding his self-immolation protest, Tenzin Sherab had spoken to his friends about the evil policies of the Chinese government and expressed his concern about Tibetan religion and culture reaching a point of annihilation. "We can no longer bear to live under China's constant torture and repression," Tenzin Sherab had told his friends. Preparations are afoot for his cremation, the same source added. Tenzin Sherab is the son of Dhondup and Choemey and the eldest among five siblings. Since 2009, as many as 118 Tibetans living under China's rule have set themselves on fire demanding freedom and the return of His Holiness the Dalai Lama from exile. 
The Chinese government has responded with even harsher policies, criminalising the self-immolation protests and sentencing scores of people to heavy prison terms on charges of "intentional homicide" for their alleged roles in self-immolation protests. Chinese officials have barred Tibetans from offering prayers and showing solidarity with families of self-immolators and announced the cancellation of development funds to those villages where self-immolations have taken place. Source: Phayul.com
\section{Introduction} Let $V$ be a smooth Fano variety with very ample anticanonical divisor. Consider the following \begin{prob} Classify all toric Fano varieties with at most Gorenstein singularities to which $V$ degenerates in its anticanonical embedding. \end{prob} \noindent This problem has recently become of interest due to considerations coming from mirror symmetry, which we discuss at the end of this section. The main result of the present paper is that we completely solve the problem for Fano threefolds of degree $d\leq 12$. We approach this problem via the study of Hilbert schemes. If $V$ is a smooth Fano threefold with very ample anticanonical divisor, then the Hilbert polynomial of $V$ in its anticanonical embedding is determined solely by its degree $d$. We denote the Hilbert scheme parametrizing subvarieties of $\mathbb{P}(|-K_V|)$ with this Hilbert polynomial by $\mathcal{H}_d$. The variety $V$ together with the anticanonical embedding corresponds to a point $[V]\in \mathcal{H}_d$, and this point lies on a single irreducible component, cf. \cite{mm:cla}. By studying the irreducible components of $\mathcal{H}_d$ on which such points $[V]$ lie, we find an answer to our problem. Indeed, if $V$ is general and $X$ is any threefold such that $[X]$ lies on the same irreducible component of $\mathcal{H}_d$ as $[V]$, then $V$ has a degeneration to $X$ in its anticanonical embedding. Apart from its applicability in classifying toric degenerations, our study of $\mathcal{H}_d$ provides some new examples of non-trivial Hilbert schemes. Let us quickly recall some facts about smooth Fano threefolds. Irreducible families of such varieties have been completely classified, see \cite{iskovskih:78a} and \cite{mori:81a}. Each family is distinguished by the degree $d$, the second and third Betti numbers $b_2$ and $b_3$, and the Lefschetz discriminant of any threefold in the family. For threefolds of degree less than $30$, the first three invariants suffice. 
The degree must always be even, and can range from $2$ to $64$. We shall refer to the families and general elements of the families interchangeably. Using this classification, we do some calculations to determine exactly which Fano threefolds of degree $d\leq 12$ have very ample anticanonical divisor, see Section \ref{sec:veryample}. The results are recorded in Table \ref{table:fanos}. If $-K_V$ is very ample, we record how many global sections the corresponding normal sheaf has, which we calculate using our Proposition \ref{prop:h0N}. This is just the dimension of the corresponding component of $\mathcal{H}_d$, to which we also give a name in the table. In addition to the $9$ components of Hilbert schemes corresponding to smooth Fano threefolds of degree less than or equal to twelve, we will encounter three non-smoothing components shown in Table \ref{table:gorfanos}. General elements of these components are certain Gorenstein trigonal Fano threefolds, which were classified in \cite{cheltsov:05a}. We give a more precise description of these components in Section \ref{sec:exotic}. \begin{table} \begin{center} \begin{tabular}{|c| c| c| c| c| c|c| } \hline Name & Degree & $b_2$ & $b_3/2$ & $-K_V$ very ample? 
& $h^0(\mathcal{N})$ & Component of $\mathcal{H}_d$\\ \hline $V_2$ & $2$ & $1$ & $52$ & No & N/A & N/A\\ $V_4$ & $4$ & $1$ & $30$ & Yes & 69 & $B_{69}^4$\\ $V_4'$ & $4$ & $2$ & $22$ & No & N/A& N/A\\ $V_6$ & $6$ & $1$ & $20$ & Yes & 69 & $B_{69}^6$\\ $V_6'$ & $6$ & $2$ & $20$ & No & N/A& N/A\\ $V_8$ & $8$ & $1$ & $14$ & Yes & 75 & $B_{75}$\\ $V_8'$ & $8$ & $2$ & $11$ & No & N/A & N/A\\ $V_{10}$ & $10$ & $1$ & $10$ & Yes & 85 & $B_{85}$\\ $V_{10}'$ & $10$ & $2$ & $10$ & Yes & 84 & $B_{84}$\\ $V_{12}$ & $12$ & $1$ & $5$ & Yes & 98 & $B_{98}$\\ $V_{12,2,6}$ & $12$ & $2$ & $6$ & Yes & 96 & $B_{96}$\\ $V_{12,2,9}$ & $12$ & $2$ & $9$ & Yes & 99 & $B_{99}$\\ $V_{12,3}$ & $12$ & $3$ & $8$ & Yes & 97 & $B_{97}$\\ \hline \end{tabular} \end{center}\caption{Smooth Fano Threefolds of Low Degree}\label{table:fanos} \end{table} \begin{table} \begin{center} \begin{tabular}{|c| c| c| c| } \hline Name & Degree & $h^0(\mathcal{N})$ & Component of $\mathcal{H}_d$\\ \hline $T_3$ & $10$ & $88$ & $B_{88}^\dagger$\\ $T_9$ & $10$ & $84$ &$B_{84}^\dagger$\\ $T_{25}$ & $12$& $99$ &$B_{99}^\dagger$\\ \hline \end{tabular} \end{center}\caption{Select Trigonal Fano Threefolds}\label{table:gorfanos} \end{table} Let $\mathfrak{tor}_d$ be the set of all toric Fano threefolds with at most Gorenstein singularities; these have been classified by Kreuzer and Skarke, see \cite{grdb} and \cite{kreuzer}. Our main result is then contained in Proposition \ref{prop:remain0} and Theorems \ref{thm:re10} and \ref{thm:re12}, which we can summarize as \begin{thm}\label{mainthm:0} For $d=4,6,8$ and any $X\in\mathfrak{tor}_d$, $[X]$ is a smooth point of $\mathcal{H}_d$ on the same irreducible component as $[V_d]$. For $d=10$ or $d=12$ and any $X\in\mathfrak{tor}_d$, $[X]$ is a point on exactly the irreducible components of $\mathcal{H}_d$ recorded in Tables \ref{table:toricten} and \ref{table:torictwelve}, where we refer to $X$ by its number in \cite{grdb}. 
\end{thm} \begin{table} \begin{center} {\small\torictentable} \end{center} \caption{Degree ten toric Fano varieties and the scheme $\mathcal{H}_{10}$.}\label{table:toricten} \end{table} \begin{table} \begin{center} {\small\torictwelvetable} \end{center} \caption{Degree twelve toric Fano varieties and the scheme $\mathcal{H}_{12}$.}\label{table:torictwelve} \end{table} Our proof of this Theorem relies on the efficient use of deformation-theoretic calculations. First of all, for each $d$, we identify one or more ``nice'' Stanley-Reisner schemes to which almost every $X\in\mathfrak{tor}_d$ degenerates. Finding such degenerations is not difficult, since they can be constructed from unimodular regular triangulations of the moment polytopes of $X$. We then use obstruction calculus to locally study $\mathcal{H}_d$ at the points corresponding to our Stanley-Reisner schemes. This allows us to bridge the gap between $X$ and any smooth Fano threefold of degree $d$. To deal with those $X\in\mathfrak{tor}_d$ which do not degenerate to our nice Stanley-Reisner schemes, we utilize a more general strategy which we describe in Section \ref{sec:def2}. Where possible, we use the format of \emph{rolling factors}, which allows for the easy description of certain deformations; see Section \ref{sec:rolling}. We believe that these techniques will yield success in classifying degenerations of smooth Fano threefolds of higher degrees as well, although the increase in embedding dimension will lead to increased computational difficulties. We use a number of computer programs to carry out our calculations: {\small\verb+Macaulay2+} \cite{M2}, {\small\verb+Versal Deformations+} \cite{VD}, {\small\verb+TOPCOM+} \cite{TOPCOM}, and {\small\verb+4ti2+} \cite{4ti2}. Supplementary material containing many of the computer calculations is available online \cite{supp}. 
Toric degenerations are connected to mirror symmetry through the ansatz of \emph{extremal Laurent polynomials}, see \cite{przyjalkowski:09a}. The quantum cohomology of a smooth Fano variety $V$ of dimension $n$ is conjecturally related to the Picard-Fuchs operator of a pencil $f\colon Y\to \mathbb{C}$ called a (weak) Landau--Ginzburg model for $V$. The extremal Laurent polynomial ansatz conjectures that one should be able to take $Y=(\mathbb{C}^*)^n$, that is, $f$ is a Laurent polynomial. Furthermore, denoting the Newton polytope of $f$ by $\Delta_f$, it is expected that if $f$ gives a Landau--Ginzburg model for $V$, then $V$ degenerates to the toric variety whose moment polytope is dual to $\Delta_f$. Conversely, for any Fano toric variety $X$ with mild singularities smoothing to $V$, one expects to be able to find a Landau--Ginzburg model for $V$ in the form of a Laurent polynomial $f$ with $\Delta_f$ dual to the moment polytope of $X$. In \cite{przyjalkowski:09a}, V.~Przyjalkowski showed that for every smooth Fano threefold $V$ of Picard rank one, there is in fact a Laurent polynomial giving a weak Landau--Ginzburg model for $V$. Furthermore, in \cite{ilten:11b}, Przyjalkowski, J.~Lewis, and the second author of the present paper showed that these Laurent polynomials are related to toric degenerations in the above sense. In an ongoing project, T.~Coates, A.~Corti, S.~Galkin, V.~Golyshev, and A.~Kasprzyk are working on extending Przyjalkowski's result to Fano threefolds of higher Picard rank, as well as higher dimensions \cite{fanosearch}. It should thus be informative to try and match up our toric degenerations with the extremal Laurent polynomials they have found. Motivated by similar considerations, S.~Galkin classified all degenerations of smooth Fano threefolds to Fano toric varieties with at most \emph{terminal} Gorenstein singularities in \cite{galkin:07a}. This situation is however significantly different from the present one. 
Indeed, any Fano threefold with at most terminal Gorenstein singularities has a unique smoothing. This is no longer true if we relax the condition that the singularities be terminal; smoothings need not exist, and if they do, need not be unique. The remainder of this paper is organized as follows. Section \ref{sec:def} contains some background on deformation theory, which we will need for our calculations. In Section \ref{sec:comp} we discuss the components of $\mathcal{H}_d$ corresponding to families of smooth Fano threefolds for $d\leq 12$, as well as some non-smoothing components. In Section \ref{sec:sr} we introduce Stanley-Reisner schemes, and discuss degenerations of our smooth Fano threefolds to special Stanley-Reisner schemes. Here one also finds our Proposition \ref{prop:remain0}, which takes care of the cases $d=4,6,8$ in Theorem \ref{mainthm:0}. The case $d=10$ is covered in Theorem \ref{thm:re10} in Section \ref{sec:degten}. Section \ref{sec:bipyramid} contains local Hilbert scheme calculations at a special point of $\mathcal{H}_{12}$. Finally, Section \ref{sec:toric} contains our Theorem \ref{thm:re12}, which takes care of the $d=12$ case of Theorem \ref{mainthm:0}. \\ \noindent\emph{Acknowledgements.} We thank Sergei Galkin for helpful comments. \section{Deformation Theory Methods}\label{sec:def} \subsection{Comparison Theorems and Forgetful Maps}\label{sec:def1} Let $S = k[x_0,\dots, x_n]$, $A = S/I$ be a graded ring and $X = \Proj A \subseteq \mathbb{P}^n$. We may consider two deformation functors for $X$: the deformation functor $\Def_X$ of isomorphism classes of deformations of $X$ as a scheme, and the local Hilbert functor $H_X$ of embedded deformations of $X$ in $\mathbb{P}^n$, see \cite{sernesi:06a} for details. The former has a formal semiuniversal element and the latter a formal universal element. There is a natural forgetful map $H_X \to \Def_X$. 
Let $T^1_X$ and $T^2_X$ be the tangent space and obstruction space for $\Def_X$ and $T^i_{X/\mathbb{P}^n}$, $i=1,2$, the same for $H_X$. Assume now that \emph{$A$ is Cohen-Macaulay of Krull dimension $4$} which is the case for all schemes in this paper. We may use the comparison theorems of Kleppe to relate the $T^i_{X/\mathbb{P}^n}$ and $T^i_{X}$ to the degree 0 part of cotangent modules of the algebra $A$. This has a large computational benefit. For $H_X$, \cite[Theorem 3.6]{kleppe:79a} applied to our situation yields \begin{gather*} {\Hom_A(I/I^2, A)}_0 = {(T^1_{A/S})}_0 \simeq T^1_{X/\mathbb{P}^n} = H^0(X,\mathcal{N}_{X/\mathbb{P}^n}) \\ {(T^2_{A})}_0 \simeq {(T^2_{A/S})}_0 \simeq T^2_{X/\mathbb{P}^n} \, . \end{gather*} For $\Def_X$, \cite[Theorem 3.9]{kleppe:79a} applied to our situation yields \begin{gather*} {(T^1_{A})}_0 \simeq T^1_{X} \\ 0 \to {(T^2_{A})}_0 \to T^2_{X} \to H^3(X, \mathcal{O}_X) \end{gather*} where the latter sequence is exact. The cohomology $ H^3(X, \mathcal{O}_X)$ does not appear in the statement in \cite{kleppe:79a}, but a careful reading of the proof shows the existence of the sequence. Thus if $H^3(X, \mathcal{O}_X) = 0$, as will be the case for the Fano schemes in this paper, also $ {(T^2_{A})}_0 \simeq T^2_{X}$. The Zariski-Jacobi sequence for $k \to S \to A$ reads $$\dots \to T^1_{A/S} \to T^1_A \to T^1_S(A) \to T^2_{A/S} \to T^2_A \to T^2_S(A) \to \cdots$$ and since $S$ is regular $T^i_S(A) = 0$ for $i \ge 1$. This gives the above written isomorphisms of obstruction spaces $T^2_{A/S} \simeq T^2_A$ but also a surjection $T^1_{A/S} \to T^1_A$. By the above this means the forgetful map $H_X \to \Def_X$ is surjective on tangent spaces and injective on obstruction spaces, so it is smooth. The outcome of all this is that we may do versal deformation and local Hilbert scheme computations using the vector spaces $(T^1_{A})_0$, $(T^1_{A/S})_0$ and $(T^2_{A})_0$. 
Moreover, by smoothness of the forgetful map, the equations for the Hilbert scheme locally at $X$, in particular the component structure, will be obstruction equations for $\Def_X$ which involve far fewer parameters. \subsection{Rolling Factors and Deformations}\label{sec:rolling} When studying deformations of a scheme $X$, it is often useful to have a systematic way of writing down the equations of $X$. For subvarieties of rational normal scrolls, this is found in the method of \emph{rolling factors} introduced by Duncan Dicks, see e.g. \cite{reid:1989a} and \cite[Section 1]{stevens:01a}. Since many of the Fano varieties we consider are subvarieties of scrolls, we summarize this method in the following. Let $d_0\geq d_1\geq \ldots\geq d_k$ be non-negative integers and $d=\sum d_i$. Let $S$ be the image of $$\widetilde{S}=\mathbb{P}\left(\bigoplus \mathcal{O}_{\mathbb{P}^1}(d_i)\right)$$ under the map defined by the twisting bundle $\mathcal{O}(1)$. Then $\widetilde{S}$ is a $\mathbb{P}^k$ bundle over $\mathbb{P}^1$, and $S\subset \mathbb{P}^{d+k}$ is cut out by the $2\times 2$ minors of $$ M=\left (\begin{array}{c c c c c c} x_0^{(0)}&x_1^{(0)}&\cdots&x_{d_0-1}^{(0)}&x_0^{(1)}\cdots&x_{d_l-1}^{(l)}\\ x_1^{(0)}&x_2^{(0)}&\cdots&x_{d_0}^{(0)}&x_1^{(1)}\cdots&x_{d_l}^{(l)}\\ \end{array}\right ) $$ where $l$ is the largest integer such that $d_l\neq 0$. We call $S$ a scroll of type $(d_0,d_1,\ldots,d_k)$. Note that $\widetilde{S}=S$ if and only if $d_k\neq 0$. Let $f_0$ be a homogeneous polynomial in the variables $x_j^{(i)}$, $0\leq i \leq k$, $0 \leq j \leq d_i$ and suppose that every monomial in $f_0$ contains a factor from the top row of $M$. Then for every term in $f_0$, we may replace some $x_j^{(i)}$ by $x_{j+1}^{(i)}$ to obtain a new polynomial $f_1$. This process is called \emph{rolling factors}. 
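As an aside, the rolling operation is purely combinatorial and easy to simulate outside of our actual (Macaulay2-based) computations. The following Python sketch is illustrative only; it rolls factors on a scroll of type $(2,2,0,0)$ in variables $x_0,x_1,x_2,y_0,y_1,y_2,z_1,z_2$, and the rule of rolling the "latest" top-row factor present in each term is one choice among several, any two of which differ by an element of the ideal of minors.

```python
import sympy as sp

x0, x1, x2, y0, y1, y2, z1, z2 = sp.symbols('x0 x1 x2 y0 y1 y2 z1 z2')

# For the scroll of type (2,2,0,0), the top row (x0, x1, y0, y1) of the
# scroll matrix rolls to the bottom row (x1, x2, y1, y2).
ROLL = {x0: x1, x1: x2, y0: y1, y1: y2}
# One concrete choice: in each term, roll the "latest" top-row factor present.
PRIORITY = [y1, y0, x1, x0]

def roll(f):
    """Replace one top-row factor in every term of f by the entry below it."""
    rolled = sp.Integer(0)
    for term in sp.expand(f).as_ordered_terms():
        for v in PRIORITY:
            if term.has(v):
                rolled += term / v * ROLL[v]  # divide out v, multiply by its roll
                break
        else:
            raise ValueError("a term of f has no factor from the top row")
    return sp.expand(rolled)

f0 = x0**2 * x2 - y0 * z1 * z2
f1 = roll(f0)   # x0*x1*x2 - y1*z1*z2
f2 = roll(f1)   # x0*x2**2 - y2*z1*z2
```

Rolling $x_0$ instead of $x_1$ in the first term of $f_1$ would produce $x_1^2x_2$ in place of $x_0x_2^2$; the difference $x_2(x_1^2-x_0x_2)$ is a multiple of a minor of the scroll matrix, illustrating the ambiguity discussed next.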
Different choices of the factors might lead to different polynomials $f_1$, but any difference is contained in the ideal generated by the $2\times 2$ minors of $M$. \begin{ex} Let $S$ be a scroll of type $(2,2,0,0)$ with corresponding matrix \begin{align*} M=\left(\begin{array}{c c c c} x_0&x_1&y_0&y_1\\ x_1&x_2&y_1&y_2 \end{array}\right)\\ \end{align*} in variables $x_0,x_1,x_2,y_0,y_1,y_2,z_1,z_2$ (we've changed notation for readability). Set $f_0=x_0^2x_2-y_0z_1z_2$. By rolling factors we get the polynomial $f_1=x_0x_1x_2-y_1z_1z_2$, whose factors we can again roll to get $f_2=x_0x_2^2-y_2z_1z_2$. \end{ex} Let $f_0$ be a homogeneous polynomial as above of degree $e$, and suppose that we can successively roll factors $m$ times to get polynomials $f_0,\ldots,f_m$. Then the subvariety $X$ of $S$ cut out by the polynomials $f_0,\ldots,f_m$ is a divisor of type $eD-mF$, where $D$ is the hyperplane class and $F$ is the image of the fiber class of $\widetilde{S}$. Furthermore, any such subvariety may be described in this manner. Using this format for writing the equations of $X$, many of its deformations may be readily described, see \cite{stevens:01a}. Arbitrary perturbations of $f_0$ which still may be rolled $m$ times describe deformations of $X$ within its divisor class on $S$. Such deformations are called \emph{pure rolling factor} deformations. Perturbations of the entries of $M$ together with perturbations of the $f_i$ may give deformations of $X$ sitting on a deformed scroll. These deformations are called \emph{scrollar}. In general, there are also \emph{non-scrollar} deformations of $X$ which cannot be described in either manner. For an illustration of all three types of deformations, see the example in Section \ref{sec:degten}. \subsection{Tangent Cones of Hilbert Schemes}\label{sec:def2} Let $X$ be a subscheme of $\mathbb{P}^n$. We would like to identify which components of the corresponding Hilbert scheme $\mathcal{H}$ the point $[X]$ lies on. 
In the following, we outline a general strategy for doing this. \begin{enumerate} \item Use obstruction calculus and the package {\small\verb+Versal Deformations+} to find the lowest order terms of obstruction equations for $X$. This will be feasible in the cases of interest to us due to the comparison theorems mentioned in Section \ref{sec:def1}. Let $Z$ denote the subscheme of the affine space $\Spec S^{\bullet} H^0(X,\mathcal{N}_{X/\mathbb{P}^n})$ cut out by these equations; the tangent cone of $\mathcal{H}$ at $[X]$ is contained in $Z$. \item Do a primary decomposition of these lowest order terms to find the irreducible decomposition $Z_1,\ldots,Z_k$ of $Z$. Any component of the tangent cone $\TC_{[X]}\mathcal{H}$ is contained in some $Z_i$. Let $d_i$ denote the dimension of $Z_i$. \item For each $Z_i\subset Z$, find a tangent vector $v\in H^0(X,\mathcal{N}_{X/\mathbb{P}^n})$ such that $v\in Z_i$ but $v\notin Z_j$ for $j\neq i$. Use {\small\verb+Versal Deformations+} to lift the first order deformation given by $v$ to higher order to get a one-parameter deformation $\pi:\mathcal{X}\to\mathbb{A}^1$ of $X$. In general, this may not be possible since the process of lifting to higher order may never terminate, resulting in a family defined by a power series. In practice however, for judicious choice of $v$, we almost always get a polynomial lifting after finitely many steps. \item We consider a general fiber $X'=\mathcal{X}_t$, $t\neq 0$ of $\mathcal{X}$. Suppose that $h^0(X',\mathcal{N}_{X'/\mathbb{P}^n})=d_i$ and $T^2_{X'/\mathbb{P}^n}=0$. Then $[X']$ lies on a component $B$ of $\mathcal{H}$ with $\dim B=d_i$. This implies that $Z_i$ is a component of $\TC_{[X]}\mathcal{H}$. Indeed, $[X]$ must also lie on $B$, and $\TC_{[X]}\mathcal{H}$ must have a component $Z_i'$ of dimension $d_i$ which contains $v$, since $X$ deforms to $X'$ with tangent direction $v$. 
Because $Z_i'$ contains $v$, it must be contained in $Z_i$, and equality follows from the equality in dimension. \item Suppose that we have shown that $Z_i$ is a component of $\TC_{[X]}\mathcal{H}$ as described in step (iv). We now wish to determine for which component $B$ of $\mathcal{H}$ the tangent cone $\TC_{[X]} B$ contains $Z_i$. One approach is via deformation of the $X'$ above: if $X'$ deforms to some scheme $V$ for which we know $[V]$ lies on $B$, then $[X']$ lies on $B$ and $Z_i\subset \TC_{[X]} B$. A slightly more complicated approach is via degeneration of $X'$: suppose that $X'$ degenerates to a scheme $X_0$. If there is a component $B$ of $\mathcal{H}$ such that the degeneration direction from $X'$ to $X_0$, viewed as a deformation of $X_0$, only lies in $\TC_{[X_0]} B$ and no other components of $\TC_{[X_0]}\mathcal{H}$, then $[X']$ lies on $B$ and again $Z_i\subset \TC_{[X]} B$. \end{enumerate} Several difficulties may arise when attempting to put the above strategy into practice. For one, limits on computer memory and processor speed might make obstruction or primary decomposition calculations impossible. Secondly, it could occur that the scheme $Z$ is not equal to $\TC_{[X]}\mathcal{H}$; this means that there will be some $Z_i$ which strictly contains a component of $\TC_{[X]}\mathcal{H}$. Thirdly, as mentioned in step (iii), lifting of one-parameter first order deformations might not terminate. For all the cases of present interest, these three problems almost never arise. The only such problem we will encounter is in the few cases where some $Z_i$ is an embedded component. In these cases we can use deformation considerations to show that $Z_i$ does not correspond to a smoothing component of the Hilbert scheme. This is done in the final two examples of Section \ref{sec:toric}. It can also occur in step (iv) that $h^0(X',\mathcal{N}_{X'/\mathbb{P}^n})=d_i$ and $[X']$ is a smooth point of $\mathcal{H}$, but $T^2_{X'/\mathbb{P}^n}\neq 0$. 
In such cases, an alternate strategy is needed to show that $[X']$ is indeed a smooth point of $\mathcal{H}$. One possible approach to deal with this problem is by using the structure of rolling factors, as we do for several cases in the proof of Theorem \ref{thm:re10}. \section{Components of $\mathcal{H}_d$}\label{sec:comp} \subsection{Component Dimension} Before discussing specific components of the Hilbert schemes $\mathcal{H}_d$, we prove a result concerning Hilbert scheme component dimensions for Fano varieties in general. Recall from the previous section that given a scheme $V \subset \mathbb{P}^n$, the dimension of the tangent space of the Hilbert scheme at the corresponding point $[V]$ is just $h^0(\mathcal{N}_{V/\mathbb{P}^n})$. For smooth Fano varieties, this can be computed as follows: \begin{prop}\label{prop:h0N} Let $V\hookrightarrow \mathbb{P}^n$ be a smooth Fano variety and $\mathcal{N}_{V/\mathbb{P}^n}$ the corresponding normal sheaf. Then $$ h^0(\mathcal{N}_{V/\mathbb{P}^n})=(n+1)^2-1-\chi(\Theta_V) $$ and $h^1(\mathcal{N}_{V/\mathbb{P}^n})=0$. Furthermore, if $V$ is a threefold in its anticanonical embedding, then $$ h^0(\mathcal{N}_{V/\mathbb{P}^n})=g^2+3g+22-b_2+\frac{1}{2}b_3, $$ where $g=\frac{1}{2}(-K_V)^3+1$ is the genus, and $b_2,b_3$ are the second and third Betti numbers. \end{prop} \begin{proof} By Kodaira vanishing, $h^1(\mathcal{O}_V)=h^1(\mathcal{O}_V(1))=0$. Kodaira vanishing also gives $h^i(\Theta_V)=0$ for $i>1$ so $\chi(\Theta_V)=h^0(\Theta_V)-h^1(\Theta_V)$. Furthermore, it follows from the Euler sequence that $h^1({{\Theta_{\mathbb{P}^n}}}_{|V} )=0$. The first claim then follows from the long exact cohomology sequence coming from the normal sequence for $V$ in $\mathbb{P}^n$. Now assume that $\dim V=3$. By Hirzebruch-Riemann-Roch, $$ \chi(\Theta_V)=\frac{1}{24}\deg(12c_1^3-19c_1c_2 + 12c_3) $$ where the $c_i$ are the Chern classes of $\Theta_V$. 
We have that $\deg c_3=\chi_\mathrm{top}(V)=2+2b_2-b_3$ by Poincar\'e duality, and by definition of $g$, $\deg c_1^{3}=2g-2$. Furthermore, an application of Hirzebruch-Riemann-Roch to $\mathcal{O}_V$ gives $ \deg c_1c_2=24$. Substituting these values into the above general dimension formula proves the second claim. \end{proof} \subsection{Smooth Fano Threefolds of Low Degree}\label{sec:veryample} In Table \ref{table:fanos}, we list all families of smooth Fano threefolds of degree at most twelve. Degrees of general elements of these families and their topological invariants are taken from \cite{iskovskih:78a}, \cite{isk:80a}, and \cite{mori:81a}. Our names, referring both to the family and to general elements thereof, are non-standard. Below we calculate case by case whether the anticanonical divisor $-K_V$ is very ample. If so, we use Proposition \ref{prop:h0N} to calculate how many global sections the corresponding normal sheaf has. This gives us a list of all components of the Hilbert schemes $\mathcal{H}_d$ for $d\leq 12$ which correspond to smooth Fano threefolds, and the dimensions thereof. To summarize, $\mathcal{H}_4$, $\mathcal{H}_6$, and $\mathcal{H}_8$ each have a single distinguished component, $\mathcal{H}_{10}$ has distinguished components $B_{84}$ and $B_{85}$ of dimensions $84$ and $85$, and $\mathcal{H}_{12}$ has distinguished components $B_{96}$, $B_{97}$, $B_{98}$, and $B_{99}$ of dimensions $96$, $97$, $98$, and $99$. \begin{lemma} Let $V$ be a smooth Fano threefold with $\deg -K_V\leq 8$ and $-K_V$ very ample. Then $V$ is a complete intersection in its anticanonical embedding. In particular, $V_2$, $V_4'$, $V_6'$, and $V_8'$ do not have very ample anticanonical divisor. \end{lemma} \begin{proof} Let $g=\frac{1}{2}(-K_V)^3+1$. Then $V\subset \mathbb{P}^{g+1}$ by Riemann-Roch. Thus $g>2$ for dimension reasons. If $g=3$, then $V$ must be a quartic hypersurface. 
If $g=4$, then $V$ is the intersection of a cubic and a quadric, see \cite[Theorem 2.14]{cheltsov:05a}. This follows for example by considering the long exact sequence coming from twists of \begin{equation*} 0\to \mathcal{I}\to \mathcal{O}_{\mathbb{P}^{g+1}}\to \mathcal{O}_V\to 0 \end{equation*} by $\mathcal{O}(2)$ and $\mathcal{O}(3)$. Finally, for the case $g=5$, it follows from \cite[Remark 1.9]{cheltsov:05a} that $V$ must be cut out by quadrics. For degree reasons, $V$ must thus be a complete intersection. For the statement regarding $V_2$, $V_4'$, $V_6'$, and $V_8'$, note that none of these varieties is a complete intersection in projective space. \end{proof} The varieties $V_4$, $V_6$, and $V_8$ all have very ample anticanonical divisors. Indeed, in their anticanonical embeddings they are complete intersections in projective space of degrees $4$, $(2,3)$, and $(2,2,2)$. Furthermore, the varieties $V_{10}$ and $V_{12}$ have very ample anticanonical divisors, cf. \cite{mukai:2004a}. In its anticanonical embedding, $V_{10}$ is the intersection of the Grassmannian $G(2,5)$ in its Pl\"ucker embedding with a quadric and two hyperplanes. Likewise, $V_{12}$ is the intersection of the orthogonal Grassmannian $OG(5,10)$ in its spinor embedding with seven hyperplanes. We now deal with the remaining cases. \begin{prop}\label{prop:caneq} Any smooth degree ten or twelve Fano threefold has very ample anticanonical class. Equations for their ideals in the anticanonical embedding are: \begin{enumerate} \item In its anticanonical embedding in $\mathbb{P}^7$, the ideal of $V_{10}'$ is given by the minors of \begin{align*} M= \left(\begin{array}{c c c c} x_0&y_0&z_0&w_0\\ x_1&y_1&z_1&w_1 \end{array}\right) \end{align*} together with cubics $f_0,f_1,f_2$, where $f_0$ is a general cubic which can be rolled twice to get $f_1$ and $f_2$. 
\item In its anticanonical embedding in $\mathbb{P}^8$, the ideal of $V_{12,2,6}$ is given by the minors of \begin{align*} \left(\begin{array}{c c c c c} x_0&x_1&y_0&z_0&w_0\\ x_1&x_2&y_1&z_1&w_1\\ \end{array}\right) \end{align*} together with cubics $f_0,f_1,f_2,f_3$, where $f_0$ is a general cubic which can be rolled three times to get $f_1$, $f_2$, and $f_3$. \item In its anticanonical embedding in $\mathbb{P}^8$, $V_{12,2,9}$ is defined by the $2\times 2$ minors of $$ \left ( \begin{array}{c c c} u&x_1&y_0\\ y_1&v&x_2\\ x_0&y_2 &w\\ \end{array}\right ) $$ together with a general quadric. \item In its anticanonical embedding in $\mathbb{P}^8$, $V_{12,3}$ is defined by the $2\times 2$ minors of the two matrices \begin{align*} \left( \begin{array}{c c c c} x_{000}&x_{100}&x_{001}&x_{101}\\ x_{010}&x_{110}&x_{011}&x_{111}\\ \end{array}\right) \qquad \left( \begin{array}{c c c c} x_{000}&x_{010}&x_{001}&x_{011}\\ x_{100}&x_{110}&x_{101}&x_{111}\\ \end{array}\right) \end{align*} in $\mathbb{C}[x_{ijk},t]_{i,j,k\in\{0,1\}}$ along with a general quadric. \end{enumerate} \end{prop} \begin{proof} For $V_{10}'$, note that the variety $V$ described by the equations in (i) is a divisor of type $3D-2F$ on a scroll $S$ of type $(1,1,1,1)$. Since this divisor is basepoint free, $V$ is smooth. The divisor $-K_S$ is equivalent to $4D-2F$ (see e.g. \cite[pp. 23]{kollar:00a}), and the adjunction formula thus shows that $-K_V=\mathcal{O}_V(1)$. Since $V$ is not cut out by quadrics, it cannot be $V_{10}$ and must thus be $V_{10}'$. For $V_{12,2,6}$, note that the variety $V$ described by the equations in (ii) is a divisor of type $3D-3F$ on a scroll $S$ of type $(2,1,1,1)$. Since this divisor is basepoint free, $V$ is smooth. The divisor $-K_S$ is equivalent to $4D-3F$, and the adjunction formula thus shows that $-K_V=\mathcal{O}_V(1)$. Since $V$ is not cut out by quadrics, it cannot be $V_{12}$ or one of the other two degree twelve threefolds dealt with below. 
For the remaining cases, we use the descriptions for the Fano threefolds found in \cite{mori:81a}. The threefold $V_{12,2,9}$ is a divisor of bidegree $(2,2)$ in $\mathbb{P}^2\times \mathbb{P}^2$. The equations for the Segre embedding of $\mathbb{P}^2\times \mathbb{P}^2$ in $\mathbb{P}^8$ are given by the $2\times 2$ minors of a general $3\times 3$ matrix. In this embedding, a divisor of bidegree $(2,2)$ is given by a quadric. It follows from the adjunction formula that this is in fact the anticanonical embedding, so in particular $-K_{V_{12,2,9}}$ is very ample. The threefold $V_{12,3}$ is a double cover of $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ with branch locus a divisor of tridegree $(2,2,2)$. Now, the equations listed in (iv) describe a smooth variety $V$ which is the cone over $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ embedded via $\mathcal{O}(1,1,1)$, intersected with a general quadric. Projection from the point $\{x_{ijk}=0,t=1\}$ gives a map $V\to\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ whose ramification locus is a divisor of type $(2,2,2)$. Thus, $V$ equals $V_{12,3}$. To check that this is in fact its anticanonical embedding, we again use adjunction: a straightforward toric calculation shows that the anticanonical divisor on the cone over $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ is the pullback of $\mathcal{O}(3)$, so intersection with a quadric gives that $-K_V$ is the pullback of $\mathcal{O}(1)$. \end{proof} \begin{remark} All components $B$ of $\mathcal{H}_d$ for $d\leq 12$ corresponding to smooth Fano threefolds are unirational, that is, there is some dominant rational map $\mathbb{A}^k\dashrightarrow B$. Indeed, Mori and Mukai show that the variety parametrizing any family of smooth Fano threefolds is unirational. Since the map $\mathcal{H}_X\to \Def_X$ is smooth for any Fano threefold $X$, it follows that the corresponding Hilbert scheme component is also unirational. 
\end{remark} \subsection{Non-smoothing Components of $\mathcal{H}_d$}\label{sec:exotic} In our study of $\mathcal{H}_{10}$ and $\mathcal{H}_{12}$, we will encounter three additional components which do not correspond to smooth Fano threefolds, but instead to non-smoothable trigonal Fano threefolds with Gorenstein singularities, see \cite{cheltsov:05a}. We first describe an $88$-dimensional component $B_{88}^\dagger$ of $\mathcal{H}_{10}$. Consider the matrix $$M=\left( \begin{array}{c c c} x_0&y_2&y_1\\ y_2&x_1&y_0\\ y_1&y_0&x_2\\ \end{array}\right). $$ Let $g_0,g_1,g_2$ be general quadrics in $x_i,y_j,z_1,z_2$, and let $f_0,f_1,f_2$ be the cubics defined by $$ M\cdot\left(\begin{array}{c} g_0\\ g_1\\ g_2\end{array}\right)=\left(\begin{array}{c} f_0\\ f_1\\ f_2\end{array}\right). $$ Note that $f_1$ and $f_2$ have been constructed from $f_0$ in a manner similar to rolling factors. Let $I$ be the ideal generated by the $2\times 2$ minors of $M$ and $f_0,f_1,f_2$. This cuts out a singular degree $10$ Fano variety $V\subset\mathbb{P}^7$ corresponding to a point $[V]\in\mathcal{H}_{10}$. Indeed, this is the case $T_{3}$ of \cite{cheltsov:05a}. Using {\small\verb+Macaulay2+}, we compute that $h^0(V,\mathcal{N})=88$, and that all deformations of $V$ come from perturbing the quadrics $g_0,g_1,g_2$ and are unobstructed. Thus, $[V]$ is a smooth point on an $88$-dimensional component $B_{88}^\dagger$ of $\mathcal{H}_{10}$. The remaining two non-smoothing components may be nicely described using rolling factors. The Hilbert scheme $\mathcal{H}_{10}$ has an additional $84$-dimensional component $B_{84}^\dagger$ which is \emph{not} the component $B_{84}$. Consider the matrix $$M=\left( \begin{array}{c c c c} x_0&x_1&y_0&y_1\\ x_1&x_2&y_1&y_2\\ \end{array}\right). $$ With additional variables $z_1,z_2$, its maximal minors define a scroll of type $(2,2,0,0)$. Let $f_0$ be a general cubic which can be rolled twice to $f_1$ and $f_2$. 
The ideal generated by the minors of $M$ together with these three cubics cuts out a singular degree $10$ Fano variety $V\subset\mathbb{P}^7$ corresponding to a point $[V]\in\mathcal{H}_{10}$. Indeed, this is the case $T_{9}$ of \cite{cheltsov:05a}. Using {\small\verb+Macaulay2+}, we compute that $h^0(V,\mathcal{N})=84$, and that all deformations of $V$ are of pure rolling factor type. Thus, $[V]$ is a smooth point on an $84$-dimensional component of $\mathcal{H}_{10}$, and $V$ cannot be smoothed. It follows that the component $B_{84}^\dagger$ of $\mathcal{H}_{10}$ upon which $[V]$ lies is not $B_{84}$. The Hilbert scheme $\mathcal{H}_{12}$ has an additional $99$-dimensional component $B_{99}^\dagger$ which is \emph{not} the component $B_{99}$. Consider the matrix $$M=\left( \begin{array}{c c c c c} x_0&x_1&x_2&x_3&y_0\\ x_1&x_2&x_3&x_4&y_1\\ \end{array}\right). $$ With additional variables $z_1,z_2$, its maximal minors define a scroll of type $(4,1,0,0)$. Let $f_0$ be a general cubic which can be rolled $3$ times to $f_1,f_2$, and $f_3$. The ideal generated by the minors of $M$ together with these four cubics cuts out a singular degree $12$ Fano variety $V\subset\mathbb{P}^8$ corresponding to a point $[V]\in\mathcal{H}_{12}$. Indeed, this is the case $T_{25}$ of \cite{cheltsov:05a}. Using {\small\verb+Macaulay2+}, we compute that $h^0(V,\mathcal{N})=99$, and that the obstruction space $T_V^2$ vanishes. Thus, $[V]$ is a smooth point on a $99$-dimensional component of $\mathcal{H}_{12}$. All deformations of $V$ are of pure rolling factor type, so $V$ cannot be smoothed. Thus, the component $B_{99}^\dagger$ of $\mathcal{H}_{12}$ upon which $[V]$ lies is not $B_{99}$. \begin{remark} It follows from the description in \cite[Theorem 1.6]{cheltsov:05a} of Gorenstein trigonal Fano threefolds that each family is parametrized by a unirational variety. 
By arguments similar to those in the previous section, one can show that $B_{84}^\dagger$, $B_{88}^\dagger$, and $B_{99}^\dagger$ are also unirational. \end{remark} \begin{remark} Not every type of singular trigonal Fano described in \cite[Theorem 1.6]{cheltsov:05a} gives rise to a new Hilbert scheme component. For example, a routine calculation shows that the case of $T_7$ (a scroll of type $(2,1,1,0)$ and a cubic rolled twice) always has a non-scrollar deformation which deforms it to $V_{10}$. \end{remark} \section{Stanley-Reisner Schemes and Degenerations}\label{sec:sr} \subsection{Stanley-Reisner Basics} We now recall some basic facts about simplicial complexes and Stanley-Reisner schemes, see for example \cite{stanley:83a}. Let $[n]$ be the set $\{0,\ldots,n\}$ and $\Delta_n$ be the full simplex $2^{[n]}$. An abstract \emph{simplicial complex} is any subset $\mathcal{K}\subset\Delta_n$ such that if $f\in\mathcal{K}$ and $g\subset f$, then $g\in \mathcal{K}$. Elements $f\in\mathcal{K}$ are called \emph{faces}; the dimension of a face $f$ is $\dim f:=\#f-1$. Zero-dimensional faces are called \emph{vertices}; one-dimensional faces are called \emph{edges}. The \emph{valency} of a vertex is the number of edges containing it. Two simplicial complexes are isomorphic if there is a bijection of the vertices inducing a bijection of all faces. We will not differentiate between isomorphic complexes. Given two simplicial complexes $\mathcal{K}$ and $\mathcal{L}$, their \emph{join} is the simplicial complex $$ \mathcal{K} * \mathcal{L}=\{f\vee g\ | \ f\in\mathcal{K},\ g\in\mathcal{L}\}. $$ To any simplicial complex $\mathcal{K}\subset\Delta_n$, we associate a square-free monomial ideal $I_\mathcal{K}\subset \mathbb{C}[x_0,\ldots,x_n]$ $$ I_\mathcal{K}:=\langle x_p \ | \ p\in\Delta_n\setminus\mathcal{K}\rangle $$ where for $p\in\Delta_n$, $x_p:=\prod_{i\in p}x_i$. 
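The passage from $\mathcal{K}$ to generators of $I_\mathcal{K}$ is elementary enough to automate: the minimal generators are exactly the monomials $x_p$ for the \emph{minimal} non-faces $p$ of $\mathcal{K}$. The following Python sketch (purely illustrative, and independent of the Macaulay2 computations used elsewhere in this paper) computes them for the join of the boundary of a triangle with two points, a triangulated two-sphere with five vertices.

```python
from itertools import combinations

def closure(maximal_faces):
    """All faces of the simplicial complex generated by the given maximal faces."""
    faces = set()
    for f in maximal_faces:
        for r in range(len(f) + 1):
            faces.update(frozenset(s) for s in combinations(sorted(f), r))
    return faces

def join(K, L):
    """Join of two simplicial complexes on disjoint vertex sets."""
    return {f | g for f in K for g in L}

def sr_generators(K, vertices):
    """Minimal non-faces of K; these index the square-free generators of I_K."""
    nonfaces = [frozenset(s)
                for r in range(1, len(vertices) + 1)
                for s in combinations(vertices, r)
                if frozenset(s) not in K]
    return sorted((f for f in nonfaces if all(not g < f for g in nonfaces)),
                  key=sorted)

# (boundary of a triangle on {0,1,2}) * (two points {3},{4}):
# the bipyramid over a triangle, a two-sphere with five vertices
K = join(closure([{0, 1}, {1, 2}, {0, 2}]), closure([{3}, {4}]))
gens = sr_generators(K, range(5))
# minimal non-faces {0,1,2} and {3,4}, i.e. I_K = (x0*x1*x2, x3*x4)
```

For $\mathcal{K}=\partial\Delta_3$, the same routine returns the single non-face $\{0,1,2,3\}$, so $I_\mathcal{K}$ is generated by one square-free quartic monomial.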
This gives rise to the \emph{Stanley-Reisner ring} $A_\mathcal{K}:=\mathbb{C}[x_0,\ldots,x_n]/I_\mathcal{K}$ and a corresponding projective scheme $\mathbb{P}(\mathcal{K}):=\Proj A_\mathcal{K}$ which we call a Stanley-Reisner scheme. The scheme $\mathbb{P}(\mathcal{K})$ ``looks'' like the complex $\mathcal{K}$: each face $f\in\mathcal{K}$ corresponds to some $\mathbb{P}^{\dim f}\subset \mathbb{P}(\mathcal{K})$ and the intersection relations among these projective spaces are identical to those of the faces of $\mathcal{K}$. In particular, maximal faces of $\mathcal{K}$ correspond to the irreducible components of $\mathbb{P}(\mathcal{K})$. In this paper, we will only consider Stanley-Reisner schemes of the form $\mathbb{P}(\mathcal{K}*\Delta_0)$, where $\mathcal{K}$ is topologically a triangulation of the two-sphere. Such schemes are Gorenstein Fano threefolds, and are embedded via the anticanonical class, see \cite[Proposition 2.1]{paperone}. \subsection{Degenerations to Stanley-Reisner Schemes} We recall the correspondence between unimodular triangulations and degenerations of toric varieties. Consider some lattice $M$ and some lattice polytope $\nabla\subset M_\mathbb{Q}$ in the associated $\mathbb{Q}$-vector space. By $\mathbb{P}(\nabla)$ we denote the toric variety $$ \mathbb{P}(\nabla)=\Proj \mathbb{C}[S_\nabla] $$ where $S_\nabla$ is the semigroup in $M\times \mathbb{Z}$ generated by the elements $(u,1)$, $u\in \nabla \cap M$. By Theorem 8.3 and Corollary 8.9 of \cite{sturmfels:96a}, square-free initial ideals of the toric ideal of $\mathbb{P}(\nabla)$ are exactly the Stanley-Reisner ideals of unimodular regular triangulations of $\nabla$, see loc. cit. for definitions. We now describe the triangulated two-spheres we will need. 
First of all, let $T_4=\partial \Delta_3$, $T_5=(\partial \Delta_2)*(\partial \Delta_1)$, and for $6\leq i \leq 10$, let $T_i$ be the unique triangulation of the sphere with $i$ vertices having valencies four and five.\footnote{The $T_i$ arise naturally as the boundary complexes of the convex deltahedra (excluding the icosahedron).} For concrete realizations of these triangulations, see \cite[Figure 1]{paperone}. The corresponding Stanley-Reisner schemes $\mathbb{P}(T_i*\Delta_0)$ satisfy some nice properties: \begin{thm}[{See \cite[Section 3]{paperone}}]\label{thm:nice} Let $4\leq i \leq 10$ and $d=2i-4$ and let $V_d$ be a general rank one degree $d$ Fano threefold. Then $V_{d}$ degenerates to $\mathbb{P}(T_i*\Delta_0)$ in its anticanonical embedding. Furthermore, $[\mathbb{P}(T_i*\Delta_0)]\in\mathcal{H}_d$ is a smooth point. \end{thm} This theorem alone allows us to locate the position of toric Fano threefolds of degree $d$ with at most Gorenstein singularities in $\mathcal{H}_d$ for $2<d< 10$. Indeed, using the classification of such varieties by \cite{kreuzer} and computer calculations with {\small\verb+TOPCOM+}, we verify that for any such variety $X$ of degree $d=2i-4$, its moment polytope has a regular unimodular triangulation of the form $T_i*\Delta_0$. Thus, $X$ degenerates to $\mathbb{P}(T_i*\Delta_0)$, so $[X]$ is a smooth point of $\mathcal{H}_d$ on the same component as $[V_d]$. We sum this up as \begin{prop}\label{prop:remain0} For $d=4,6,8$, let $X$ be a toric Fano threefold of degree $d$ with at most Gorenstein singularities. Then $[X]$ is a smooth point on the component of $\mathcal{H}_d$ corresponding to $V_d$. In particular, $X$ always admits an embedded smoothing to a smooth Fano threefold. \end{prop} \begin{remark} Alternatively, one may use {\small\verb+4ti2+} to show that any such $X$ is a complete intersection of the same type as $V_d$. 
\end{remark} \begin{ex} There is a single toric Fano threefold of degree $4$ with Gorenstein singularities, which is cut out by the quartic $x_1x_2x_3x_4-x_0^4$. A degeneration to $\mathbb{P}(T_4*\Delta_0)$ is given by degenerating the quartic to its first term. \end{ex} We will deal with the degree $10$ case in the following section. For the degree $12$ case, we will need an additional simplicial complex. Let $T_8'$, the bipyramid over the hexagon, be the unique triangulation of the sphere with valencies $4,4,4,4,4,4,6,6$. This triangulation is pictured in Figure \ref{fig:tri}, where the hollow dot represents the point at infinity. \begin{figure} \begin{center} \tritwelve \end{center} \caption{The triangulation $T_8'$}\label{fig:tri} \end{figure} The Stanley-Reisner scheme corresponding to this triangulation also arises as a degeneration of smooth Fano threefolds: \begin{prop}\label{prop:srdegen} The smooth Fano threefolds $V_{12}$, $V_{12,2,9}$ and $V_{12,3}$ all degenerate to $\mathbb{P}(T_8'*\Delta_0)$. \end{prop} \begin{proof} First of all, note that the polytope dual to number 127896 from \cite{grdb} admits regular unimodular triangulations isomorphic to both $T_8*\Delta_0$ and $T_8'*\Delta_0$. Thus, the corresponding toric variety degenerates to both $\mathbb{P}(T_8*\Delta_0)$ and $\mathbb{P}(T_8'*\Delta_0)$. But since $[\mathbb{P}(T_8*\Delta_0)]$ is a smooth point of the Hilbert scheme, $[\mathbb{P}(T_8*\Delta_0)]$ and $[\mathbb{P}(T_8'*\Delta_0)]$ must lie on the same component, and since $V_{12}$ degenerates to $\mathbb{P}(T_8*\Delta_0)$, it also degenerates to $\mathbb{P}(T_8'*\Delta_0)$. For the remaining degenerations, we use the equations from Proposition \ref{prop:caneq} and Section \ref{sec:exotic}. To degenerate from $V_{12,2,9}$, we degenerate the quadric to $uv$ and then choose an elimination term order for $u,v,w$. The resulting initial ideal is the Stanley-Reisner ideal for $T_8'*\Delta_0$. Finally, consider the equations for $V_{12,3}$. 
The variables $x_{ijk}$ correspond to the vertices $(i,j,k)$ of a cube in $\mathbb{Q}^3$. The equations of Proposition \ref{prop:caneq} correspond to the affine relations between these lattice points, see Figure \ref{fig:cube}(a). The first six equations correspond to intersecting diagonals on the six faces of the cube and the last set of equations corresponds to the four diagonals intersecting in the middle of the cube. We choose any term order which, for the first six equations, selects monomials corresponding to diagonals which form two non-intersecting triangles, see Figure \ref{fig:cube}(b). Degenerating the quadric to the product of the two vertices not lying on these triangles and taking the initial ideal gives the desired degeneration. \end{proof} \begin{figure} \subfigure[Monomials on the cube]{\cubeone} \subfigure[A choice of term order]{\cubetwo} \caption{Relations coming from a cube}\label{fig:cube} \end{figure} \section{Toric Fano Threefolds of Degree Ten}\label{sec:degten} We now locate the position of toric Fano threefolds of degree $10$ with at most Gorenstein singularities in $\mathcal{H}_{10}$. Indeed, let $\mathfrak{tor}_{10}$ denote the set of all toric Fano threefolds of degree ten with at most Gorenstein singularities. There are exactly 54 of these, see \cite{grdb} and \cite{kreuzer}. \begin{thm}\label{thm:re10} Consider $X\in\mathfrak{tor}_{10}$. Then the point $[X]$ lies exactly on the components of $\mathcal{H}_{10}$ as recorded in Table \ref{table:toricten}, where we refer to $X$ by its number in \cite{grdb}. In particular, $X$ always admits an embedded smoothing to a smooth Fano threefold. \end{thm} \begin{proof} The theorem is proved using case-by-case computer computation. First we use {\small\verb+TOPCOM+} to check whether the moment polytope of $X$ has a regular unimodular triangulation of the form $T_7*\Delta_0$; this is true exactly for those $X\in\mathfrak{tor}_{10}$ not listed in Table \ref{table:toricten}. 
These $X$ are therefore unobstructed and $[X]$ is a smooth point on $B_{85}$, the component corresponding to $V_{10}$. This leaves twelve exceptional cases to which we apply the general strategy outlined in Section \ref{sec:def2}. \begin{ex}We describe the five steps of Section \ref{sec:def2} explicitly for $X$ being the toric variety number 275510. $X$ sits on a scroll of type $(2,2,0,0)$ and is cut out by the $2\times 2$ minors of \begin{align*} M=\left(\begin{array}{c c c c} x_0&x_1&y_0&y_1\\ x_1&x_2&y_1&y_2 \end{array}\right)\\ \end{align*} together with $f_0=x_0^2x_2-y_0z_1z_2$ and the two other cubics $f_1$ and $f_2$ gotten by rolling factors, see the example in Section \ref{sec:rolling}. The dimension of $T_X^1$ is $27$, and the dimension of $T_X^2$ equals $4$. The space $T_X^1$ can be decomposed into the direct sum of a $24$-dimensional space $T_{\mathrm{roll}}^1$ consisting of perturbations of the cubics $f_i$, a two-dimensional space $T_{\mathrm{scroll}}^1$ generated by the perturbations \begin{align*} \left(\begin{array}{c c c c} x_0&x_1-t_1z_1-t_2z_2&y_0&y_1\\ x_1&x_2&y_1&y_2 \end{array}\right)\\ \end{align*} of $M$, and a one-dimensional space generated by the non-scrollar perturbation \begin{align*} x_0x_2-x_1^2-t_0z_1z_2\\ x_0y_1-x_1y_0\\ x_0y_2-x_1y_1\\ x_1y_1-x_2y_0+t_0x_0x_2\\ x_1y_2-x_2y_1+t_0x_1x_2\\ y_0y_2-y_1y_1\\ \end{align*} of the minors of $M$, keeping the $f_i$ constant. Consider the perturbations of the $f_i$ induced via rolling factors by the perturbation $$ f_1-s_1x_0z_1^2-s_2x_0z_2^2 $$ of $f_1$. This may be extended to a basis of $T_{\mathrm{roll}}^1$ such that the obstruction equations are \begin{align*} {t}_{1} s_1,\qquad {t}_{2} s_2,\qquad {t}_{0} {s}_{1},\qquad {t}_{0} {s}_{2}. \end{align*} This decomposes into the four components $Z_1=V(s_1,s_2)$, $Z_2=V(t_0,t_1,t_2)$, $Z_3=V(t_0,t_1,s_2)$, and $Z_4=V(t_0,s_1,t_2)$. 
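The decomposition just described is a purely combinatorial statement about the squarefree monomial ideal generated by the four obstruction equations, and can be checked independently of the deformation-theoretic computation; the following Python sketch computes the minimal primes by selecting one variable from each generator and keeping the inclusion-minimal selections:

```python
from itertools import product

# The four lowest order obstruction equations, each written as the pair of
# variables appearing in it: t1*s1, t2*s2, t0*s1, t0*s2.
gens = [("t1", "s1"), ("t2", "s2"), ("t0", "s1"), ("t0", "s2")]

# Minimal primes of a squarefree monomial ideal: choose one variable from
# each generator, then keep only the inclusion-minimal choices.
candidates = {frozenset(choice) for choice in product(*gens)}
components = {c for c in candidates if not any(d < c for d in candidates)}

assert components == {frozenset({"s1", "s2"}),        # Z1 = V(s1, s2)
                      frozenset({"t0", "t1", "t2"}),  # Z2 = V(t0, t1, t2)
                      frozenset({"t0", "t1", "s2"}),  # Z3 = V(t0, t1, s2)
                      frozenset({"t0", "s1", "t2"})}  # Z4 = V(t0, s1, t2)

# In the 87-dimensional tangent space these linear loci have the expected
# dimensions 85, 84, 84, 84.
assert sorted(87 - len(c) for c in components) == [84, 84, 84, 85]
```

Since each component is cut out by coordinate hyperplanes, its dimension is simply the ambient dimension minus the number of variables in the corresponding minimal prime.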
Since the tangent space dimension $h^0(\mathcal{N}_{X/\mathbb{P}^7})=87$, these cut out schemes of respective dimensions $85$, $84$, $84$, and $84$ in the tangent space of the local Hilbert scheme. To see that $Z_1$ is a component of the tangent cone of $\mathcal{H}_{10}$ at $[X]$, consider the one-parameter deformation $\mathcal{X}\to\mathbb{A}^1$ given by the parameter $t_0$. It is straightforward to check that this lifts to higher order with no further perturbations. By construction, the tangent direction of this deformation lies only in the component $Z_1$. For $t_0\neq 0$, the resulting ideal is generated by six quadrics, and is easily seen to have a Gr\"obner degeneration to the ideal of $\mathbb{P}(T_7*\Delta_0)$. Thus, $Z_1$ must be an $85$-dimensional component of the tangent cone of $\mathcal{H}_{10}$ corresponding to the component $B_{85}$. To see that $Z_2$ is also a component of the tangent cone of $\mathcal{H}_{10}$ at $[X]$, we may consider a general linear perturbation of $f_1$ (subject to the condition that its factors may be rolled). This defines a general element of the component $B_{84}^\dagger$, since the perturbation still lies on a scroll of type $(2,2,0,0)$. Finally, we see that $Z_3$ and $Z_4$ are also components of the tangent cone of $\mathcal{H}_{10}$ at $[X]$, both corresponding to $B_{84}$. Consider for example the perturbation \begin{align*} \left(\begin{array}{c c c c} x_0&x_1-tz_1&y_0&y_1\\ x_1&x_2&y_1-t^2z_2&y_2 \end{array}\right)\\ \end{align*} along with $f_1-tx_0z_2^2$ and the corresponding rolling factors perturbations. The tangent direction of this perturbation is only contained in $Z_3$; for $t\neq 0$ the fiber is contained in a scroll of type $(1,1,1,1)$ and thus can lie only on the component $B_{84}$. For the case of $Z_4$, a similar perturbation can be made after interchanging $z_1$ and $z_2$. \end{ex} We now provide brief sketches of the remaining cases. 
\begin{itemize} \item Number 437961: $X$ lies in a scroll of type $(1,1,1,1)$ and is cut out by $f_0=x_0y_0z_0-w_0^2w_1$ and the cubics $f_1$ and $f_2$ gotten by rolling factors twice. Thus, $V_{10}'$ degenerates to $X$, so $[X]$ lies on $B_{84}$. A calculation shows that $T_X^2=0$, so $[X]$ is a smooth point of $\mathcal{H}_{10}$. \item Numbers 86711, 98325, 433633, 439399: The tangent cone at $X$ has two components, of dimensions $84$ and $85$, cut out by the lowest order terms of the obstruction equations. We can deform onto the $85$-dimensional component, and then degenerate to $\mathbb{P}(T_7*\Delta_0)$, showing that the $85$-dimensional component is the smoothing component $B_{85}$. Likewise, we can deform onto the $84$-dimensional component. The resulting variety is a divisor of type $3D-2F$ on a scroll of type $(1,1,1,1)$, and thus deforms to $V_{10}'$. Hence, this component must be the smoothing component $B_{84}$. \item Numbers 522075, 523456, 547399: Obstruction equations predict that the tangent cone at $X$ has two components, of dimensions $88$ and $85$. We can deform onto the $85$-dimensional component, and then degenerate to $\mathbb{P}(T_7*\Delta_0)$, showing that the $85$-dimensional component is the smoothing component $B_{85}$. If we deform onto the supposed $88$-dimensional component, the tangent space to $\mathcal{H}_{10}$ at this point indeed has dimension $88$, but the obstruction space does not vanish, instead having dimension one or two. However, the deformed variety is cut out by equations of the type for members of $B_{88}^\dagger$ except with degenerate quadrics $g_0,g_1,g_2$. We may conclude that $[X]$ lies on $B_{88}^\dagger$. \item Numbers 283519, 521212, 522702: These cases are completely analogous to the example above: all lie on scrolls of type $(2,2,0,0)$. Obstruction equations predict that the tangent cone at $X$ has four components, of dimensions $85$, $84$, $84$, and $84$. 
Deforming onto the first of these components (with a non-scrollar deformation), we can degenerate to $\mathbb{P}(T_7*\Delta_0)$, showing that $[X]$ lies on $B_{85}$. The second component consists only of pure rolling factor deformations, thus corresponding to $B_{84}^\dagger$. The third and fourth components both involve scrollar deformations to a scroll of type $(1,1,1,1)$ and both correspond to $B_{84}$. \end{itemize} \end{proof} \section{The Hilbert Scheme $\mathcal{H}_{12}$ at $[\mathbb{P}(T_8'*\Delta_0)]$}\label{sec:bipyramid} In this section we will study the local structure of $\mathcal{H}_{12}$ at the point $[\mathbb{P}(T_8'*\Delta_0)]$. Note that $T_8'$ is the join of the boundary $\partial \Delta_1$ of a one-simplex (i.e. two points) with the boundary of a hexagon. Let $X_\mathbf{bp}=\mathbb{P}(T_8'*\Delta_0)$.\footnote{The subscript $\mathbf{bp}$ refers to the fact that $T_8'$ is the {\bf b}i{\bf p}yramid over a hexagon.} We identify the vertices of the hexagon with variables $x_1,\ldots,x_6$ ordered cyclically, the vertices of $\partial \Delta_1$ with variables $y_1,y_2$, and the vertex of $\Delta_0$ with the variable $y_0$. Then $X_\mathbf{bp}$ is cut out by the quadrics $x_{i-1}x_{i+1}$ for $i=1,\ldots,6$, $x_ix_{i+3}$ for $i=1,2,3$, and $y_1y_2$, where all indices are taken modulo six. We now describe the space $T_{X_\mathbf{bp}}^1$ of first-order deformations of $X_\mathbf{bp}$. Consider the $24$ deformation parameters $s_i$ and $t_{i,j}$ for $1\leq i \leq 6$ and $j=0,1,2$. We have further $19$ deformation parameters $a_i,b_i,c_j$ for $1\leq i \leq 6$ and $0\leq j \leq 6$. These $43$ parameters will give us a basis of $T_{X_\mathbf{bp}}^1$. To consolidate the presentation, we will write down perturbations of our equations which already include higher order perturbations, since we shall be considering families over the versal base space components.
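The combinatorial description of $X_\mathbf{bp}$ can be double-checked mechanically. The following Python sketch (an independent check, assuming only the vertex labeling introduced above) enumerates the facets of $T_8'*\Delta_0$, verifies the valency sequence of $T_8'$, and confirms that the minimal non-faces are exactly the ten quadric generators listed above:

```python
from itertools import combinations

# Hexagon vertices x1..x6, bipyramid apexes y1, y2, cone point y0 from Delta_0.
hexagon = [f"x{i}" for i in range(1, 7)]
vertices = hexagon + ["y1", "y2", "y0"]

# Facets of T8' (bipyramid over the hexagon) are the triangles {apex, x_i, x_{i+1}};
# joining with Delta_0 adds y0 to every facet.
facets = [frozenset({apex, hexagon[i], hexagon[(i + 1) % 6], "y0"})
          for apex in ("y1", "y2") for i in range(6)]

def edges(facet_list):
    """All 1-faces (edges) of the complex with the given facets."""
    return {frozenset(e) for F in facet_list for e in combinations(sorted(F), 2)}

# Valencies of T8': the number of edges at each vertex of the triangulated sphere.
t8p_edges = edges([F - {"y0"} for F in facets])
valencies = sorted(sum(1 for e in t8p_edges if v in e) for v in hexagon + ["y1", "y2"])
assert valencies == [4, 4, 4, 4, 4, 4, 6, 6]

# Minimal non-faces of the join T8' * Delta_0 are its non-edges.
all_edges = edges(facets)
non_faces = {frozenset(p) for p in combinations(vertices, 2)} - all_edges
expected = ({frozenset({hexagon[i - 1], hexagon[(i + 1) % 6]}) for i in range(6)}  # x_{i-1} x_{i+1}
            | {frozenset({hexagon[i], hexagon[i + 3]}) for i in range(3)}          # x_i x_{i+3}
            | {frozenset({"y1", "y2"})})                                           # y1 y2
assert non_faces == expected and len(non_faces) == 10
```

Since $T_8'*\Delta_0$ is a flag complex (a join of flag complexes), every minimal non-face has exactly two vertices, so checking pairs suffices.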
Let $p(z)$ be a power series solution of the functional equation $$ zp(z)^4=p(z)+1 $$ and set $f=p(s_1\cdots s_6)$, $e=f/(f+2)$. For $i=1,\ldots,6$, set $t_i=\sum_{j=0}^2t_{i,j}y_j$. We consider the perturbations \begin{multline}\label{eq:p1} x_{i-1}x_{i+1}+(t_i+s_ix_i)x_i\\ + s_{i+3}(e^2t_{i-2}t_{i+2}+efs_{i+2}t_{i-2}x_{i+2} + eft_{i+2}s_{i-2}x_{i-2})\\ -s_{i-2}s_{i+2}(et_{i+3}+fs_{i+3}x_{i+3})^2 \\ + e^2f^2s_{i-2}s_{i-1}s_{i+1}s_{i+2}s_{i+3}t_{i}^2 \end{multline} for $i = 1, \ldots ,6$, \begin{multline}\label{eq:p2} x_ix_{i+3} + et_{i+1}t_{i+2} + et_{i+2}s_{i+1}x_{i+1} + et_{i+1}s_{i+2}x_{i+2} + fs_{i+1}s_{i+2}x_{i+1}x_{i+2}\\ + et_{i-2}s_{i-1}x_{i-1} + et_{i-1}s_{i-2}x_{i-2} + fs_{i-1}s_{i-2}x_{i-1}x_{i-2}\\ - e^2f^2s_{i-2}s_{i-1}s_{i+1}s_{i+2}t_it_{i+3} \end{multline} for $i=1,2,3$, and \begin{align}\label{eq:p3} y_1y_2+c_0y_0^2+\sum_{i=1}^6 (a_ix_i+b_ix_{i+1}+c_iy_0)x_i. \end{align} As it stands, the family defined by these perturbations is only flat if considered up to first order. However, we shall see that it becomes flat if we restrict to two of the base space components. A calculation with {\small\verb+Macaulay2+} or using \cite[Theorem 13]{altmann:04a} shows that with respect to these perturbations, the above deformation parameters form a basis for $T^1_{X_\mathbf{bp}}$. \begin{thm}\label{thm:tc} The tangent cone $\TC_{[\mathbb{P}(T_8'*\Delta_0)]}\mathcal{H}_{12}$ is cut out by the fifteen quadrics \begin{align} t_{i+1,j}t_{i+2,j}-t_{i-1,j}t_{i-2,j}\qquad &i\in\{1,2,3\},\quad j\in\{0,1,2\}\label{eqn:tc1} \\ t_{i+1,j}t_{i+2,0}+t_{i+1,0}t_{i+2,j}-t_{i-1,j}t_{i-2,0}-t_{i-1,0}t_{i-2,j}\qquad &i\in\{1,2,3\},\quad j\in\{1,2\}.\label{eqn:tc2} \end{align} It decomposes into four irreducible components $Z_{97}$, $Z_{99}$, $Z_{98}^1$, $Z_{98}^2$ of respective dimensions $97$, $99$, $98$, and $98$. 
$Z_{97}$ is cut out by the $2\times 2$ minors of \begin{align*} \left(\begin{array}{c c c c c c} t_{1,0} & t_{1,1} & t_{1,2} & t_{4,0} &t_{4,1}& t_{4,2}\\ t_{3,0} & t_{3,1} & t_{3,2} & t_{6,0} &t_{6,1}& t_{6,2}\\ t_{5,0} & t_{5,1} & t_{5,2} & t_{2,0} &t_{2,1}& t_{2,2} \end{array}\right) \end{align*} and corresponds to the component $B_{97}$. $Z_{99}$ is cut out by the $2\times 2$ minors of \begin{align*} \left(\begin{array}{c c c c c c c c c} t_{1,0} & t_{1,1} & t_{1,2} & t_{3,0} & t_{3,1} & t_{3,2}& t_{5,0} & t_{5,1} & t_{5,2}\\ t_{4,0} &t_{4,1}& t_{4,2}& t_{6,0} &t_{6,1}& t_{6,2} & t_{2,0} &t_{2,1}& t_{2,2} \end{array}\right) \end{align*} and corresponds to the component $B_{99}$. Finally, both components $Z_{98}^1$ and $Z_{98}^2$ correspond to $B_{98}$ and for $k,l=1,2$ with $k\neq l$, $Z_{98}^k$ is cut out by the thirty quadrics \begin{align*} t_{i+1,j}t_{i+2,j}-t_{i-1,j}t_{i-2,j}\qquad & i\in\{1,2,3\},\quad j\in\{0,1,2\}\\ t_{i+1,k}t_{i-1,0}-t_{i+1,0}t_{i-1,k}\qquad & i\in\{1,2,3,4,5,6\}\\ t_{i+1,k}t_{i+2,0}-t_{i-1,k}t_{i-2,0}\qquad & i\in\{1,2,3,4,5,6\}\\ t_{i,l}t_{i+1,0}-t_{i+3,l}t_{i+1,0}\qquad & i\in\{1,2,3\}\\ t_{i,l}t_{i-1,0}-t_{i+3,l}t_{i-1,0}\qquad & i\in\{1,2,3\}\\ t_{i,l}t_{i+3,0}-t_{i,0}t_{i+3,l}\qquad & i\in\{1,2,3\}.\\ \end{align*} \end{thm} \begin{proof} Using {\small\verb+Versal Deformations+}, we can calculate that the lowest order terms of the obstruction equations are exactly the quadrics in \eqref{eqn:tc1} and \eqref{eqn:tc2}. The tangent cone is certainly contained in the subscheme $Z$ cut out by these equations. Using primary decomposition in {\small\verb+Macaulay2+}, we see that $Z$ decomposes into the four components $Z_{97}$, $Z_{99}$, $Z_{98}^1$, and $Z_{98}^2$, which have the stated dimensions. Now, by Proposition \ref{prop:srdegen}, we know that $[\mathbb{P}(T_8'*\Delta_0)]$ lies on $B_{97}$, $B_{98}$, and $B_{99}$. Thus, the tangent cone at this point must have components of dimensions $97$, $98$, and $99$.
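A quick consistency check on these dimension counts is possible using only the standard fact that the locus of $p\times q$ matrices of rank at most one has dimension $p+q-1$. In the Python sketch below, the inferred common ambient dimension $107$ is our own bookkeeping rather than a number taken from the computations; the point is that the stated dimensions $97$ and $99$ differ by exactly the difference in codimension of the two determinantal loci inside the $18$ coordinates $t_{i,j}$:

```python
# Dimension of the affine cone over the Segre variety P^{p-1} x P^{q-1},
# i.e. of the locus of p x q matrices of rank <= 1.
def rank_one_dim(p, q):
    return p + q - 1

# Codimension of each determinantal locus inside the 18 coordinates t_{i,j}
# (all remaining tangent directions are unconstrained).
codim_Z97 = 18 - rank_one_dim(3, 6)  # 2x2 minors of the 3x6 matrix
codim_Z99 = 18 - rank_one_dim(2, 9)  # 2x2 minors of the 2x9 matrix
assert (codim_Z97, codim_Z99) == (10, 8)

# Both loci sit in the same tangent space, so the stated dimensions 97 and 99
# must point to the same ambient dimension -- here 107 in both cases.
assert 97 + codim_Z97 == 99 + codim_Z99 == 107
```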
$Z_{98}^1$ and $Z_{98}^2$ are indistinguishable modulo a $\mathbb{Z}_2$ symmetry, so we can conclude that the lowest order terms of the obstruction equations actually cut out the tangent cone. \end{proof} If we ignore the component $B_{98}$, we can even say more about the local structure of $\mathcal{H}_{12}$ at the point $[\mathbb{P}(T_8'*\Delta_0)]$. \begin{thm}\label{thm:uf} In a formally local neighborhood of $[\mathbb{P}(T_8'*\Delta_0)]\in\mathcal{H}_{12}$, the components $B_{97}$ and $B_{99}$ are respectively cut out by the equations for $Z_{97}$ and $Z_{99}$. Over these components, a universal family $\mathcal{U}$ is given by the perturbations \eqref{eq:p1}, \eqref{eq:p2}, and \eqref{eq:p3} after adding linear changes of coordinates to account for trivial deformations. \end{thm} \begin{proof} We claim that the family defined by \eqref{eq:p1}, \eqref{eq:p2}, and \eqref{eq:p3} is flat if we impose the equations for either $Z_{97}$ or $Z_{99}$. Indeed, by \cite[Proposition 6.6]{altmann:09a}, the family defined by \eqref{eq:p1} and \eqref{eq:p2} is flat if we require the vanishing of the $2\times 2$ minors of \begin{align*} \left(\begin{array}{c c c} t_1&t_3&t_5\\ t_4&t_6&t_2 \end{array}\right). \end{align*} The equations cutting out $Z_{97}$ and $Z_{99}$ are two different ways of satisfying this condition. When we add the equation \eqref{eq:p3}, the additional relations are simply Koszul relations and can be lifted trivially. Since this family spans the vector space of first order deformations, the statement of the theorem follows. \end{proof} \section{Toric Fano Threefolds of Degree Twelve}\label{sec:toric} We now locate the position of toric Fano threefolds of degree $12$ with at most Gorenstein singularities in $\mathcal{H}_{12}$. Indeed, let $\mathfrak{tor}_{12}$ be the set of all Gorenstein toric Fano threefolds of degree twelve. There are exactly 135 of these, see \cite{grdb} and \cite{kreuzer}.
\begin{thm}\label{thm:re12} Consider $X\in\mathfrak{tor}_{12}$. Then the point $[X]$ lies exactly on the components of $\mathcal{H}_{12}$ as recorded in Table \ref{table:torictwelve}, where we refer to $X$ by its number in \cite{grdb}. In particular, $X$ always admits an embedded smoothing to a smooth Fano threefold. Furthermore, each component $B$ of $\mathcal{H}_{12}$ is smooth at $[X]$, unless $[X]$ lies on $B_{97}$, $B_{98}$ and $B_{99}$ (and possibly $B_{99}^\dagger$) and $B=B_{98}$, or if $X$ is number 544886 and $B=B_{96}$. \end{thm} \begin{proof} The theorem is proved using case-by-case computer computation. We use {\small\verb+TOPCOM+} to partition $\mathfrak{tor}_{12}$ into three subsets: \begin{enumerate} \item[(i)] Those $X\in\mathfrak{tor}_{12}$ whose moment polytope has a regular unimodular triangulation of the form $T_8*\Delta_0$; \item[(ii)] Those $X\in\mathfrak{tor}_{12}$ whose moment polytope has a regular unimodular triangulation of the form $T_8'*\Delta_0$ but not of the form $T_8*\Delta_0$; \item[(iii)] Those $X\in\mathfrak{tor}_{12}$ whose moment polytope does not have a regular unimodular triangulation of the form $T_8*\Delta_0$ or $T_8'*\Delta_0$. \end{enumerate} If $X$ belongs to set (i), then $X$ is unobstructed and $[X]$ is a smooth point on $B_{98}$, the component corresponding to $V_{12}$. This covers all $X\in\mathfrak{tor}_{12}$ not listed in Table \ref{table:torictwelve}. The set (ii) is a subset of all those $X\in\mathfrak{tor}_{12}$ which are listed in Table \ref{table:torictwelve} as lying on $B_{97}$ or $B_{99}$. It excludes exactly those lying on $B_{96}$ or $B_{99}^\dagger$ and numbers 321879 and 524375. Since such an $X$ has a moment polytope with a regular unimodular triangulation of the form $T_8'*\Delta_0$, it follows from Theorem \ref{thm:tc} that the only possible components of $\mathcal{H}_{12}$ on which $[X]$ can lie are $B_{97}$, $B_{98}$, or $B_{99}$.
Now, using the triangulation to $T_8'*\Delta_0$, we can explicitly find a curve in $\mathcal{H}_{12}$ passing through $[X]$ and $[X_{\mathbf{bp}}]$. Using the local universal family $\mathcal{U}$ over $B_{97}$ and $B_{99}$ from Theorem \ref{thm:uf}, it turns out that $X$ always appears as a fiber of $\mathcal{U}$ defined by polynomials (instead of power series). This allows us to determine exactly on which of the components $B_{97}$ and $B_{99}$ the point $[X]$ lies, and using the local equations for these components, whether that component is smooth at $[X]$. Computations show that we in fact always have smoothness in these cases. For such cases, it remains to be seen whether $[X]$ also lies on $B_{98}$ (and whether $[X]$ is a smooth point of that component). Since $[X]$ is a smooth point on $B_{97}$ and/or $B_{99}$, the components of the tangent cone corresponding to $B_{97}$ and/or $B_{99}$ must be cut out by equations whose lowest order terms are linear. Suppose that $[X]$ does not lie on $B_{98}$. Then the ideal of the tangent cone will be generated by equations with quadratic lowest order terms if $[X]$ lies on both $B_{97}$ and $B_{99}$, and by equations with linear lowest order terms otherwise. Thus, in such cases, the tangent cone is cut out by the lowest order obstruction equations. We may now proceed as follows. First, we use {\small\verb+Versal Deformations+} to compute the lowest order terms of the obstruction equations of $X$, and decompose the scheme $Z$ cut out by these equations into irreducible components. By the above argument, if this decomposition includes anything but smooth components of dimensions $97$ and $99$, $[X]$ must lie on $B_{98}$. The claim regarding the smoothness of $B_{98}$ follows in the appropriate cases from the fact that, in these cases, the additional part of $Z$ a posteriori consists of a single smooth $98$-dimensional component. \begin{ex}Let $X$ be the toric variety number 5953.
Then the ideal of $X$ is generated by the ten binomials \begin{align*} x_2x_6-y_0x_1,\qquad x_1x_3-y_0x_2,\qquad x_2x_4-y_0x_3,\\ x_3x_5-y_0x_4,\qquad x_4x_6-y_0x_5,\qquad x_1x_5-y_0x_6,\\ x_1x_4-y_0^2,\ \ \ \qquad x_2x_5-y_0^2,\ \ \ \qquad x_3x_6-y_0^2,\\ y_1y_2-y_0x_1. \end{align*} Note that the lead monomials of these binomials are just the generators of the Stanley-Reisner ideal of $T_8'*\Delta_0$. Using the universal family from Theorem \ref{thm:uf}, the point $[X]$ locally has coordinates $t_{i,0}=-1$ for $i=1,\ldots,6$ and $c_1=-1$, with all other coordinates vanishing. The $3\times 6$ and $2\times 9$ matrices appearing in Theorem \ref{thm:tc} both have rank $1$ when evaluated at this point. Thus, $[X]$ is a smooth point on both $B_{97}$ and $B_{99}$. The lowest order terms of the obstruction equations for $X$ have the form $t_1t_3$, $t_1t_4$, $t_2t_5$, $t_2t_6$; the scheme they cut out decomposes into the components $V(t_1,t_2)$ of dimension $98$, $V(t_1,t_5,t_6)$ and $V(t_2,t_3,t_4)$ of dimensions $98$, and $V(t_3,t_4,t_5,t_6)$ of dimension $97$. Thus, $[X]$ must lie on $B_{98}$ as well. \end{ex} The above two techniques deal with almost all $X\in\mathfrak{tor}_{12}$; for the set (iii) we are left with 10 exceptional cases which we deal with as described in Section \ref{sec:def2}. Once we have identified the number and dimension of components of the tangent cone at $[X]$ using obstruction calculus and one-parameter deformations, we still need to match these components to $B_{96}$, $B_{97}$, $B_{98}$, $B_{99}$, and $B_{99}^\dagger$, that is, step (v) from Section \ref{sec:def2}. For $B_{96}$ this is done by finding explicit degenerations of $V_{12,2,6}$ by using the rolling factors description in Proposition \ref{prop:caneq}. Likewise, for $B_{99}^\dagger$ we also use the rolling factors format to find explicit degenerations. For $B_{97}$, $B_{98}$, $B_{99}$, we find explicit degenerations to $\mathbb{P}(T_8*\Delta_0)$ and/or $\mathbb{P}(T_8'*\Delta_0)$.
The cases 146786, 444999, 544855, and 544887 are dealt with in a straightforward manner. The remaining six cases are all more difficult, since their tangent cones appear to contain embedded components. \begin{ex} Let $X$ be the toric variety number 147467. Then $X$ is a divisor of type $4D-3F$ on a scroll of type $(2,2,1,0)$. If the scroll is given by the maximal minors of $$ M=\left(\begin{array}{c c c c c} x_0&x_1&y_0&y_1&z_0\\ x_1&x_2&y_1&y_2&z_1\\ \end{array} \right) $$ then $X$ is cut out by rolling the cubic $f_0=x_0z_0w-y_0^2y_1$ three times to $f_1,f_2,f_3$. The dimension of $T_X^1$ is $22$, that of $H^0(\mathcal{N}_X)$ is $99$, and the dimension of $T_X^2$ equals $4$. The space $T_X^1$ can be decomposed into the direct sum of a $20$-dimensional space $T_{\mathrm{roll}}^1$ consisting of pure rolling factors perturbations of the cubics $f_i$, a one-dimensional space $T_{\mathrm{scroll}}^1$ generated by the perturbation $$ \left(\begin{array}{c c c c c} x_0&x_1&y_0&y_1&z_0\\ x_1&x_2&y_1-t_1w&y_2&z_1\\ \end{array} \right) $$ of $M$, and a one-dimensional space generated by a certain non-scrollar perturbation of the minors of $M$, keeping the $f_i$ constant. A basis of $T_X^1$ may be chosen such that the lowest order terms\footnote{We ignore one element of $T^2$ which, at least up to order $8$, does not contribute an obstruction equation.} of the obstruction equations are $t_1^2t_3$, $t_1^2t_4$, and $t_1t_2$, where $t_1$ is as above, $t_2$ is a parameter for the non-scrollar perturbation, and $t_3,t_4$ are parameters for pure rolling factors deformations. The scheme $Z$ in the tangent space of $\mathcal{H}_{12}$ cut out by these equations decomposes into the components $Z_{98}=V(t_1)$ of dimension $98$, $Z_{96}=V(t_2,t_3,t_4)$ of dimension $96$, and an embedded component $Z_{97}=V(t_2,t_1^2)$ of dimension $97$. The deformation in the $t_1$ direction described above deforms the scroll to one of type $(2,1,1,1)$, which gives a deformation to $V_{12,2,6}$.
Thus, $Z_{96}$ is a component of the tangent cone corresponding to $B_{96}$. Similarly, a deformation in the $t_2$ direction takes us to a scheme $X'$ which degenerates to $\mathbb{P}(T_8*\Delta_0)$, so $Z_{98}$ corresponds to $B_{98}$. We need to check that the component $Z_{97}$ does not correspond to $B_{97}$. Since $Z_{97}$ is embedded in $Z_{98}$, there is no obvious way to deform onto a $97$-dimensional component. In fact, by the discussion below, we will see that $Z_{97}$ cannot correspond to a non-embedded $97$-dimensional component of $\mathcal{H}_{12}$. \end{ex} Let $S\subset\mathcal{H}_{12}$ be the closure of the set of all points corresponding to divisors of type $4D-3F$ on a scroll of type $(2,2,1,0)$. From the above example, it follows that $\dim S = 97$. \begin{lemma} If $\eta$ is a general point of $S$, then $\dim T_{\eta}\mathcal{H}_{12}=99$. \end{lemma} \begin{proof} Let $Y$ correspond to $\eta$, i.e. $[Y]=\eta$. By the above example, $\dim T_{\eta}\mathcal{H}_{12}\leq 99$. Now, consider the subscheme $\mathcal{Y}$ of $\mathbb{P}^{16}$ defined by the maximal minors of the matrix $M$ above, along with cubics $f_0,\ldots,f_3$ gotten by rolling factors, where $$ f_0=z_0^3+s_1x_0^2+s_2x_0x_1+s_3x_0x_2+s_4x_0y_0+s_5x_0y_1+s_6y_0y_0+s_7y_0y_1+s_8y_0y_2. $$ Here, $x_0,\ldots,w,s_1,\ldots,s_8$ are coordinates on $\mathbb{P}^{16}$. Then $Y$ is a codimension $8$ linear section of $\mathcal{Y}$. A computer calculation shows that $T_\mathcal{Y}^1$ is two-dimensional. Thus, there are two deformation directions in $T_Y^1$ which are not of pure rolling factor type. Hence, $\dim T_{\eta}\mathcal{H}_{12}\geq \dim S+2=99$. \end{proof} \begin{prop}Let $X$ be the toric Fano threefold 147467. The component $Z_{97}$ from the example above cannot correspond to a $97$-dimensional component of $\mathcal{H}_{12}$ which is not embedded. \end{prop} \begin{proof} Suppose $Z_{97}$ corresponds to a non-embedded 97-dimensional component $B\subset\mathcal{H}_{12}$.
The scheme $Z_{98}$ is smooth, so it follows that $[X]$ is a smooth point on $B_{98}$. Now, a general point $\eta\in S$ does not lie on $B_{96}$, and thus must lie on either $B$ or $B_{98}$ or both. But $\dim T_{\eta} \mathcal{H}_{12}=99$ by the above lemma, and if $\eta\in B$, $\dim T_{\eta} B\leq 98$, and if $\eta\in B_{98}$, $\dim T_{\eta} B_{98}=98$. Thus, $\eta\in B\cap B_{98}$, so we have $S\subset B\cap B_{98}$. But $B_\mathrm{red}=S$ for dimension reasons, which means $B$ is embedded in $B_{98}$, a contradiction. \end{proof} This concludes the discussion of the case 147467. The cases 446913 and 544886 are almost identical, also being divisors on a scroll of type $(2,2,1,0)$. For the remaining three cases (321879, 524375, and 547426) the lowest order terms of the obstruction equations give rise to embedded components of dimensions $97$ and less. We must show that these cases do not lie on $B_{96}$. \begin{ex} Let $X$ be the toric variety number 524375. Then $X$ is a divisor of type $4D-3F$ on a scroll $S$ of type $(3,2,0,0)$. If the scroll is given by the maximal minors of $$ M=\left(\begin{array}{c c c c c} x_0&x_1&x_2&y_0&y_1\\ x_1&x_2&x_3&y_1&y_2\\ \end{array} \right) $$ then $X$ is cut out by rolling the cubic $f_0=x_0zw-y_0^3$ three times to $f_1,f_2,f_3$. Now, suppose that $X$ has an embedded smoothing to $V_{12,2,6}$, which is a divisor on a scroll of type $(2,1,1,1)$. Then the deformation of $X$ corresponds to a deformation of the scroll $S$ to a scroll $S'$ of type $(2,1,1,1)$. Indeed the perturbations of the quadrics cutting out $S$ must cut out a scroll of type $(2,1,1,1)$. Now, a versal deformation of $S$ is given by linear perturbations of the $x_1$, $x_2$, and $y_1$ entries of the top row of $M$. In order to deform to a scroll of type $(2,1,1,1)$, either the $x_1$ or $x_2$ entry must be perturbed nontrivially. But this makes it impossible to roll $f_0$ (or a perturbation thereof) three times. 
Thus, $X$ does not deform to $V_{12,2,6}$ and $[X]$ does not lie on $B_{96}$. \end{ex} Similar arguments may be made for 321879 and 547426, which lie on scrolls of type $(3,2,0,0)$ and $(4,1,0,0)$, respectively. This completes the proof of the theorem. \end{proof}
A Contrast in Paths to Achievement: Daily Routines and Social Lives November 7, 2016 in 1950s-1960s, Social history
At Fisk University many female students arose early to eat breakfast, apply tons of makeup, arrange their hair in neatly manicured styles, and head off to class, most in the highest of heeled shoes! Very few did not wear heels all day long. Dress codes—self-mandated, I supposed—were very strict, and during pledge season very, very strict. A single run in hosiery would send a pledgee to scrub the cafeteria floor with a toothbrush, I was told by the returning students. Guys were well-groomed, also, as they were "decked" in nice sweaters, highly polished shoes and creased pants. If students were not careful to use the proper silverware in the proper order, they were privately ridiculed. In contrast, UK students were more relaxed. All females did wear dark brown, shiny weejuns with tassels the first year I was there. Seemed to be a uniform foot dress code, but the regimen ended there. I had to adjust to both scenes as my mother had to purchase a new shoe wardrobe for me, since none of those shoes were a scene at the high school to which I had gone. I was not privy to talk of any existing hazing incidents, but then I wouldn't have been because of my minority status. In all fairness, UK was a large, sprawled-out campus which was not conducive to heels, whereas Fisk was more compact to accommodate the kinds of shoes that almost all of the females wore. But thank goodness, both were flat terrains, unlike Western Kentucky University in my hometown, which is nothing but one big hill. I huffed and puffed my way through graduate classes there and longed for the days that I had the flat strolls to class at both Fisk and UK. (I am digressing, I know, but still in keeping with being a Kentucky Woman during this era.)
E. A. Diddle Arena, Western Kentucky University
Discussing a different Kentucky college such as Western Kentucky University painfully reminds me of my home birthplace, right on the spot where the Diddle Arena Sports Complex now exists, and how WKU had Urban Renewal come through and practically just take the homes of African-Americans without adequate compensation or time to even shop around for commensurate housing. My church was leveled and my aging grandmother and other aging relatives had to move. They fought, but to no avail. (See more on this issue at the Notable Kentucky African Americans database – and see a museum poster about Jonesville below.) As a matter of fact, those memories, in addition to my mother's desire for me to attend UK, drove me away from WKU. Mom was a Tennessee native who married my dad (a Bowling Green native) and moved to Bowling Green. Back to student dress. All of the other clothing on both campuses was pretty much the same as today, with skirts, sweaters, blouses, shirts and pants, except that at Fisk, coats and sweaters for females were often fur-trimmed. The preceding differences speak volumes for the school cultures during that era, and any reader should make the determination as to why, keeping in mind that one was a private institution whose students' ancestors were removed by approximately eight decades from the slavery years, while the other consisted of students whose forebears had always been free. The latter had less to prove. Tags: fair housing, fashion, Fisk University, Jonesville KY, University of Kentucky, urban development, Western Kentucky University
A Tale of Two Universities October 22, 2016 in 1950s-1960s, Primary source, Social history
Kith of the Famous, Student-Identified Notables Before the regular routine of classes began and immediately after we transfer students had met our roommates and settled in, small gatherings of returning student hall mates toured us through the Oval, the Fisk yearbook. They gave us a brief run-down of just who was who on campus. On my floor, just two doors down were Jackie Barrow, daughter of Joe Louis (Barrow) "the brown bomber," legendary boxer who was world heavyweight champion from 1937 to 1949. She was a quiet freckled-face student who pretty much stayed to herself. Next door was Valerie Grant, niece of the triple threat entertainer, Earl Grant, talented as a vocalist, organist, and pianist. Valerie was a petite really small, student who spent most of her time with her boyfriend, my "homeboy." (People from the same town were referred to and greeted as "homes," "homey," or "homeboy," or "home girl," as a way of feeling less isolated.) He was really responsible for my getting interested in the social life at Fisk that I was missing at UK, having shown me his yearbook the year before I decided to transfer. Then they pointed out others on other floors such as Judith Jamison a celebrated American dancer and now Artistic Director of Alvin Ailey American Dance Theater. Across the driveway in the WEB Dubois male dormitory was Bobby McCans, great-grandson of WEB Dubois (who needs no description).They pointed out all of the leaders in the fraternities and sororities and whom they dated. They discussed the teachers one should avoid; especially prejudiced white ones who concurrently dualized the teaching between Fisk and Vanderbilt just down the boulevard. They seemed to know which students were from families of means and which ones weren't. Miss Fisk Material One picture of particular interest was that of Miss Fisk and her court. 
In that yearbook, Miss Fisk was a statuesque, chocolate-and-crème, wavy-haired, very attractive young lady, but my particular tour guides mentioned that she was not Miss Fisk material. Huh? Well, they explained that she was not light-skinned enough, and her hair was not long or straight enough. Since I had been at UK, where all of the females on the various courts reflecting a degree of beauty looked like that because they were white, I thought that almost everyone there would make it on a Fisk court. Remember, the "Black is Beautiful," "I'm proud" mentality was still a year or two away when I was at Fisk. (Note: Before that, even most black schools of all types tended to exhibit the same mentality of trying to mimic and/or appease their former oppressors in many ways. And curiously enough, in many instances, if even a brown-skinned "Negro" called a darker one "black," he or she had a fight on their hands.) Kudos to the Black Pride movement, but still more curious was that after the realization of that movement set in, some lighter-skinned female "Negroes" sometimes had difficulty winning black beauty contests and riding on floats because they were not black enough! Those were the times preceding the rare finding of black dolls for little girls at Christmas, the lack of any black pictures in mainstream magazines other than for sports or entertainment, etc. It was definitely an era when being black was associated with all kinds of negativities, heaven forbid. Back to the yearbook. As I looked further through the Oval of that year, what my tour guides were trying to tell me was corroborated on quite a few other pages, where most of the queens and their courts were light-skinned and had straight hair. Only a very few had escaped those stringent requirements. (Sm.) I surmised that the ones who did survive either were very, very smart, had familial status, or were very wealthy. Someone had hinted earlier that to make it at Fisk, a female had to be rich, high-yellow or very smart.
Not meeting any of those criteria, I am not sure to this day how I fit in as well as I did, but then again, somehow or other, I always seemed to make long-lasting friends in any environment. Maybe it was that I didn't try as hard.

Sara Jane Kramer, UK – "Look Girl"

University of Kentucky's Sara Jane Kramer

Back at my UK college home, I did not see a yearbook until my senior year, as I had no homeboy or homegirl on campus, and yearbooks, therefore, did not seem to be an issue in the all-freshman dormitory in which I dwelled. And since I was not Greek with access to one of their "houses," I was not privy to any yearbook touring. There, in my freshman year, the notables were discussed by name. They were cheerleaders, basketball players, teachers, and Sara Jane Kramer. Sara, or Miss Kramer, as some of her teachers referred to her, was a wisp of a young girl whose picture had been chosen by Look Magazine from the files of entering freshmen across the country as "most photogenic." This gave her quite a status across the UK campus. Females and males alike stared at her, male teachers deferred to her, and the limited media that we had back then, such as radio and newspapers, did endless interviews with her. She was a thin, wispy, rather reticent but friendly model type who never smiled much. She did blink her eyelids almost incessantly, in what I considered a sort of nervousness. One can Google her in the UK Portrait Archives as Kramer, Sara Jane "Look Girl."

Tags: beauty queens, color, Fisk University, University of Kentucky
No Comments »

Taking UK to Fisk
October 18, 2016 in 1950s-1960s, Political history, Social history

The experience of being a Kentucky black female transfer student from a large, predominantly white public institution to a small, private one of color during the civil rights era afforded me many unique and multifaceted perspectives.
Founding Similarities

Fisk University, the higher education institution to which I transferred, and the one from whence I came, the University of Kentucky, were decidedly different in many areas, but certainly not all. Both were founded during approximately the same period in history: UK in 1865 by John Bryan Bowman, and Fisk, also in 1865, by John Ogden, Reverend Erastus Cravath, and Reverend Edward Smith. It was named after General Clinton B. Fisk of the Tennessee Freedmen's Bureau. Fisk has remained a small private school, while UK has grown by leaps and bounds, remaining large and public. Both, then, as institutions of higher learning, were highly ranked in their respective categories. Both had produced historically notable graduates, and both had achieved the status of university over the years.

Thenceforward, Characteristics Begin to Differ

In my case, starting with the application form for each institution, remarkable differences stood out. UK presented a regular run-of-the-mill item. Despite their strong emphasis on writing within the curriculum, not one essay was required, as they are at institutions of today. (My younger daughter applied to one institution which required six essays by the time one finished the a's and b's as additional segments!) Fisk, on the other hand, wanted a listing of how many telephones and how many cars one had at his or her home. Seriously! I am not sure why that was a request on the application form. No essay was required there either. Whereas many of UK's notables centered on basketball sports figures, those at Fisk tended to be movers and shakers in civil rights history. Of course, there were notables outside of these realms, also.

Fisk, a Hotbed of Civil Rights Issues, etc.

Fisk University, located in Nashville, Tennessee, was a hotbed of issues, protests, and activities during the 1950s and 1960s, with its students historically recognized for fighting injustices.
As a Kentucky female of color arriving around 1962, I had missed, by one year, the infamous lunch counter sit-ins that landed many females in my age group in jail. But the fumes were still hovering: groups of "Negroes," such as now-Congressman John Lewis, who had been severely beaten several times during such integrative excursions as "Bloody Sunday," went out on an almost daily basis with other young, black male students to integrate eating counters, restaurants, and other facilities in Nashville as they went about the business of breaking down racial barriers and challenging city inequities. Arriving when I did in civil rights history afforded me the chance to share the small campus with young blacks other than Congressman Lewis. One such student was the late Ronald Walters, a leading scholar of the problems of race and politics and the author of 13 books, one of which mapped a way to the White House for the first-ever black president, whenever that should occur. Dr. Walters later became director of the African American Leadership Institute at the University of Maryland and was often quoted and interviewed on national television. He was instrumental in the establishment of the Congressional Black Caucus. The list of "notable" notables will be continued later in this blog. Someone once remarked that "perhaps no single institution has played so central a role as Fisk University in the shaping of black learning and culture in America." I agree.
Tags: Fisk University, HBCU, higher education
No Comments »

by Danielle Gabbard

Oral history interviews with Black women in Kentucky – Part 2
April 16, 2015 in 1940s-1950s, 1950s-1960s, 1960s-1970s, Oral history, Primary source, Social history

While listening to oral histories featuring Black women in Kentucky, I've gotten to hear some amazing stories directly from the women who lived them: women who marched in demonstrations in Lexington during the 1960s, women who taught at integrated schools, women who faced discrimination daily no matter what job they held. It is so important that these stories not only be saved, but also passed on. So I'd like to share a few with you.

Julia Etta Lewis (1932-1998), leader of the Lexington chapter of the Congress of Racial Equality (CORE)

Marilyn Gaye's 1978 interview by the great historian George Wright (now President at Prairie View A&M University) is one of my favorites out of the entire collection. Gaye grew up in Lexington and was a teenager during the civil rights movement. In her interview she talks about what life was like as a child living in Lexington under segregation, describing her experiences of having to sit in the balcony of the Ben Ali Theater to see shows. She talks about how she became involved in civil rights demonstrations in Lexington and describes the experience of a march from the very beginning: waiting in a basement for a phone call from Julia Lewis, the head of Lexington's chapter of CORE, to tell them it was time to go. She describes what it was like to march through downtown Lexington and talks about the songs they sang as they marched. She discusses the reactions of white Lexingtonians to the march, and what the demonstration accomplished. I think this is one of my favorite interviews because the perspective it offers is so uncommon.
Of all of the interviews in this collection, there are actually very few with women who actively participated in the civil rights movement in Kentucky, and having done so as a teenage girl makes Marilyn Gaye even more unusual.

Rosetta Beatty during her interview with Joan Brannon on February 2, 2009

I found the Rosetta Beatty interview interesting mainly because of her detailed descriptions of the East End area of Lexington during the 1960s. The East End encompasses an area north and east of downtown Lexington, between Main Street and Loudon Avenue. Beatty describes many of the streets in the neighborhood and lists the businesses, churches, and restaurants along each street, including Shiloh Baptist Church, Club Hurricane, and the Lyric Theater. Listening to her describe the neighborhood gives you such a clear picture of the area that you feel like you're walking along it with her. She talks about which businesses were owned by African Americans, and also describes the relationships between neighbors on Elm Tree Lane, stating that everyone looked out for each other's children.

Lillian Buntin during her interview with Joan Brannon on April 9, 2009

Like Rosetta Beatty, Lillian Buntin grew up in the East End area of Lexington. Her interview also provides a great description of the neighborhood, focusing mainly on Ohio Street, where Buntin lived as a child, as well as local churches, restaurants, drugstores, and the Lyric Theater. Along with her descriptions of the area, Buntin's interview is also interesting because she talks about attending a segregated school as a child before becoming a teacher at an integrated school. Her interview provides a personal account of not only what it was like to be a student under segregation, but also what it was like to be a teacher throughout the changes of integration in Lexington, including discussion of her relationships with students, parents, principals, and her fellow teachers.

Patricia R. Laine talks with Emily Parker about her family history, including her ancestors who were once slaves in Kentucky. Her interview (August 6, 1986) also provides an interesting look at the role of the church in the Black community and how it has changed since her childhood in the 1940s. One of the most compelling parts of Laine's interview was her stories of the discrimination she faced, both in her job as a domestic worker for a white family near Midway and throughout her employment at the National Institute of Mental Health Clinical Research Center (then known as "The Narcotics Farm" or "Narco," now called the Federal Medical Center, Lexington). Narco housed both prisoners and self-committed patients attempting to overcome drug addictions. Her discussion of the treatment of Black employees is eye-opening, and Laine says that because there was also gender discrimination, Black women received the fewest promotions. Her description of the treatment of the patients is also fascinating, especially when she discusses the facility becoming a federal prison. Laine also discusses the impact of the civil rights movement in Lexington, stating that racism has not been reduced, it has only become more covert, and that many Black businesses closed because of desegregation.

Mrs. Charles Chenault Jones was the first African American teacher at Arlington Elementary School in Lexington after integration. During her interview she describes what it was like being the only Black person at PTA meetings, and discusses her interactions with school staff, students, and parents. She talks about witnessing discrimination against the Black students. Jones also discusses the effects of integration on Lexington businesses, neighborhoods, and, most interestingly, attitudes in the Black community. She gives her opinion on the decline of ministers' and churches' involvement in the community since her childhood days of "basket meetings" in Madison County.
These are not the only interesting interviews in this collection, just a few I personally enjoyed or considered particularly important. There are many more in the collection worth checking out that provide different perspectives and experiences.

Tags: basket meetings, Bluegrass-Aspendale housing projects, Constitution Elementary School, CORE, Dunbar High School, East End of Lexington, George C. Wright, Greystone Hotel, Julia Etta Lewis, Kentucky State College, Lillian Buntin, Lyric Theatre, Midway, Narcotics Farm, Ohio Street in Lexington, Patricia Laine, Richmond, Rosetta Beatty, Shiloh Baptist Church
No Comments »

Oral history interviews with Black women in Kentucky
March 10, 2015 in 1920s-30s, 1940s-1950s, 1950s-1960s, 1960s-1970s, Oral history, Primary source, Social history

While indexing interviews for the project on oral histories featuring Black women in Kentucky, it's hard not to become fascinated with a particular person or story. While every interview is, of course, valuable in its own right, some interviews are more detailed than others, and some interviewees have interesting perspectives or personal stories to add. These are the interviews I found particularly interesting while indexing the first batch of oral histories:

Dorothy Perkins during her interview with Joan Brannon in 2009

Dorothy Perkins grew up in Lexington during the 1930s and '40s. One of my favorite things about this interview is that she describes the neighborhoods of Lexington at this time in great detail, including businesses, schools, and churches once located in the East End of Lexington. She not only paints a vivid picture of Deweese Street in its heyday, but also describes the fashion and clothing styles that were popular at the time. Perkins gives great detail in her description of Lexington theaters and what it felt like as a child only being allowed to watch shows from the balcony.
Perkins' life was full of interesting stories, including the one about being expelled from school for fighting another girl by attacking her with her fingernails.

Valinda Livingston in an interview with Brannon in 2009

Valinda Livingston grew up in the East End of Lexington and discusses attending both Constitution Elementary School and Shiloh Baptist Church in the neighborhood. Livingston describes Lexington during her childhood in great detail, including parks, restaurants, drugstores, and funeral homes. She also talks about being warned to stay away from Deweese Street, which makes for an interesting comparison with Dorothy Perkins' description of the area. Livingston attended college at Kentucky State before becoming one of the first African American students at the University of Kentucky when integration began. She became a teacher and later principal at Russell Elementary School. Livingston provides a great deal of information on the founding of Russell School, her time as principal, and the closing of the school.

Mattie Jackson was a teacher at George Washington Carver School from 1914 to 1960. In her interview with Edward Owens, Jackson gives a first-hand account of the experiences of an African American teacher working in schools prior to integration. She discusses the conditions in all-Black schools, from the lack of equipment to the lower salaries for Black teachers. She talks about the students' reactions to White teachers at the school, including a story about a music teacher who made racist comments to the students.

Wilhelmina Hunter was the wife of Dr. Bush Hunter, an African American doctor in Lexington. Mrs. Hunter grew up in Boston, Massachusetts, where she studied business in college before moving to Washington, D.C. to work for the IRS. Hunter talks about the discrimination she and her family faced when they moved to Lexington, and discusses her involvement in organizations dedicated to improving conditions for Blacks in Lexington.
Throughout the interview Hunter paints a picture of race relations in Lexington from the perspective of someone who not only lived it, but who had also experienced different ways of life in Boston and Washington, D.C. An interesting side note from the interview: Mrs. Hunter mentions her relationships with the famous entertainers Duke Ellington and Marian Anderson, both of whom gave performances in her home in Lexington.

Elizabeth Harris describes her childhood community and discusses the close-knit relationships between neighbors, who she says often disciplined each other's children. I feel this interview is unique among most of the others in this collection because Harris expresses an opinion that may often be felt but is not often mentioned in discussions on race relations: opposition to integration. She also discusses what happened to Black businesses in Lexington after the civil rights movement of the 1960s. One of the most interesting parts of the interview for me was not only hearing about Harris' experiences with segregation in movie theaters, hotels, and other Lexington businesses, but also her story about refusing to sit at the back of a bus.

As I said, these are not the only interesting interviews in this collection (nor even the only interesting parts of these particular interviews). Each woman interviewed offers a unique perspective on childhood, schools (both all-Black and integrated), race relations in Lexington, discrimination, and her own role in the civil rights movement, from the perspective of a Black woman in Kentucky.

Tags: Booker T. Washington Elementary School, Chandler Normal School, Charles Young Park, Constitution Elementary School, Deweese Street, Douglass Park, Duncan Park, George Washington Carver Elementary School, Kentucky State College, Lexington Committee on Religion and Human Rights, Main Street Baptist Church, Paul Laurence Dunbar High School, Pralltown, Russell High School, Urban League, William Wells Brown Elementary School
1 Comment »

Indexing Oral History Interviews of Black Women in Lexington
November 11, 2014 in Oral history, Social history

Recently, Randolph Hollingsworth asked if I would be interested in indexing a collection of oral history interviews from the "Blacks in Lexington Oral History Project." The interviews selected for Randolph's project focus on women in the community from a variety of different backgrounds, and many discuss conditions in Lexington before and after the Civil Rights Movement. One of the goals of the project is to provide greater access to the stories these women have to tell: stories that were often overlooked by traditional mainstream media sources. (For more information on this project, check out Randolph's blog post here.)

Indexing is the process of making an oral history interview more accessible to users through the addition of searchable keywords, subjects, summaries, and other information. This enables users to locate points of interest within an interview, saving them the time it would take to listen to the interview in its entirety. Indexers use OHMS, the Oral History Metadata Synchronizer, a system which allows textual information to be assigned to audio or video recordings at a fraction of the cost of creating transcripts. (For more information on the OHMS system please visit www.OralHistoryOnline.org.)

To index an interview, an indexer listens to the interview, breaking it down into 5-10 minute segments based on common topics. Each segment is given a title based on the topics covered.
Within these segments, keywords and subjects are chosen based upon the topics covered in the segment and upon the interviewee's own words. The indexer writes a summary for each segment, informing users of the content of each section. Additional information, such as GPS coordinates, links to other websites, and partial transcripts, may also be added, depending on the needs of the project.

Beginning to index a new oral history project is usually the most difficult part. It takes time to learn and understand the purpose, tone, and topic of the project. Sometimes the best place to start is by listening to an interview to get a feel for the types of questions asked, the main subjects of the interviews, and the pace or structure of the interview. Though these can vary between interviews within the same collection, the interviews generally follow a similar pattern, and it can be useful to listen to one interview to get a feel for the entire collection. While listening to the interview I like to write down important words or phrases that the interviewee uses, the main topics of the interview, and any keywords or subjects that I think may repeat throughout the collection. From this list I can begin creating my keywords thesaurus and my subjects thesaurus. The keywords thesaurus is generally less formal and is made up of names, places, and other topics mentioned in the interview. The subjects thesaurus is made up of Library of Congress-approved subject headings. These are generally broader and cover the overarching topics within the interview. As I listen to more interviews within the collection, I add the new keywords and subjects for each interview to the list, while also checking each interview's content against the existing list. This ensures that I am using the same version of a word throughout all interviews within a collection, maintaining consistency for users. For the Blacks in Lexington project I also added GPS coordinates to many of the segments.
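The bookkeeping described above can be sketched in a few lines of code. This is only a hypothetical illustration of the idea (the field names, the sample segment, and the sample thesaurus entries are my own assumptions, not the actual OHMS schema or this project's data):

```python
# A controlled keyword thesaurus built up while indexing a collection:
# it maps variant forms to a single preferred form so that the same
# word is used consistently across all interviews.
thesaurus = {
    "Lyric Theatre": "Lyric Theater",
    "Lyric Theater": "Lyric Theater",
    "Deweese St.": "Deweese Street",
    "Deweese Street": "Deweese Street",
}

def normalize_keywords(keywords):
    """Replace each keyword with its preferred form; collect unknowns
    so the indexer can review them and grow the thesaurus."""
    normalized, unknown = [], []
    for kw in keywords:
        if kw in thesaurus:
            normalized.append(thesaurus[kw])
        else:
            unknown.append(kw)
    return normalized, unknown

# One 5-10 minute segment of an interview, as described in the text.
# The GPS pair is an illustrative placeholder, not a verified location.
segment = {
    "title": "Businesses along Deweese Street",
    "start_minute": 12,
    "end_minute": 19,
    "keywords": ["Lyric Theatre", "Deweese St.", "Club Hurricane"],
    "summary": "Description of East End businesses in their heyday.",
    "gps": (38.05, -84.49),
}

keywords, needs_review = normalize_keywords(segment["keywords"])
print(keywords)       # preferred, consistent forms
print(needs_review)   # candidates for new thesaurus entries
```

The point of the sketch is the consistency check: every segment's keywords pass through the same thesaurus, so "Lyric Theatre" and "Lyric Theater" can never both end up in the index.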
These coordinates allow users to see a map of the locations mentioned within the segment, for instance the Lyric Theater or the Charles Young Community Center. This gives users a better sense of the community discussed within the interview. This collection in particular has been challenging in regard to locations because the landscape of Lexington, especially the East End area, has changed greatly over the years. In a future post here I will be chronicling these challenges and my efforts to find maps depicting the streets of Lexington from the 1940s to the present. As this project progresses I am learning more about my hometown of Lexington, as well as some of the people who have lived and made history here, and I can't wait to share what I've learned with everyone at the KYWCRH.

by Randolph Hollingsworth

Race Matters Training for Fayette County RCCW Initiative
November 7, 2014 in Historiography, Intellectual history, Political history, Social history

As part of the training sessions for the Race, Community and Child Welfare (RCCW) Fayette County initiative (see more at the RCCW website), I presented on the "History of Racism and Anti-Racist Activism in Lexington and Fayette County, Kentucky." The goal is to provide an historical, and local, context for the understanding of racism here in our community. This historical context should help to explain why the problem of racism is so deeply ingrained in our cultures and institutions. As anti-racist practitioners we need to be patient and persistent, since racism has been an integral part of the creation and growth of Lexington and Fayette County as much as it is the reason for violence, inequities, and apathy. Here is my speech (History of Racism and Anti-Racism in Fayette County) for the participants in the training. I present it here for you to download and read. I invite you to reply and comment on this essay and on how I have presented the history of Lexington and Fayette County.
Tags: Desegregation, Lexington, racism, Segregation
No Comments »

New Project in the Works – Indexing Oral History Interviews of Black Women in Lexington
July 25, 2014 in 1920s-30s, 1940s-1950s, 1950s-1960s, 1960s-1970s, Economic history, Oral history, Political history, Primary source, Religious history, Social history

Good news alert! The Kentucky Oral History Commission of the Kentucky Historical Society has awarded us funding for a project to index the oral history interviews from the "Blacks in Lexington Oral History Project." The interviews will be placed in the Oral History Metadata Synchronizer (OHMS) of the Oral History Collection Management System here at the University of Kentucky. After the interviews are indexed in OHMS, geo-tagging linked to the digitized segments of the oral histories will provide an important digital humanities geo-spatial component to these resources. They will be viewable via the ExploreUK Kentucky Digital Library.

About the interviews

The women who contributed their voices to the "Blacks in Lexington Oral History Project" collection come from all walks of life. Their ages and backgrounds are highly diverse, providing a sort of prototype for a good micro-history of Kentucky in the twentieth century. The interviewers for this collection are highly regarded educators and oral historians whose work spans the 1970s and '80s up to the present day. The oral historians, Ann Grundy, Edward Owens, Emily Parker, Gerald Smith, and George C. Wright, are respected local community activists, scholars, and authors. The resulting interviews are nuanced in ways that evoke strong passion for the role of place and community in history, and the questions are based in a strong historiographical methodology worth raising up for others to learn from.
Similar to other twentieth-century local history collections, this series has a wide scope of perspectives and serves as a good sampling of the many different types of backgrounds and occupations of the interviewees. However, Lexington's history has traditionally been written from the perspective of its men, or at least as a male-dominated political history. This project will use selected interviews from this collection to provide access to a unique and valuable overview of twentieth-century Lexington from a female perspective. Almost all of the women in this collection were wage earners, and a solid majority of the interview time is voiced by women professionals: educators, clerks, administrators and managers, librarians, nurses and dentists, social workers and politicians. Several women represent the entrepreneurs and technical workers who fuel a thriving local economy: beauticians, cooks, housekeepers, and even a "dorm mother" at a residence hall at UK. A few well-to-do women are identified as homemakers, and a couple of women explain their views on Lexington from their work as pastors' wives. This collection of interviews is an important component of statewide documentation of the Civil Rights Movement in Kentucky. Interviewees in this collection are typically older than those women whose interviews are archived at the Kentucky Historical Society (KHS) and made accessible by the Online Media Database and the Kentucky Educational Television website. By providing greater access to these interviews from Lexingtonians, a more balanced narrative (not just highly publicized events in Louisville) could expand the scope of the evidence presented in published scholarly monographs such as the highly useful book Freedom on the Border: An Oral History of the Civil Rights Movement in Kentucky, edited by Catherine Fosl and Tracy K'Meyer (University Press of Kentucky, 2009).
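The scale of the collection, given in the figures quoted in this post (189 interviews in total, 56 with women, 3,832 interview minutes), can be sanity-checked with a little arithmetic:

```python
# Checking the collection statistics quoted in this post.
total_interviews = 189
women_interviews = 56
women_minutes = 3832

print(round(women_interviews / total_interviews, 2))  # 0.3 -> just under a third
print(round(women_minutes / 60, 1))                   # 63.9 -> almost 64 hours
print(round(women_minutes / women_interviews))        # 68 -> average interview length
```

The numbers hold together: 56 interviews are just under a third of 189, 3,832 minutes is almost 64 hours, and the average works out to about 68 minutes per interview.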
Out of 189 oral history interviews (a total of 194.25 interview hours) in the UK oral history collection "Blacks in Lexington Oral History Project," only 56 were with women. Those interviews, while less than a third of the total collection, made up nearly half of the total interview time (almost 64 hours, or 3,832 interview minutes). The interviews average 68 minutes (from one as short as 10 minutes to one as long as 180 minutes). Only 9 of the interviews' audio recordings are in poor condition.

Women Interviewees and Occupations

Viola Greene, Teacher
Marilyn Gaye, Civil rights activist
Virginia McDonald, Librarian
Alvinia Newell, Dentist
Ann Miller, Teacher
Kay and Reverend Lamont Jones, Pastor's wife
Ella Bosley, unknown
Abby Marlatt, Professor
Lulla Riffe, unknown
Faustina Cruise, unknown
Roberta Laine, Teacher
Estelle Tatman, Community activist
Mattie Jackson, Teacher
Mary D. Muir, Laundress
Mary Jones, Pastor's wife
Mary Porter, unknown
Grace Cooper, Community Center Director
Laura Wendell Moore and Clara Wendell Stitt, Homemakers
Lillie Yates, unknown
Anna McCann, unknown
Helen Noble, Teacher
Sadie Reid Brown, Homemaker
Dorothy Pumphrey, Teacher
Bettye Simpson, Social worker
Loretta Nickens, Teacher
Wilhelmina Hunter, unknown
Mattie Johnson Gray, unknown
Elizabeth R. Harris, unknown
Patricia R. Spencer Laine, Beautician
Joanna Offutt Childress, Teacher
Laura Wendell Moore, unknown
Mrs. Charles Chenault Jones, Teacher
Grace Potter Carter, Cook
Virginia Hawkins Anderson, Housekeeper
Jennie Bibbs Didlick, Principal
Grace Grevious Coleman, Teacher
Florence Young, unknown
Verna Bales Williams Clark, Teacher
Frances A. Smallwood, Nurse
Dorothy McCoy Cooper, Principal
Ann Brewer Black, Teacher
Edythe Larcena Jones Hayes, Teacher
Delores Vinegar Oderinde, unknown
Cordie Wilkerson Briggs, Hotel Laundry Manager
Charlie Mae Brooks, Switchboard operator
Edna Unson Carr, Dorm mother at UK
Ruby Ragsdale Morris Benberry, Teacher
Sophia Dotson Smith, Teacher
Susie E. White, Beautician
Georgia Montgomery Powers, Politician
Helen Route Smith, Housekeeper
Elizabeth Parker Thomas, Teacher
Daisy Carolyn Bishop, Notary Public
Virginia Case Shelby, Housekeeper
Katherine Hardin Rollins, Teacher
Mary Edna Page Berry, Dental assistant

Additions to the collection since 1990:
Evelyn Livisay
Madeline C. Jones
Harriet B. Haskins
Elizabeth Beatty
Ann Hunter
Elenora L. Smith
Annie B. Coleman
Lillian B. Gentry
Alice J. Alexander
Martha L. Edwards
Eula Tatman
Sandra Richardson
Lilia Garrison
Mrs. Sidney Bell Johnson

What's Happening Next?

I'm partnering with the Louie B. Nunn Center for Oral History to get the digitization and indexing done, and I am glad to bring Danielle Gabbard on board as the indexer. Ms. Gabbard will be blogging here about her work as she goes, so you can follow along as the work progresses.

Tags: curation
1 Comment »

WPA Pack Horse Library Project, 1936-43
July 22, 2013 in 1920s-30s, 1940s-1950s, Primary source, Social history

Though Kentucky politicians today and in the past have regularly bemoaned intervention from outside the state, women and minorities have benefited from the influx of federal funds. One of the most interesting projects that the federal government subsidized in Kentucky is the Pack Horse Library Project of the Works Progress Administration. The WPA hired women in Appalachia to deliver books and other reading material to remote mountain schools and homes from 1936 to 1943. The usual library extension services in the mountainous region had declined by the 1930s, but the wonderful work of Cora Wilson Stewart and the brave teachers of the Moonlight Schools before World War I had whetted locals' appetite for literacy. The Pack Horse Library Project eventually reached nearly every resident in the nearly 10,000-square-mile region of Appalachian Kentucky. More details about this library project can be found in Donald C. Boyd, "The Book Women of Kentucky: The WPA Pack Horse Library Project," Libraries & the Cultural Record, Vol. 42, No. 2 (2007): 111-128. You can see some of the wonderful photographs in a project by Angelia Pulley, now a graduate student in UK's Library Science program. The images and their captions come from the Goodman-Paxton Photographic Collection in the Goodman-Paxton papers (PA64M1, Special Collections, University of Kentucky).

Packhorse Librarians in Kentucky, 1936-1943

Tags: Angelia Pulley, Kentucky Division of Library Services, Moonlight Schools, Pack Horse Library Project, packhorse librarians, photography, Works Progress Administration
No Comments »

by dreambig123
April 30, 2013 in Oral history, Social history

As the semester comes to an end, I can't believe all of the work I have done and the knowledge I have gained. To look back and see the wonderful pieces that my classmates and I have accomplished is incredible. I have truly learned so much about the Civil Rights Movement in Kentucky and the women who participated in it. Finding out what the women who were a part of this Movement did, and how influential they were, was something I wouldn't have gained anywhere else. My partner and I are finishing up our final project on Suzy Post, and are working hard on making sure that all of the details are there. After being able to interview Ms. Post, we wanted to make sure that we covered all of the major points in her life, the organizations she was a part of, and the great significance of her contributions to the Movement in Louisville. She was a truly remarkable woman. In order to do this, we are putting the final touches on our webpage that focuses on the important organizations that she contributed to as well as other aspects of her life. We have pages dedicated to her Civil Rights activism, work with the women's movement, involvement in the anti-war movement, and her family life.
We are so excited to get all of the information out and allow everyone to see how wonderful a woman she truly is.

"Civil Rights Movement." Wikipedia. Wikimedia Foundation, 27 Apr. 2013. http://en.wikipedia.org/wiki/Civil_rights_movement. Accessed 30 Apr. 2013.
"Suzy Post." Wikipedia. Wikimedia Foundation, 24 Feb. 2013. http://en.wikipedia.org/wiki/Suzy_Post. Accessed 30 Apr. 2013.
"Suzy Post, Hall of Fame 2007." Kentucky Commission on Human Rights. http://kchr.ky.gov/hof/halloffame2007.htm?&pageOrder=0&selectedPic=10. Accessed 30 Apr. 2013.

Tags: Civil Rights Movement, Kentucky Civil Rights Hall of Fame, Suzy Post
1 Comment »
The name of the band is When Noone Is Watching. I was only able to photograph them during their first song, but I think they came out OK. I'm not sure I know what I'm doing regarding posting pics so we'll see what happens. This is my first time photographing a band with a light show, so C&C is welcomed. PS - Apparently I don't know how to post pics here. Some help would be appreciated. Ian, it's the URLs to your pics, there is part missing. If you go to the pics and right click on them, then click Properties, you'll see the URLs are a bit longer than you have posted. Also, you need the end image tag at the end of the URL too [/img] and, when doing this, be sure NOT to make it a hot link, you want the URL to stay black (not orange) and have no underline. If this happens, just set your cursor between the last character and the [/img] and hit backspace once! Great shots by the way! I still can't figure out what I'm supposed to do. I don't know HTML, so that's probably the problem. This is frustrating:( . They look good!! I like the red lights... kinda goes with the whole October thing!! Spooooky! These aren't too bad actually... did you use manual or shutter priority or what? There is a lot of motion blur but it sort of works with the ambiance of the photos. I see they used a LOT of red lighting... I HATE when they overdo the red! Do you have an external flash? I didn't see one listed with your equipment. You might benefit from using a flash and bouncing it off a wall or even the ceiling in really, really low light concerts. Personally I try NOT to use a flash... it kind of takes away from the whole mood of the picture. Play around with the shutter speeds... I try and keep it under 1/60 but then I bump up the ISO depending on the lighting, nothing under 400, and then I bump up the exposure roughly 2 stops. I hope to see more of what you can do! Aperture priority mode set to f/1.8 (wide open), and I typically was getting 1/40-1/15s shutter speeds. The bass player was shot at 1/15s.
The room was really dark and there was no real organized light show. There were no spots in front of the band, only the red light on the side. Yeah, I can see why you had a hard time getting good shots then! With almost no light there was just no way for you! You did well with what light you did have though!
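The exposure settings mentioned in the thread (f/1.8 wide open, shutter speeds between 1/40 s and 1/15 s) can be compared using the standard exposure value formula EV = log2(N²/t). This is just an illustrative sketch, not tied to any particular camera, showing how many stops apart the two ends of that shutter range are at the same aperture:

```python
import math

def exposure_value(aperture: float, shutter_s: float) -> float:
    """Standard exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(aperture ** 2 / shutter_s)

# Settings from the thread: f/1.8, shutter between 1/40 s and 1/15 s
ev_fast = exposure_value(1.8, 1 / 40)  # brighter end of the range
ev_slow = exposure_value(1.8, 1 / 15)  # darkest shots (the bass player)

print(f"f/1.8 @ 1/40 s -> EV {ev_fast:.1f}")
print(f"f/1.8 @ 1/15 s -> EV {ev_slow:.1f}")
print(f"difference: {ev_fast - ev_slow:.2f} stops")
```

A scene EV in the 5-6 range at ISO 100 is quite dim stage lighting, which fits the advice above about raising the ISO rather than dropping the shutter speed much below 1/60.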
# Trigonometrical Basic Formulas

Trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables where both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles.

$\displaystyle (1)\quad \sin^{2}A + \cos^{2}A = 1 \;\Rightarrow\; \cos^{2}A = 1 - \sin^{2}A \quad\text{or}\quad \sin^{2}A = 1 - \cos^{2}A$

$\displaystyle (2)\quad 1 + \tan^{2}A = \sec^{2}A \;\Rightarrow\; \sec^{2}A - \tan^{2}A = 1$

$\displaystyle (3)\quad 1 + \cot^{2}A = \operatorname{cosec}^{2}A \;\Rightarrow\; \operatorname{cosec}^{2}A - \cot^{2}A = 1$

$\displaystyle (4)\quad \tan A = \frac{\sin A}{\cos A} \quad\text{and}\quad \cot A = \frac{\cos A}{\sin A}$

(5) Fundamental inequalities: for $0 < A < \pi/2$,

$\displaystyle 0 < \cos A < \frac{\sin A}{A} < \frac{1}{\cos A}$

(6) It is possible to express the trigonometrical ratios in terms of any one of them; for example, in terms of $\cot\theta$:

$\displaystyle \sin\theta = \frac{1}{\sqrt{1+\cot^{2}\theta}},\quad \cos\theta = \frac{\cot\theta}{\sqrt{1+\cot^{2}\theta}},\quad \tan\theta = \frac{1}{\cot\theta},\quad \operatorname{cosec}\theta = \sqrt{1+\cot^{2}\theta},\quad \sec\theta = \frac{\sqrt{1+\cot^{2}\theta}}{\cot\theta}$

February 22, 2019
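The expressions in (6) are easy to sanity-check numerically. The sketch below (plain Python, an illustration rather than part of the original article) evaluates each ratio in terms of $\cot\theta$ at a sample angle in $(0, \pi/2)$ and compares it against the built-in trigonometric functions; note in particular that $\cos\theta = \cot\theta/\sqrt{1+\cot^2\theta}$, with $\cot\theta$ (not $\cot^2\theta$) in the numerator:

```python
import math

theta = 0.7  # any angle in (0, pi/2), where cot(theta) is defined and positive
cot = math.cos(theta) / math.sin(theta)

# Each ratio expressed in terms of cot(theta), as in formula (6)
sin_t   = 1 / math.sqrt(1 + cot ** 2)
cos_t   = cot / math.sqrt(1 + cot ** 2)
tan_t   = 1 / cot
cosec_t = math.sqrt(1 + cot ** 2)
sec_t   = math.sqrt(1 + cot ** 2) / cot

assert math.isclose(sin_t, math.sin(theta))
assert math.isclose(cos_t, math.cos(theta))
assert math.isclose(tan_t, math.tan(theta))
assert math.isclose(cosec_t, 1 / math.sin(theta))
assert math.isclose(sec_t, 1 / math.cos(theta))
print("all ratios in (6) verified at theta =", theta)
```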
namespace Lucid {
namespace GAL {

    ref class Color;
    ref class Viewport;
    ref class Unordered2D;
    ref class RenderTarget2D;
    ref class DepthTarget2D;
    ref class Program;
    ref class VertexFormat;
    ref class VertexBuffer;
    ref class IndexBuffer;

    public enum class Topology
    {
        POINT_LIST = 0,
        LINE_LIST = 1,
        LINE_STRIP = 2,
        TRIANGLE_LIST = 3,
        TRIANGLE_STRIP = 4,
        TRIANGLE_ADJACENCY = 5,
    };

    public ref class Pipeline
    {
    public:
        virtual ~Pipeline() {}

        static void initialize(int32_t width, int32_t height, int32_t samples, void *window);
        static void shutdown();
        static void resize(int32_t width, int32_t height);

        static void beginScene();
        static void endScene();

        static void setRenderTarget(int index, RenderTarget2D ^renderTarget);
        static void setUnorderedTarget(Unordered2D ^unordered);
        static void setDepthTarget(DepthTarget2D ^depthTarget);
        static void restoreBackbuffer(bool color, bool depth, bool unordered);

        static Viewport ^getViewport();
        static void setViewport(Viewport ^viewport);

        static void clear(Color ^color, float32_t depth);
        static void clear(Color ^color);
        static void clear(float32_t depth);

        static void beginProgram(Program ^program);
        static void endProgram(Program ^program);

        static void beginGeometry(VertexFormat ^format);
        static void endGeometry(VertexFormat ^format);

        static void setVertexStream(int index, VertexBuffer ^buffer);
        static void setIndexStream(IndexBuffer ^buffer);

        static void draw(Topology topology, int vertexCount, int indexCount);
        static void drawInstanced(Topology topology, int vertexCount, int indexCount, int instanceCount);

    protected:
        Pipeline() {}
    };

} /// GAL
} /// Lucid
Grind for July 12th FIRST SIP: "Laughing at our mistakes can lengthen our own life. Laughing at someone else's can shorten it." – Cullen Hightower The Headline Anti-Trump nutcase Tom Steyer declares presidential bid Billionaire activist and Democratic megadonor Tom Steyer announced his presidential bid on Tuesday, bringing the total number of candidates to 26. In January, he said he wouldn't run and expressed support for Elizabeth Warren. Now, he plans to spend $100 million on his 2020 campaign. "If we can reduce the influence of corporate money in our democracy, and start to address the devastating impacts of climate change, we can unlock the full potential of the American people and finally solve the many challenges facing our country," Steyer said on Tuesday. Tom Steyer is a former hedge fund manager and longtime Democratic donor known for his "impeach Trump" television ads. His progressive organization "Need to Impeach" has attracted 8 million members in just 2 years. Steyer is a self-made man whose net worth reached $1.6 billion this year. This similarity to Trump puts him at an immediate disadvantage in the eyes of voters who favor grassroots campaigns and who don't want to see another rich white man in the Oval Office. To qualify for a spot in the second round of Democratic debates, Steyer will need to collect 65,000 unique donations and earn 1% support in the polls by the end of the month. To qualify for the third debate, which takes place in September, he will need to amass 130,000 donations and 2% support in the polls. Steyer might have the cash to make an impact on the 2020 elections, but Republicans don't expect him to attract much support. "It doesn't say much for the whole Democratic Party that the number one Democratic donor took a look around and decided that there's no one he can support," said Tim Murtaugh, communications director for Trump 2020. 
Space tourism company Virgin Galactic prepares to enter stock market With plans to enter the US stock market by the end of the year, Virgin Galactic is set to become the first-ever publicly traded spaceflight company. "We know that millions of people are deeply inspired by human spaceflight, would love to become more involved and, ultimately experience space for themselves," says Richard Branson, who founded Virgin Galactic in 2004. "By taking Virgin Galactic public…we can open space to more investors and in doing so, open space to thousands of new astronauts." Virgin Galactic experienced a devastating setback in 2014 when a test flight crashed into the Mojave Desert and resulted in the death of Michael Alsbury. The company has since rebounded and completed two crewed test flights into space. Virgin Galactic has reservations from more than 600 people representing $80 million in deposits and $120 million in potential revenue, but the company will need a lot more than that to make the flights a reality. Virgin Galactic ended investment talks with Saudi Arabia earlier this year following the murder of journalist Jamal Khashoggi and hopes to make the money it needs by going public. The offering will occur following a $1.5 billion merger with Social Capital Hedosophia, whose founder and CEO Chamath Palihapitiya will invest $100 million and become Chairman of the combined entity. GOOD TO THE LAST DROP: Did you know… The human brain is more active during sleep than during the day.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,913
# American Institute of Mathematical Sciences

March 2004, 3(1): 151-159. doi: 10.3934/cpaa.2004.3.151

## Asymptotic behavior of the $L^1$ norm of solutions to nonlinear parabolic equations

Departamento de Matematica Pura e Aplicada, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS 91509-900, Brazil

Received: December 2002. Revised: September 2003. Published: January 2004.

We examine the large time behavior of the $L^{1}$ norm of solutions $u(\cdot,t)$ to nonlinear parabolic equations $u_{t} + f(u)_{x} = (\kappa(u) u_{x})_{x}$ in 1-D with (arbitrary) initial states $u(\cdot,0)$ in $L^{1}(\mathbb{R})$, where $\kappa(u)$ is positive. If $u(\cdot,t)$, $\tilde{u}(\cdot,t)$ are any solutions having the same mass, say $m$, then one has $\| u(\cdot,t) - \tilde{u}(\cdot,t) \|_{L^{1}(\mathbb{R})} \rightarrow 0$ as $t \rightarrow \infty$, and the limiting value for the $L^{1}$ norm of either solution is the absolute value of $m$. Other results of interest are also discussed.

Citation: P. R. Zingano. Asymptotic behavior of the $L^1$ norm of solutions to nonlinear parabolic equations. Communications on Pure and Applied Analysis, 2004, 3 (1): 151-159. doi: 10.3934/cpaa.2004.3.151
Chilecito is a city in Argentina, in La Rioja Province. It is the capital of the department of the same name. In 2010 the city had 33,724 inhabitants. History The city was founded in 1715 by Spanish colonizers. Links References External links Settlements in La Rioja (Argentine province) Cities in Argentina
#ifndef JANET_AMALG #include "features.h" #include <janet.h> #include <string.h> #include "util.h" #include "vector.h" #include "util.h" #endif #ifdef JANET_PEG /* * Runtime */ /* Hold captured patterns and match state */ typedef struct { const uint8_t *text_start; const uint8_t *text_end; const uint32_t *bytecode; const Janet *constants; JanetArray *captures; JanetBuffer *scratch; JanetBuffer *tags; const Janet *extrav; int32_t extrac; int32_t depth; enum { PEG_MODE_NORMAL, PEG_MODE_ACCUMULATE } mode; } PegState; /* Allow backtrack with captures. We need * to save state at branches, and then reload * if one branch fails and try a new branch. */ typedef struct { int32_t cap; int32_t scratch; } CapState; /* Save the current capture state */ static CapState cap_save(PegState *s) { CapState cs; cs.scratch = s->scratch->count; cs.cap = s->captures->count; return cs; } /* Load a saved capture state in the case of failure */ static void cap_load(PegState *s, CapState cs) { s->scratch->count = cs.scratch; s->captures->count = cs.cap; s->tags->count = cs.cap; } /* Add a capture */ static void pushcap(PegState *s, Janet capture, uint32_t tag) { if (s->mode == PEG_MODE_ACCUMULATE) { janet_to_string_b(s->scratch, capture); } if (tag || s->mode == PEG_MODE_NORMAL) { janet_array_push(s->captures, capture); janet_buffer_push_u8(s->tags, tag); } } /* Prevent stack overflow */ #define down1(s) do { \ if (0 == --((s)->depth)) janet_panic("peg/match recursed too deeply"); \ } while (0) #define up1(s) ((s)->depth++) /* Evaluate a peg rule * Pre-conditions: s is in a valid state * Post-conditions: If there is a match, returns a pointer to the next text. * All captures on the capture stack are valid. If there is no match, * returns NULL. Extra captures from successful child expressions can be * left on the capture stack. 
*/ static const uint8_t *peg_rule( PegState *s, const uint32_t *rule, const uint8_t *text) { tail: switch (*rule & 0x1F) { default: janet_panic("unexpected opcode"); return NULL; case RULE_LITERAL: { uint32_t len = rule[1]; if (text + len > s->text_end) return NULL; return memcmp(text, rule + 2, len) ? NULL : text + len; } case RULE_NCHAR: { uint32_t n = rule[1]; return (text + n > s->text_end) ? NULL : text + n; } case RULE_NOTNCHAR: { uint32_t n = rule[1]; return (text + n > s->text_end) ? text : NULL; } case RULE_RANGE: { uint8_t lo = rule[1] & 0xFF; uint8_t hi = (rule[1] >> 16) & 0xFF; return (text < s->text_end && text[0] >= lo && text[0] <= hi) ? text + 1 : NULL; } case RULE_SET: { uint32_t word = rule[1 + (text[0] >> 5)]; uint32_t mask = (uint32_t)1 << (text[0] & 0x1F); return (text < s->text_end && (word & mask)) ? text + 1 : NULL; } case RULE_LOOK: { text += ((int32_t *)rule)[1]; if (text < s->text_start || text > s->text_end) return NULL; down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[2], text); up1(s); return result ? text : NULL; } case RULE_CHOICE: { uint32_t len = rule[1]; const uint32_t *args = rule + 2; if (len == 0) return NULL; down1(s); CapState cs = cap_save(s); for (uint32_t i = 0; i < len - 1; i++) { const uint8_t *result = peg_rule(s, s->bytecode + args[i], text); if (result) { up1(s); return result; } cap_load(s, cs); } up1(s); rule = s->bytecode + args[len - 1]; goto tail; } case RULE_SEQUENCE: { uint32_t len = rule[1]; const uint32_t *args = rule + 2; if (len == 0) return text; down1(s); for (uint32_t i = 0; text && i < len - 1; i++) text = peg_rule(s, s->bytecode + args[i], text); up1(s); if (!text) return NULL; rule = s->bytecode + args[len - 1]; goto tail; } case RULE_IF: case RULE_IFNOT: { const uint32_t *rule_a = s->bytecode + rule[1]; const uint32_t *rule_b = s->bytecode + rule[2]; down1(s); const uint8_t *result = peg_rule(s, rule_a, text); up1(s); if (rule[0] == RULE_IF ? 
!result : !!result) return NULL; rule = rule_b; goto tail; } case RULE_NOT: { const uint32_t *rule_a = s->bytecode + rule[1]; down1(s); const uint8_t *result = peg_rule(s, rule_a, text); up1(s); return (result) ? NULL : text; } case RULE_BETWEEN: { uint32_t lo = rule[1]; uint32_t hi = rule[2]; const uint32_t *rule_a = s->bytecode + rule[3]; uint32_t captured = 0; const uint8_t *next_text; CapState cs = cap_save(s); down1(s); while (captured < hi) { CapState cs2 = cap_save(s); next_text = peg_rule(s, rule_a, text); if (!next_text || next_text == text) { cap_load(s, cs2); break; } captured++; text = next_text; } up1(s); if (captured < lo) { cap_load(s, cs); return NULL; } return text; } /* Capturing rules */ case RULE_GETTAG: { uint32_t search = rule[1]; uint32_t tag = rule[2]; for (int32_t i = s->tags->count - 1; i >= 0; i--) { if (s->tags->data[i] == search) { pushcap(s, s->captures->data[i], tag); return text; } } return NULL; } case RULE_POSITION: { pushcap(s, janet_wrap_number((double)(text - s->text_start)), rule[1]); return text; } case RULE_ARGUMENT: { int32_t index = ((int32_t *)rule)[1]; Janet capture = (index >= s->extrac) ? 
janet_wrap_nil() : s->extrav[index]; pushcap(s, capture, rule[2]); return text; } case RULE_CONSTANT: { pushcap(s, s->constants[rule[1]], rule[2]); return text; } case RULE_CAPTURE: { uint32_t tag = rule[2]; down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); if (!result) return NULL; /* Specialized pushcap - avoid intermediate string creation */ if (!tag && s->mode == PEG_MODE_ACCUMULATE) { janet_buffer_push_bytes(s->scratch, text, (int32_t)(result - text)); } else { pushcap(s, janet_stringv(text, (int32_t)(result - text)), tag); } return result; } case RULE_ACCUMULATE: { uint32_t tag = rule[2]; int oldmode = s->mode; if (!tag && oldmode == PEG_MODE_ACCUMULATE) { rule = s->bytecode + rule[1]; goto tail; } CapState cs = cap_save(s); s->mode = PEG_MODE_ACCUMULATE; down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); s->mode = oldmode; if (!result) return NULL; Janet cap = janet_stringv(s->scratch->data + cs.scratch, s->scratch->count - cs.scratch); cap_load(s, cs); pushcap(s, cap, tag); return result; } case RULE_DROP: { CapState cs = cap_save(s); down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); if (!result) return NULL; cap_load(s, cs); return result; } case RULE_GROUP: { uint32_t tag = rule[2]; int oldmode = s->mode; CapState cs = cap_save(s); s->mode = PEG_MODE_NORMAL; down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); s->mode = oldmode; if (!result) return NULL; int32_t num_sub_captures = s->captures->count - cs.cap; JanetArray *sub_captures = janet_array(num_sub_captures); safe_memcpy(sub_captures->data, s->captures->data + cs.cap, sizeof(Janet) * num_sub_captures); sub_captures->count = num_sub_captures; cap_load(s, cs); pushcap(s, janet_wrap_array(sub_captures), tag); return result; } case RULE_REPLACE: case RULE_MATCHTIME: { uint32_t tag = rule[3]; int oldmode = s->mode; CapState cs = cap_save(s); s->mode = PEG_MODE_NORMAL; 
down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); s->mode = oldmode; if (!result) return NULL; Janet cap = janet_wrap_nil(); Janet constant = s->constants[rule[2]]; switch (janet_type(constant)) { default: cap = constant; break; case JANET_STRUCT: if (s->captures->count) { cap = janet_struct_get(janet_unwrap_struct(constant), s->captures->data[s->captures->count - 1]); } break; case JANET_TABLE: if (s->captures->count) { cap = janet_table_get(janet_unwrap_table(constant), s->captures->data[s->captures->count - 1]); } break; case JANET_CFUNCTION: cap = janet_unwrap_cfunction(constant)(s->captures->count - cs.cap, s->captures->data + cs.cap); break; case JANET_FUNCTION: cap = janet_call(janet_unwrap_function(constant), s->captures->count - cs.cap, s->captures->data + cs.cap); break; } cap_load(s, cs); if (rule[0] == RULE_MATCHTIME && !janet_truthy(cap)) return NULL; pushcap(s, cap, tag); return result; } case RULE_ERROR: { int oldmode = s->mode; s->mode = PEG_MODE_NORMAL; int32_t old_cap = s->captures->count; down1(s); const uint8_t *result = peg_rule(s, s->bytecode + rule[1], text); up1(s); s->mode = oldmode; if (!result) return NULL; if (s->captures->count > old_cap) { /* Throw last capture */ janet_panicv(s->captures->data[s->captures->count - 1]); } else { /* Throw generic error */ int32_t start = (int32_t)(text - s->text_start); int32_t end = (int32_t)(result - s->text_start); janet_panicf("match error in range (%d:%d)", start, end); } return NULL; } case RULE_BACKMATCH: { uint32_t search = rule[1]; for (int32_t i = s->tags->count - 1; i >= 0; i--) { if (s->tags->data[i] == search) { Janet capture = s->captures->data[i]; if (!janet_checktype(capture, JANET_STRING)) return NULL; const uint8_t *bytes = janet_unwrap_string(capture); int32_t len = janet_string_length(bytes); if (text + len > s->text_end) return NULL; return memcmp(text, bytes, len) ? 
NULL : text + len; } } return NULL; } } } /* * Compilation */ typedef struct { JanetTable *grammar; JanetTable *default_grammar; JanetTable *tags; Janet *constants; uint32_t *bytecode; Janet form; int depth; uint32_t nexttag; } Builder; /* Forward declaration to allow recursion */ static uint32_t peg_compile1(Builder *b, Janet peg); /* * Errors */ static void builder_cleanup(Builder *b) { janet_v_free(b->constants); janet_v_free(b->bytecode); } JANET_NO_RETURN static void peg_panic(Builder *b, const char *msg) { builder_cleanup(b); janet_panicf("grammar error in %p, %s", b->form, msg); } #define peg_panicf(b,...) peg_panic((b), (const char *) janet_formatc(__VA_ARGS__)) static void peg_fixarity(Builder *b, int32_t argc, int32_t arity) { if (argc != arity) { peg_panicf(b, "expected %d argument%s, got %d", arity, arity == 1 ? "" : "s", argc); } } static void peg_arity(Builder *b, int32_t arity, int32_t min, int32_t max) { if (min >= 0 && arity < min) peg_panicf(b, "arity mismatch, expected at least %d, got %d", min, arity); if (max >= 0 && arity > max) peg_panicf(b, "arity mismatch, expected at most %d, got %d", max, arity); } static const uint8_t *peg_getset(Builder *b, Janet x) { if (!janet_checktype(x, JANET_STRING)) peg_panic(b, "expected string for character set"); const uint8_t *str = janet_unwrap_string(x); return str; } static const uint8_t *peg_getrange(Builder *b, Janet x) { if (!janet_checktype(x, JANET_STRING)) peg_panic(b, "expected string for character range"); const uint8_t *str = janet_unwrap_string(x); if (janet_string_length(str) != 2) peg_panicf(b, "expected string to have length 2, got %v", x); if (str[1] < str[0]) peg_panicf(b, "range %v is empty", x); return str; } static int32_t peg_getinteger(Builder *b, Janet x) { if (!janet_checkint(x)) peg_panicf(b, "expected integer, got %v", x); return janet_unwrap_integer(x); } static int32_t peg_getnat(Builder *b, Janet x) { int32_t i = peg_getinteger(b, x); if (i < 0) peg_panicf(b, "expected 
non-negative integer, got %v", x); return i; } /* * Emission */ static uint32_t emit_constant(Builder *b, Janet c) { uint32_t cindex = (uint32_t) janet_v_count(b->constants); janet_v_push(b->constants, c); return cindex; } static uint32_t emit_tag(Builder *b, Janet t) { if (!janet_checktype(t, JANET_KEYWORD)) peg_panicf(b, "expected keyword for capture tag, got %v", t); Janet check = janet_table_get(b->tags, t); if (janet_checktype(check, JANET_NIL)) { uint32_t tag = b->nexttag++; if (tag > 255) { peg_panic(b, "too many tags - up to 255 tags are supported per peg"); } Janet val = janet_wrap_number(tag); janet_table_put(b->tags, t, val); return tag; } else { return (uint32_t) janet_unwrap_number(check); } } /* Reserve space in bytecode for a rule. When a special emits a rule, * it must place that rule immediately on the bytecode stack. This lets * the compiler know where the rule is going to be before it is complete, * allowing recursive rules. */ typedef struct { Builder *builder; uint32_t index; int32_t size; } Reserve; static Reserve reserve(Builder *b, int32_t size) { Reserve r; r.index = janet_v_count(b->bytecode); r.builder = b; r.size = size; for (int32_t i = 0; i < size; i++) janet_v_push(b->bytecode, 0); return r; } /* Emit a rule in the builder. 
Returns the index of the new rule */ static void emit_rule(Reserve r, int32_t op, int32_t n, const uint32_t *body) { janet_assert(r.size == n + 1, "bad reserve"); r.builder->bytecode[r.index] = op; memcpy(r.builder->bytecode + r.index + 1, body, n * sizeof(uint32_t)); } /* For RULE_LITERAL */ static void emit_bytes(Builder *b, uint32_t op, int32_t len, const uint8_t *bytes) { uint32_t next_rule = janet_v_count(b->bytecode); janet_v_push(b->bytecode, op); janet_v_push(b->bytecode, len); int32_t words = ((len + 3) >> 2); for (int32_t i = 0; i < words; i++) janet_v_push(b->bytecode, 0); memcpy(b->bytecode + next_rule + 2, bytes, len); } /* For fixed arity rules of arities 1, 2, and 3 */ static void emit_1(Reserve r, uint32_t op, uint32_t arg) { emit_rule(r, op, 1, &arg); } static void emit_2(Reserve r, uint32_t op, uint32_t arg1, uint32_t arg2) { uint32_t arr[2] = {arg1, arg2}; emit_rule(r, op, 2, arr); } static void emit_3(Reserve r, uint32_t op, uint32_t arg1, uint32_t arg2, uint32_t arg3) { uint32_t arr[3] = {arg1, arg2, arg3}; emit_rule(r, op, 3, arr); } /* * Specials */ static void bitmap_set(uint32_t *bitmap, uint8_t c) { bitmap[c >> 5] |= ((uint32_t)1) << (c & 0x1F); } static void spec_range(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 1, -1); if (argc == 1) { Reserve r = reserve(b, 2); const uint8_t *str = peg_getrange(b, argv[0]); uint32_t arg = str[0] | (str[1] << 16); emit_1(r, RULE_RANGE, arg); } else { /* Compile as a set */ Reserve r = reserve(b, 9); uint32_t bitmap[8] = {0}; for (int32_t i = 0; i < argc; i++) { const uint8_t *str = peg_getrange(b, argv[i]); for (uint32_t c = str[0]; c <= str[1]; c++) bitmap_set(bitmap, c); } emit_rule(r, RULE_SET, 8, bitmap); } } static void spec_set(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 1); Reserve r = reserve(b, 9); const uint8_t *str = peg_getset(b, argv[0]); uint32_t bitmap[8] = {0}; for (int32_t i = 0; i < janet_string_length(str); i++) bitmap_set(bitmap, str[i]); 
emit_rule(r, RULE_SET, 8, bitmap); } static void spec_look(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 1, 2); Reserve r = reserve(b, 3); int32_t rulearg = argc == 2 ? 1 : 0; int32_t offset = argc == 2 ? peg_getinteger(b, argv[0]) : 0; uint32_t subrule = peg_compile1(b, argv[rulearg]); emit_2(r, RULE_LOOK, (uint32_t) offset, subrule); } /* Rule of the form [len, rules...] */ static void spec_variadic(Builder *b, int32_t argc, const Janet *argv, uint32_t op) { uint32_t rule = janet_v_count(b->bytecode); janet_v_push(b->bytecode, op); janet_v_push(b->bytecode, argc); for (int32_t i = 0; i < argc; i++) janet_v_push(b->bytecode, 0); for (int32_t i = 0; i < argc; i++) { uint32_t rulei = peg_compile1(b, argv[i]); b->bytecode[rule + 2 + i] = rulei; } } static void spec_choice(Builder *b, int32_t argc, const Janet *argv) { spec_variadic(b, argc, argv, RULE_CHOICE); } static void spec_sequence(Builder *b, int32_t argc, const Janet *argv) { spec_variadic(b, argc, argv, RULE_SEQUENCE); } /* For (if a b) and (if-not a b) */ static void spec_branch(Builder *b, int32_t argc, const Janet *argv, uint32_t rule) { peg_fixarity(b, argc, 2); Reserve r = reserve(b, 3); uint32_t rule_a = peg_compile1(b, argv[0]); uint32_t rule_b = peg_compile1(b, argv[1]); emit_2(r, rule, rule_a, rule_b); } static void spec_if(Builder *b, int32_t argc, const Janet *argv) { spec_branch(b, argc, argv, RULE_IF); } static void spec_ifnot(Builder *b, int32_t argc, const Janet *argv) { spec_branch(b, argc, argv, RULE_IFNOT); } static void spec_between(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 3); Reserve r = reserve(b, 4); int32_t lo = peg_getnat(b, argv[0]); int32_t hi = peg_getnat(b, argv[1]); uint32_t subrule = peg_compile1(b, argv[2]); emit_3(r, RULE_BETWEEN, lo, hi, subrule); } static void spec_repeater(Builder *b, int32_t argc, const Janet *argv, int32_t min) { peg_fixarity(b, argc, 1); Reserve r = reserve(b, 4); uint32_t subrule = peg_compile1(b, 
argv[0]); emit_3(r, RULE_BETWEEN, min, UINT32_MAX, subrule); } static void spec_some(Builder *b, int32_t argc, const Janet *argv) { spec_repeater(b, argc, argv, 1); } static void spec_any(Builder *b, int32_t argc, const Janet *argv) { spec_repeater(b, argc, argv, 0); } static void spec_atleast(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 2); Reserve r = reserve(b, 4); int32_t n = peg_getnat(b, argv[0]); uint32_t subrule = peg_compile1(b, argv[1]); emit_3(r, RULE_BETWEEN, n, UINT32_MAX, subrule); } static void spec_atmost(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 2); Reserve r = reserve(b, 4); int32_t n = peg_getnat(b, argv[0]); uint32_t subrule = peg_compile1(b, argv[1]); emit_3(r, RULE_BETWEEN, 0, n, subrule); } static void spec_opt(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 1); Reserve r = reserve(b, 4); uint32_t subrule = peg_compile1(b, argv[0]); emit_3(r, RULE_BETWEEN, 0, 1, subrule); } static void spec_repeat(Builder *b, int32_t argc, const Janet *argv) { peg_fixarity(b, argc, 2); Reserve r = reserve(b, 4); int32_t n = peg_getnat(b, argv[0]); uint32_t subrule = peg_compile1(b, argv[1]); emit_3(r, RULE_BETWEEN, n, n, subrule); } /* Rule of the form [rule] */ static void spec_onerule(Builder *b, int32_t argc, const Janet *argv, uint32_t op) { peg_fixarity(b, argc, 1); Reserve r = reserve(b, 2); uint32_t rule = peg_compile1(b, argv[0]); emit_1(r, op, rule); } static void spec_not(Builder *b, int32_t argc, const Janet *argv) { spec_onerule(b, argc, argv, RULE_NOT); } static void spec_error(Builder *b, int32_t argc, const Janet *argv) { spec_onerule(b, argc, argv, RULE_ERROR); } static void spec_drop(Builder *b, int32_t argc, const Janet *argv) { spec_onerule(b, argc, argv, RULE_DROP); } /* Rule of the form [rule, tag] */ static void spec_cap1(Builder *b, int32_t argc, const Janet *argv, uint32_t op) { peg_arity(b, argc, 1, 2); Reserve r = reserve(b, 3); uint32_t tag = (argc == 2) ? 
emit_tag(b, argv[1]) : 0; uint32_t rule = peg_compile1(b, argv[0]); emit_2(r, op, rule, tag); } static void spec_capture(Builder *b, int32_t argc, const Janet *argv) { spec_cap1(b, argc, argv, RULE_CAPTURE); } static void spec_accumulate(Builder *b, int32_t argc, const Janet *argv) { spec_cap1(b, argc, argv, RULE_ACCUMULATE); } static void spec_group(Builder *b, int32_t argc, const Janet *argv) { spec_cap1(b, argc, argv, RULE_GROUP); } static void spec_reference(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 1, 2); Reserve r = reserve(b, 3); uint32_t search = emit_tag(b, argv[0]); uint32_t tag = (argc == 2) ? emit_tag(b, argv[1]) : 0; emit_2(r, RULE_GETTAG, search, tag); } static void spec_tag1(Builder *b, int32_t argc, const Janet *argv, uint32_t op) { peg_arity(b, argc, 0, 1); Reserve r = reserve(b, 2); uint32_t tag = (argc) ? emit_tag(b, argv[0]) : 0; (void) argv; emit_1(r, op, tag); } static void spec_position(Builder *b, int32_t argc, const Janet *argv) { spec_tag1(b, argc, argv, RULE_POSITION); } static void spec_backmatch(Builder *b, int32_t argc, const Janet *argv) { spec_tag1(b, argc, argv, RULE_BACKMATCH); } static void spec_argument(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 1, 2); Reserve r = reserve(b, 3); uint32_t tag = (argc == 2) ? emit_tag(b, argv[1]) : 0; int32_t index = peg_getnat(b, argv[0]); emit_2(r, RULE_ARGUMENT, index, tag); } static void spec_constant(Builder *b, int32_t argc, const Janet *argv) { janet_arity(argc, 1, 2); Reserve r = reserve(b, 3); uint32_t tag = (argc == 2) ? emit_tag(b, argv[1]) : 0; emit_2(r, RULE_CONSTANT, emit_constant(b, argv[0]), tag); } static void spec_replace(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 2, 3); Reserve r = reserve(b, 4); uint32_t subrule = peg_compile1(b, argv[0]); uint32_t constant = emit_constant(b, argv[1]); uint32_t tag = (argc == 3) ? 
emit_tag(b, argv[2]) : 0; emit_3(r, RULE_REPLACE, subrule, constant, tag); } static void spec_matchtime(Builder *b, int32_t argc, const Janet *argv) { peg_arity(b, argc, 2, 3); Reserve r = reserve(b, 4); uint32_t subrule = peg_compile1(b, argv[0]); Janet fun = argv[1]; if (!janet_checktype(fun, JANET_FUNCTION) && !janet_checktype(fun, JANET_CFUNCTION)) { peg_panicf(b, "expected function|cfunction, got %v", fun); } uint32_t tag = (argc == 3) ? emit_tag(b, argv[2]) : 0; uint32_t cindex = emit_constant(b, fun); emit_3(r, RULE_MATCHTIME, subrule, cindex, tag); } /* Special compiler form */ typedef void (*Special)(Builder *b, int32_t argc, const Janet *argv); typedef struct { const char *name; Special special; } SpecialPair; /* Keep in lexical order (vim :sort works well) */ static const SpecialPair peg_specials[] = { {"!", spec_not}, {"$", spec_position}, {"%", spec_accumulate}, {"*", spec_sequence}, {"+", spec_choice}, {"->", spec_reference}, {"/", spec_replace}, {"<-", spec_capture}, {">", spec_look}, {"?", spec_opt}, {"accumulate", spec_accumulate}, {"any", spec_any}, {"argument", spec_argument}, {"at-least", spec_atleast}, {"at-most", spec_atmost}, {"backmatch", spec_backmatch}, {"backref", spec_reference}, {"between", spec_between}, {"capture", spec_capture}, {"choice", spec_choice}, {"cmt", spec_matchtime}, {"constant", spec_constant}, {"drop", spec_drop}, {"error", spec_error}, {"group", spec_group}, {"if", spec_if}, {"if-not", spec_ifnot}, {"look", spec_look}, {"not", spec_not}, {"opt", spec_opt}, {"position", spec_position}, {"quote", spec_capture}, {"range", spec_range}, {"repeat", spec_repeat}, {"replace", spec_replace}, {"sequence", spec_sequence}, {"set", spec_set}, {"some", spec_some}, }; /* Compile a janet value into a rule and return the rule index. 
*/ static uint32_t peg_compile1(Builder *b, Janet peg) { /* Keep track of the form being compiled for error purposes */ Janet old_form = b->form; JanetTable *old_grammar = b->grammar; b->form = peg; /* Resolve keyword references */ int i = JANET_RECURSION_GUARD; JanetTable *grammar = old_grammar; for (; i > 0 && janet_checktype(peg, JANET_KEYWORD); --i) { Janet nextPeg = janet_table_get_ex(grammar, peg, &grammar); if (!grammar || janet_checktype(nextPeg, JANET_NIL)) { nextPeg = janet_table_get(b->default_grammar, peg); if (janet_checktype(nextPeg, JANET_NIL)) { peg_panic(b, "unknown rule"); } } peg = nextPeg; b->form = peg; b->grammar = grammar; } if (i == 0) peg_panic(b, "reference chain too deep"); /* Check cache - for tuples we check only the local cache, as * in a different grammar, the same tuple can compile to a different * rule - for example, (+ :a :b) depends on whatever :a and :b are bound to. */ Janet check = janet_checktype(peg, JANET_TUPLE) ? janet_table_rawget(grammar, peg) : janet_table_get(grammar, peg); if (!janet_checktype(check, JANET_NIL)) { b->form = old_form; b->grammar = old_grammar; return (uint32_t) janet_unwrap_number(check); } /* Check depth */ if (b->depth-- == 0) peg_panic(b, "peg grammar recursed too deeply"); /* The final rule to return */ uint32_t rule = janet_v_count(b->bytecode); /* Add to cache. Do not cache structs, as we don't yet know * what rule they will return! We can just as effectively cache * the structs main rule. 
*/ if (!janet_checktype(peg, JANET_STRUCT)) { JanetTable *which_grammar = grammar; /* If we are a primitive pattern, add to the global cache (root grammar table) */ if (!janet_checktype(peg, JANET_TUPLE)) { while (which_grammar->proto) which_grammar = which_grammar->proto; } janet_table_put(which_grammar, peg, janet_wrap_number(rule)); } switch (janet_type(peg)) { default: peg_panic(b, "unexpected peg source"); return 0; case JANET_NUMBER: { int32_t n = peg_getinteger(b, peg); Reserve r = reserve(b, 2); if (n < 0) { emit_1(r, RULE_NOTNCHAR, -n); } else { emit_1(r, RULE_NCHAR, n); } break; } case JANET_STRING: { const uint8_t *str = janet_unwrap_string(peg); int32_t len = janet_string_length(str); emit_bytes(b, RULE_LITERAL, len, str); break; } case JANET_STRUCT: { /* Build grammar table */ const JanetKV *st = janet_unwrap_struct(peg); JanetTable *new_grammar = janet_table(2 * janet_struct_capacity(st)); for (int32_t i = 0; i < janet_struct_capacity(st); i++) { if (janet_checktype(st[i].key, JANET_KEYWORD)) { janet_table_put(new_grammar, st[i].key, st[i].value); } } new_grammar->proto = grammar; b->grammar = grammar = new_grammar; /* Run the main rule */ Janet main_rule = janet_table_rawget(grammar, janet_ckeywordv("main")); if (janet_checktype(main_rule, JANET_NIL)) peg_panic(b, "grammar requires :main rule"); rule = peg_compile1(b, main_rule); break; } case JANET_TUPLE: { const Janet *tup = janet_unwrap_tuple(peg); int32_t len = janet_tuple_length(tup); if (len == 0) peg_panic(b, "tuple in grammar must have non-zero length"); if (!janet_checktype(tup[0], JANET_SYMBOL)) peg_panicf(b, "expected grammar command, found %v", tup[0]); const uint8_t *sym = janet_unwrap_symbol(tup[0]); const SpecialPair *sp = janet_strbinsearch( &peg_specials, sizeof(peg_specials) / sizeof(SpecialPair), sizeof(SpecialPair), sym); if (sp) { sp->special(b, len - 1, tup + 1); } else { peg_panicf(b, "unknown special %S", sym); } break; } } /* Increase depth again */ b->depth++; b->form = 
old_form; b->grammar = old_grammar; return rule; } /* * Post-Compilation */ static int peg_mark(void *p, size_t size) { (void) size; JanetPeg *peg = (JanetPeg *)p; if (NULL != peg->constants) for (uint32_t i = 0; i < peg->num_constants; i++) janet_mark(peg->constants[i]); return 0; } static void peg_marshal(void *p, JanetMarshalContext *ctx) { JanetPeg *peg = (JanetPeg *)p; janet_marshal_size(ctx, peg->bytecode_len); janet_marshal_int(ctx, (int32_t)peg->num_constants); janet_marshal_abstract(ctx, p); for (size_t i = 0; i < peg->bytecode_len; i++) janet_marshal_int(ctx, (int32_t) peg->bytecode[i]); for (uint32_t j = 0; j < peg->num_constants; j++) janet_marshal_janet(ctx, peg->constants[j]); } /* Used to ensure that if we place several arrays in one memory chunk, each * array will be correctly aligned */ static size_t size_padded(size_t offset, size_t size) { size_t x = size + offset - 1; return x - (x % size); } static void *peg_unmarshal(JanetMarshalContext *ctx) { size_t bytecode_len = janet_unmarshal_size(ctx); uint32_t num_constants = (uint32_t) janet_unmarshal_int(ctx); /* Calculate offsets. Should match those in make_peg */ size_t bytecode_start = size_padded(sizeof(JanetPeg), sizeof(uint32_t)); size_t bytecode_size = bytecode_len * sizeof(uint32_t); size_t constants_start = size_padded(bytecode_start + bytecode_size, sizeof(Janet)); size_t total_size = constants_start + sizeof(Janet) * (size_t) num_constants; /* DOS prevention? I.E. 
we could read bytecode and constants before * hand so we don't allocate a ton of memory on bad, short input */ /* Allocate PEG */ char *mem = janet_unmarshal_abstract(ctx, total_size); JanetPeg *peg = (JanetPeg *)mem; uint32_t *bytecode = (uint32_t *)(mem + bytecode_start); Janet *constants = (Janet *)(mem + constants_start); peg->bytecode = NULL; peg->constants = NULL; peg->bytecode_len = bytecode_len; peg->num_constants = num_constants; for (size_t i = 0; i < peg->bytecode_len; i++) bytecode[i] = (uint32_t) janet_unmarshal_int(ctx); for (uint32_t j = 0; j < peg->num_constants; j++) constants[j] = janet_unmarshal_janet(ctx); /* After here, no panics except for the bad: label. */ /* Keep track at each index if an instruction was * referenced (0x01) or is in a main bytecode position * (0x02). This lets us do a linear scan and not * need to do a depth first traversal. It is stricter * than a dfs by not allowing certain kinds of unused * bytecode. */ uint32_t blen = (int32_t) peg->bytecode_len; uint32_t clen = peg->num_constants; uint8_t *op_flags = calloc(1, blen); if (NULL == op_flags) { JANET_OUT_OF_MEMORY; } /* verify peg bytecode */ uint32_t i = 0; while (i < blen) { uint32_t instr = bytecode[i]; uint32_t *rule = bytecode + i; op_flags[i] |= 0x02; switch (instr & 0x1F) { case RULE_LITERAL: i += 2 + ((rule[1] + 3) >> 2); break; case RULE_NCHAR: case RULE_NOTNCHAR: case RULE_RANGE: case RULE_POSITION: case RULE_BACKMATCH: /* [1 word] */ i += 2; break; case RULE_SET: /* [8 words] */ i += 9; break; case RULE_LOOK: /* [offset, rule] */ if (rule[2] >= blen) goto bad; op_flags[rule[2]] |= 0x1; i += 3; break; case RULE_CHOICE: case RULE_SEQUENCE: /* [len, rules...] 
*/ { uint32_t len = rule[1]; for (uint32_t j = 0; j < len; j++) { if (rule[2 + j] >= blen) goto bad; op_flags[rule[2 + j]] |= 0x1; } i += 2 + len; } break; case RULE_IF: case RULE_IFNOT: /* [rule_a, rule_b (b if not a)] */ if (rule[1] >= blen) goto bad; if (rule[2] >= blen) goto bad; op_flags[rule[1]] |= 0x01; op_flags[rule[2]] |= 0x01; i += 3; break; case RULE_BETWEEN: /* [lo, hi, rule] */ if (rule[3] >= blen) goto bad; op_flags[rule[3]] |= 0x01; i += 4; break; case RULE_ARGUMENT: case RULE_GETTAG: /* [searchtag, tag] */ i += 3; break; case RULE_CONSTANT: /* [constant, tag] */ if (rule[1] >= clen) goto bad; i += 3; break; case RULE_ACCUMULATE: case RULE_GROUP: case RULE_CAPTURE: /* [rule, tag] */ if (rule[1] >= blen) goto bad; op_flags[rule[1]] |= 0x01; i += 3; break; case RULE_REPLACE: case RULE_MATCHTIME: /* [rule, constant, tag] */ if (rule[1] >= blen) goto bad; if (rule[2] >= clen) goto bad; op_flags[rule[1]] |= 0x01; i += 4; break; case RULE_ERROR: case RULE_DROP: case RULE_NOT: /* [rule] */ if (rule[1] >= blen) goto bad; op_flags[rule[1]] |= 0x01; i += 2; break; default: goto bad; } } /* last instruction cannot overflow */ if (i != blen) goto bad; /* Make sure all referenced instructions are actually * in instruction positions. 
*/ for (i = 0; i < blen; i++) if (op_flags[i] == 0x01) goto bad; /* Good return */ peg->bytecode = bytecode; peg->constants = constants; free(op_flags); return peg; bad: free(op_flags); janet_panic("invalid peg bytecode"); } static int cfun_peg_getter(JanetAbstract a, Janet key, Janet *out); const JanetAbstractType janet_peg_type = { "core/peg", NULL, peg_mark, cfun_peg_getter, NULL, peg_marshal, peg_unmarshal, JANET_ATEND_UNMARSHAL }; /* Convert Builder to JanetPeg (Janet Abstract Value) */ static JanetPeg *make_peg(Builder *b) { size_t bytecode_start = size_padded(sizeof(JanetPeg), sizeof(uint32_t)); size_t bytecode_size = janet_v_count(b->bytecode) * sizeof(uint32_t); size_t constants_start = size_padded(bytecode_start + bytecode_size, sizeof(Janet)); size_t constants_size = janet_v_count(b->constants) * sizeof(Janet); size_t total_size = constants_start + constants_size; char *mem = janet_abstract(&janet_peg_type, total_size); JanetPeg *peg = (JanetPeg *)mem; peg->bytecode = (uint32_t *)(mem + bytecode_start); peg->constants = (Janet *)(mem + constants_start); peg->num_constants = janet_v_count(b->constants); safe_memcpy(peg->bytecode, b->bytecode, bytecode_size); safe_memcpy(peg->constants, b->constants, constants_size); peg->bytecode_len = janet_v_count(b->bytecode); return peg; } /* Compiler entry point */ static JanetPeg *compile_peg(Janet x) { Builder builder; builder.grammar = janet_table(0); builder.default_grammar = janet_get_core_table("default-peg-grammar"); builder.tags = janet_table(0); builder.constants = NULL; builder.bytecode = NULL; builder.nexttag = 1; builder.form = x; builder.depth = JANET_RECURSION_GUARD; peg_compile1(&builder, x); JanetPeg *peg = make_peg(&builder); builder_cleanup(&builder); return peg; } /* * C Functions */ static Janet cfun_peg_compile(int32_t argc, Janet *argv) { janet_fixarity(argc, 1); JanetPeg *peg = compile_peg(argv[0]); return janet_wrap_abstract(peg); } static Janet cfun_peg_match(int32_t argc, Janet *argv) { 
janet_arity(argc, 2, -1); JanetPeg *peg; if (janet_checktype(argv[0], JANET_ABSTRACT) && janet_abstract_type(janet_unwrap_abstract(argv[0])) == &janet_peg_type) { peg = janet_unwrap_abstract(argv[0]); } else { peg = compile_peg(argv[0]); } JanetByteView bytes = janet_getbytes(argv, 1); int32_t start; PegState s; if (argc > 2) { start = janet_gethalfrange(argv, 2, bytes.len, "offset"); s.extrac = argc - 3; s.extrav = janet_tuple_n(argv + 3, argc - 3); } else { start = 0; s.extrac = 0; s.extrav = NULL; } s.mode = PEG_MODE_NORMAL; s.text_start = bytes.bytes; s.text_end = bytes.bytes + bytes.len; s.depth = JANET_RECURSION_GUARD; s.captures = janet_array(0); s.scratch = janet_buffer(10); s.tags = janet_buffer(10); s.constants = peg->constants; s.bytecode = peg->bytecode; const uint8_t *result = peg_rule(&s, s.bytecode, bytes.bytes + start); return result ? janet_wrap_array(s.captures) : janet_wrap_nil(); } static int cfun_peg_getter(JanetAbstract a, Janet key, Janet *out) { (void) a; if (janet_keyeq(key, "match")) { *out = janet_wrap_cfunction(cfun_peg_match); return 1; } return 0; } static const JanetReg peg_cfuns[] = { { "peg/compile", cfun_peg_compile, JDOC("(peg/compile peg)\n\n" "Compiles a peg source data structure into a <core/peg>. This will speed up matching " "if the same peg will be used multiple times.") }, { "peg/match", cfun_peg_match, JDOC("(peg/match peg text &opt start & args)\n\n" "Match a Parsing Expression Grammar to a byte string and return an array of captured values. " "Returns nil if text does not match the language defined by peg. The syntax of PEGs are very " "similar to those defined by LPeg, and have similar capabilities.") }, {NULL, NULL, NULL} }; /* Load the peg module */ void janet_lib_peg(JanetTable *env) { janet_core_cfuns(env, NULL, peg_cfuns); janet_register_abstract_type(&janet_peg_type); } #endif /* ifdef JANET_PEG */
{ "redpajama_set_name": "RedPajamaGithub" }
LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;4c9216c8.9808&FT=&P=619191&H=A&S=b

Sender: Mailing list for the LaTeX3 project <[log in to unmask]>
Subject: Re: Modules
From: Frank Mittelbach <[log in to unmask]>
Date: Mon, 17 Aug 1998 21:27:01 +0200
In-Reply-To: <13778.40005.889304.525595@isidor>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Parts/Attachments: text/plain (27 lines)

Bernd

> On Wed, 12 August 1998 21:49:41 +0200,
> Martin Schroeder <[log in to unmask]> writes:
> > In <[log in to unmask]> Frank Mittelbach <[log in to unmask]> writes:
> > > b) will be drastic: a current LaTeX format (without any packages
> > > loaded) uses about 51088 words of memory before begin document; if the
> > > average word length in commands is 10 (which is far too low with a
> > > consequent implemented module concept) then this gets basically blown
> > > to 500000 which is twice the amount of main mem
> [...]
>
> Frank, either I misunderstand your ``word'' or you are wrong with this
> analysis.

i guess neither. :-) the problem is that Martin cited me out of context. I was replying to a suggestion to replace TeX's token based mechanism, ie \foobar being internally one token in main mem and a few bits of char mem, with a mechanism in which \foobar is 7 tokens --- only that we were discussing \foo/bar_bas_... eg even longer streams of tokens stored and processed each time. my claim back then is that TeX is tailored to be a token based program and that giving this up is undesirable for several reasons.

frank
Q: How to append the selected input value to another div in Ajax AutoComplete for jQuery? Hi, I am using the Ajax AutoComplete for jQuery library http://www.devbridge.com/projects/autocomplete/jquery/ There are 2 demos there. In the first demo (Ajax auto-suggest sample (start typing country name)), when you select a country from the drop down, that country and an image are added to a div like this: <div id="selection"><img alt="" src="/global/flags/small/ht.png"> Haiti</div> (the selected dropdown value is Haiti). How can I do this? I want it to happen when clicking a drop-down value with the mouse and also when pressing enter on the selected drop-down value. I tried but could not think of a way. Please help :( A: In the autocomplete parameters you can define an onSelect callback. Add a function that changes the div with the id selection. E.g.: $(>selector<).autocomplete({ ... onSelect: function(value, data) { $('#selection').html('<img src="/global/flags/small/' + data + '.png" alt="" /> ' + value); } ... });
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: Flutter speech recognition: navigate to next screen? I am implementing the speech_to_text recognition feature in my Flutter app, which I have done successfully. My problem is that I want to perform a task when the user finishes speaking, so that the app goes to the next screen once the speech ends. Is there any expert here who can help me? Here is my code below: void _listen() async { if (!islistening) { bool available = await speechToText.initialize( onStatus: (val) => print('onStatus: $val'), onError: (val) => print('onError: $val'), ); if (available) { setState(() { islistening = true; }); speechToText.listen( onResult: (result) => setState(() { text = result.recognizedWords; if (_dialogKey.currentState != null && _dialogKey.currentState!.mounted) { _dialogKey.currentState!.setState(() { text =result.recognizedWords; }); } print(result.recognizedWords); }) ); } } else { setState(() => islistening = false ); speechToText.stop(); } } A: Run the code and look in your debug console to see what the onError message is. I am getting the error: onError: SpeechRecognitionError msg: no-speech, permanent: false The error relates to ChromeProxyService: https://github.com/flutter/flutter/issues/45380 import 'dart:async'; import 'dart:math'; import 'package:flutter/material.dart'; import 'package:speech_to_text/speech_recognition_error.dart'; import 'package:speech_to_text/speech_recognition_result.dart'; import 'package:speech_to_text/speech_to_text.dart' as stt; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', home: MyHomePage(), ); } } class MyHomePage extends StatefulWidget { MyHomePage({Key?
key}) : super(key: key); @override _MyHomePageState createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { late stt.SpeechToText _speechToText; bool _speechEnabled = false; String _lastWords = ''; bool _isListening=false; double _confidence=1.0; @override void initState() { super.initState(); _speechToText= stt.SpeechToText(); setState(() { _isListening=false; }); } /// This has to happen only once per app //void _initSpeech() async { // _speechEnabled = await _speechToText.initialize(); // setState(() {}); //} /// Each time to start a speech recognition session _listen() async { if (!_isListening) { print("reached"); _speechEnabled = await _speechToText.initialize( onStatus: (val)=>print('onStatus: $val'), onError: (val)=>print('onError: $val'), ); if (_speechEnabled) { setState(() { _isListening=true; }); } _speechToText.listen(onResult:(val)=>setState((){ _lastWords = val.recognizedWords; if(val.hasConfidenceRating && val.confidence>0) { _confidence=val.confidence; } })); } else{ setState(()=>_isListening=false); _speechToText.stop(); } //setState(() {}); } /// Manually stop the active speech recognition session /// Note that there are also timeouts that each platform enforces /// and the SpeechToText plugin supports setting timeouts on the /// listen method. //void _stopListening() async { // await _speechToText.stop(); // setState(() {}); //} /// This is the callback that the SpeechToText plugin calls when /// the platform returns recognized words. 
void _onSpeechResult(SpeechRecognitionResult result) { setState(() { _lastWords = result.recognizedWords; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('Speech Demo'), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ Container( padding: EdgeInsets.all(16), child: Text( 'Recognized words:', style: TextStyle(fontSize: 20.0), ), ), Expanded( child: Container( padding: EdgeInsets.all(16), child: Text( // If listening is active show the recognized words _speechToText.isListening ? '$_lastWords' // If listening isn't active but could be tell the user // how to start it, otherwise indicate that speech // recognition is not yet ready or not supported on // the target device : _speechEnabled ? 'Tap the microphone to start listening...' : 'Speech not available', ), ), ), ], ), ), floatingActionButton: FloatingActionButton( onPressed:() async {await _listen();}, tooltip: 'Listen', child: Icon(_isListening? Icons.mic_off : Icons.mic), ), ); } }
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: Modify CSS rule when another one is set My code is set to change the layout of the site through options (simple hide/display, left/right position for the sidebar), using the WordPress Customizer. $wp_customize->add_setting( 'sidebar_display', array( 'default' => '', 'section' => 'layout', 'sanitize_callback' => 'sanitize_layout' ) ); $wp_customize->add_control('sidebar_display', array( 'label' => __('Sidebar Display', ' '), 'section' => 'layout', 'settings' => 'sidebar_display', 'type' => 'radio', 'choices' => array( 'inline-block' => 'Display', 'none' => 'Hide', ), )); css #sidebar-primary {display: <?php echo $sidebar_display; ?>;} Since images of the main content (not in the sidebar) change their dimensions depending on whether the sidebar is displayed or not, I have an issue with that. I'm looking for a way to modify a css width: rule, with something that would say "If the sidebar is display:inline-block, .element img {width:76%}, else .element img{width:100%}". I did some research and I believe I can achieve this with LESS (?), but is there any other way to do this? I'd be glad to have any advice regarding my issue! A: Since CSS can't process conditions like this, it is impossible to do it with plain CSS. JavaScript seems like the right option for you. It is really easy if you use jQuery, a very powerful JavaScript framework. The code should look something like this: var display = $("#sidebar").css("display"); var percentage; if(display == "inline-block"){ percentage = "76%"; } else{ percentage = "100%"; } $("#result").css("width",percentage); $("#result").html("This one is "+percentage); #sidebar{ display: inline-block; } #result{ height: 50px; background-color: red; } #compare{ height: 50px; width: 100%; background-color: green; } <!-- This is the jQuery framework --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div id="sidebar"></div> <div id="compare">This one is 100%</div> <div id="result"></div>
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: Cannot use [noexcept] in MIDL 3 In this year's Build Talk C++/WinRT 2.0: Faster and smarter in the open, Kenny Kerr demonstrates the use of the [noexcept] attribute in IDL, providing better optimization opportunities to the compiler by eliding exception handling at the ABI. Trying this for myself, however, I ended up with MIDL compiler errors. The following stripped down IDL file namespace NS { interface IMyInterface { [noexcept] String DoStuff(); }; } produces the following diagnostic output: error MIDL2025: [msg]syntax error [context]: expecting . near "]" error MIDL2009: [msg]undefined symbol [context]: noexcept.String error MIDL2025: [msg]syntax error [context]: expecting ] or , near "DoStuff" error MIDL2025: [msg]syntax error [context]: expecting . near "(" error MIDL2026: [msg]cannot recover from earlier syntax errors; aborting compilation Am I doing something wrong here, or is the [noexcept] attribute not yet available in the GA releases of Visual Studio (16.1.4) or the Windows SDK (10.0.18362.0)? A: You'll need a newer version of MIDLRT. This feature is currently available in the insider builds of the Windows SDK and will ship with the next major update of Windows.
{ "redpajama_set_name": "RedPajamaStackExchange" }
The Washington Post, May 25, 2013 Blast kills 12 at Afghan mosque The insurgents placed explosives in a corner of the mosque, in Ghazni province's Andar district, before joining worshipers, according to Qasim Deswal, a local official By Sayed Salahuddin KABUL — A blast during Friday night prayers in a mosque in central Afghanistan killed 12 people, eight of them Taliban insurgents, officials said Saturday. The insurgents placed explosives in a corner of the mosque, in Ghazni province's Andar district, before joining worshipers, according to Qasim Deswal, a local official. They had been passing the village carrying the explosives they routinely use for roadside-bomb or suicide attacks against Afghan and NATO targets when they stopped at the mosque, Deswal said. Afghan people gather at the vehicle blown by a suicide bomb in Ghazni province on September 28, 2010. (Photo: Mustafa Andalib/Reuters) "We have a number of wounded people, too, from this explosion, some in critical condition," he said. Also Saturday, a would-be suicide bomber in the capital, Kabul, died when his explosives-rigged vest detonated early, the Associated Press reported police as saying. The mosque explosion in Ghazni came on the same day as an attack by another group of insurgents, including suicide bombers, on a police compound and a guest house used by foreigners in the heart of Kabul. In addition to the six assailants, four other people, including a Nepalese guard at the guest house, were killed in that attack, which lasted for hours. Several expatriate officials of the International Organization for Migration were wounded, the United Nations said. The Taliban used rocket-propelled grenades, hand grenades and assault rifles in the attack, which took place less than a mile from the Interior Ministry. Hashmat Stanikzai, a police official, said the attackers used a car bomb at the start of the raid and that all of them wore burqas, the Islamic dress commonly used by women in Afghanistan. 
"We had to pull out our family members from the area because of continued gun battles and at times successive explosions," said Shah Maluk, a resident. Characters Count: 2459 URL for news «Blast kills 12 at Afghan mosque» http://www.rawa.org/temp/runews/2013/05/25/blast-kills-12-at-afghan-mosque.html News Archive of the «Revolutionary Association of the Women of Afghanistan» (RAWA)
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
namespace Gu.SerializationAsserts.Tests.Dtos { using System.Xml; using System.Xml.Schema; using System.Xml.Serialization; public class ForgotReadEndElement : IXmlSerializable { public int Value { get; set; } XmlSchema IXmlSerializable.GetSchema() => null; void IXmlSerializable.ReadXml(XmlReader reader) { reader.MoveToContent(); reader.Read(); this.Value = XmlConvert.ToInt32(reader.ReadElementString(nameof(this.Value))); } void IXmlSerializable.WriteXml(XmlWriter writer) { writer.WriteElementString(nameof(this.Value), XmlConvert.ToString(this.Value)); } } }
# ld.exe: cannot open output file … : Permission denied

I recently installed Code::Blocks with mingw32 on Windows 7 Ultimate 32-bit in order to dust off my C skills, but this problem has me somewhat stumped.

I decided to fire off a short Fibonacci generator to make sure my setup was working, but I ran into a hurdle. The program compiles, links and so on like a charm, and I get a corresponding executable which runs as expected.

The problems occur if I try to compile again; then I get the following:

    c:/codeblocks/mingw/bin/../lib/gcc/mingw32/4.4.1/../../../../mingw32/bin/ld.exe: cannot open output file bin\Debug\Fibonacci.exe: Permission denied

I can't even edit the permissions of the generated executable. I've checked the usual suspects:

• The executable is verily not running.
• The path to the executable is read/writable to mingw32 (otherwise it wouldn't be able to build in the first place).
• I'm not running Cygwin in any shape or form.

And now for the funny bit: usually after a few minutes, any executables generated by mingw32 which are displaying this Access Denied behaviour will automatically vanish without any intervention from me.

I've googled this somewhat, but most of the other results were either vague or inapplicable. I wonder whether there is some Windows 7 security setting playing havoc with my .exe's, but I'm not knowledgeable enough about Windows 7 to know where to begin searching. Anyone have any ideas?

• Here's a total guess... if you're building to the bin\Debug directory then Code::Blocks may be doing something with its integrated debugger, keeping a file handle open on the executable. Try building it for release and see if you have the same problem. – Brian Gordon
• I think @BrianGordon's guess is a good one. If the program is running, kill it before trying to rebuild. – Keith Thompson
• The executables vanish? As in they get deleted automatically? Sounds like a virus scanner issue. – tinman
• Victor T.: I just get Permission denied. tinman: No anti-virus installed, I just run McAfee Stinger occasionally. I'm going to have a look at UAC tonight. – gzzzur
• Your Code::Blocks projects should not be created in directories like C:\, C:\Users\yourname, C:\Program Files or C:\Program Files\Code::Blocks. – 2147483647

## Answer: turn the "Application Experience" service back on

I had exactly the same problem right after switching off some (in my opinion unnecessary) Windows services. It turned out that when I switched ON again the "Application Experience" service, everything resumed working fine.

Maybe you simply have to turn on this service? To switch on Application Experience:

1. Click the Windows start button.
2. In the box labeled "Search programs and files" type services.msc and click the search button. A new window titled "Services" opens.
3. Right-click the "Application Experience" line and select "Properties" from the popup menu.
4. Change Startup type to "Automatic (delayed start)".
5. Restart the computer.

Application Experience should prevent the problem in the future.

• This seems to be the actually good solution to this really nasty problem. I've seen it unsolved on some forum threads and the like. – Cimbali
• Worked also for me; still, I don't understand what's going on. – 5agado
• Great answer, came back after restart to +1. – Philip Rego
• Worked for me as well, cannot explain it. – Robin Bruegger
• I'm trying to build a software package with MSYS2/mingw32 and encounter the same problem. I get the ld error when the configure script is trying to compile a test program. The "Application Experience" magic didn't work. – Seppo Enarvi

## Answer (accepted): find the process holding the file

If you think the executable is locked by a process, try Process Explorer from SysInternals. In File/Handle, enter Fibonacci.exe and you should see who holds the file.

If that is not enough, you can use Process Monitor (from SysInternals again) to follow the activity of all processes on your system on Fibonacci.exe. With a little bit of analysis (call stacks), you may find out why access to the file is denied and what makes it disappear.

• I picked your answer because it looks like the most plausible solution given my problem statement. I wasn't able to try it out, though, since the machine I had it running on crashed. Since I've now decided to go with Linux, it is kind of a moot point for me. – gzzzur

## Answer: the program is still running

Your program is still running. You have to kill it by closing the command-line window, or press Ctrl+Alt+Delete, open Task Manager, go to Processes, and kill the processes that match your filename.

## Answer: terminate from the Eclipse console

The best solution is to go to the console in the Eclipse IDE and click the red button to terminate the program. You will see that your program is running and its output can be seen there.

• This is the best solution; everyone using Eclipse should use this. – shadrack Mwangi

## Answer: run as administrator

I had the same behaviour, and fixed it by running Code::Blocks as administrator.

## Answer: end the process, then close the IDE

1. Open Task Manager -> Processes -> click on the .exe (Fibonacci.exe) -> End Process.

If that doesn't work:

2. Close the Eclipse IDE (or whatever IDE you use) and repeat step 1.

## Answer: an antivirus lock (Unlocker)

I had a similar problem. Using a freeware utility called Unlocker (version 1.9.2), I found that my antivirus software (Panda Free) had left a hanging lock on the executable file even though it didn't detect any threat. Unlocker was able to unlock it.

• Thanks, I was able to fix the same problem by turning off File System Auto-Protect in Symantec Endpoint Protection. – Seppo Enarvi

## Answer: disable the antivirus (McAfee)

Got the same issue. Read this. Disabled the antivirus software (McAfee). Et voilà. Confirmed by the antivirus log:

    Blocked by Access Protection rule d:\mingw64\x86_64-w64-mingw32\bin\ld.exe d:\workspace\cpp\bar\foo.exe User-defined Rules:ctx3 Action blocked : Create

## Answer: it may be your antivirus software

In my case Malwarebytes was holding a handle on my program's executable. Using Process Explorer to close the handle, or just disabling the antivirus for a bit, works just fine.

• [SOLVED] In my case it was the anti-virus that was blocking it. I opened the antivirus program and it had logged an event that ld.exe was blocked. If you just add an exception for ld.exe, this error goes away; no need to disable the antivirus. – Chandan Apsangi

## Answer: kill the process with taskkill

Problem cause: the process of the current program is still running without interruption. (This is why you don't get this issue right after a restart.)

The fix is simple: go to cmd and type the command taskkill -im process-name.exe -f. For example:

    taskkill -im demo.exe -f

Here, "demo" is the program name.

## Answer: close the open console window

I got this error when using the Atom editor and MinGW (through a package called gpp-compiler) for C++. Closing the open console window fixed my issue.

## Answer: Bitdefender quarantine

I experienced a similar issue. Bitdefender automatically quarantined each .exe file I created with MinGW g++. Instead of the full .exe file I found a file with a weird extension: testAutoPtr1.exe.48352.gzquar. When I opened quarantined items in Bitdefender, I found my .exe file quarantined there.
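Most of the answers above converge on one diagnosis: some process (the program itself, a debugger, or an antivirus scanner) still holds a handle on the output file, so ld.exe cannot open it for writing. As a rough cross-platform illustration of that condition (not part of the original thread; the helper name `is_writable` is made up for this sketch), a small Python probe can report whether a file is currently openable for writing:

```python
import os
import tempfile

def is_writable(path):
    """Return True if `path` can be opened for read/write.

    On Windows, a running .exe (or one held open by an antivirus
    scanner) fails this check with PermissionError -- the same
    condition that makes ld.exe report "Permission denied".
    On POSIX systems a running binary can usually still be opened,
    so the probe is mainly meaningful on Windows.
    """
    try:
        with open(path, "r+b"):
            return True
    except OSError:
        return False

# Demo with a scratch file: a freshly created, unheld file is writable.
fd, scratch = tempfile.mkstemp(suffix=".exe")
os.close(fd)
print(is_writable(scratch))  # True
os.remove(scratch)
```

Running such a probe on bin\Debug\Fibonacci.exe before invoking the linker would distinguish "file is locked" from other failure modes, though it cannot tell you *which* process holds the lock; for that, Process Explorer's handle search (described above) is the right tool.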
Nothing But the Truth: My Story (Paperback)
Vicky Pattison (author)

NEW AND UPDATED EDITION, WITH ALL THE JUNGLE GOSSIP

Vicky Pattison was once best known as the outspoken, fiery star of the notorious reality show Geordie Shore. It took the challenging conditions and terrifying trials of the I'm a Celebrity jungle for the nation to see Vicky's true colours: brave, kind, a team player and loyal friend - and mistress of the wicked one-liner! Millions of viewers fell in love with Vicky, and it was no surprise when they crowned her their Queen of the Jungle in a landslide victory.

Now, in her number one bestselling autobiography, Vicky takes us back to where it all began: to the loving family who have always had her back; to the showbiz daydreams of an ambitious little girl; and to the outrageous adventures of an outgoing young woman making her way in the world. With courageous honesty, Vicky reveals how she experienced the highs and lows of fame on Geordie Shore, how she hit rock bottom when a turbulent relationship fell apart, and how she dug deep to turn her life around and come out fighting. And for the first time Queen Vicky shares her exclusive behind-the-scenes I'm a Celeb gossip and reveals all her exciting plans for the future.

Think you know Vicky Pattison? It's time to read the truth, the whole truth and NOTHING BUT THE TRUTH.

Publisher: Little, Brown Book Group
namespace WebSite.Utils.Enums
{
    public enum AuthorizationLevels : int
    {
        NoUserRequired = 0,
        JustContributor,          // = 1
        JustAdministator,         // = 2 (sic: "Administrator" is misspelled in the source)
        ContributorOrAdministator // = 3
    }
}
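The C# enum above relies on implicit auto-increment after `NoUserRequired = 0`, so the remaining members take the values 1, 2, and 3. A hypothetical Python `IntEnum` port (illustration only, not part of the project; the original member names, misspellings included, are kept so the mapping is exact) makes those values explicit:

```python
from enum import IntEnum

class AuthorizationLevels(IntEnum):
    """Hypothetical Python port of the C# AuthorizationLevels enum.

    Values mirror C#'s auto-increment after the explicit `= 0`.
    """
    NoUserRequired = 0
    JustContributor = 1
    JustAdministator = 2           # original (misspelled) name kept verbatim
    ContributorOrAdministator = 3  # original (misspelled) name kept verbatim

# Note: these numeric values identify roles; they are not a privilege
# ranking, so ordering comparisons between members are not meaningful.
print(AuthorizationLevels.JustContributor.value)  # 1
```

Keeping the underlying values stable matters if they are persisted (in a database or serialized payload), which is also why renaming the misspelled members in the C# source would be a breaking change for anything storing them by name.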
# Latex Chemical Compound

## Overview of Latex Chemical Compound

If you ever pluck a leaf or flower or cut a branch of certain angiosperms, you will find a white milky solution coming out of the cut stem. This white solution is called latex. It is defined as a dispersion of microparticles of different polymers in water. The latex chemical compound occurs in several sources in nature and can also be synthesized in laboratories and industries. For example, rubber is a form of latex that is routinely produced in industry.

### Natural Sources of Latex

More than 10% of all angiosperms produce latex. This amounts to around 20,000 flowering plants belonging to more than 40 families. Both dicot and monocot plants produce latex. Around 14 percent of tropical and 6 percent of temperate plant species produce and use latex. Some of the plant families that produce latex include Asclepiadoideae, Apocynaceae, Euphorbiaceae, Sapotaceae, Moraceae, Asteraceae, and Papaveraceae. For example, the opium poppy plant is the major source of opium and morphine.

Some fungal species also produce latex when they experience an injury. Examples are Lactarius deliciosus and related milk-cap fungi.

### Role of Latex in Plants

Latex has a defined defense function in plants: it protects them from herbivores. Several studies have shown that slugs prefer to eat leaves from which the latex has been drained off and avoid intact leaves. Latex is considered better protection than other means like hairs, prickles, and thorns. The latex of the sandhill milkweed plant, for instance, can trap and kill newly hatched caterpillars of the monarch butterfly.

Studies of the ingredients of latex from different plants have found that it contains roughly 50- to 1000-fold higher concentrations of defense proteins and other defensive substances than other plant tissues. Sometimes latex contains compounds that would be toxic to the parent plant itself; however, these toxins are effectively compartmentalized in the plant body. They can also be antinutritive to herbivores.

Latex also displays unique clotting properties. For example, in the plant Cryptostegia grandiflora, latex rushes to the site of injury and clots the wound. This limits the wastage of plant sap and other products, and the latex's stickiness traps the mouthparts of attacking insects.

Latex is also considered a medium for the movement and storage of plant substances such as sugars, salts, alkaloids, tannins, enzymes, and plant waste, and is believed to help maintain the water concentration in different plant parts. It enables the complex mixing of substances like waxes, fats, resins, and gums. Because latex moves through the plant and travels longitudinally, it helps conduct substances from one part of the plant to another.

Finally, latex acts as an excretory reservoir: the plant excretes several waste products into the latex solution.

### Application of Latex in Our Daily Lives

Latex has found several applications in our daily lives. The most widely used latex is that processed by the rubber industry. Around 12,000 species of plants produce latex that contains rubber, although most of it is not deemed suitable for commercial use. Commercial rubber is used to make products ranging from tires, rubber bands, bat grips, mattresses, gloves, balloons, and swimming caps to health-care products like condoms and catheters. Gutta-percha and balata latex resemble rubber latex except that they contain an inelastic polymer.

Chewing gum is another important contribution of plant latex; its traditional basis is the latex of the jelutong and chicle trees. Many companies have also started adding compounds of medicinal value to chewing gums.

As stated earlier, the dried latex obtained from the opium poppy is known as opium. Opium is the source of several alkaloids with analgesic properties, such as thebaine, codeine, and morphine, some of which are used to make stronger variants. The latex also contains non-analgesic alkaloids like noscapine and papaverine.

Latex has also been used for clothing. The cloth sticks to the skin and produces the effect of a second skin, and people around the world wear such latex-based clothing.

FAQ (Frequently Asked Questions)

Q1. What is the natural source of latex?

Ans: Latex is naturally obtained from several plant sources. It is produced by more than 10% of all angiosperms, amounting to more than 20,000 flowering plants belonging to around 40 families, including Asclepiadoideae, Apocynaceae, Euphorbiaceae, Sapotaceae, Moraceae, Asteraceae, and Papaveraceae. Both monocot and dicot plants produce latex.

Along with plants, some fungi also produce latex, as a defense response to injury. One example is Lactarius deliciosus; other related milk-cap fungi produce latex as well.

Because of its varied sources, several plants are cultivated to obtain latex on a large scale. The most widely cultivated is the rubber plant, whose rubber is used to manufacture many products, including tires, rubber bands, bat grips, mattresses, gloves, balloons, swimming caps, and health-care products like condoms and catheters.

Q2. What is the function of latex in plants?

Ans: Latex performs several functions in plants. It serves as a source and sink for plant waste products, acting as an excretory reservoir. Since latex travels longitudinally, the plant uses it to conduct different compounds to its various parts. Latex contains polymers of several compounds such as amino acids, rubber, wax, fat, and sugar.

Latex is also used for defense, and is found to be more effective than other plant defenses like thorns and hairs. It contains toxins that can kill or deter herbivores; for example, the latex of the sandhill milkweed plant can trap and kill newly hatched caterpillars of the monarch butterfly.

Latex also demonstrates clotting properties, which help minimize the wastage of plant products. Its stickiness traps the mouthparts of insects, preventing them from causing further damage to the plant.