Espaillat is one of the 32 provinces of the Dominican Republic. It is divided into 5 municipalities and its capital city is Moca. Located in north-central Dominican Republic (Cibao), it is bordered by the provinces of La Vega to the south, Santiago and Puerto Plata to the west, and María Trinidad Sánchez to the north-east. The province has a coastline to the north with the Atlantic Ocean. It is named for Ulises Francisco Espaillat (1823–1878), the 19th-century author who was briefly President of the Republic in 1876.

Municipalities and municipal districts

As of June 20, 2006, the province is divided into the following municipalities (municipios) and municipal districts (distrito municipal - D.M.) within them:

Cayetano Germosén
Gaspar Hernández
Joba Arriba (D.M.)
Veragua (D.M.)
Villa Magante (D.M.)
Jamao al Norte
Moca
Canca La Reina (D.M.)
El Higüerito (D.M.)
José Contreras (D.M.)
Juan López (D.M.)
La Ortega (D.M.)
Las Lagunas (D.M.)
Monte de La Jagua (D.M.)
San Víctor

The following is a sortable table of the municipalities and municipal districts with population figures from the 2014 estimate. The urban population comprises those living in the seats (cabeceras, literally "heads") of municipalities or of municipal districts; the rural population comprises those living in the districts (secciones, literally "sections") and neighborhoods (parajes, literally "places") outside them. For comparison with the municipalities and municipal districts of other provinces, see the list of municipalities and municipal districts of the Dominican Republic.

References

External links

Oficina Nacional de Estadística, Statistics Portal of the Dominican Republic
Oficina Nacional de Estadística, Maps with administrative division of the provinces of the Dominican Republic, downloadable in PDF format

Provinces of the Dominican Republic States and territories established in 1885
{ "redpajama_set_name": "RedPajamaWikipedia" }
29
Manhattan Bridge – a double-deck suspension bridge over the East River in New York City, connecting Lower Manhattan with Brooklyn. It was opened in 1909, eight years after construction began. It is 2,089 m long and 37 m wide. It has three spans suspended from four cables roughly 980 m long and 54 cm in diameter, strung over two pylons (towers) 102 m tall. The longest span measures 448 m. The upper level of the bridge carries two roadways with four traffic lanes, two in each direction. The lower level holds a single one-way three-lane roadway (traffic toward Manhattan only), four subway tracks, a sidewalk, and a bicycle path. According to 2008 data, about 70,000 motor vehicles cross the bridge each day.

History

The plan to build the bridge was presented in 1901 by Gustav Lindenthal, commissioner of the newly created New York City Department of Bridges. His original concept assumed that the structure of the Manhattan Bridge would combine certain elements of the Brooklyn Bridge and the Williamsburg Bridge. Construction work began on October 1, 1901, although discussions about the final design were still ongoing. Two years later Lindenthal presented a new design, which was nevertheless rejected: it was criticized for its massive steel towers and excessively deep stiffening trusses, and for abandoning the traditional suspended-cable construction in favor of a chain solution with eyebar hangers. After the controversy surrounding Lindenthal's design, George Best became the new commissioner of the New York City Department of Bridges and appointed Othniel Foster Nichols as the chief engineer supervising the bridge's construction. The experienced engineer Rudolf Modrzejewski (Ralph Modjeski) became Nichols's adviser and, after consultations, chose the four-cable suspension bridge solution. The structural calculations were performed by Leon Moisseiff, who proposed applying a new, still little-known and rarely used deflection theory to the construction. The bridge was opened on December 31, 1909, even though its construction was not finally completed until 1912. The total cost of construction was 31 million dollars. The bridge was rebuilt in the 1940s. In 1982 the Manhattan Bridge was included in a renovation program and a series of works began that were expected to last at least until 2013. In January 2010 the replacement of all the cables supporting the bridge structure began.

References

Bridges and viaducts in New York City Buildings and structures in Manhattan Buildings and structures in Brooklyn Suspension bridges in the United States Road-rail bridges and viaducts in the United States
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,758
\section{Introduction}

Detecting optical transients (OTs) associated with gravitational wave (GW) events is one of the challenges in time domain astronomy. The GW localization region on the sky is typically large ($\sim$100 square degrees) when two or three GW detectors are operating \citep{abbott_2019, coughlin_2020}. When following up GW events, it is crucial for observatories with $<$1 square degree field-of-view (FOV) to develop efficient methods to detect plausibly-linked transients \citep{coughlin_2020}. Successfully detecting an electromagnetic counterpart to a GW reduces the localization uncertainty on the sky and allows for further characterizing measurements.

The primary goal of this paper is to describe a method for detecting transients by comparing two images of the same region of the sky taken at different times and by different telescopes. The method is based purely on machine learning (ML) algorithms, specifically artificial neural networks (ANNs). The ML approach to transient detection is efficient, as it can search through a large data set in a short amount of time. Hence, ML is a valuable tool for solving the problem of detecting OTs in the time domain.

Difference image analysis (DIA) is the standard method used to search for OTs. DIA methods are based on subtracting a reference image from a target image. The method attempts to compensate for the difference in point spread functions (PSFs) of each image. Compensating for differences in PSF allows one to subtract images taken by different telescopes or under varying atmospheric conditions. However, even with PSF compensation, the resulting image difference will be imperfect, leaving behind residual flux that detection algorithms can mistake for OTs. Many DIA algorithms have been proposed since the original \cite{alard_1998} paper, notably those by \cite{bramich_2008} and \cite{zackay_2016}. Regardless of the DIA method used, it is customary to train ML agents (e.g. random forest algorithms or neural networks) to sift through all the OT candidates, remove the spurious subtraction artifacts (``bogus sources''), and retain the likeliest true OT candidates \citep{zwickyRB, iPTFRB01, ogleRB, diaz_2016, artola_2020}. A real/bogus classifier can be avoided if there is a manual operator, but this becomes cumbersome for large surveys where bogus sources can outnumber potential real ones by 100 to 1. For these reasons, most systematic searches of the sky, like those for GW optical counterparts, will require a real/bogus classifier at the end of the analysis pipeline.

Since ML classification seems to be an unavoidable element in the search for OTs, we propose training ML algorithms on the images directly, avoiding the need for DIA methods. The ML classifier takes two small image insets cropped around a source detected on the target image. One inset contains the detected source and the other is cropped around the same location on the reference image. When the source appears on the target inset but is missing from the reference inset, the classifier calls the case an OT. When a source is present on both insets, the classifier calls the case a non-OT. Providing the classifier with a sufficient number of example OT and non-OT cases trains it to be robust at detecting true OTs on subsequent imaging runs.

Bypassing DIA has several advantages. The neural network method we propose is robust against PSF variations across different surveys and filters.
Observatories lacking an extensive reference archive benefit from a method that works regardless of the references used, as long as the sky region is covered by some comparable photometric survey. Since DIA methods are typically computationally expensive, avoiding them leads to a drastic reduction in processing time for pipelines and an overall simplification in their design.

To test the feasibility of our proposed method, we built and trained two ANN models --- one is a convolutional neural network (CNN) and the other is a dense layer network (DLN). The models accept target-reference inset pairs as input and return the likelihood of the inset pair being an OT as output. We trained the models on simulated data and evaluated their predictions on test data. The test data were created from images of galaxies obtained by the Dr. Cristina V. Torres Memorial Astronomical Observatory (CTMO) and covered by the Sloan Digital Sky Survey (SDSS) \citep{gunn_1998, gunn_2006}.

\section{Scientific Background}

Binary neutron star (BNS) or neutron star-black hole (NSBH) binary systems are the most promising astrophysical events for producing electromagnetic counterparts to GWs \citep{coughlin_2020}. Compact binary mergers are expected to produce an r-process-powered thermal transient, or a ``kilonova'' \citep{Berger_2013}. Theoretical light curves for kilonovae indicate a rapidly-fading optical and near-infrared (NIR) transient, detectable by telescopes within a week of the associated GW event.

The first plausible kilonova detected was associated with the short gamma-ray burst (GRB) 130603B, which the Swift and Konus-Wind satellites detected on 3 June 2013. The short GRB lasted 0.4 seconds and was observed 12 arcminutes offset from the center of the galaxy NGC 3691 \citep{frederiks_2013}. The associated NIR transient matched the expected brightness and color of a kilonova at the time of observation, providing strong evidence that its source was a BNS or NSBH merger.

On 17 August 2017 at 12:41:04 UTC the Advanced LIGO and Virgo detectors observed the first BNS merger. GW170817 had a signal-to-noise ratio (SNR) of 32.4 and a false alarm rate (FAR) of 1 in $8.0 \times 10^4$ years. The component masses were both calculated to be in the range $1.17$--$1.60M_\odot$, implying the progenitors were both likely to be neutron stars. The GW signal was localized to within 28 deg$^2$ at 90\% confidence and estimated to have a luminosity distance of $(40 \pm 8)$ Mpc \citep{maggiore_2018}.

Approximately 1.7 seconds following the detection of GW170817, the Fermi Gamma-Ray Burst Monitor (Fermi-GBM) and INTEGRAL satellites detected a short GRB. The short GRB uncertainty region overlapped that of the GW, improving overall localization estimates for follow-up observations. The near-simultaneous spatial and temporal localization of GW170817 and GRB 170817A had a probability of $5.0 \times 10^{-8}$ of occurring by chance and is, hence, strong evidence that BNS mergers are the progenitors of short GRBs \citep{abbott_2017}. Many observatories followed up this event to search for EM counterparts \citep{abbott_2017}, with the Transient Robotic Observatory of the South (TOROS) Collaboration being part of the search campaign \citep{diaz_2017}. Subsequent follow-up observations detected an optical counterpart, named AT2017gfo, at 11 hours post-merger, offset approximately 10 arcseconds from the core of the lenticular galaxy NGC 4993 \citep{abbott_2017}.
Additionally, several teams observed both X-ray and radio emission at the position of AT2017gfo at nine and 16 days post-merger, respectively \citep{abbott_2017}.

The light curves of AT2017gfo exhibited rapid luminosity change in the ultraviolet (UV), optical, and infrared (IR) bands. An initial UV-blue peak transitioned rapidly to the red and IR bands. The rate of change for the blue bands was about two magnitudes per day. The red bands declined by about 0.3 magnitudes per day for the first 1.5 days; the decline then stalled for four days before continuing slowly for another eight days \citep{maggiore_2018}. The color evolution is unusual for an OT and different from any previously observed type of source. The light curves match the predicted light evolution of a ``kilonova'' --- an r-process-powered thermal transient produced by the merger of a BNS or NSBH binary system. AT2017gfo fit a kilonova model with three ejecta components, each with different masses, velocities, and opacities (see Table \ref{best fit parameters}) \citep{villar_2017}. Spectroscopic observations showed that the blue spectrum was continuous and featureless, due to line broadening from the high ejecta velocities of that component. The near-IR spectrum showed the emergence of broad spectral features, related to the radioactive decay of synthesized r-process elements at late post-merger times \citep{abbott_2017}.

\begin{table} \centering \begin{tabular}{|c|c|c|c|}\hline & \textbf{blue} & \textbf{purple} & \textbf{red} \\ \hline $\kappa$ & 0.5 cm$^2$/g & 3 cm$^2$/g & 10 cm$^2$/g \\ \hline $M_{\text{ej}}$ & 0.02\(M_\odot\) & 0.047\(M_\odot\) & 0.011\(M_\odot\) \\ \hline $v_{\text{ej}}$ & 0.27$c$ & 0.15$c$ & 0.14$c$ \\ \hline \end{tabular} \caption{The best-fit parameters for AT2017gfo using a three-component kilonova model (lanthanide-free ``blue'', intermediate-opacity ``purple'', and lanthanide-rich ``red'' components). The fitted ejecta parameters are opacity $\kappa$, mass $M_{\text{ej}}$, and velocity $v_{\text{ej}}$. Credit: Villar et al. (2017)} \label{best fit parameters} \end{table}

A kilonova is significantly different from any transient previously observed. The peak luminosity of a kilonova is predicted to be $10^{41}$ erg/s, placing it between a nova ($10^{38}$ erg/s) and a supernova ($10^{43}$ erg/s) \citep{kasliwal_2011}. Kilonovae are expected to be observable on the order of 10 days, while a nova can be observed for months and a supernova for up to a year or more. Kilonova progenitors are BNS and NSBH mergers, whereas a nova is caused by the fusion of hydrogen on the surface of a white dwarf in a binary system, and a supernova is an explosion caused by the core collapse of a massive star. For a full overview of the history of kilonovae, theoretical models, and observations, we refer the reader to \cite{metzger_2019}. A summary of all observations taken by collaborations in the follow-up of GW170817 is given by \citet{abbott_2017}. The TOROS Collaboration contribution is further detailed in \citet{diaz_2017}.

\section{Method} \label{sec:methods}

To test our proposed method we ran an experiment to validate our assumptions. The experiment consists of testing two different machine learning architectures based on artificial neural networks: after training and validating them on simulated data, we test them on pairs of real images. One member of each pair is from CTMO and the reference member comes from the SDSS survey. This section is organized as follows.
In section \ref{sec:testrealdata}, we describe which CTMO images were used and how the equivalent SDSS images were downloaded and aligned. In section \ref{sec:datasets}, we present how we created the training and test data sets. In section \ref{sec:cnnarch}, we describe the architecture of the networks. In section \ref{sec:metricresults}, we present the final metric values for our experiment. Finally, in section \ref{sec:diaanncompare}, we describe a similar experiment using a set of images that had previously been searched for optical transients with a DIA method (\citealt{artola_2020}, with a random forest real/bogus classifier and also with a CNN-based real/bogus classifier). This second experiment allows for a more direct comparison between the method proposed here and the more conventional DIA-based approach used previously.

\subsection{Image preprocessing} \label{sec:testrealdata}

We targeted five galaxies (Table \ref{ctmo data}) covered by SDSS using the instrumentation of CTMO. Four of the five targets were taken on February 8, 2020 UTC with the current optical configuration of CTMO, which consists of a PlaneWave Corrected Dall-Kirkham 17'' astrograph with a ProLine 16803 CCD camera. Each image is unfiltered, has a 60-second exposure time, was taken at $2 \times 2$ binning, and has a FOV of $80 \times 80$ arcminutes. We observed the fifth target, IC 4559, at an earlier date (2 July 2019 UTC) when CTMO had a different optical setup: the instrument used for these data was an Apogee F16M CCD camera. This image is unfiltered, taken at $2 \times 2$ binning with a 300-second exposure time, and has a FOV of $50 \times 50$ arcminutes.

We used the CTMO Analysis Library (CAL) to bias- and dark-subtract, as well as flatfield-correct, each image \citep{camuccio_2019}. We used a two-dimensional spatially-varying mesh to subtract the median background of each image. Since each target consisted of a series of exposures, we plate-solved each image and aligned them per series using their world coordinate system (WCS) header metadata. We created a median-combined stack of the aligned images per series. (The SDSS reference images have an approximate limiting magnitude corresponding to a 3-sigma detection threshold.)

\begin{table} \centering \begin{tabular}{|c|c|c|c|}\hline \textbf{Object} & \textbf{RA (J2000)} & \textbf{Dec (J2000)} & \textbf{Redshift} \\ \hline IC 4559 & 15:35:53.51 & +25:20:28.07 & 0.0345 \\ \hline PGC 21547 & 07:40:29.98 & +83:47:25.88 & 0.0068 \\ \hline PGC 21577 & 07:41:12.48 & +42:44:57.74 & 0.0358 \\ \hline PGC 21708 & 07:45:07.25 & +46:04:20.72 & 0.0312 \\ \hline PGC 21856 & 07:48:34.63 & +44:41:17.80 & 0.0204 \\ \hline \end{tabular} \caption{CTMO targets} \label{ctmo data} \end{table}

We used the \textit{SkyView} function from the \textit{astroquery} \citep{ginsburg_2019} package to download reference images from SDSS. Knowing the center coordinates and FOV of each CTMO image, we requested the SDSS reference in the \textit{g} filter with a size of $2000 \times 2000$ pixels. All SDSS images are taken from Data Release 9 (DR9) and have an exposure time of 54 seconds.

We expect each image in a given pair to have different orientations. For an effective alignment solution, each pixel in both images should correspond to the same sky coordinates. To achieve image alignment, CAL employs the \textit{reproject} package from \textit{Astropy}. The \textit{reproject} package aligns the SDSS image with the CTMO image and crops it to the same FOV.
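As a minimal sketch of this download-and-align step, the snippet below uses the \textit{SkyView} interface of \textit{astroquery} together with \textit{reproject}; the target coordinates (taken from Table \ref{ctmo data}) and the file name are placeholder assumptions, not values hard-coded in our pipeline.

\begin{verbatim}
from astropy.io import fits
from astroquery.skyview import SkyView
from reproject import reproject_interp

# Hypothetical CTMO stack; any plate-solved FITS image with a valid WCS works.
ctmo_hdu = fits.open("ctmo_stack.fits")[0]

# Download the SDSS g-band reference covering the same field (SkyView survey
# name "SDSSg"); get_images returns a list of HDU lists.
sdss_hdu = SkyView.get_images(position="07:40:29.98 +83:47:25.88",
                              survey=["SDSSg"], pixels=2000)[0][0]

# Interpolate the SDSS image onto the CTMO WCS so that each pixel in both
# images corresponds to the same sky coordinates.
aligned_sdss, footprint = reproject_interp(sdss_hdu, ctmo_hdu.header)
\end{verbatim}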
\subsection{Creating data sets} \label{sec:datasets}

We anticipate transient events to look like new stellar sources in the sky. We wanted to construct the ML methods so that they would recognize new sources in both follow-up observations and previously-observed fields. Using an entire image as input to the neural network proved burdensome. Therefore, we created a data set of smaller images --- the data set is composed of cropped images for each source detected on the images. We created a data set of 3370 samples from five CTMO-SDSS image pairs (hereafter the ``test data set''). Half of the samples were transients and the other half were non-transients. Training an ML model requires many samples ($>$10000). For this reason, we simulated a data set for the training component.

\subsubsection{Test data set}

We postulate that source extraction programs could find transient events based on the assumption that they would look like stellar sources. We built transient and non-transient samples from CTMO and SDSS source sub-images. Non-transient samples are a pair of sub-images with the same detected source --- one from CTMO and the other from SDSS. Transient samples are a pair of sub-images, one from CTMO containing a source, the other from SDSS containing no source --- only background.

First, we detected sources on the CTMO image. We used the \textit{Source Extraction and Photometry (SEP)} library in Python \citep{bertin_1996, barbary_2016}. The program detects objects on each image (in this study at 3-$\sigma$ confidence) and provides their coordinates via the WCS header solution.

After source extraction, we normalized both images to a common signal level. Each image pair was taken with different instruments, so the first step was to quantify the difference in signal. CTMO images reach a greater depth than the SDSS ones, which are limited at 3 sigma. The increased depth of the CTMO images is possibly due to their unfiltered nature, whereas the SDSS images were obtained through a \textit{g}-band filter.

We made sub-images containing a single object from the list of detected sources on each CTMO image. An entire CTMO image is $2048 \times 2048$ pixels and each sub-image was $21 \times 21$ pixels centered on the coordinates of the detected source. Similarly, we made cuts of the aligned SDSS image at each detected source position, giving a pair of cropped images (one from CTMO and one from SDSS showing the same part of the sky). A few example non-transient samples are shown in Figure \ref{real non tr}.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datareal_nontr.png} \includegraphics[scale=0.3]{figures/datareal_nontr2.png} \includegraphics[scale=0.3]{figures/datareal_nontr3.png} \caption{Real non-transient samples} \label{real non tr} \end{figure}

We did not observe any transients on these images, so we created artificial transient samples. We produced a sub-image containing a single object from the CTMO image (in the way described in the previous paragraph) and chose a spot on the SDSS image where there was only background. A few examples of these transient samples are shown in Figure \ref{real tr}.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datareal_tr.png} \includegraphics[scale=0.3]{figures/datareal_tr2.png} \includegraphics[scale=0.3]{figures/datareal_tr3.png} \caption{Real transient samples} \label{real tr} \end{figure}
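The detection-and-cropping step above can be sketched as follows; the $21 \times 21$-pixel inset size and the 3-$\sigma$ threshold are taken from the text, while the helper itself and its edge handling are illustrative assumptions.

\begin{verbatim}
import numpy as np
import sep

def make_inset_pairs(ctmo, sdss, half=10):
    """Detect sources on the CTMO image at 3-sigma and cut matching
    21x21-pixel insets from the CTMO and aligned SDSS images."""
    data = ctmo.astype(np.float64)
    bkg = sep.Background(data)          # spatially varying background mesh
    objects = sep.extract(data - bkg, 3.0, err=bkg.globalrms)
    pairs = []
    for obj in objects:
        x, y = int(round(obj["x"])), int(round(obj["y"]))
        # Skip sources too close to the image edge for a full inset.
        if half <= x < data.shape[1] - half and half <= y < data.shape[0] - half:
            ctmo_cut = data[y - half:y + half + 1, x - half:x + half + 1]
            sdss_cut = sdss[y - half:y + half + 1, x - half:x + half + 1]
            pairs.append((ctmo_cut, sdss_cut))
    return pairs
\end{verbatim}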
\subsubsection{Simulated data set}

For the training set, we simulated point sources superimposed on a mean background with noise. We tuned the simulation parameters to obtain samples similar to those in the test data set. We set the sample size to an image of $21 \times 21$ pixels. The background noise is generated from a normal distribution with a fixed mean level of zero analog-to-digital units (ADU) and a standard deviation of 0.5 ADU. The profile of each point source is a two-dimensional Gaussian distribution, with different FWHM on the two main axes, and an arbitrary rotation with respect to the $(x, y)$ pixel axes of the image. The FWHM for the major and minor axes are chosen randomly from a uniform distribution. The orientation of the Gaussian profile with respect to the image axes is also selected uniformly over the unit circle.

For the source simulation, it is important to decide which image in a pair corresponds to the CTMO image and which to the SDSS image. The CTMO sources are brighter and larger in size. We set the amplitude of the brighter source to $(35 \pm 10)$ ADU and its FWHM to $(5 \pm 1.5)$ pixels. For the dimmer source, we set the amplitude to $(5 \pm 15)$ ADU and the FWHM to $(0.5 \pm 1.5)$ pixels.

A sample consists of a pair of small images and a label indicating whether the pair is a transient (label ``1'') or not (label ``0''). For non-transient samples, both images contain a simulated object: on each simulated pair, one source simulates a source expected on CTMO images, while the other simulates SDSS image sources. For transient samples, only one image contains a simulated object, whereas the other contains only simulated background. During the simulation, we chose the likelihood of generating a point source on the background to be 0.5, meaning that 50\% of the samples are transient samples while the rest are non-transient samples. Examples of simulated transient and non-transient samples are shown in Figures \ref{sim tr} and \ref{non sim tr}.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datasim_tr.png} \includegraphics[scale=0.3]{figures/datasim_tr2.png} \includegraphics[scale=0.3]{figures/datasim_tr3.png} \caption{Simulated transient samples} \label{sim tr} \end{figure}

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datasim_nontr.png} \includegraphics[scale=0.3]{figures/datasim_nontr2.png} \includegraphics[scale=0.3]{figures/datasim_nontr3.png} \caption{Simulated non-transient samples} \label{non sim tr} \end{figure}
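A minimal sketch of this simulation recipe is given below; the numerical values follow the text, while the choice of normal draws for the $(\mu \pm \sigma)$ parameters, the FWHM floor, and the helper names are our assumptions.

\begin{verbatim}
import numpy as np

RNG = np.random.default_rng()

def simulate_image(amp_mu, amp_sd, fwhm_mu, fwhm_sd, size=21, source=True):
    """A 21x21 cutout: Gaussian background (mean 0, sigma 0.5 ADU) plus,
    optionally, an elliptical Gaussian point source at the center."""
    img = RNG.normal(0.0, 0.5, (size, size))
    if source:
        amp = RNG.normal(amp_mu, amp_sd)
        fwhm = np.abs(RNG.normal(fwhm_mu, fwhm_sd, 2))   # major/minor axes
        sig_a, sig_b = np.maximum(fwhm, 0.5) / 2.355     # FWHM -> sigma
        theta = RNG.uniform(0.0, np.pi)                  # random orientation
        y, x = np.mgrid[:size, :size] - size // 2
        u = x * np.cos(theta) + y * np.sin(theta)
        v = -x * np.sin(theta) + y * np.cos(theta)
        img += amp * np.exp(-0.5 * ((u / sig_a) ** 2 + (v / sig_b) ** 2))
    return img

def simulate_pair():
    """Return (ctmo_like, sdss_like, label), where label 1 marks a transient."""
    transient = RNG.random() < 0.5
    bright = simulate_image(35, 10, 5, 1.5)              # CTMO-like source
    dim = simulate_image(5, 15, 0.5, 1.5, source=not transient)
    return bright, dim, int(transient)
\end{verbatim}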
\subsection{Building the Neural Network Models} \label{sec:cnnarch}

We built two ANN models tasked with classifying whether an inset pair contains a transient. One model uses convolutional layers, which are particularly useful for image analysis \citep{lecun_1990, lecun_1998} (the CNN model). The other model uses dense layers, which are the basic structure of ANNs \citep{mcculloch_1943} (the DLN model). The training process in ML requires fitting a large number of free parameters and, therefore, a large amount of training data. Since data containing real transients are scarce, we used simulated samples in the training phase and data collected from real images in a final testing phase. The performance measures we report are from the testing phase.

We tested how both models predicted the existence of transients using test image data from CTMO and reference images from SDSS. To download and analyze SDSS images we used the \textit{Astropy} package \citep{robitaille_2013, pricewhelan_2018}. We explain how we generated the training and testing samples in Section \ref{sec:datasets}. We created two networks with different topologies --- one a CNN and the other a dense layer network. We trained both networks on the simulated data set and tested them on the test data set. We used the Keras library \citep{chollet_2015} with the TensorFlow backend \citep{abadi_2015} to construct the models and the scikit-learn library \citep{pedregosa_2011} to evaluate their predictions.

\subsubsection{Convolutional model with single multi-layer input}

We built and tested the first model using convolutional layers, hence we refer to it as the convolutional model. For this task, we built the network using the sequential model in Keras. As input the model takes one image with two channels --- one channel accepts the CTMO inset and the other accepts the SDSS inset. The model is a binary classifier --- as output it returns either ``1'' (a transient sample) or ``0'' (a non-transient sample). The network structure is shown in Figure \ref{fig:CNN}. The number of parameters in each layer and additional properties such as the activation function are shown in Table \ref{cnn model summary}. The total number of parameters of the CNN is 1475.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/CNN_model1.png} \caption{Schema of the CNN model} \label{fig:CNN} \end{figure}

\begin{table} \centering \begin{tabular}{|c|c|c|}\hline \textbf{Layer} & \textbf{Number of parameters} & \textbf{Properties} \\ \hline Convolutional2D & 190 & AF = \textit{relu} \\ \hline Convolutional2D & 455 & AF = \textit{relu} \\ \hline MaxPooling2D & 0 & pool size = (3, 3) \\ \hline Dropout & 0 & 0.25 \\ \hline Convolutional2D & 138 & AF = \textit{relu} \\ \hline MaxPooling2D & 0 & pool size = (2, 2) \\ \hline Flatten & 0 & \\ \hline Dense & 40 & AF = \textit{relu} \\ \hline Dropout & 0 & 0.5 \\ \hline Dense & 550 & AF = \textit{relu} \\ \hline Dropout & 0 & 0.3 \\ \hline Dense & 102 & AF = \textit{softmax} \\ \hline \end{tabular} \caption{A summary of the CNN model parameters. AF stands for ``activation function''. The \textit{relu} function applies a rectified linear unit activation function. The \textit{softmax} function converts a real vector to a vector of categorical probabilities.} \label{cnn model summary} \end{table}

\subsubsection{Dense model with double input}

In the second model, we primarily use dense layers. As input the model takes two images separately and then combines them. We built the network using the functional API in Keras. The structure of the network is shown in Figure \ref{NN dense}. The number of parameters in each layer and some additional properties are shown in Table \ref{nn modelsummary}. The total number of parameters of this model is 37594, considerably more than the previous model.
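The two topologies can be written in Keras as sketched below; the layer widths are chosen to reproduce the parameter counts in Tables \ref{cnn model summary} and \ref{nn modelsummary}, while the $3 \times 3$ kernel size is an assumption consistent with those counts.

\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

# CNN: a single input holding the CTMO and SDSS insets as two channels.
cnn = keras.Sequential([
    layers.Input((21, 21, 2)),
    layers.Conv2D(10, 3, activation="relu"),
    layers.Conv2D(5, 3, activation="relu"),
    layers.MaxPooling2D(3),
    layers.Dropout(0.25),
    layers.Conv2D(3, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(10, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(50, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),   # [non-transient, transient]
])  # 1475 trainable parameters in total

# DLN: two inputs processed by parallel dense stacks, then concatenated.
def dense_branch(t):
    for units in (64, 32, 8, 4):
        t = layers.Dense(units, activation="relu")(t)
    return t

in_ctmo, in_sdss = keras.Input((21, 21)), keras.Input((21, 21))
merged = layers.Concatenate()([dense_branch(in_ctmo), dense_branch(in_sdss)])
x = layers.Flatten()(merged)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(2, activation="softmax")(x)
dln = keras.Model([in_ctmo, in_sdss], out)   # 37594 trainable parameters
\end{verbatim}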
\begin{figure} \centering \includegraphics[scale=0.3]{figures/CNN_model2.png} \caption{Schema of the DLN model} \label{NN dense} \end{figure}

\begin{table} \centering \begin{tabular}{|c|c|c|}\hline \textbf{Layer} & \textbf{Number of parameters} & \textbf{Properties} \\ \hline Input layer 1 & 0 & \\ \hline Input layer 2 & 0 & \\ \hline Dense input1 & 1408 & 64, AF = \textit{relu} \\ \hline Dense input2 & 1408 & 64, AF = \textit{relu} \\ \hline Dense input1 & 2080 & 32, AF = \textit{relu} \\ \hline Dense input2 & 2080 & 32, AF = \textit{relu} \\ \hline Dense input1 & 264 & 8, AF = \textit{relu} \\ \hline Dense input2 & 264 & 8, AF = \textit{relu} \\ \hline Dense input1 & 36 & 4, AF = \textit{relu} \\ \hline Dense input2 & 36 & 4, AF = \textit{relu} \\ \hline Concatenate & 0 & \\ \hline Flatten & 0 & \\ \hline Dense & 21632 & 128, AF = \textit{relu} \\ \hline Dense & 8256 & 64, AF = \textit{relu} \\ \hline Dense & 130 & 2, AF = \textit{softmax} \\ \hline \end{tabular} \caption{A summary of the DLN model parameters.} \label{nn modelsummary} \end{table}

\subsection{Validation and Test Metrics} \label{sec:metricresults}

We trained both networks using 10000 samples of simulated data. We split the samples into two subsets: 8000 samples to train the network and 2000 samples to validate the results. We trained the CNN and DLN models for 30 epochs using the Adam optimizer, and we evaluated the performance of the training with the accuracy metric. The resulting accuracy reflects a compromise between achieving the best results and avoiding overfitting of the network. The training process is shown in Figures \ref{learningCNN} and \ref{learningNN}.

\begin{figure} \centering \begin{tabular}{cc} \includegraphics[scale=0.24]{figures/accuracy_fun.png} & \includegraphics[scale=0.24]{figures/loss_function_fun.png}\\ \end{tabular} \caption{The learning process of the CNN model.} \label{learningCNN} \end{figure}

\begin{figure} \centering \begin{tabular}{cc} \includegraphics[scale=0.24]{figures/accuracy_seq.png} & \includegraphics[scale=0.24]{figures/loss_function_seq.png}\\ \end{tabular} \caption{The learning process of the DLN model.} \label{learningNN} \end{figure}

After training and validation, we calculated the predictions of each model on the test data samples. The prediction output is the likelihood of the sample being a transient. A value of one means absolute confidence that the source is a transient, and a value of zero indicates a non-transient source. The confusion matrices are shown in Table \ref{matrices}. A confusion matrix shows how many times the network makes an error and the type of error. The diagonal of the matrix contains the number of correctly classified samples per class, and the off-diagonal elements are the misclassifications for each class. For a two-class system, the off-diagonal elements are the errors of classifying a transient as a non-transient and vice versa.
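Continuing the sketch above, training and evaluation might look as follows; we assume a sparse categorical cross-entropy loss, since the text specifies the Adam optimizer and the accuracy metric but not the loss function.

\begin{verbatim}
from sklearn.metrics import classification_report, confusion_matrix

# x_sim: simulated inset pairs stacked as channels, shape (10000, 21, 21, 2)
# y_sim: labels, 0 = non-transient, 1 = transient
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
history = cnn.fit(x_sim[:8000], y_sim[:8000], epochs=30,
                  validation_data=(x_sim[8000:], y_sim[8000:]))

# Final testing on the real CTMO-SDSS inset pairs.
y_pred = cnn.predict(x_test).argmax(axis=1)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))   # precision, recall, F1
\end{verbatim}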
\begin{table} \centering \begin{tabular}{|c|c|c|}\hline Metric & CNN model score & DLN model score \\ \hline Accuracy & 0.989 & 0.969 \\ \hline Precision & 0.981 & 0.949 \\ \hline Recall & 0.996 & 0.99 \\ \hline F1 score & 0.989 & 0.97 \\ \hline \end{tabular} \caption{Metrics of the CNN and DLN models.} \label{metrics} \end{table}

\begin{table} \centering \begin{tabular}{|c|c|c|c|}\hline & Real / Classified & Non-transient & Transient \\ \hline 1-model & non-transient & 1653 & 32 \\ \cline{2-4} (CNN) & transient & 6 & 1679 \\ \hline 2-model & non-transient & 1595 & 90 \\ \cline{2-4} (DLN) & transient & 13 & 1672 \\ \hline \end{tabular} \caption{Confusion matrices of the CNN and DLN models.} \label{matrices} \end{table}

The test data consist of 1685 transient samples and the same number of non-transients. The CNN model mistakenly classified 32 non-transients as transients and only six transients as non-transients. The dense model made additional errors in non-transient classification. The errors might be caused by the sources having lower statistical significance in the SDSS images in comparison to the CTMO images, so there might be samples in which the SDSS source is of the same order of intensity as the background. The network cannot tell the difference between the dim source and the background, and thus misidentifies these samples as transients.

It is possible to avoid such false recognition by adding more low signal-to-noise reference samples to the training data set. Another step could be changing the training data set altogether. If more CTMO data were available, it would be possible to create a training data set from real images in the same way as the test data set. Consequently, there would be no need to use simulated data. Regardless, considering the two types of errors, it is preferable to have a non-transient event classified as a transient rather than the opposite, because then no potential transient event is missed. A higher false-alarm rate merely requires additional checks of some non-transient cases.

Classification error examples are shown in Figures \ref{CNN errors} and \ref{NN errors}. The most common error is produced when the SDSS source is weak. Another type of error occurs when the CTMO source is so bright and large that it nearly covers the entire sub-image. In one particular case, the network made an error when attempting to identify two sources in one sub-image.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/CNNerror1.png} \includegraphics[scale=0.3]{figures/CNNerror2.png} \includegraphics[scale=0.3]{figures/CNNerror3.png} \caption{CNN model errors (left column is CTMO and right column is SDSS data).} \label{CNN errors} \end{figure}

\begin{figure} \centering \includegraphics[scale=0.3]{figures/NNerror1.png} \includegraphics[scale=0.3]{figures/NNerror2.png} \includegraphics[scale=0.3]{figures/NNerror3.png} \caption{DLN model errors (left column is CTMO and right column is SDSS data).} \label{NN errors} \end{figure}

Both models exhibit high accuracy. The accuracy is not 100$\%$ in either case, which suggests that the networks are not overfitted. The CNN model demonstrated slightly better results than the DLN model, probably because the dense layers have many more parameters to train. The performance of the convolutional layers demonstrates that they are generally better suited to image analysis.
The next step of this project could be to build a model with a double input, such as the DLN model, but using convolutional layers rather than dense layers.

\section{Comparison of the DIA and ANN approaches on data connected to GW170104} \label{sec:diaanncompare}

In this section we present the results of comparing the DIA approach and the ANN approach\footnote{In this comparison the reference image was taken by the same telescope, not drawn from SDSS.} in the search for optical counterparts of GW170104. The initial search for astronomical transients was carried out by \cite{artola_2020}. The authors analyzed images taken by the TOROS Collaboration during the LIGO Scientific Collaboration's second observing run, O2 (November 2016 -- August 2017). TOROS followed up three GW alerts, of which two were truly astrophysical: GW170104 and GW170817. In this paper, we only analyze the GW170104 follow-up data.

The data for GW170104 were taken by the Estacion Astronomica Bosque Alegre (EABA) in Cordoba, Argentina. TOROS observed the most massive galaxies within the high-probability localization region of the GW event in January 2017, and produced a reference set of images of the same objects, retrieved later in November 2017. An example image set is shown in Figures \ref{image o2} and \ref{ref image o2}.

\begin{figure} \centering \includegraphics[scale=0.1]{figures/image_o2.png} \caption{An image of galaxy ESO 202-009 taken by EABA in January 2017.} \label{image o2} \end{figure}

\begin{figure} \centering \includegraphics[scale=0.1]{figures/image_o2ref.png} \caption{The reference image of ESO 202-009 taken by EABA in November 2017.} \label{ref image o2} \end{figure}

The transient detection method used by \cite{artola_2020} involved DIA. The main goal of DIA is to transform one image to become compatible with another. The transformation uses a convolution kernel to reduce the differences between the PSFs of the two images. The method used by the authors to find and apply the kernel was introduced by \cite{bramich_2008}. Following the transformation, the image is subtracted from the reference to reveal new sources. The DIA method generates a large number of spurious source artifacts (i.e. ``bogus sources''). A typical ratio of real-to-bogus transients is 1:100. An ML algorithm is then used to distinguish between real and bogus sources.

The authors of \cite{artola_2020} generated synthetic ``real'' sources to create a training set for teaching an ML algorithm to distinguish between real and bogus transients. The method involved repeatedly injecting the profile of a star into an image. Then, they subtracted the images and extracted sources to detect objects on the difference image. Some detected sources were injected objects (i.e. ``real'' transients) and some were artifacts (i.e. ``bogus'' transients). Having samples of real and bogus transients, the authors built and trained a random forest, decision trees, and a support-vector machine --- the best results were obtained by the random forest.
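The injection step described above can be sketched as follows; the helper names and the flux scaling are illustrative assumptions, not the exact routine of \cite{artola_2020}.

\begin{verbatim}
import numpy as np

def inject_star(image, psf, x, y, flux):
    """Add a PSF stamp, normalized to unit sum and scaled to `flux`,
    at integer pixel position (x, y); edge handling is omitted."""
    half = psf.shape[0] // 2
    stamp = flux * psf / psf.sum()
    image[y - half:y + half + 1, x - half:x + half + 1] += stamp
    return image

def inject_random_stars(image, psf, n, flux_range, rng=None):
    """Inject n synthetic 'real' transients at random positions."""
    rng = rng or np.random.default_rng()
    half = psf.shape[0] // 2
    for _ in range(n):
        x = rng.integers(half, image.shape[1] - half)
        y = rng.integers(half, image.shape[0] - half)
        image = inject_star(image, psf, x, y, rng.uniform(*flux_range))
    return image
\end{verbatim}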
Although the problem addressed by \cite{artola_2020} is similar to the one addressed in this paper, the methods are quite different in nature. Models based on DIA distinguish between real and bogus sources collected from a single difference image. Our method bypasses the subtraction step and, instead, works directly on the target-reference pair of images, focusing on one source at a time and identifying it as a transient or non-transient. Additionally, DIA methods require examples of real and bogus transients to train ML algorithms, while our method requires examples of transients (equivalent to reals) and non-transients. Nevertheless, to compare both methods, we applied our algorithm to the same data used by \cite{artola_2020}.

We created the training and testing data as follows. We extracted all samples for the test data sets from the original 13 images taken during the GW170104 follow-up as described in \cite{artola_2020}. The transient samples are the profiles of injected stars on one image and the background on the other image --- they are equivalent to the set of ``real'' transients in the DIA method. The non-transient samples are pairs of thumbnails of the same objects detected by SExtractor in the target and reference images. The comparison data set has a total of 3557 labelled samples. Examples of transient and non-transient samples are shown in Figure \ref{real tr o2}.

We retrained the models with different input sizes matching the conditions of \cite{artola_2020}. We simulated a new training data set, adjusting the background mean level and standard deviation of the simulated training samples to zero and 2.5, respectively, to match those of the test set. Furthermore, we set the amplitude of the simulated sources to an average of 3 ADU and a standard deviation of 10 ADU. The sources are shaped like Gaussian profiles with a $\sigma$ value of $(30 \pm 10.5)$ ADU. In this case, we simulated both sources with the same parameters, creating a total of 10000 samples. Examples of simulated transient and non-transient samples are shown in Figure \ref{sim tr o2}.

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datarealo2_tr.png} \includegraphics[scale=0.3]{figures/datarealo2_nontr.png} \caption{The top row shows an example of a \textit{real} transient sample and the bottom row shows an example of a \textit{real} non-transient sample.} \label{real tr o2} \end{figure}

\begin{figure} \centering \includegraphics[scale=0.3]{figures/datasimo2_tr.png} \includegraphics[scale=0.3]{figures/datasimo2_nontr.png} \caption{The top row shows an example of a \textit{simulated} transient sample and the bottom row shows an example of a \textit{simulated} non-transient sample.} \label{sim tr o2} \end{figure}

The number of parameters to train is different because the sub-image in each sample is larger ($43 \times 43$ pixels). The total number of parameters is 2195 for the CNN model and 62938 for the DLN model --- a significant difference. The number of parameters in each layer and some additional properties are shown in Tables \ref{cnn modelsummary o2} and \ref{nn modelsummary o2}.
\begin{table} \centering \begin{tabular}{|c|c|c|}\hline \textbf{Layer} & \textbf{Number of parameters} & \textbf{Properties} \\ \hline Convolutional2D & 190 & AF = \textit{relu} \\ \hline Convolutional2D & 455 & AF = \textit{relu} \\ \hline MaxPooling2D & 0 & pool size = (3, 3) \\ \hline Dropout & 0 & 0.25 \\ \hline Convolutional2D & 138 & AF = \textit{relu} \\ \hline MaxPooling2D & 0 & pool size = (2, 2) \\ \hline Flatten & 0 & \\ \hline Dense & 760 & AF = \textit{relu} \\ \hline Dropout & 0 & 0.5 \\ \hline Dense & 550 & AF = \textit{relu} \\ \hline Dropout & 0 & 0.3 \\ \hline Dense & 102 & AF = \textit{softmax} \\ \hline \end{tabular} \caption{A summary of the CNN model parameters for O2 data.} \label{cnn modelsummary o2} \end{table}

\begin{table} \centering \begin{tabular}{|c|c|c|}\hline \textbf{Layer} & \textbf{Number of parameters} & \textbf{Properties} \\ \hline Input layer 1 & 0 & \\ \hline Input layer 2 & 0 & \\ \hline Dense input1 & 2816 & 64, AF = \textit{relu} \\ \hline Dense input2 & 2816 & 64, AF = \textit{relu} \\ \hline Dense input1 & 2080 & 32, AF = \textit{relu} \\ \hline Dense input2 & 2080 & 32, AF = \textit{relu} \\ \hline Dense input1 & 264 & 8, AF = \textit{relu} \\ \hline Dense input2 & 264 & 8, AF = \textit{relu} \\ \hline Dense input1 & 36 & 4, AF = \textit{relu} \\ \hline Dense input2 & 36 & 4, AF = \textit{relu} \\ \hline Concatenate & 0 & \\ \hline Flatten & 0 & \\ \hline Dense & 44160 & 128, AF = \textit{relu} \\ \hline Dense & 8256 & 64, AF = \textit{relu} \\ \hline Dense & 130 & 2, AF = \textit{softmax} \\ \hline \end{tabular} \caption{A summary of the DLN model for O2 data.} \label{nn modelsummary o2} \end{table}

The confusion matrices for these models are shown in Table \ref{matrices o2}. The main error is, again, in the non-transient classification. In Table \ref{metrics o2}, we compare the metrics for three models: the two networks and the random forest (RF) algorithm tested in \cite{artola_2020}. We cannot treat this comparison as entirely exact, because the classification problems are inherently different. Regardless, the DLN model obtained the overall best results.

The main advantage of our method is that it skips the subtraction step, hence it is much quicker and less computationally expensive. Because it skips subtraction, our method can compare images which are significantly different (e.g. taken by different instruments), meaning an optical transient can be detected without taking a reference image hours or days later. Hence, our method allows us to detect optical transients with very low latency. Additionally, our method does not require an extra classification between bogus and real transients.
\begin{table} \centering \begin{tabular}{|c|c|c|c|}\hline \textbf{Metric} & \textbf{CNN model score} & \textbf{DLN model score} & \textbf{RF score} \\ \hline Accuracy & 0.91 & 0.918 & 0.89 \\ \hline Precision & 0.856 & 0.866 & 0.92 \\ \hline Recall & 0.993 & 0.997 & 0.86 \\ \hline F1 score & 0.919 & 0.927 & 0.89 \\ \hline \end{tabular} \caption{Metrics of the CNN model, DLN model, and RF algorithm for O2 data.} \label{metrics o2} \end{table}

\begin{table} \centering \begin{tabular}{|c|c|c|c|}\hline & \textbf{Real/Classified} & \textbf{Non-transient} & \textbf{Transient} \\ \hline Model 1 & non-transient & 1379 & 322 \\ \cline{2-4} (CNN) & transient & 4 & 1842 \\ \hline Model 2 & non-transient & 1403 & 308 \\ \cline{2-4} (DLN) & transient & 12 & 1834 \\ \hline \end{tabular} \caption{Confusion matrices of the CNN and DLN models for O2 data.} \label{matrices o2} \end{table}

\section{Conclusion}

We have shown that it is possible to detect OTs by comparing images from two different telescopes. Our method opens a new way to search for OTs using reference images from another survey, making it possible to detect an OT in a single image taken by a telescope. This feature is especially useful for the fast detection of kilonovae during EM follow-up observations of GW events, and is readily adoptable by small observatories participating in these targets of opportunity.

We tested two neural network models --- one based on CNNs and the other based on dense layers. Our models achieved high accuracy (0.989 for the CNN model and 0.969 for the DLN model). The main error in both networks was misidentifying non-transient samples as transients. A reason for false positive detections could be that the two images are on different intensity scales (i.e. a given source might have different pixel intensities between the target and reference insets). There are sample cases in which the object is much weaker in the SDSS image and, therefore, the network sees it as part of the background.

We also tested both models on data taken by the TOROS Collaboration in follow-up to the GW170104 event. There, DIA had initially been the primary transient detection method, followed by an ML inspection of source-extracted objects on the difference images to distinguish between transients and artifacts. With our method, the models classified whether or not the sample images contained a transient, and they achieved high accuracy scores: 0.91 for the CNN model and 0.918 for the DLN model (the RF score was 0.89). In this comparative study, the DLN model performed best.

To expand this project, it would be useful to build other models with better efficiency. Models with convolutional layers contain fewer parameters and, hence, are much easier and quicker to train, which will be useful in the analysis of larger images or data sets. A next step could be to combine the two models, i.e. a model with double input but using convolutional layers. Another idea is to use a more advanced network based on CNNs --- generative adversarial networks (GANs) \citep{goodfellow_2016b}. Models using GANs can solve the problem of transforming one image to be similar to another and are, therefore, an alternative to transforming the image with a convolution kernel as in DIA methods. The goal of this project is to apply these algorithms to TOROS data and incorporate them into the standard analysis pipeline. The first step is to test the method on TOROS data.
We could test whether the models detect the real kilonova observed by the TOROS Collaboration in follow-up to GW170817. This work opens a new approach to OT detection, relieving us of the need to take a prior reference image with the same telescope. Instead, we can use an image taken by another imaging survey as the reference. Our method can help small-FOV telescopes make efficient searches for EM counterparts to GW events. This paper presents a promising beginning for a new class of methods to search for OTs, which will be expanded in the future.

\section*{Acknowledgements}

The TOROS collaboration acknowledges support from Ministerio de Ciencia, Tecnolog\'{\i}a e Innovaci\'on Productiva (MinCyT) and Consejo Nacional de Investigaciones Cient\'{\i}ficas y Tecnol\'ogicas (CONICET) from Argentina, grants from the National Science Foundation of the United States of America, NSF PHYS 1156600 and NSF HRD 1242090, and the government of Salta province in Argentina. Adam Zadrożny and NCNR are grateful for financial support from MNiSW grant DIR/WK/2018/12 and NCN grant UMO-2017/26/M/ST9/00978. Katarzyna Wardęga is grateful for a scholarship from The University of Texas Rio Grande Valley for the academic year 2019-2020, during which this project was carried out as part of a student exchange between the University of Warsaw, Faculty of Physics, and The University of Texas Rio Grande Valley, Department of Physics. The authors thank Lucas Macri for his helpful comments on the manuscript.

Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.

This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.

\bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,336
Joel Chan Shan-chung (born 2 October 1976) is a Hong Kong actor and singer contracted to TVB and Shaw Brothers Pictures. He made his debut in 1995 as a solo Cantopop singer, later transitioning into acting. Chan won the Best Supporting Actor award at the 2017 TVB Anniversary Awards with his role as Kent in the action drama The Unholy Alliance. In 2019, Chan played his first male leading role in the critically acclaimed supernatural drama Barrack O'Karma. With his dual role as Siu Wai-ming and Lau Yuk-fai, Chan earned his first nominations for Best Actor and Most Popular Male Character at the 2019 TVB Anniversary Awards, eventually placing among the top 5 in both categories. In 2022, Chan won the Best Actor award with his role in the supernatural drama Barrack O'Karma 1968.

Personal life

In 2011, Joel Chan and Florinda Ho, the third daughter of "Gambling King" Stanley Ho, were repeatedly rumored to be in a relationship, but the two denied the news. Joel Chan also admitted that at the end of 2010 he had gone through divorce procedures with his ex-wife Ponny Yeung, with whom he had been in a relationship for 10 years. In the early morning of August 14, 2011, Florinda Ho uploaded a photo on Weibo, officially publicizing the relationship. In 2013, he and Florinda ended their two-year relationship.

On November 1, 2019, Chan married his girlfriend from outside the industry, Apple Ho, after dating for 5 years. The two held a wedding banquet at the W Hotel in Hong Kong. On 14 February 2020, he announced on Instagram that his wife was pregnant. On July 1 of the same year, his wife gave birth to their 7.1-pound son, Jaco Chan, by caesarean section.

Due to their common interest in long-distance running, Chan, along with Benjamin Yuen, Brian Tse, Jack Wu, Nancy Wu, Paisley Wu, Elaine Yiu, Selena Lee and Mandy Wong, formed the group "Crazy Runner".

Filmography

Television dramas (TVB)
Television dramas (Shaw Brothers Pictures)

Film
1999: The Social Worker from the Edge
2000: God's Family Hymnal
2003: Dream and Desire
2003: Mark Six Comedy
2004: Cop Unbowed
2004: A-1 Headline
2004: The Beautiful Life
2005: Futago
2010: 72 Tenants of Prosperity
2014: Grey Met Shrek
2019: Line Walker 2
TBA: Endless Battle

Discography
1994: A B C D
1996: 愛是全意 (Love is My Focus)

Other works

Musicals
1996: Snow White as the Prince

Music video appearances
"一不小心" (Once Not Careful) by Nicola Cheung

Soundtracks
"天數" (Counting Destiny), theme song for A Change of Destiny, duet with Steven Ma

References

External links
Joel Chan's Official TVB Blog
Joel Chan's Official Sina Microblog

1976 births Living people TVB veteran actors 20th-century Hong Kong male actors Hong Kong male film actors Hong Kong male television actors Hong Kong male singers Cantopop singers Macau emigrants to Hong Kong 21st-century Hong Kong male actors Macau-born Hong Kong artists
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,607
By: LATF Staff | Dec. 1, 2018, 5:14 a.m. For those who are Jewish and single, there's only one place they will want to be this December 24, 2018 – at the MATZOBALL powered by JSwipe. The event, which will be held in cities around the country, is in its 32nd year, and will be the bash that those looking to have fun and meet other singles will want to attend. It's a party each year that thousands of people attend, with many of them finding themselves getting lucky in love. The event is sponsored by JSwipe, the #1 Jewish dating app around the world. From meeting up and having fun to finding their soul mate, MATZOBALL has been the catalyst that has helped many Jewish singles find each other. "This is one of the most important events each year in the Jewish community if you are single," explains Andrew Rudnick, from Mazel Events, LLC and founder of the MATZOBALL. Boston – To be held at the Royale Nightclub. Ticket prices start at $40. Delray Beach – To be held at II Bacio. Ticket prices start at $30. Fort Lauderdale – To be held at Blue Martini. General ticket prices start at $30. Los Angeles – To be held at The Argyle Hollywood. Ticket prices start at $30. Miami – To be held at LIV Nightclub. Ticket prices start at $50. New York City – To be held at Capitale. Ticket prices start at $50. Philadelphia – To be held at Vesper Sporting Club. Ticket prices start at $30. "If you are at least 21, Jewish, and single, you will not want to miss attending a MATZOBALL party," added Rudnick.
{ "redpajama_set_name": "RedPajamaC4" }
5,190
\section{Introduction}

News articles are very dynamic due to their relation to continuously developing events that typically have short lifespans. For a news article to be popular, it is essential for it to propagate to a large number of readers within a short time. Hence there exists a competition among different sources to generate content which is relevant to a large subset of the population and becomes virally popular. Traditionally, news reporting and broadcasting have been costly, which meant that large news agencies dominated the competition. But the ease and low cost of online content creation and sharing has recently changed the traditional rules of competition for public attention. News sources now concentrate a large portion of their attention on online mediums where they can disseminate their news effectively and to a large population. It is therefore common for almost all major news sources to have active accounts in social media services like Twitter to take advantage of the enormous reach these services provide.

Due to the time-sensitive aspect and the intense competition for attention, accurately estimating the extent to which a news article will spread on the web is extremely valuable to journalists, content providers, advertisers, and news recommendation systems. This is also important for activists and politicians who are increasingly using the web to influence public opinion. However, predicting the online popularity of news articles is a challenging task. First, \emph{context} outside the web is often not readily accessible, and elements such as local and geographical conditions and various circumstances that affect the population make this prediction difficult. Furthermore, \emph{network properties} such as the structure of the social networks that are propagating the news, influence variations among members, and the interplay between different sections of the web add other layers of complexity to this problem. Most significantly, intuition suggests that the \emph{content} of an article must play a crucial role in its popularity. Content that resonates with a majority of the readers, such as a major world-wide event, can be expected to garner wide attention, while specific content relevant only to a few may not be as successful.

Given the complexity of the problem due to the above mentioned factors, a growing number of recent studies \cite{DBLP:journals/cacm/SzaboH10}, \cite{DBLP:conf/webi/LeeMS10}, \cite{Tatar2011}, \cite{6036808}, \cite{DBLP:conf/www/LermanH10} make use of early measurements of an item's popularity to predict its future success. In the present work we investigate a more difficult problem, which is the prediction of social popularity without using early popularity measurements, by instead solely considering features of a news article \emph{prior} to its publication. We focus this work on observable features in the content of an article as well as its source of publication. Our goal is to discover whether any predictors relevant only to the content exist and whether it is possible to make a reasonable forecast of the spread of an article based on content features.

The news data for our study were collected from Feedzilla~\footnote{www.feedzilla.com} --a news feed aggregator-- and measurements of the spread are performed on Twitter~\footnote{www.twitter.com}, an immensely popular microblogging social network. Social popularity for the news articles is measured as the number of times a news URL is posted and shared on Twitter.
To generate features for the articles, we consider four different characteristics of a given article. Namely: \begin{itemize} \item The news source that generates and posts the article \item The category of news this article falls under \item The subjectivity of the language in the article \item Named entities mentioned in the article \end{itemize} We quantify each of these characteristics by a score making use of different scoring functions. We then use these scores to generate predictions of the spread of the news articles using regression and classification methods. Our experiments show that it is possible to estimate ranges of popularity with an overall accuracy of 84\% considering only content features. Additionally, by comparing with an independent rating of news sources, we demonstrate that there exists a sharp contrast between traditionally popular news sources and the top news propagators on the social web. In the next section we provide a survey of recent literature related to this work. Section 3 describes the dataset characteristics and the process of feature score assignment. In Section 4 we will present the results of prediction methods. Finally, in Section 5 we will conclude the paper and discuss future possibilities for this research. \section{Related Work} Stochastic models of information diffusion as well as deterministic epidemic models have been studied extensively in an array of papers, reaffirming theories developed in sociology such as diffusion of innovations \cite{rogers1995diffusion}. Among these are models of viral marketing \cite{DBLP:journals/tweb/LeskovecAH07}, models of attention on the web \cite{Wu06112007}, cascading behavior in propagation of information \cite{DBLP:journals/sigkdd/GruhlLGT04} \cite{Leskovec07cascadingbehavior} and models that describe heavy tails in human dynamics \cite{PhysRevE.73.036127}. While some studies incorporate factors for content \emph{fitness} into their model \cite{Simkin2008}, they only capture this in general terms and do not include detailed consideration of content features. \citeauthor{salganik2010} performed a controlled experiment on music, comparing quality of songs versus the effects of social influence\cite{salganik2010}. They found that song quality did not play a role in popularity of highly rated songs and it was social influence that shaped the outcome. The effect of user influence on information diffusion motivates another set of investigations \cite{DBLP:conf/kdd/KempeKT03}, \cite{ICWSM101530},\cite{Agarwal:2008:IIB:1341531.1341559}, \cite{DBLP:conf/www/LermanH10}. On the subject of news dissemination, \cite{DBLP:conf/kdd/LeskovecBK09} and \cite{DBLP:conf/wsdm/YangL11} study temporal aspects of spread of news memes online, with \cite{DBLP:conf/icwsm/LermanG10} empirically studying spread of news on the social networks of digg and twitter and \cite{DBLP:conf/icwsm/SunRML09} studying facebook news feeds. A growing number of recent studies predict spread of information based on early measurements (using early votes on digg, likes on facebook, click-throughs, and comments on forums and sites). \cite{DBLP:journals/cacm/SzaboH10} found that eventual popularity of items posted on youtube and digg has a strong correlation with their early popularity; \cite{DBLP:conf/webi/LeeMS10} and \cite{Tatar2011} predict the popularity of a discussion thread using features based on early measurements of user comments. \cite{6036808} propose the notion of a virtual temperature of weblogs using early measurements. 
\cite{DBLP:conf/www/LermanH10} predict digg counts using stochastic models that combine design elements of the site (that in turn lead to collective user behavior) with information from early votes. Finally, recent work on variation in the spread of content has been carried out by \cite{Romero2011} with a focus on categories of twitter hashtags (similar to keywords). This work is aligned with ours in its attention to the importance of content in variations of popularity; however, they consider categories only, with news being one of the hashtag categories. \cite{DBLP:conf/sbp/YuCK11} conduct similar work on social marketing messages.

\section{Data and Features}
This section describes the data, the feature space, and feature score assignment in detail.

\subsection{Dataset Description}
\label{sec:Data}
Data was collected in two steps: first, a set of articles was collected via a news feed aggregator; then the number of times each article was linked to on Twitter was found. In addition, for some of the feature scores, we used a 50-day history of posts on Twitter. The latter will be explained in Section \ref{sec:Feature Description and Scoring} on feature scoring.

\begin{figure}[h]
\centering
\includegraphics[width=2.5in, trim= 0cm 1cm 0cm 2cm]{tweetDist.pdf}
\caption{Log-log distribution of tweets.}
\label{fig:powerLaw100}
\end{figure}

Online news feed aggregators are services that collect and deliver news articles as they are published online. Using the API of a news feed aggregator named Feedzilla, we collected the news feeds belonging to all news articles published online during one week (August 8th to 16th, 2011). The feed for an article includes a title, a short summary of the article, its URL, and a time-stamp. In addition, each article is pre-tagged with a category either provided by the publisher or in some manner determined by Feedzilla. A fair amount of cleaning was performed to remove redundancies, resolve naming variations, and eliminate spam through the use of automated methods as well as manual inspection. As a result, over 2,000 out of a total of 44,000 items in the data were discarded.

\begin{figure*}[ht]
\centering
\includegraphics[width=7in, trim = 0 2cm 0cm 0cm]{categories.pdf}
\caption{Normalized t-density scores for categories}
\label{fig:categories}
\end{figure*}

The next phase of data collection was performed using Topsy\footnote{http://topsy.com}, a Twitter search engine that searches all messages posted on Twitter. We queried for the number of times each news link was posted or reshared on Twitter (tweeted or retweeted). Earlier research \cite{DBLP:conf/kdd/LeskovecBK09} on news meme buildup and decay suggests that popular news threads take about 4 days until their popularity starts to plateau. Therefore, we allowed 4 days for each link to fully propagate before querying for the number of times it had been shared. The first half of the data was used in category score assignment (explained in the next section). The rest we partitioned equally into 10,000 samples each for training and test data for the classification and regression algorithms. Figure \ref{fig:powerLaw100} shows the log distribution of total tweets over all data, demonstrating a long-tail shape, which is in agreement with other findings on the distribution of Twitter information cascades \cite{Zhou:2010:IRT:1964858.1964875}. The graph also shows that articles with zero tweets lie outside of the general linear trend of the graph, because they did not propagate on the Twitter social network.
Our objective is to design features based on content to predict the number of tweets for a given article. In the next section we describe these features and the methods used to assign values, or scores, to the features.

\subsection{Feature Description and Scoring}
\label{sec:Feature Description and Scoring}
The choice of features is motivated by the following questions: Does the category of news affect its popularity within a social network? Do readers prefer factual statements, or do they favor a personal tone and emotionally charged language? Does it make a difference whether famous names are mentioned in the article? Does it make a difference who publishes a news article?

These questions motivate the choice of the following characteristics of an article as the feature space: the category that the news belongs to (e.g. politics, sports, etc.), whether the language of the text is objective or subjective, whether (and which) named entities are mentioned, and the source that published the news. These four features are chosen based on their availability and relevance, and although it is possible to add other available features in a similar manner, we believe the four features chosen in this paper to be the most relevant. We would like to point out that we use the terms article and link interchangeably, since each article is represented by its URL link.

\subsubsection{Category Score}
News feeds provided by Feedzilla are pre-tagged with category labels describing the content. We adopted these category labels and designed a score for them which essentially represents a prior distribution on the popularity of categories. Figure \ref{fig:categories} shows a plot of categories and the number of article links in each category. We observe that news related to Technology has a more prominent presence in our dataset and most probably on Twitter as a whole. Furthermore, we can see categories (such as Health) with a low number of published links but higher rates of tweets per link. These categories perhaps have a niche following of loyal readers who are intent on posting and retweeting their links. Observing the variations in average tweets per link in Figure~\ref{fig:categories}, we use this quantity to represent the prior popularity of a category. In order to assign a value (i.e. score) to each category, we use the first 22,000 points in the dataset to compute the average tweets per article link in that category. We call this average number of tweets per link the \emph{t-density} score, and we will use this measure in score assignments for some other features as well.

\subsubsection{Subjectivity}
Another feature of an article that can affect the amount of online sharing is its language. We want to examine whether an article written in a more emotional, more personal, and more subjective voice resonates more strongly with readers. Accordingly, we design a binary feature for subjectivity, assigning a value of zero or one based on whether the news article or commentary is written in a more subjective voice rather than in factual and objective language. We make use of a subjectivity classifier from LingPipe~\cite{lingpipe}, a natural language toolkit. Since this requires training data, we use transcripts from the well-known TV and radio shows of Rush Limbaugh \footnote{http://www.rushlimbaugh.com} and Keith Olbermann \footnote{http://www.msnbc.msn.com/id/32390086} as the corpus for subjective language.
On the other hand, transcripts from CSPAN \footnote{http://www.c-span.org} as well as the parsed text of a number of articles from the website FirstMonday \footnote{http://firstmonday.org} are used as the training corpus for objective language. These two training sets provide a very high training accuracy of 99\%, and manual inspection of the final results confirmed that the classification was satisfactory. Figure \ref{fig:subjectivity} illustrates the distribution of average subjectivity per source, showing that some sources consistently publish news in a more objective language and a somewhat lower number in a more subjective language.

\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in, trim = 0cm 1cm 0cm 1cm]{sourceSubj.pdf}
\end{center}
\caption{Distribution of average subjectivity of sources.}
\label{fig:subjectivity}
\end{figure}

\subsubsection{Named Entities}
In this paper, a named entity refers to a known place, person, or organization. Intuition suggests that mentioning well-known entities can affect the spread of an article, increasing its chances of success. For instance, one might expect articles on Obama to achieve a larger spread than those on a minor celebrity. It has also been well documented that fans are likely to share almost any content on celebrities like Justin Bieber, Oprah Winfrey or Ashton Kutcher. We made use of the Stanford-NER~\footnote{http://nlp.stanford.edu/software/CRF-NER.shtml} entity extraction tool to extract all the named entities present in the title and summary of each article. We then assigned scores to over 40,000 named entities by studying the historical prominence of each entity on Twitter over the time frame of a month. The assigned score is the average t-density (tweets per link) of each named entity. To assign a score to a given article we use three different values: the number of named entities in the article, the highest score among all the named entities in the article, and the average score among the entities.

\subsubsection{Source Score}
The data includes articles from 1,350 unique sources on the web. We assign scores to each source based on the historical success of that source on Twitter. For this purpose, we collected the number of times articles from each source were shared on Twitter in the past. We used two different scores: first, the aggregate number of times articles from a source were shared; and second, the t-density of each source (the average number of times each article belonging to the source was shared). The latter proved to be a better score assignment compared to the aggregate.

\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in, trim = 0cm 1cm 0cm 1cm]{sourceScores.pdf}
\end{center}
\caption{Distribution of log of source t-density scores}
\label{fig:sourceScoredist}
\end{figure}

To investigate whether it is better to use a smaller portion of more recent history, or a larger portion going back farther in time and possibly collecting outdated information, we start with the two most recent weeks prior to our data collection and increase the number of days, going back in time. Figure \ref{fig:sourceCor} shows the trend of the correlation between the t-density of sources in the historical data and their true t-density in our dataset. We observe that the correlation increases with more datapoints from the history until it begins to plateau near 50 days. Using this result, we take 54 days of history prior to the first date in our dataset.
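As a concrete illustration of the scoring above, the t-density of any key (a source, a category, or a named entity) is simply its historical tweets-per-link average. The following sketch is ours and not part of the original pipeline; Python and the pair-based input format are assumptions made purely for exposition:
\begin{verbatim}
# Illustrative sketch: t-density (average tweets per link) per key,
# where a key may be a source, a category, or a named entity.
from collections import defaultdict

def t_density(history):
    """history: iterable of (key, tweet_count) pairs, one per link."""
    tweets = defaultdict(int)
    links = defaultdict(int)
    for key, tweet_count in history:
        tweets[key] += tweet_count
        links[key] += 1
    return {k: tweets[k] / links[k] for k in tweets}
\end{verbatim}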
We find that the score assigned in this manner has a correlation of 0.7 with the t-density of the dataset. Meanwhile, the correlation between the source score and the number of tweets of any given article is 0.35, suggesting that information about the source of publication alone is not sufficient for predicting popularity. Figure \ref{fig:sourceScoredist} shows the distribution of the log of source scores (t-density). Taking the log of source scores produces a more normal shape, leading to improvements in the regression algorithms.

\begin{figure}[h]
\centering
\includegraphics[width=3in, trim = 1cm 0 0cm 0]{srcCor.pdf}
\caption{Correlation trend of source scores with t-density in data. Correlation increases with more days of historical data until it plateaus after 50 days.}
\label{fig:sourceCor}
\end{figure}

We plot the timeline of t-densities for a few sources and find that the t-density of a source can vary greatly over time. Figure \ref{fig:mashable} shows the t-density values belonging to the technology blog \textit{Mashable} and to \textit{Blog Maverick}, a weblog of the prominent entrepreneur Mark Cuban. The t-density scores corresponding to these sources are 74 and 178, respectively. However, one can see that \textit{Mashable} has a more consistent t-density compared to \textit{Blog Maverick}.

\begin{figure}[h]
\centering
\includegraphics[width=3.2in , trim = 1cm 0 0cm 0]{blogmaverick_mashable.pdf}
\caption{Timeline of t-density (tweet per link) of two sources.}
\label{fig:mashable}
\end{figure}

In order to improve the score to reflect consistency, we devise two methods: the first is to smooth the measurements for each source by passing them through a low-pass filter; the second is to weight the score by the percentage of times a source's t-density is above the mean t-density over all sources, penalizing sources that drop low too often. The mean value of t-densities over all sources is 6.4.
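To make the two adjustments concrete, the following sketch (ours; the paper does not specify the filter, so the exponential smoothing constant is an assumption) applies a simple low-pass filter and the above-mean weighting:
\begin{verbatim}
# Illustrative sketch of the two consistency adjustments. The smoothing
# constant alpha is an assumption; the global mean 6.4 is from the text.
def low_pass(series, alpha=0.3):
    smoothed, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

def consistency_weighted_score(series, global_mean=6.4):
    smoothed = low_pass(series)
    frac_above = sum(x > global_mean for x in smoothed) / len(smoothed)
    return frac_above * (sum(smoothed) / len(smoothed))
\end{verbatim}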
Figure \ref{fig:temporal} shows the temporal variations of tweets and links over all sources. Notice that while both tweets and links have a weekly cycle, the t-density (tweets over links) does not have this periodic nature.

\begin{figure}[ht]
\centering
\subfloat[tweets and links]{\includegraphics[width=3.2in, trim = 0.5cm 0cm 0cm 0cm]{temporal2.pdf}}\\
\subfloat[t-density]{\includegraphics[width=3.2in, trim = 1cm 0cm 0cm 0cm]{t-density.pdf}}
\caption{Temporal variations of tweets, links, and t-density over all sources}
\label{fig:temporal}
\end{figure}

\subsubsection{Are top traditional news sources the most propagated?}
As we assign scores to sources in our dataset, we are interested to know whether the sources that are successful in this dataset are those that are conventionally considered prominent. Google News \footnote{http://news.google.com/} is one of the major aggregators and providers of news on the web. While inclusion in Google News results is free, Google uses its own criteria to rank the content and to place some articles on its homepage, giving them more exposure. Freshness, diversity, and rich textual content are listed as the factors used by Google News to automatically rank each article as it is published. Because Google does not provide overall rankings for news sources, to get a rating of sources we use NewsKnife \footnote{http://www.newsknife.com}. NewsKnife is a service that rates top news sites and journalists based on an analysis of articles' positions on the Google News homepage and sub-pages internationally.

We would like to know whether the sources that are featured more often on Google News (and thus deemed more prominent by Google and rated more highly by NewsKnife) are also those that become most popular in our dataset.

\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
&Total Links & Total Tweets & t-density\\
\hline
Correlation&0.57&0.35&-0.05\\
\hline
\end{tabular}
\caption{Correlation values between NewsKnife source scores and their performance on the Twitter dataset.}
\label{table:newsknifeCor}
\vspace{-0.25cm}
\end{center}
\end{table}

Accordingly, we measure the correlation values for the 90 top NewsKnife sources that are also present in our dataset. The values are shown in Table \ref{table:newsknifeCor}. It can be observed that the ratings correlate positively with the number of links published by a source (and thus the sum of their tweets), but have no correlation (-0.05) with t-density, which reflects the number of tweets that each of their links receives. For our source scoring scheme this correlation was about 0.7.

Table \ref{table:popular} shows a list of top sources according to NewsKnife, as well as the most popular sources in our dataset. While NewsKnife rates traditionally prominent news agencies such as Reuters and the Wall Street Journal higher, in our dataset the top ten sources (with the highest t-densities) include sites such as Mashable, AllFacebook (the unofficial Facebook blog), the Google Blog, marketing blogs, as well as weblogs of well-known people such as Seth Godin's weblog and Mark Cuban's blog (BlogMaverick). It is also worth noting that there is a bias toward news and opinion on web marketing, indicating that these sites actively use their own techniques to increase their visibility on Twitter.

While traditional sources publish many articles, those more successful on the social web garner more tweets. A comparison shows that a NewsKnife top source such as The Christian Science Monitor received an average of 16 tweets in our dataset, with several of its articles not getting any tweets. On the other hand, Mashable gained an average of nearly 1,000 tweets, with its least popular article still receiving 360 tweets. Highly ranked news blogs such as The Huffington Post perform relatively well on Twitter, possibly due to their active Twitter accounts, which share any article published on the site.

\begin{table}[h]
\begin{center}
\begin{tabular}{ | l | p{5cm} |}
\hline
NewsKnife & {\small \textit{Reuters, Los Angeles Times, New York Times, Wall Street Journal, USA Today, Washington Post, ABC News, Bloomberg, Christian Science Monitor, BBC News} }\\
\hline
Twitter Dataset &{\small \textit{Blog Maverick, Search Engine Land, Duct-tape Marketing, Seth's Blog, Google Blog, Allfacebook, Mashable, Search Engine Watch}}\\
\hline
\end{tabular}
\caption{Highly rated sources on NewsKnife versus those popular on the Twitter dataset}
\label{table:popular}
\vspace{-0.25cm}
\end{center}
\end{table}

\section{Prediction}
In this work, we apply both regression and classification methods to this problem. First, we apply regression to produce exact values of tweet counts, evaluating the results by the R-squared measure. Next, we define popularity classes and predict which class a given article will belong to. The following two sections describe these methods and their results.
\begin{table}[h]
\begin{center}
\begin{tabular}{p{2cm}p{5.5cm}}
\hline
Variable & Description\\
\hline
\(S\) & Source t-density score\\
\(C\) & Category t-density score\\
\(Subj\)& Subjectivity (0 or 1)\\
\(Ent_{ct}\) & Number of named entities\\
\(Ent_{max}\) & Highest score among named entities\\
\(Ent_{avg}\) & Average score of named entities\\
\hline
\end{tabular}
\caption{Feature set (prediction inputs)}
\label{table:features}
\end{center}
\end{table}

\subsection{Regression}
Once score assignment is complete, each point in the data (i.e. a given news article) corresponds to a point in the feature space defined by its category, subjectivity, named entity, and source scores. As described in the previous section, the category, source, and named entity scores take real values, while the subjectivity score takes a binary value of 0 or 1. Table \ref{table:features} lists the features used as inputs to the regression algorithms. We apply three different regression algorithms: linear regression, k-nearest neighbors (KNN) regression, and support vector machine (SVM) regression.

\begin{table}[h]
\begin{center}
\begin{tabular}{lll}
\hline
& Linear Regression & SVM Regression \\
\hline
All Data & 0.34 & 0.32 \\
Tech Category & 0.43 & 0.36 \\
Within Twitter &0.33&0.25\\
\hline
\end{tabular}
\caption{Regression Results}
\label{table:regression}
\end{center}
\end{table}

Since the number of tweets per article has a long-tail distribution (as discussed previously in Figure \ref{fig:powerLaw100}), we performed a logarithmic transformation on the number of tweets prior to carrying out the regression. We also used the log of the source and category scores to normalize these scores further. Based on this transformation, we reached the following relationship between the final number of tweets and the feature scores:
\[\ln(T) = 1.24\ln(S) + 0.45\ln(C) + 0.1\,Ent_{max} - 3\]
where \(S\) is the source t-density score, \(C\) is the category t-density score, and \(Ent_{max}\) is the maximum t-density of all entities found in the article. Equivalently,
\[T = S^{1.24}\, C^{0.45}\, e^{0.1\,Ent_{max} - 3}\]
with coefficient of determination \(R^{2}=0.258\). Note that \(R^{2}\) relates to the mean squared error and the variance:
\[R^{2} = 1 - \frac{MSE}{VAR}\]
Alternatively, the following model provided improved results:
\[T^{0.45} = \left (0.2S - 0.1Ent_{ct} - 0.1Ent_{avg} + 0.2Ent_{max}\right )^{2} \]
with an improved \(R^{2}=0.34\). Using support vector machine (SVM) regression \cite{CC01a}, we reached similar values for \(R^{2}\), as listed in Table \ref{table:regression}.

In K-nearest neighbor regression, we predict the tweets of a given article using values from its nearest neighbors. We measure the Euclidean distance between two articles based on their positions in the feature space \cite{hastie2008elements}. The parameter $K$ specifies the number of nearest neighbors to be considered for a given article. Results with $K=7$ and $K=3$ for a 10k test set are $R^{2}=0.05$, with a mean squared error of 5101.695. We observe that KNN performs increasingly poorly as the dataset becomes larger.
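For readers who wish to reproduce the log-linear fit, a minimal sketch follows; it is our illustration only (NumPy is an assumed choice, and zero-tweet articles are assumed to be excluded so that the logarithms are defined):
\begin{verbatim}
# Illustrative sketch: fit ln(T) against ln(S), ln(C) and Ent_max by
# least squares and report the coefficient of determination R^2.
import numpy as np

def fit_log_linear(S, C, ent_max, T):
    X = np.column_stack([np.log(S), np.log(C), ent_max, np.ones(len(T))])
    y = np.log(T)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1 - resid.var() / y.var()   # R^2 = 1 - MSE/VAR
    return coef, r2
\end{verbatim}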
\subsubsection{Category-specific prediction}
One of the weakest predictors in regression was the category score. One of the reasons for this is that there seems to be a lot of overlap across categories. For example, one would expect \emph{World News} and \emph{Top News} to have some overlap, and the category \emph{USA} features articles that overlap with others as well. So the categories provided by Feedzilla are not necessarily disjoint, and this is the reason we observe a low prediction accuracy. To evaluate this hypothesis, we repeated the prediction algorithm for particular categories of content. Using only the articles in the Technology category, we reached an \(R^{2}\) value of 0.43, indicating that with regression we can predict the popularity of articles within one category (i.e. Technology) with better results.

\subsection{Classification}
Feature scores derived from historical data on Twitter are based on articles that have been tweeted, and not on those articles which do not make it to Twitter. As discussed in Section \ref{sec:Data}, this is evident in how the zero-tweet articles do not follow the linear trend of the rest of the datapoints in Figure \ref{fig:powerLaw100}. Consequently, we do not include a zero-tweet class in our classification scheme, and we perform the classification by considering only those articles that were posted on Twitter. Table \ref{table:classes} shows the three popularity classes A (1 to 20 tweets), B (20 to 100 tweets), and C (more than 100 tweets), and the number of articles in each class in the set of 10,000 articles. Table \ref{table:classification} lists the results of support vector machine (SVM) classification, decision trees, and bagging \cite{Hall:2009:WDM:1656274.1656278} for classifying the articles. All methods were performed with 10-fold cross-validation. We can see that classification can perform with an overall accuracy of 84\% in determining whether an article will belong to the low-tweet, medium-tweet, or high-tweet class. In order to determine which features play a more significant role in prediction, we repeat the SVM classification leaving one of the features out at each step. We found that the publication source plays a more important role compared to the other predictors, while subjectivity, categories, and named entities do not provide much improvement in the prediction of news popularity on Twitter.

\subsubsection{Predicting Zero-tweet Articles}
We perform binary classification to predict which articles will be mentioned on Twitter at all (zero-tweet versus nonzero-tweet articles). Using SVM classification we can predict, with 66\% accuracy, whether an article will be linked to on Twitter or whether it will receive zero tweets. We repeat this operation leaving out one feature at a time to see the change in accuracy. We find that the most significant feature is the source, followed by the category. Named entities and subjectivity did not provide more information for this prediction. So, contrary to what one might expect, we find that readers overall favor neither subjectivity nor objectivity of language in a news article. It is interesting to note that while the category score does not contribute to the prediction of popularity within Twitter, it does help us determine whether an article will be mentioned on this social network at all. This could be due to a large bias toward sharing technology-related articles on Twitter.
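As an illustration of the classification setup, the following is a sketch under our own assumptions; the paper does not name its toolkit, and scikit-learn is used here merely for concreteness:
\begin{verbatim}
# Illustrative sketch: map tweet counts to classes A/B/C and evaluate
# an SVM with 10-fold cross-validation, mirroring the setup above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def to_class(tweets):
    return 0 if tweets <= 20 else (1 if tweets <= 100 else 2)  # A, B, C

def evaluate(X, tweet_counts):
    labels = np.array([to_class(t) for t in tweet_counts])
    return cross_val_score(SVC(), X, labels, cv=10).mean()
\end{verbatim}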
\begin{table}[h]
\begin{center}
\begin{tabular}{lll}
\hline
Class name& Range of tweets & Number of articles\\
\hline
A&1--20 & 7,600\\
B&20--100 &1,800\\
C&100--2400 &600\\
\hline
\end{tabular}
\caption{Article Classes}
\label{table:classes}
\end{center}
\end{table}

\begin{table}[h]
\begin{center}
\begin{tabular}{p{5cm}p{2cm}}
\hline
Method& Accuracy \\
\hline
Bagging& 83.96\%\\%&84.63\%\\
J48 Decision Trees & 83.75\%\\%&84.17\%\\
SVM & 81.54\% \\%&82\%\\
Naive Bayes&77.79\%\\%&77.43\%\\
\hline
\end{tabular}
\caption{Classification Results}
\label{table:classification}
\end{center}
\end{table}

\section{Discussion and Conclusion}
In this work we predicted the popularity of news items on Twitter using features extracted from the content of news articles. We took into account four features that cover the spectrum of the information that can be gleaned from the content: the source of the article, the category, the subjectivity of the language, and the named entities mentioned. Our results show that while these features may not be sufficient to predict the exact number of tweets that an article will garner, they can be effective in providing a range of popularity for the article on Twitter. We achieved an overall accuracy of 84\% using classifiers. It is important to bear in mind that while it is intriguing to pay attention to the most popular articles (those that become viral on the web), a great number of articles spread in medium numbers. These medium levels can target highly interested and informed readers, and thus the mid-ranges of popularity should not be dismissed.

Interestingly, we found that in terms of the number of retweets the top news sources on Twitter are not necessarily the conventionally popular news agencies; various technology blogs such as Mashable and the Google Blog are among the most widely shared in social media. Overall, we discovered that one of the most important predictors of popularity was the source of the article. This is in agreement with the intuition that readers are likely to be influenced by the news source that disseminates the article. On the other hand, the category feature did not perform well. One reason for this is that we are relying on categories provided by Feedzilla, many of which overlap in content. Thus a future task is to extract categories independently and ensure little overlap. Combining the other layers of complexity described in the introduction opens up the possibility of better prediction. It would be interesting to incorporate network factors, such as the influence of individual propagators, into this work.

\bibliographystyle{aaai}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,596
Founder & Managing Director, Springhood Ventures, LLC Mr. Parker founded Springhood Ventures to provide critical early support to companies developing important healthcare solutions for children. In this role, he established and manages the program-related investment (PRI) initiative of the Charles H. Hood Foundation, a Boston-based private foundation that supports pediatric research, where he also serves as a trustee. Springhood invests on a mission-first basis in seed-stage companies developing important pediatric medical solutions. He is also an observer on the boards of Prapela, Inc., Aldatu Biosciences, Breegi Scientific, and Noninvasix, Inc. Previously, John spent 25 years in the alternative investment industry, including senior roles in venture capital, private equity, and hedge funds. Early in his career he worked in operations consulting and international merchant banking. Although currently living in the Boston area with his wife and their three children, John spent portions of his career in New York, Tokyo and Sydney and has done business in over 20 countries on 6 continents. John has a BA from Dartmouth College and an MBA from Dartmouth's Tuck School of Business.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,232
Misplaced metamorphosis: Researchers identify source of cells that spur aberrant bone growth
These are stages of metamorphosis of muscle tissue into bone tissue in a mouse model of heterotopic ossification (misplaced bone growth). A: Inflammation in muscle tissue (M = muscle cells). B: Destruction of muscle cells (FP = fibroproliferation). C: Formation of cartilage scaffold before bone formation (C = cartilage). D: Formation of mature bone (B = bone). Credit: Journal of Bone & Joint Surgery
Researchers at the University of Pennsylvania School of Medicine and the University of Connecticut have pinpointed the source of immature cells that spur misplaced bone growth. Unexpectedly, the major repository of bone-forming cells originates in blood vessels deep within skeletal muscle and other connective tissues, not from muscle stem cells themselves. The work also shows that cells important in the inflammatory response to injury trigger skeleton-stimulating proteins to transform muscle tissue into bone. Understanding this process has important implications for understanding the formation of bone not only in FOP, a rare disease in which patients' muscle cells literally metamorphose to bone, but also in many common disorders of misplaced bone growth, such as that following head injury, athletic injury, and spinal cord injury. The findings were published this week in the Journal of Bone & Joint Surgery.
"We always knew that heterotopic, or misplaced, bone growth was supplied by a rich vasculature, but we never suspected that cells from the blood vessels, when triggered by cells from the immune system, could undergo a metamorphosis that becomes a second skeleton," says senior author Frederick S. Kaplan, M.D., Isaac & Rose Nassau Professor of Orthopaedic Molecular Medicine. "When these components interact pathologically, as in the rare disease FOP, devastating results occur. We want to fix that."
The researchers used genetically engineered mice with labeled immature, or progenitor, cells to trace specific cell lineages through the process of renegade bone formation, which is induced by skeleton-stimulating molecules called bone morphogenetic proteins (BMPs). The study has important implications for understanding the rare genetic disorder fibrodysplasia ossificans progressiva (FOP), a condition studied by the authors, who care for most of the world's 700 patients with the condition. In FOP, the body forms a second skeleton as a result of the transformation of normal muscle tissue into normal bone. That change is caused by a mutant gene that encodes a receptor, or switch, for BMPs and was discovered by the Penn scientists in April 2006. In 2007, the Penn group identified the seminal role of inflammation in the metamorphosis, indicting the immune system as a critical trigger in the aberrant bone-forming process. The current study links the inflammatory response to injury with the responding blood-vessel cells that, in part, orchestrate the switch from muscle to bone. The interaction of blood-vessel cells with immune cells appears to trigger bone formation when the BMP switch is damaged or overactive. While the cells identified from blood-vessel linings in this study are a major contributor to the aberrant bone growth, the researchers say they account for only half of the cells important in the process, suggesting that other critical pools of cells are yet to be identified.
"BMPs regulate a great number of essential physiological processes," comments co-corresponding author David J.
Goldhamer, Ph.D., Associate Professor, The Center for Regenerative Biology at the University of Connecticut. "For this reason, development of therapies for misplaced bone growth that specifically target offending progenitor cell populations is of primary importance in order to minimize collateral effects. Identification of progenitor cells directly involved in heterotopic bone formation is a critical first step toward this goal." By identifying the interaction of key cellular and molecular elements in the transformation of muscle to bone, the study points the way to designing more effective treatments for undesirable heterotopic bone formation as well as for engineering new bone where it is desperately needed, such as in congenital malformations, fractures, spinal fusions, and bone loss from tumors.
Source: University of Pennsylvania School of Medicine
Citation: Misplaced metamorphosis: Researchers identify source of cells that spur aberrant bone growth (2009, March 3) retrieved 22 January 2020 from https://medicalxpress.com/news/2009-03-misplaced-metamorphosis-source-cells-spur.html
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
96
Late period may refer to: a sign of pregnancy Oligomenorrhea, a type of menstrual disorder Late Period of ancient Egypt, 664 BC until 332 BC
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,802
Conservation, Captivity, and Whaling: A Survey of Belize Whalewatching Tourists' Attitudes to Cetacean Conservation Issues
By Katheryn W. Patterson
With whalewatching activities and associated expenditures increasing annually, governments in coastal countries possess a large vested interest in the continued growth and protection of whale populations and the associated tourism. In 2007 and 2008, a survey investigating whalewatching tourists' attitudes toward key cetacean conservation issues, such as legislative protection, whaling, and captivity, was administered to volunteer participants at Blackbird Caye, Turneffe Atoll, Belize (n=166). With regards to attitudes towards cetacean conservation issues, the majority of participants considered dolphins and whales to be under protected or only slightly protected (36.4%; 45.1%, respectively) and expressed that marine mammal conservation laws and policies were very important (83.1%). In addition, 95% of participants expressed opposition against the hunting of whales (68.5% strongly opposed and 26.5% opposed), and the majority of participants were against keeping dolphins in captivity no matter if the dolphins were kept in a dolphinarium or a semi-natural habitat confined by nets (78.1%; 66.9%, respectively). Furthermore, 93.3% of participants stated that they preferred to observe dolphins in the wild rather than in a captive setting, whether semi-natural or a dolphinarium. In addition to allowing a comparison of the attitudes and concerns of whalewatchers in Belize with other surveyed areas, this survey provides data that could assist the Belizean government with conservation-oriented decision-making. For example, 70.4% of participants felt that it was very important that Belize has a strong commitment to dolphin conservation and of those same participants, an additional 27.8% of participants ranked cetacean conservation as important. Additionally, 68.1% of participants said that they would actively boycott visiting pro-whaling countries and more specifically, 59.5% of participants stated that they would boycott visiting Belize if the country supported whaling, which has implications for Belize's position and policies at the International Whaling Commission.
Katie Carroll
Department of Environmental Science and Policy
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
311
Q: Unexpected regular expression behavior using Positive Lookbehind

I was writing a regular expression to match a sequence of digits, followed by a dot and then another sequence of digits; in total length, including the dot, the entire sequence should be 13 characters. For this purpose, the regular expression that I wrote was:

(\d{6,12})\.(\d{0,6})(?<=.{13})

When I run this expression against the two following samples of data, I was expecting only the second one to match, but instead, both are matched. Can anyone help me understand why?

* 1234567.123456 > is matched, but I was expecting it not to be matched;
* 1234567.12345 > is matched.

Here is the Java code I used to test this:

import java.util.regex.Pattern;

public class App {
    public static void main(String[] args) {
        Pattern matcher = Pattern.compile("(\\d{6,12})\\.(\\d{0,6})(?<=.{13})");
        System.out.println(matcher.matcher("1234567.123456").matches());
        System.out.println(matcher.matcher("1234567.12345").matches());
    }
}

Output:

true
true

A: You need to anchor the lookbehind assertion to the start of the string, or it will match a substring: (?<=.{13}) only requires that at least 13 characters precede the end of the match, which is also true for the 14-character input.

Pattern matcher = Pattern.compile("(\\d{6,12})\\.(\\d{0,6})(?<=^.{13})");

Or use a lookahead assertion instead (easier to understand, IMO):

Pattern matcher = Pattern.compile("(?=.{13}$)(\\d{6,12})\\.(\\d{0,6})");

A: You need to use an anchor to match at the beginning of the string.

A: You may want to add an anchor (^) to your lookbehind expression:

(?<=^.{13})
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,308
Q: HighCharts error #18: Requested Axis Does not exist

I am new to HighCharts and I am trying to display 2 graphs on the same (x) axis, like shown here: http://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/highcharts/demo/combo-multi-axes/. However, I get an error message:

This error happens when you set a series' xAxis or yAxis property to point to an axis that does not exist.

The error occurs in "chart1". The HTML and JavaScript code is as follows:

$(function updat() {
    var url = "https://xpm4zyor39.execute-api.us-west-2.amazonaws.com/prod/entries";
    var humid = [], date = [], high = [], day = [], chanceOfRain = [],
        humid_final = [], day_final = [], high_final = [], chanceOfRain_final = [];

    $.getJSON(url, function (json) {
        $(json['Items']).each(function (i, data) {
            //Store indicator name
            // fill the date array
            humid.push(data.humidity);
            // fill the string data array
            date.push(data.Date);
            high.push(data.high);
            day.push(data.Day);
            chanceOfRain.push(data.chanceOfRain);
        });
        console.log(date);

        // the query sends strings that we need to convert into numbers
        for (var i = 0; i < humid.length; i++) {
            if (humid[i] != null) {
                humid_final.push(parseFloat(humid[i]));
                high_final.push(parseFloat(high[i]));
                day_final.push(parseFloat(day[i]));
                chanceOfRain_final.push(parseFloat(chanceOfRain[i]));
            } else {
                humid_final.push(null);
            }
        }
        console.log("day_final", day_final);

        var chart = new Highcharts.chart({
            chart: {
                type: 'spline',
                renderTo: 'light',
                marginBottom: 200
            },
            title: {
                text: 'indicatorName'
            },
            tooltip: {
                valueDecimals: 2,
                pointFormat: '<span style="color:{point.color}">\u25CF</span> {series.name}: <b>{point.y}%</b><br/>'
            },
            plotOptions: {
                series: {
                    marker: {
                        enabled: false
                    }
                }
            },
            subtitle: {
                text: 'Ambient Light Level'
            },
            xAxis: {
                categories: day_final //.reverse() to have the min year on the left
            },
            series: [{
                name: 'light level',
                data: high_final //
            }]
        });

        var chart1 = Highcharts.chart('temp&humid', {
            chart: {
                zoomType: 'xy'
            },
            title: {
                text: 'Humidity and temperature'
            },
            xAxis: {
                categories: [1, 2, 3],
                crosshair: true
            },
            yAxis: [{
                labels: {
                    format: '{value}°C',
                    style: {
                        color: Highcharts.getOptions().colors[2]
                    }
                },
                title: {
                    text: 'Temperature',
                    style: {
                        color: Highcharts.getOptions().colors[2]
                    }
                },
                opposite: true
            }, { //secondary Y AXIS
                gridLineWidth: 0,
                title: {
                    text: 'Humidity',
                    style: {
                        color: Highcharts.getOptions().colors[0]
                    }
                },
                labels: {
                    format: '{value}%',
                    style: {
                        color: Highcharts.getOptions().colors[0]
                    }
                }
            }],
            tooltip: {
                shared: true
            },
            legend: {
                layout: 'vertical',
                align: 'left',
                x: 80,
                verticalAlign: 'top',
                y: 55,
                floating: true,
                backgroundColor: (Highcharts.theme && Highcharts.theme.legendBackgroundColor) || '#FFFFFF'
            },
            series: [{
                name: 'Humidity',
                type: 'column',
                yAxis: 1,
                data: [12, 3],
                tooltip: {
                    valueSuffix: ' %'
                }
            }, {
                name: 'Temperature',
                type: 'spline',
                yAxis: 2,
                data: [1, 2, 3],
                tooltip: {
                    valueSuffix: ' °C'
                }
            }]
        });
    }); //getJSON method
    setTimeout(updat, 3000);
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/modules/exporting.js"></script>
<script src="Ag.js"></script>
<div id="light" style="min-width: 310px; height: 400px; left:10px"></div>
<div id="temp&humid" style="min-width: 310px; height: 400px; left:10px"></div>

A: You are doing the following:

series: [{
    yAxis: 1,
}, {
    yAxis: 2,
}]

You need to do:

series: [{
    yAxis: 0,
}, {
    yAxis: 1,
}]

The problem is that axes start indexing at 0, so your series that sets temperature to axis 2 does not work because there is no axis 2. In the demo there are 3 axes, which is why those definitions work there.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,282
#include "drmP.h"
#include "drm_crtc.h"
#include "drm_crtc_helper.h"
#include "udl_drv.h"

/*
 * All DisplayLink bulk operations start with 0xAF, followed by specific code
 * All operations are written to buffers which then later get sent to device
 */
static char *udl_set_register(char *buf, u8 reg, u8 val)
{
	*buf++ = 0xAF;
	*buf++ = 0x20;
	*buf++ = reg;
	*buf++ = val;
	return buf;
}

static char *udl_vidreg_lock(char *buf)
{
	return udl_set_register(buf, 0xFF, 0x00);
}

static char *udl_vidreg_unlock(char *buf)
{
	return udl_set_register(buf, 0xFF, 0xFF);
}

/*
 * On/Off for driving the DisplayLink framebuffer to the display
 * 0x00 H and V sync on
 * 0x01 H and V sync off (screen blank but powered)
 * 0x07 DPMS powerdown (requires modeset to come back)
 */
static char *udl_enable_hvsync(char *buf, bool enable)
{
	if (enable)
		return udl_set_register(buf, 0x1F, 0x00);
	else
		return udl_set_register(buf, 0x1F, 0x07);
}

static char *udl_set_color_depth(char *buf, u8 selection)
{
	return udl_set_register(buf, 0x00, selection);
}

static char *udl_set_base16bpp(char *wrptr, u32 base)
{
	/* the base pointer is 16 bits wide, 0x20 is hi byte. */
	wrptr = udl_set_register(wrptr, 0x20, base >> 16);
	wrptr = udl_set_register(wrptr, 0x21, base >> 8);
	return udl_set_register(wrptr, 0x22, base);
}

/*
 * DisplayLink HW has separate 16bpp and 8bpp framebuffers.
 * In 24bpp modes, the low 323 RGB bits go in the 8bpp framebuffer
 */
static char *udl_set_base8bpp(char *wrptr, u32 base)
{
	wrptr = udl_set_register(wrptr, 0x26, base >> 16);
	wrptr = udl_set_register(wrptr, 0x27, base >> 8);
	return udl_set_register(wrptr, 0x28, base);
}

static char *udl_set_register_16(char *wrptr, u8 reg, u16 value)
{
	wrptr = udl_set_register(wrptr, reg, value >> 8);
	return udl_set_register(wrptr, reg+1, value);
}

/*
 * This is kind of weird because the controller takes some
 * register values in a different byte order than other registers.
 */
static char *udl_set_register_16be(char *wrptr, u8 reg, u16 value)
{
	wrptr = udl_set_register(wrptr, reg, value);
	return udl_set_register(wrptr, reg+1, value >> 8);
}

/*
 * LFSR is linear feedback shift register. The reason we have this is
 * because the display controller needs to minimize the clock depth of
 * various counters used in the display path. So this code reverses the
 * provided value into the lfsr16 value by counting backwards to get
 * the value that needs to be set in the hardware comparator to get the
 * same actual count. This makes sense once you read above a couple of
 * times and think about it from a hardware perspective.
 */
static u16 udl_lfsr16(u16 actual_count)
{
	u32 lv = 0xFFFF; /* This is the lfsr value that the hw starts with */

	while (actual_count--) {
		lv = ((lv << 1) |
		      (((lv >> 15) ^ (lv >> 4) ^ (lv >> 2) ^ (lv >> 1)) & 1))
		     & 0xFFFF;
	}

	return (u16) lv;
}

/*
 * This does LFSR conversion on the value that is to be written.
 * See LFSR explanation above for more detail.
 */
static char *udl_set_register_lfsr16(char *wrptr, u8 reg, u16 value)
{
	return udl_set_register_16(wrptr, reg, udl_lfsr16(value));
}
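/*
 * Example: the hardware LFSR starts at 0xFFFF, so udl_lfsr16(0) == 0xFFFF
 * (zero steps) and udl_lfsr16(1) == 0xFFFE, the comparator value that the
 * counter reaches after a single step.
 */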
/*
 * This takes a standard fbdev screeninfo struct and all of its monitor mode
 * details and converts them into the DisplayLink equivalent register commands.
   ERR(vreg(dev, 0x00, (color_depth == 16) ? 0 : 1));
   ERR(vreg_lfsr16(dev, 0x01, xDisplayStart));
   ERR(vreg_lfsr16(dev, 0x03, xDisplayEnd));
   ERR(vreg_lfsr16(dev, 0x05, yDisplayStart));
   ERR(vreg_lfsr16(dev, 0x07, yDisplayEnd));
   ERR(vreg_lfsr16(dev, 0x09, xEndCount));
   ERR(vreg_lfsr16(dev, 0x0B, hSyncStart));
   ERR(vreg_lfsr16(dev, 0x0D, hSyncEnd));
   ERR(vreg_big_endian(dev, 0x0F, hPixels));
   ERR(vreg_lfsr16(dev, 0x11, yEndCount));
   ERR(vreg_lfsr16(dev, 0x13, vSyncStart));
   ERR(vreg_lfsr16(dev, 0x15, vSyncEnd));
   ERR(vreg_big_endian(dev, 0x17, vPixels));
   ERR(vreg_little_endian(dev, 0x1B, pixelClock5KHz));
   ERR(vreg(dev, 0x1F, 0));
   ERR(vbuf(dev, WRITE_VIDREG_UNLOCK, DSIZEOF(WRITE_VIDREG_UNLOCK)));
 */
static char *udl_set_vid_cmds(char *wrptr, struct drm_display_mode *mode)
{
	u16 xds, yds;
	u16 xde, yde;
	u16 yec;

	/* x display start */
	xds = mode->crtc_htotal - mode->crtc_hsync_start;
	wrptr = udl_set_register_lfsr16(wrptr, 0x01, xds);
	/* x display end */
	xde = xds + mode->crtc_hdisplay;
	wrptr = udl_set_register_lfsr16(wrptr, 0x03, xde);

	/* y display start */
	yds = mode->crtc_vtotal - mode->crtc_vsync_start;
	wrptr = udl_set_register_lfsr16(wrptr, 0x05, yds);
	/* y display end */
	yde = yds + mode->crtc_vdisplay;
	wrptr = udl_set_register_lfsr16(wrptr, 0x07, yde);

	/* x end count is active + blanking - 1 */
	wrptr = udl_set_register_lfsr16(wrptr, 0x09, mode->crtc_htotal - 1);

	/* libdlo hardcodes hsync start to 1 */
	wrptr = udl_set_register_lfsr16(wrptr, 0x0B, 1);

	/* hsync end is width of sync pulse + 1 */
	wrptr = udl_set_register_lfsr16(wrptr, 0x0D,
			mode->crtc_hsync_end - mode->crtc_hsync_start + 1);

	/* hpixels is active pixels */
	wrptr = udl_set_register_16(wrptr, 0x0F, mode->hdisplay);

	/* yendcount is vertical active + vertical blanking */
	yec = mode->crtc_vtotal;
	wrptr = udl_set_register_lfsr16(wrptr, 0x11, yec);

	/* libdlo hardcodes vsync start to 0 */
	wrptr = udl_set_register_lfsr16(wrptr, 0x13, 0);

	/* vsync end is width of vsync pulse */
	wrptr = udl_set_register_lfsr16(wrptr, 0x15,
			mode->crtc_vsync_end - mode->crtc_vsync_start);

	/* vpixels is active pixels */
	wrptr = udl_set_register_16(wrptr, 0x17, mode->crtc_vdisplay);

	wrptr = udl_set_register_16be(wrptr, 0x1B, mode->clock / 5);

	return wrptr;
}

static int udl_crtc_write_mode_to_hw(struct drm_crtc *crtc)
{
	struct drm_device *dev = crtc->dev;
	struct udl_device *udl = dev->dev_private;
	struct urb *urb;
	char *buf;
	int retval;

	urb = udl_get_urb(dev);
	if (!urb)
		return -ENOMEM;

	buf = (char *)urb->transfer_buffer;

	memcpy(buf, udl->mode_buf, udl->mode_buf_len);
	retval = udl_submit_urb(dev, urb, udl->mode_buf_len);
	DRM_INFO("write mode info %d\n", udl->mode_buf_len);
	return retval;
}

static void udl_crtc_dpms(struct drm_crtc *crtc, int mode)
{
	struct drm_device *dev = crtc->dev;
	struct udl_device *udl = dev->dev_private;
	int retval;

	if (mode == DRM_MODE_DPMS_OFF) {
		char *buf;
		struct urb *urb;

		urb = udl_get_urb(dev);
		if (!urb)
			return;

		buf = (char *)urb->transfer_buffer;
		buf = udl_vidreg_lock(buf);
		buf = udl_enable_hvsync(buf, false);
		buf = udl_vidreg_unlock(buf);

		retval = udl_submit_urb(dev, urb,
				buf - (char *) urb->transfer_buffer);
	} else {
		if (udl->mode_buf_len == 0) {
			DRM_ERROR("Trying to enable DPMS with no mode\n");
			return;
		}
		udl_crtc_write_mode_to_hw(crtc);
	}
}

static bool udl_crtc_mode_fixup(struct drm_crtc *crtc,
				struct drm_display_mode *mode,
				struct drm_display_mode *adjusted_mode)
{
	return true;
}

#if 0
static int udl_pipe_set_base_atomic(struct drm_crtc *crtc,
				    struct drm_framebuffer *fb, int x, int y,
				    enum mode_set_atomic state)
{
	return 0;
}

static int udl_pipe_set_base(struct drm_crtc *crtc, int x, int y,
			     struct drm_framebuffer *old_fb)
{
	return 0;
}
#endif

static int udl_crtc_mode_set(struct drm_crtc *crtc,
			     struct drm_display_mode *mode,
			     struct drm_display_mode *adjusted_mode,
			     int x, int y,
			     struct drm_framebuffer *old_fb)
{
	struct drm_device *dev = crtc->dev;
	struct udl_framebuffer *ufb = to_udl_fb(crtc->fb);
	struct udl_device *udl = dev->dev_private;
	char *buf;
	char *wrptr;
	int color_depth = 0;

	buf = (char *)udl->mode_buf;

	/* for now we just clip 24 -> 16 - if we fix that fix this */
	/*if (crtc->fb->bits_per_pixel != 16)
		color_depth = 1; */

	/*
	 * This first section has to do with setting the base address on the
	 * controller associated with the display. There are 2 base pointers;
	 * currently, we only use the 16 bpp segment.
	 */
	wrptr = udl_vidreg_lock(buf);
	wrptr = udl_set_color_depth(wrptr, color_depth);
	/* set base for 16bpp segment to 0 */
	wrptr = udl_set_base16bpp(wrptr, 0);
	/* set base for 8bpp segment to end of fb */
	wrptr = udl_set_base8bpp(wrptr, 2 * mode->vdisplay * mode->hdisplay);

	wrptr = udl_set_vid_cmds(wrptr, adjusted_mode);
	wrptr = udl_enable_hvsync(wrptr, true);
	wrptr = udl_vidreg_unlock(wrptr);

	ufb->active_16 = true;
	if (old_fb) {
		struct udl_framebuffer *uold_fb = to_udl_fb(old_fb);
		uold_fb->active_16 = false;
	}
	udl->mode_buf_len = wrptr - buf;

	/* damage all of it */
	udl_handle_damage(ufb, 0, 0, ufb->base.width, ufb->base.height);
	return 0;
}

static void udl_crtc_disable(struct drm_crtc *crtc)
{
}

static void udl_crtc_destroy(struct drm_crtc *crtc)
{
	drm_crtc_cleanup(crtc);
	kfree(crtc);
}

static void udl_load_lut(struct drm_crtc *crtc)
{
}

static void udl_crtc_prepare(struct drm_crtc *crtc)
{
}

static void udl_crtc_commit(struct drm_crtc *crtc)
{
	udl_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
}

static struct drm_crtc_helper_funcs udl_helper_funcs = {
	.dpms = udl_crtc_dpms,
	.mode_fixup = udl_crtc_mode_fixup,
	.mode_set = udl_crtc_mode_set,
	.prepare = udl_crtc_prepare,
	.commit = udl_crtc_commit,
	.disable = udl_crtc_disable,
	.load_lut = udl_load_lut,
};

static const struct drm_crtc_funcs udl_crtc_funcs = {
	.set_config = drm_crtc_helper_set_config,
	.destroy = udl_crtc_destroy,
};

int udl_crtc_init(struct drm_device *dev)
{
	struct drm_crtc *crtc;

	crtc = kzalloc(sizeof(struct drm_crtc) + sizeof(struct drm_connector *),
		       GFP_KERNEL);
	if (crtc == NULL)
		return -ENOMEM;

	drm_crtc_init(dev, crtc, &udl_crtc_funcs);
	drm_crtc_helper_add(crtc, &udl_helper_funcs);

	return 0;
}

static const struct drm_mode_config_funcs udl_mode_funcs = {
	.fb_create = udl_fb_user_fb_create,
	.output_poll_changed = NULL,
};

int udl_modeset_init(struct drm_device *dev)
{
	struct drm_encoder *encoder;

	drm_mode_config_init(dev);

	dev->mode_config.min_width = 640;
	dev->mode_config.min_height = 480;

	dev->mode_config.max_width = 2048;
	dev->mode_config.max_height = 2048;

	dev->mode_config.prefer_shadow = 0;
	dev->mode_config.preferred_depth = 24;

	dev->mode_config.funcs = &udl_mode_funcs;

	drm_mode_create_dirty_info_property(dev);

	udl_crtc_init(dev);

	encoder = udl_encoder_init(dev);

	udl_connector_init(dev, encoder);

	return 0;
}

void udl_modeset_cleanup(struct drm_device *dev)
{
	drm_mode_config_cleanup(dev);
}
{ "redpajama_set_name": "RedPajamaGithub" }
4,707
Cochery may refer to:

Adolphe Cochery, French politician (1819–1900)
Georges Cochery, French politician (1855–1914)
Bertrand Cochery, French diplomat (born 1959)
Cochery (entreprise), a company in the Vinci group
{ "redpajama_set_name": "RedPajamaWikipedia" }
33
{"url":"http:\/\/bardama.com.au\/iran-regime-zjzspc\/81ab3d-what-does-ipv4-connectivity-mean","text":"# what does ipv4 connectivity mean\n\nThese aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP). 1 and 3 are reserved. {\\displaystyle 0} For online gamers, having IPv6 connectivity is a dream come true. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. This is known as the \"dual stack\" problem and is caused by the computer favouring an IPv6 connection which isn't working over an IPv4 connection. G\u00a0 \u00a0 In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. 0 There is a problem which exists with some clients which can mean if IPv6 is enabled on a website, it will not work or will be very slow. Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors, many transport-layer protocols carried by IP also have their own error checking.[28]. An IP packet has no data checksum or any other footer after the data section. TCP\/IP is the technology that devices use to interact online. The offsets are [21] The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. IPv4 addresses are 32 bits long. In the time since the IANA handed out the last IPv4 address space, the price of IPv4 addresses has skyrocketed. IPv4 uses 32-bit addresses which limits the address space to 4294967296 (232) addresses. Cryptocurrency: Our World's Future Economy? Please advise. This is a noob question, but networking isn't my forte. 8 2480 Till date, it is considered the primary Internet Protocol and carries 94% of Internet traffic. Are These Autonomous Vehicles Ready for Our World? If the IP address is set to a static IP address, you need to change the adapter's settings to obtain an address from the DHCP server automatically. Protocols for such inverse correlations exist in the Internet Protocol Suite. = Additionally, IPv4 is a relatively constrained network - when operating on IPv4, network administrators need to figure out a way to efficiently allocate \u2026 Viable Uses for Nanotechnology: The Future Has Arrived, How Blockchain Could Change the Recruiting Game, 10 Things Every Modern Web Developer Must Know, C Programming Language: Its Important History and Why It Refuses to Go Away, INFOGRAPHIC: The History of Programming Languages, Internet Protocol Version 4 Packet Header (IPv4 packet header), Transmission Control Protocol\/Internet Protocol (TCP\/IP). With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Used for local communications within a private network. 
some options may be considered as dangerous, \"Understanding IP Addressing: Everything You Ever Wanted To Know\", \"Requirements for Internet Hosts\u00a0\u2013 Communication Layers\", \"World 'running out of Internet addresses, \"Free Pool of IPv4 Address Space Depleted\", \"Five \/8s allocated to RIRs\u00a0\u2013 no unallocated IPv4 unicast \/8s remain\", \"APNIC IPv4 Address Pool Reaches Final \/8\", \"Internet Protocol, Version 6 (IPv6) Specification\", \"Practical network support for IP traceback\", https:\/\/www.iana.org\/assignments\/ip-parameters\/ip-parameters.xhtml#ip-parameters-1, IPv6 vs. carrier-grade NAT\/squeezing more out of IPv4, RIPE report on address consumption as of October 2003, Official current state of IPv4 \/8 allocations, as maintained by IANA, Dynamically generated graphs of IPv4 address consumption with predictions of exhaustion dates\u2014Geoff Huston, IP addressing in China and the myth of address shortage, Countdown of remaining IPv4 available addresses, https:\/\/en.wikipedia.org\/w\/index.php?title=IPv4&oldid=998722067, Pages using Sister project links with default search, Creative Commons Attribution-ShareAlike License. N\u00a0 \u00a0 Network hosts can take any address from this range; however, address 192.168.255.255 is reserved for broadcast within the network. = This notification usually indicates that your computer is unable to satisfy the requirements of Internet Protocol version 6 (IPv6) \u2026 If the packet size is bigger than the MTU, and the Do not Fragment (DF) bit in the packet's header is set to 0, then the router may fragment the packet. J\u00a0 \u00a0 An IPv4 address is typically written in decimal digits, formatted as four 8-bit fields separated by periods. In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones.[19]. whereas IPv6 binary bits are separated by a colon(:). Class A has subnet mask 255.0.0.0 or \/8, B has subnet mask 255.255.0.0 or \/16 and class C has subnet mask 255.255.255.0 or \/24. We\u2019re Surrounded By Spying Machines: What Can We Do About It? 8 IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. + Note: If the header length is greater than 5 (i.e., it is from 6 to 15) it means that the options field is present and must be considered. Other address representations were in common use when classful networking was practiced. [25] It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 264 host addresses to end users. The router divides the packet into fragments. Note: Copied, Option Class, and Option Number are sometimes referred to as a single eight-bit field, the. I went into control panel and clicked on the network and it say Ipv4 connected, but then it says ipv6 no network access. [22][23] APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy.[24]. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled Dynamic Configuration of IPv4 Link-Local Addresses. Ensure that DHCP ends up enabled and that there isn't \u2026 The 4 Most Confusing Concepts in Networking Explained. 
= However, many routers and servers don't support it, making a connection between a device with an IPv6 address to a router or server that only supports IPv4 impossible. Until content providers universally provide IPv6 connectivity, clients will continue to need an IPv4 address to reach hosts that have yet to receive IPv6 connectivity or configuration. Straight From the Programming Experts: What Functional Programming Language Is Best to Learn Now? This is analogous to looking up a phone number in a phone book using the recipient's name. A native IPv6 connection lets you connect directly to the site in question, skipping the transition process. Reinforcement Learning Vs. They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods. The 6 Most Amazing AI Advances in Agriculture. Assigned as TEST-NET-2, documentation and examples. O\u00a0 \u00a0 Make the Right Choice for Your Needs. IPv4, or Internet Protocol version 4, was developed back in the early 1980s. An IPv4 address is typically written in decimal digits, formatted as four 8-bit fields that are separated by periods. IPv4 allowed computers to connect globally and had a limit of addresses totaling 4.29 billion, a number that seemed at the time sufficient. What does IPv4 and IPv6 dual-stack mean? The fields in the header are packed with the most significant byte first (big endian), and for the diagram and discussion, the most significant bits are considered to come first (MSB 0 bit numbering). A: IPv4 stands for Internet Protocol version 4. It was deployed for production in the ARPANET in 1983. It uses a logical addressing system and performs routing, which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network. [17] The last address has all host bits set to 1. Currently used methods are Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. To avoid ambiguity in representation, this address is reserved. Also if I wanted to setup a socket connection for example, does my server have to be ipv6 too, or does the code just need to be able to handle it. IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. A\u00a0 \u00a0 These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods. Today, right after an update from Asus Live Update, my internet stopped working. This model guarantees neither delivery nor avoidance of duplicate delivery; these aspects are handled by the upper layer transport. It provides the logical connection between network devices by providing identification for each device. IPv4 is 32-Bit IP address whereas IPv6 is a 128-Bit IP address. On my Wireless connection it states IPv4 Connectivity: Internet IPv6 Connectivity: Limited Media State: Enabled Speed 54.Mbps Signal quality is all bars but lately some movies stall, seem to pause every few seconds which i watch online . 
When a router receives a packet, it examines the destination address and determines the outgoing interface to use and that interface's MTU. The Internet Protocol enables traffic between networks. The class A network 127.0.0.0 (classless network 127.0.0.0\/8) is reserved for loopback. M\u00a0 \u00a0 Not all hosts are compatible at this point in time. The use of domain names requires translating, called resolving, them to addresses and vice versa. Used for benchmark testing of inter-network communications between two separate subnets. Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. IPv4 is a connectionless protocol used in packet-switched layer networks, such as Ethernet. . The most common type of IP address is an iPv4 address (for version 4 of the IP technology). Background: IPv4 \u2013 IPv6, What does this mean? In the medium term, both IPv6 and IPv4 will have to be used simultaneously. For example if I have an ipv4 server and an ipv6 client connects, what would their ip show as? Its contents are interpreted based on the value of the Protocol header field. For example, with a \/16 subnet mask, the network 192.168.0.0 may use the address range of 192.168.0.0 to 192.168.255.255. This field may not exist for simple options. IPv4 and IPv6 dual-stack is a dual-stack network that is like a temporary bridge (temporary because eventually, IPv6 will be the only IP version to be used in the future) between IPv4 and IPv6. In 1993, based on this work, RFC\u00a01517 introduced Classless Inter-Domain Routing (CIDR),[4] which expressed the number of bits (from the most significant) as, for instance, \/24, and the class-based scheme was dubbed classful, by contrast. [18] The addresses 192.168.1.0, 192.168.2.0, etc., may be assigned, despite ending with 0. The packet payload is not included in the checksum. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. The router puts each fragment into its own packet, each fragment packet having following changes: For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment offsets would be multiples of The broadcast address of the network is 192.168.5.255. Assigned as TEST-NET-1, documentation and examples. #\u00a0 \u00a0 The field \"fragment offset\" is nonzero, which is true for all fragments except the first. When fewer than four numbers are specified in the address in dotted notation, the last value is treated as an integer of as many bytes as are required to fill out the address to four octets. . In the given example, this calculation was 495*8 + 540 = 4500 bytes. In addition, the reverse correlation is often necessary. \u201cLimited connectivity\u201d happens when: Your computer detects that a network is present and operating. What does this mean \u2026 26 Real-World Use Cases: AI in the Insurance Industry: 10 Real World Use Cases: AI and ML in the Oil and Gas Industry: The Ultimate Guide to Applying AI in Business. What does IPv4 Connectivity = Not Connected mean in Windows?Helpful? But first, let\u2019s take a closer look at both protocols and see some of the differences between IPv4 and IPv6. {\\displaystyle {\\frac {1500-20}{8}}=185} Option-specific data. Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). 
It is used to identify devices on a network using an addressing system. The Internet Protocol is designed for use in interconnected systems of packet-switched computer communication networks (see RFC:791). The IP layer was originally separated in the v3 of the TCP for design improvement, and stabilised in version 4. IPv4 reserves special address blocks for private networks (~18 million addresses) and multicast addresses (~270 million addresses). In addition, high-speed Internet access was based on always-on devices. IPv4 is based on the best-effort model. CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (\/) and the count of leading consecutive 1 bits in the routing prefix (subnet mask). Deep Reinforcement Learning: What\u2019s the Difference? In addition, IPv6 improves upon IPv4 with end-to-end connectivity, QoS, security, and even mobility. Big Data and 5G: Where Does This Intersection Lead? RFC 3927 defines the special address block 169.254.0.0\/16 for link-local addressing. What does this mean \u2026 With the dual stack solution, every networking device, server, switch, router and firewall in an ISP's network will be configured with both IPv4 and IPv6 connectivity capabilities. IPv4 address sizes are 32-bit; however, IPv6\u2019s are 128-bit, which\u2014thinking spatially\u2014could cover the globe at 1,000 addresses every square meter. When one network wants to transmit datagrams to a network with a smaller MTU, it may fragment its datagrams. The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. 3960 0 is for \"control\" options, and 2 is for \"debugging and measurement\". These are separated by full stops. You may have encountered the \"IPv6 Connectivity: No Internet access\" problem and cannot access the Internet. If you have only IPv6 as the only connection, there is a chance IPv4 is disabled. IPv4 was the first version of IP. This increasingly unnecessary expense can be a burden for companies that have not made the switch to IPv6. (A hardware address is also called a MAC address.) In essence it forms the Internet. The threat of exhaustion motivated the introduction of a number of remedial technologies, such as Classless Inter-Domain Routing (CIDR) methods by the mid-1990s, pervasive use of network address translation (NAT) in network access provider systems, and strict usage-based allocation policies at the regional and local Internet registries. The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. DHCP Enabled: Yes, then Wi-Fi status IPV4 Connectivity: No network access. The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. TCP\/IP, on the other hand, establishes a connection between two hosts so that they can send messages back and forth for a period of time. Some of the common payload protocols are: See List of IP protocol numbers for a complete list. + Because of its 128-bit address length, it can define up to 2,128 addresses. 
IPv4 (Internet Protocol Version 4) is the fourth revision of the Internet Protocol (IP) used to to identify devices on a network through an addressing system. There is no network identifier or broadcast address for these networks.[20]. But Why? IPv4 is a connectionless protocol used in packet-switched layer networks, such as Ethernet. It encapsulates IPv6 data in IPv4 transmissions, effectively letting you see newer-format sites with an older transmission protocol. If you can get an internet connection via IPv4, then you should be able the browse the web unless your drivers are faulty. U\u00a0 \u00a0 IPv4 was designed as a transport and communications medium, and increasingly any work on IPv4 is to find ways around the constraints. R\u00a0 \u00a0 This field may not exist for simple options. Since the 1980s, it was apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. This way, even if fragments are re-fragmented, the receiver knows they have initially all started from the same packet. What is the difference between cloud computing and virtualization? Your LAN or Wi-Fi\/WLAN drivers might also be the problem in this case. In IPv4, this function was placed at the Internet Layer, and is performed in IPv4 routers, which thus require no implementation of any higher layers for the function of routing IP packets. Techopedia Terms:\u00a0 \u00a0 IPv4 binary bits are separated by a dot(.) Everytime it looses connectivity, it still remains connected to the network but... ipv6 connectivity in Network and Sharing hi all windows 10 users how to get ipv6 to connect acer E1-570. A receiver knows that a packet is a fragment, if at least one of the following conditions is true: The receiver identifies matching fragments using the foreign and local address, the protocol ID, and the identification field. Its 32-bit addressing provides about 4.3 billion IP addresses, but with the proliferation of mobile devices and Internet of Things devices, more IP addresses were needed. IP addresses are not tied in any permanent manner to hardware identifications and, indeed, a network interface can have multiple IP addresses in modern operating systems. Packets addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. The revised system defined five classes. For example, when an IP host is booted or connected to a network it needs to determine its IP address, unless an address is preconfigured by an administrator. When the address block was reserved, no standards existed for address autoconfiguration. F\u00a0 \u00a0 These networks are typically used for point-to-point connections. What is IPv4 -- Internet Protocol Version 4? P\u00a0 \u00a0 These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP). W\u00a0 \u00a0 Hence why you're not able to see some websites. 1500 Z, Copyright \u00a9 2021 Techopedia Inc. - Also, 192.168.0.0 is the network identifier and must not be assigned to an interface. By their binary nature, IP addresses are a finite resource and Vint & Bob established, at the time, 2^32 unique IP Addresses or ~ 4.3 Billion addresses. IPv4 (Internet Protocol Version 4) is the fourth revision of the Internet Protocol (IP) used to to identify devices on a network through an addressing system. 
It is possible that a packet is fragmented at one router, and that the fragments are further fragmented at another router. Find your local IP address if your network is using DHCP.. 540 Most importantly, dual stack technology allows ISPs to process IPv4 and IPv6 data traffic simultaneously. IPv4 is the fourth version of IP, it is the basis of the Internet, and establishes the rules for the computer networks functioning on the principle of packet exchange. An IP packet consists of a header section and a data section. IPv4 address sizes are 32-bit; however, IPv6\u2019s are 128-bit, which\u2014thinking spatially\u2014could cover the globe at 1,000 addresses every square meter. 5 Common Myths About Virtual Reality, Busted! It provides the logical connection between network devices by providing identification for each device. Terms of Use - Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Set to 1 if the options need to be copied into all fragments of a fragmented packet. The rest of the address was used as previously to identify a host within a network. Class D addresses are reserved for multicasting, while class E addresses are reserved for future use. H\u00a0 \u00a0 Does it have any effect on you at all? The IPv4 communication protocol was created in 1977 by Vinton Cerf\u2014often referred to as the \u201cFather of the Internet.\" + . 255.255.255.0 ) the Internet Protocol version 4 ( IPv4 ) was and is currently most! Four numbers, each ranging from 0 to 255 has capacity for addressing hosts 32-bit decimal number 3221226219, is... Internet Protocol Suite in these ranges are not routable in the given example, in the layer... All fragments of a new version of Internet traffic today, [ 1 ] the... Bit lengths for network identification, [ 1 ] despite the ongoing deployment of header... Typically written in decimal digits, formatted as four 8-bit fields separated by periods to 127.0.255.250 though end. Most widespread Protocol used in packet-switched layer networks, such as Ethernet the Internet Engineering Task Force IETF... Private addresses, scalability and flexibility of IPv6, IPv4 is a connectionless Protocol in! Minus the IP header size ( 20 bytes minimum ; 60 bytes maximum ): what can Do... age of innocence '' with a smaller MTU, it can define up to 2,128 addresses and in. Protocol and carries 94 % of Internet traffic today, right after an update Asus. Open a browser and search it says no Internet access '' problem and can not interoperable! Model guarantees neither delivery nor avoidance of duplicate delivery ; these aspects are handled by the upper layer transport thanks... Most Internet traffic today, [ 1 ] despite the ongoing deployment of commenced. Medium, and C had different bit lengths for network identification most commonly used is.. Which in hexadecimal format is 0xC00002EB upgrade to the site in question, but require network translation! Consists of 14 fields, of which 13 are required method whereas IPv6 an. Which was quickly found to be copied into all fragments of a fragmented packet Dynamic host Configuration Protocol ( ). Value of the approximately four billion addresses defined in IPv4, then you should be the... It provides the logical connection between network devices by providing identification for each.. Of 192.168.0.0 to 192.168.255.255 address 192.168.255.255 is reserved for future use case, a number seemed... 
Which is true for all fragments except the first version deployed for production on SATNET in and... The different sizes of fields in different classes, each ranging from 0 to 255 time since the IANA out. Byte of the underlying technology that makes it possible for us to connect globally had! At least not yet address range of 192.168.0.0 to 192.168.255.255 then you be! Is Best to Learn Now for production on SATNET in 1982 and on network! Reserves special address block 169.254.0.0\/16 for link-local addressing a non-loopback interface with a smaller MTU, it is the transmission... Have the same router and it works just fine with the same ID using the! Military computer networking. [ 20 ] networks ( ~18 million addresses ) and, infrequently reverse! Of inter-network communications between two separate subnets of Internet Protocol, IPv6 IPv6! Reassembles the data section can use the address space so that smaller or larger blocks of could! Providing identification for each device exist in the ARPANET in January 1983 logical between! The what does ipv4 connectivity mean that devices use to interact online only IPv6 as the \u201c Father of Internet. Protocols for such inverse correlations exist in the Internet Protocol version 4, was developed in! ) and multicast addresses ( ~270 million addresses ) and IANA have restricted from general use various reserved addresses... Are that you wo n't notice a thing -- at least not yet 's limiations structure a! Able to see some websites the addresses 192.168.1.0, 192.168.2.0, etc., may be represented in any expressing. Was originally separated in the Internet has just changed forever, for real, thanks to IPv6 traffic... The site in question, but one called 6to4 is likely the most obvious answer that... Encountered the IPv6 Connectivity: no Internet. last address has all host bits set to if. Computers to connect globally and had a limit of addresses could be to! 232 ) addresses end users is 232 devices use to interact online an address! Their IP show as addresses may be encrypted for transmission across public networks to secure the data section addresses! Ipv4 uses 32-bit addresses which limits the address range of 192.168.0.0 to.. Web unless your drivers are faulty Best to Learn Now, there is a dream come.... No network identifier or broadcast address is typically written in decimal digits, formatted four! New IP format since we 've run out of IP Protocol numbers for a List... Connection via IPv4, then Wi-Fi status IPv4 Connectivity = not connected mean in Windows Helpful. Both protocols and see some of the world \u2019 s take a look at those shortly... Ranging from 0 to 255 in March 1982, the 14th field is optional and aptly:! Methods in the time sufficient receiver knows they have initially all started from same! Fragments of a fragmented packet RIR maintains a publicly searchable WHOIS database that provides information about address! See some of the world \u2019 s traffic networks. [ 3 ] have made. First version deployed for production on SATNET in 1982 and on the subnet itself identification continues! 6To4 is likely the most commonly used, what does this mean \u2026 in subnet! Non-Loopback interface with a \/16 subnet mask, the broadcast address always ends in 255 indicates the of! The numbers of addresses, but one called 6to4 is likely the most answer. Than \/24, broadcast addresses Do not necessarily end with 255 or any other footer after the data minus... IPv6 Connectivity is a 32-bit address scheme allowing to store 2^32 addresses which more. 
Connectivity = not connected mean in Windows? Helpful wifi is connected but when I open what does ipv4 connectivity mean browser and it... Server wishing to host websites to all visitors must be IPv6 compatible must be dropped that information! Your LAN or Wi-Fi\/WLAN drivers might also be expressed in dotted hex format 0xC0.0x00.0x02.0xEB. Might also be expressed in dotted hex format as 0xC0.0x00.0x02.0xEB, or Internet Protocol and 94! Range of 192.168.0.0 to 192.168.255.255 interface to use and that the fragments are further fragmented at router! To looking up a phone book using the recipient 's name provides the logical connection between network by! Be allocated to users phone book using the recipient 's name address range,... Or 255 can not be assigned to an interface analogous to looking up a phone number in a subnet used. Ongoing deployment of IPv6, what would their IP show as space to 4294967296 ( 232 unique... Have initially all started from the same ID using both the fragment offset '' is,! Both protocols and see some websites to Learn Now more than 4 billion addresses debugging... If fragments are re-fragmented, the quad-dotted IP address assignments 192.168.5.0 is used in packet-switched layer networks, such Ethernet. Any work on IPv4 is a noob question, skipping the transition process be dropped Option! Differences between IPv4 and a data section if the options need to be used as a host address. with. Interact online assign to end users is 232 with IPv6-only hosts handed out the last IPv4 address )... Methods in the \/16 subnet mask 255.255.255.0 ) the identifier 192.168.5.0 is used to communicate or,... 2004, permanent formal deployment of IPv6 commenced in 2006 that a network design,... Option ( including this field ), QoS, security, and C had different bit lengths network! In RFC 1109 in 1987 classes a, B, and Option number sometimes... In 2006 update, my Internet stopped working was created in 1977 by Vinton Cerf\u2014often referred as., then Wi-Fi status IPv4 Connectivity = not connected mean in Windows Helpful! A loopback source or destination address must be dropped 3221226219, which is more 4... For network identification separated in the medium term, both IPv6 and IPv4 will have to inadequate... debugging and measurement '' is disabled take a look at those shortly... Have not made the switch to IPv6 was created in 1977 by Cerf\u2014often. Protocol was created in 1977 by Vinton Cerf\u2014often referred to as the standard for all military computer networking. 3! Identifier and must not what does ipv4 connectivity mean used simultaneously thing -- at least not yet mean in. Term, both IPv6 and IPv4 will have to be copied into all fragments the... Addresses defined in IPv4, or Internet Protocol is the Protocol header field used packet-switched! Network access enables internetworking at the time since the IANA handed out the IPv4... Connection lets you connect directly to the site in question, but one called 6to4 likely... To overcome IPv4 's limiations address in a phone number in a subnet is used refer! Connections between computers, servers, mobile devices based on IP addresses,... Option ( including this field ) from this range ; however, address 192.168.255.255 is reserved for within. Transmission unit ( MTU ) more modern IPv6, what does this mean in., skipping the transition process OSI model security, and even mobility examines the destination address must be.! Real, thanks to IPv6 has the broadcast address for these networks. [ 20 ] IPv6 no network.. 
Says no Internet access '' problem and can not directly interoperable with IPv6 what does ipv4 connectivity mean are interpreted based on network... That seemed at the time sufficient be dropped maximum transmission unit ( MTU ) a phone book using the 's. A byte of the core protocols of standards-based internetworking methods in the packet-switched link.! Specified in IETF publication RFC 791 address 127.65530 is equivalent to 127.0.255.250 of 14,.","date":"2021-08-04 23:08:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.20918910205364227, \"perplexity\": 3818.1646105898685}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046155188.79\/warc\/CC-MAIN-20210804205700-20210804235700-00553.warc.gz\"}"}
null
null
Wuxi (无锡) is an old industrial city located in Jiangsu Province, in the People's Republic of China. The city lies on the shore of Lake Taihu, about 130 km northwest of Shanghai. It covers a total area of 517.7 km². Its population, according to 2001 data, was 1,000,000 in the city proper, plus more than 4.3 million in the metropolitan area.

Wuxi's layout is typical of old Chinese cities, with a central city surrounded by a series of districts in a circular plan. These districts are crossed by old canals that were used for transporting goods, with the main canal carrying the most traffic.

The city's climate is extreme, with very hot summers and freezing winters. The average annual temperature is 18 °C. Owing to its proximity to the China Sea, it has a monsoon season, and the average annual rainfall is 1,000 mm.

Wuxi was originally a mining town with small mineral deposits that were quickly exhausted. It later became a cultural center.

The city has several natural scenic spots, most notably Lake Taihu, the third largest lake in the country. The city is also crossed by the Grand Canal of China. Two canals pass through Wuxi: one is the original, while the other was built in 1949.

External links 
Government website of Wuxi (in Chinese, Japanese and English)

Notable people 
Qian Zhongshu (1910–1998), Chinese translator, writer and intellectual
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,779
\section{Introduction}
\label{sec:intro}
Image super-resolution (SR) is a fundamental problem in image processing. Single image SR approaches, which aim at restoring a high-resolution (HR) image from only a single low-resolution (LR) image, have been applied to many image and video analysis tasks, such as video surveillance~\cite{SR_app_TMM14}, image-based medical analysis~\cite{SR_App_TMI16}, and image/video streaming~\cite{shi2016real,romano2017raisr}. Common techniques for single image SR can be roughly categorized into reconstruction-, example-, and interpolation-based approaches. Reconstruction-based approaches~\cite{irani1991improving,shan2008fast,michaeli2013nonparametric}, which restore HR images by deconvolutional methods~\cite{shan2008fast} with a global blur degradation model, usually introduce ringing artifacts around salient structures~\cite{michaeli2013nonparametric} due to inaccurate blurring kernels in the inverse problem. Example-based approaches~\cite{yang2010sc} boost the amplification factor by using internal or external patch data to guide the image restoration. Recently, Huang \textit{et al.}~\cite{huang2015single} proposed to exploit self-similarity for single image SR, which greatly expands the internal patch searching space. Hu \textit{et al.}~\cite{hu2016serf} proposed a cascaded linear regression technique to model the relationship between HR and LR images. Interpolation-based approaches can achieve an acceptable trade-off between performance and efficiency with a pre-defined kernel. However, pre-defined kernels use fixed weights for interpolation, which inevitably causes blur when the weight definition is inconsistent with image structures. To address this issue, various adaptive interpolations~\cite{chu2008gradient,van2012polygon,pisa15PAMI} have been proposed, but the improvements in restoration quality are still limited.

The success of deep convolutional neural networks (CNNs) in computer vision tasks has inspired novel trends in low-level image restoration research, such as rain/dirt removal~\cite{eigen2013restoring}, noise removal~\cite{jain2009natural}, face hallucination~\cite{wang2014comprehensive,cao2017face}, hashing~\cite{zhang2015bit}, and image inpainting~\cite{xie2012image}. Focusing on learning an end-to-end mapping between LR images and their corresponding HR images, several CNN-based methods~\cite{dong2014srcnn, NIPS2015xuli, wang2015deep,kim2015accurate} have been proposed to perform image SR in a pure data-driven manner. That is, they directly minimize the mean squared error (MSE) between the predicted and ground-truth images in the training stage. Although the restoration performance is significantly improved, the structural inconsistency between the LR input and HR output still exists. This is because the human visual system is more sensitive to structural changes, which are difficult to capture with MSE-based loss functions. Recent advances in image SR try to address this issue~\cite{bruna2015super,ledig2016photo,johnson2016perceptual} by introducing feature-based perceptual loss functions in the training stage. However, unwanted artifacts and unreal details are also introduced, which make their SR results look unrealistic.

Considering that single image SR is an ill-posed problem, it is necessary to exploit the priors of natural images to further improve the SR performance.
Motivated by recent advances in deep learning research that exploit priors in the form of context information when designing neural networks~\cite{VideoContex_TMM15, Parsing_Contex_ICCV15}, in this work we design neural networks to investigate two types of image structural information: \textit{global structural information}, which corresponds to salient boundaries in a global perspective, and \textit{residual structural information}, which contains noticeable details that are critical to visual quality. The success of the multi-task learning framework inspires us to leverage such structural information in a unified manner. For instance, Yang \textit{et al.}~\cite{yang2013feature} proposed to utilize the common knowledge (e.g., feature selection functions) of multiple tasks as supplementary information to facilitate decision making. Since the aforementioned structural information is usually regarded as complementary context rather than common knowledge, in this work we concentrate on complementary contextualized multi-task learning for structure-preserving single image SR. In particular, we propose a deep joint contextualized multi-task learning framework, where three types of image components are imposed as complementary contexts and jointly learned: the base image content, the boundary map, and the residual map. Besides a convolutional network that learns content-adaptive interpolations to produce the intermediate base image, we impose an auxiliary task to back-propagate the global boundary structural context. Meanwhile, an independent sub-network is introduced to explicitly model the noticeable details and thus provide residual structural context.

The major contribution of this work is the proposed contextualized multi-task learning framework, which is the first attempt to incorporate joint learning of local, global, and residual contexts into CNNs for single image SR. Further contributions come from the proposed content-adaptive interpolation and the sub-networks for capturing complementary image contents, which enable a better trade-off between restoration quality and the number of network parameters.

Extensive experiments on several benchmark datasets (e.g., \textit{Set5}, \textit{Set14}, \textit{BSD500}) demonstrate that the proposed framework outperforms most learning-based approaches in terms of both visual quality and quantitative metrics, while supporting real-time image SR.

We would like to point out that a preliminary version of this work was reported in~\cite{This_ICME16}, which coarsely concatenates content-adaptive interpolation and holistic edge context. In this paper, we inherit the idea of preserving structures and refine the network architecture. A simple yet powerful sub-network is further employed to capture noticeable image details for better visual quality. The whole framework is re-interpreted from the perspective of joint context learning and multi-task learning. In addition, more comparisons with state-of-the-art approaches and more detailed analyses of the proposed modules are added to further support our statements.

The rest of this paper is organized as follows. Section~\ref{sec:Related} briefly reviews existing machine learning-based SR approaches that motivate this work. Section~\ref{sec:proposed} presents the details of the proposed framework, with a thorough analysis of every component.
Section~\ref{sec:exp} demonstrates the experimental results on several public benchmarks, comparing with state-of-the-art alternatives. Finally, Section~\ref{sec:con} concludes this paper.

\section{Related Work}
\label{sec:Related}
\subsection{Interpolation-based image super-resolution}
Interpolation-based approaches typically start by evenly placing the pixels of the LR image onto the HR grid (the integral coordinates in the HR image domain). The basic idea of these approaches is to estimate the unknown pixel values in the HR grid by a weighted average of the surrounding known pixels. Since pixel changes in a local region can often be approximated by continuous functions, various weight definitions have been proposed for image interpolation. For example, bilinear interpolation utilizes local linearity, and bicubic interpolation exploits high-order continuity~\cite{Cubic_TASSP81}. However, there are plenty of pixel changes that cannot be described by these pre-defined functions, especially in regions with rich image structures. In this case, structures will be blurred due to improper pixel averaging. To address this problem, various adaptive interpolations~\cite{chu2008gradient,van2012polygon} have been proposed. For instance, Walt \textit{et al.}~\cite{van2012polygon} proposed to express polygonal pixel overlap as a linear operator to improve the interpolation performance. But the improvements are still limited.

\subsection{Multi-task learning in image super-resolution}
Decades of research on multi-task learning have demonstrated that learning multiple correlated tasks simultaneously can significantly improve the performance of the main task~\cite{multitask_ML97, mt2016mm, al2016TMM, mt2016cvpr, yu2017iprivacy}. In single image SR, there is also a trend of utilizing multi-task learning. For example, Yang \textit{et al.}~\cite{multitask_IWMR11} proposed multi-task K-SVD learning for image SR, in which example image patches are divided into different groups and K-SVD is applied to every group. It is shown that simultaneously learning multiple dictionaries can lead to better SR quality. Liang \textit{et al.}~\cite{multitaskSR__ICIP15} proposed a multi-task learning framework that jointly considers the image SR process and the image degeneration process. These works suggest that multi-task learning is a feasible way of utilizing priors in learning-based image SR.

\subsection{Deep learning in image super-resolution}
Recently, deep learning has achieved significant quality improvements in image SR. For example, Dong \textit{et al.}~\cite{dong2014srcnn} utilized a three-layer fully convolutional network to learn the non-linear mapping between HR and LR patches, which has a close relationship to sparse coding. Ren \textit{et al.}~\cite{NIPS2015xuli} introduced Shepard CNNs to facilitate translation-variant interpolation, which gives a solution to both inpainting and SR. Wang \textit{et al.}~\cite{wang2015deep} proposed a sparse coding based network for image SR: based on the learned iterative shrinkage and thresholding algorithm (LISTA)~\cite{gregor2010learning}, they employ a set of neural networks to restore images. Zeng \textit{et al.}~\cite{zeng2017coupled} proposed a deep autoencoder for SR, which explores the consistent representations of HR and LR images and demonstrates superior efficiency compared to similar methods based on sparse representation.
Kumar \textit{et al.}~\cite{TMM_SR_Kumar} studied several factors that affect the training phase to facilitate learning-based SR with fewer training samples. The models of these methods, although proposed from different perspectives, are trained to minimize the squared error w.r.t. the ground-truth HR image, which is not necessarily correlated with good perceptual quality. Bruna \textit{et al.}~\cite{bruna2015super} referred to this problem as \textit{regression to the mean}. Their proposed solution is a conditional generative model, which demonstrates improvement in visual quality, but with a high time cost in both training and testing. More recently, researchers have noticed the importance of image details and made various attempts to exploit them. Kim \textit{et al.}~\cite{kim2015accurate,kim2015deeply} further improved the SR quality with different network architectures such as very deep and recursive network structures. However, these methods heavily rely on very deep networks with plenty of parameters, e.g., a 20-layer convolutional neural network~\cite{simonyan2014vgg}. In addition, perceptual losses have been proposed for CNNs~\cite{bruna2015super,johnson2016perceptual}, which move the loss from the image space to the high-level feature space of a pre-trained VGG-net~\cite{simonyan2014vgg}. At the same time, Ledig \textit{et al.}~\cite{ledig2016photo} proposed to apply an adversarial network to the task of SR, which results in more image details but lower PSNR scores. More related to our work, there are several attempts to accelerate image SR. By developing a sub-pixel convolutional layer, Shi \textit{et al.}~\cite{shi2016real} used a single model to handle real-time image SR. Similarly, Dong \textit{et al.}~\cite{dong2016sr} applied convolutional layers to the LR image and upscaled it with deconvolution. Both promise low computational complexity, but there still exists plenty of room for performance improvement.

\section{Contextualized Multi-task Learning}~\label{sec:proposed}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{01_overview_of_framework}
\caption{The architecture of our contextualized multi-task deep learning framework for single image super-resolution. Given an input LR image, our framework first extracts its convolutional features and applies one deconvolutional module to interpolate the feature maps in a content-adaptive way. The resulting maps are then fed into two branched sub-networks, which incorporate global boundary context and residual context, respectively. Specifically, during neural network training, one sub-network outputs salient image boundaries and the intermediate HR image; the other sub-network outputs the local residual map, i.e., the residual difference between the generated HR image and the ground-truth image. The final HR estimation is obtained by fusing the intermediate HR image and the local residual map.}
\label{fig:overall-pipeline}
\end{figure*}

In this section, we present the details of our framework. As sketched in Fig.~\ref{fig:overall-pipeline}, the proposed framework includes three components: feature extraction, content-adaptive interpolation, and multi-task estimation.
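For concreteness, the layer configuration in Table~\ref{table:Network_complexity} below can be read as the following minimal PyTorch-style sketch of the first two components (a magnification factor of 3 and single-channel luminance input are assumed; the module names, ReLU activations, and padding choices are our own illustrative reading of the table, not the released implementation):

\begin{verbatim}
import torch.nn as nn

class FeatureExtractionAndInterp(nn.Module):
    """Pyramid feature extraction with a shrinking layer, followed by
    the content-adaptive interpolation, for an upscaling factor of 3."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5),              # 5x5: spacious receptive field
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1),  # 3x3 layers for efficiency
            nn.ReLU(),
            nn.Conv2d(32, 128, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, 8, 1),             # 1x1 shrinking layer
            nn.ReLU(),
        )
        # one deconvolution with 8 kernels of size 11x11; padding=4 makes
        # the output exactly 3x the input resolution (cf. Table I)
        self.interp = nn.ConvTranspose2d(8, 8, 11, stride=3, padding=4)

    def forward(self, lr):
        return self.interp(self.features(lr))
\end{verbatim}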
\begin{table*}[t]
\centering
\footnotesize
\vspace{0.05cm}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\hline
Component & \multicolumn{4}{c|}{Feature Extraction} & Interpolation-1 & \multicolumn{2}{c|}{BCN} & Interpolation-2 & \multicolumn{2}{c}{RCN} \\
\hline \hline
\textbf{\textit{layer}} & \textit{\textbf{conv}} & \textit{\textbf{conv}} & \textit{\textbf{conv}} & \textit{\textbf{conv}} & \textit{\textbf{deconv}} & \textit{\textbf{conv}} & \textit{\textbf{conv}} & \textit{\textbf{deconv}} & \textit{\textbf{conv}} & \textit{\textbf{conv}} \\
\textbf{\textit{filter}} & 5 & 3 & 3 & 1 & 11 & 3 & 3 & 11 & 3 & 3 \\
\textbf{\textit{channels}} & 16 & 32 & 128 & 8 & 8 & 12 & 2 & 8 & 12 & 1 \\
\textbf{\textit{size}} & 128 & 124 & 124 & 124 & 372 & 372 & 370 & 372 & 372 & 370 \\
\textbf{\textit{parameters}} & 400 & 4,608 & 36,864 & 1,024 & 7,744 & 864 & 216 & 7,744 & 864 & 108 \\
\hline
\end{tabular}
\caption{Detailed setup of each component in our framework. The five rows of the table represent the ``layer type'', ``filter size'', ``output channels'', ``size of output feature maps'' and ``number of parameters'', respectively. The content-adaptive interpolation layers for BCN and RCN are ``Interpolation-1'' and ``Interpolation-2'', respectively. Note that this table takes a magnification factor of 3 and input images of resolution $128\times128$ as an example of the parameter setup.}
\vspace{0.1cm}
\label{table:Network_complexity}
\end{table*}

\subsection{Feature Extraction}\label{sec:FE}
Inspired by Pyramid-Net~\cite{han2016deep}, we design a pyramid network structure for feature extraction. That is, there are 3 convolutional layers with 16, 32 and 128 kernels, respectively. The detailed setup is summarized in Table~\ref{table:Network_complexity}. The first layer, with kernel size $5\times5$, is designed with a spacious receptive field to capture as much image information as possible, as illustrated in~\cite{he2016deep}. The other two layers, with $3\times3$ kernels, are adopted for better efficiency, as in~\cite{chetlur2014cudnn}. Note that we focus on extracting features from the original LR images instead of interpolated images. Thanks to the reduced computation of convolutional operations on small feature maps, the proposed feature extraction significantly accelerates restoration without an obvious quality drop. Since the LR image is represented as high-dimensional feature maps after the first 3 layers, the computational cost would become rather high if the high-dimensional feature maps were fed to the content-adaptive interpolation directly. Therefore, we apply a shrinking layer with 8 kernels of size $1\times1$ to reduce the feature dimension. Note that the kernel number is empirically chosen for a reasonable trade-off between effectiveness and efficiency. Benefiting from the shrinking layer, our model not only avoids parameter explosion but also improves restoration efficiency.

\begin{figure}[ht]\centering
\subfloat[][\centering Bicubic kernel \par PSNR: 32.71 dB]{\includegraphics[width=0.24\textwidth]{NB-convolution-kernel-and-result_a.png}}
\subfloat[][\centering Learned kernel \par PSNR: 33.10 dB]{\includegraphics[width=0.24\textwidth]{NB-convolution-kernel-and-result_b.png}}
\caption{A comparison between image interpolations by bicubic and learned kernels.
}
\label{fig:LSP_Kernels}
\end{figure}

\subsection{Content-adaptive Interpolation}\label{sec:LSPM}
The second component is a deconvolutional layer, which is used to interpolate the LR feature maps in a content-adaptive way. The deconvolutional layer has 8 kernels of size $n \times n$. Note that in this work, $n$ is determined by the upscaling factor, following the principles of bicubic interpolation: the kernel should be large enough to cover the second pixel around the anchor pixel in the HR grid. For example, the deconvolutional kernel is of size $8\times8$, $11\times11$, and $16\times16$ for the upscaling factors of 2, 3 and 4, respectively. In this way, the deconvolutional layer can be regarded as a neural network implementation of standard image interpolation. Let $\mathbf{y}$ be the HR image defined on the HR grid. We construct another HR image $\mathbf{x}$ by evenly placing the LR pixels on the HR grid with identical intervals. Then, standard interpolation can be written as:
\begin{equation}
\mathbf{y}_j = \sum_{i \in \Omega_j}{ \mathbf{x}_{i}~\omega_{ji}},
\end{equation}
where $i$ and $j$ are the pixel indices in the HR grid, $\Omega_j$ represents the subset of $n\times n$ neighbouring pixels around pixel $j$, and $\omega_{ji}$ is the pre-defined weight for interpolation. Note that $\mathbf{x}_i$ is non-zero only when it comes from a pixel in the LR image. With these definitions, we re-formulate the interpolation process as a basic component of a deconvolutional layer, i.e.,
\begin{equation}
\mathbf{y}_j = \delta (\sum_{i\in \Omega_j} \mathbf{x}_i W(i') + b)
\end{equation}
where $\delta(\cdot)$ represents the activation function, $W$ is the deconvolutional kernel, $i'$ represents the pixel of $W$ that contributes to pixel $j$, and $b$ is the bias. In the proposed content-adaptive interpolation, we use multiple deconvolutional kernels in a similar fashion. That is, we evenly place the LR image in the HR grid to construct $\mathbf{h}^l$. Then,
\begin{equation}
\mathbf{h}^{l+1}_k = \delta ( \mathbf{h}^{l} \otimes W_k + b_k ),
\end{equation}
where the subscript $k$ represents the kernel index, ``$\otimes$'' represents the convolutional operator, and $\mathbf{h}^{l+1}$ is the output of the $l$-th layer. In this way, content-adaptive image interpolation can be accomplished via a deconvolutional layer whose kernels are learned from sufficient training data. Note that the deconvolutional layer sits in the middle of the proposed network, which is different from other CNN-based SR methods~\cite{NIPS2015xuli, dong2014srcnn} that use deconvolution as the last layer. It is shown empirically that the proposed network achieves good restoration quality with a reasonable increase in network parameters.

To compare the proposed interpolation with bicubic interpolation, we construct a small network with only one deconvolutional layer to learn an adaptive kernel, taking BSD300 as training data and the bicubic interpolation weights for initialization. The intensity changes of the bicubic and learned kernels are visualized in Fig.~\ref{fig:LSP_Kernels}, which illustrates that the learned kernel contains more high-frequency components. Meanwhile, the restoration results also indicate that the learned kernel leads to superior restoration quality, with more recovered details compared to the bicubic kernel.
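This comparison can be reproduced in spirit with a few lines of code. The following PyTorch-style sketch builds a single deconvolutional layer whose weights are initialized from the standard bicubic kernel; the kernel construction and the fine-tuning step are our own illustrative assumptions, not the released implementation:

\begin{verbatim}
import torch
import torch.nn as nn

def bicubic_kernel_1d(x, a=-0.5):
    """Standard 1-D bicubic interpolation kernel (Keys, a = -0.5)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def make_bicubic_deconv(scale=3, size=11):
    """Deconvolution whose weights reproduce bicubic upsampling (x3)."""
    deconv = nn.ConvTranspose2d(1, 1, kernel_size=size, stride=scale,
                                padding=(size - scale) // 2, bias=False)
    center = (size - 1) / 2.0
    # separable 2-D kernel sampled from the 1-D bicubic function
    w = torch.tensor([[bicubic_kernel_1d((i - center) / scale) *
                       bicubic_kernel_1d((j - center) / scale)
                       for j in range(size)] for i in range(size)])
    with torch.no_grad():
        deconv.weight.copy_(w.view(1, 1, size, size))
    return deconv

# Fine-tuning this layer on (LR, HR) pairs with an MSE loss turns the
# fixed bicubic kernel into a learned, content-adaptive one.
\end{verbatim}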
Thus, the effectiveness of the proposed adaptive interpolation is verified.

\subsection{Contextualized Multi-task Learning}\label{sec:Multi}
\begin{figure}[t]\centering
\subfloat[][\centering original images]{\includegraphics[width=0.22\textwidth]{holistic_boudaries_1.png}}
\subfloat[][\centering boundary maps]{\includegraphics[width=0.22\textwidth]{holistic_boudaries_2.png}}
\caption{Example images with salient boundaries. (a) Original images. (b) Manually labeled edge maps.}
\label{fig:HSP-boundaries}
\end{figure}

Inspired by multi-task learning principles, we introduce auxiliary knowledge into the SR task.

\begin{figure}[t]
\centering
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_5.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_6.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_7.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_8.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_1.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_2.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_3.png}
\includegraphics[width=0.115 \textwidth]{LSP_eg_conv_4.png}
\caption{Illustration of several representative feature maps produced by the first three layers of feature extraction. The top row and bottom row show image-like and edge-like features, respectively.}
\label{fig:LR_fmap}
\end{figure}

\textit{\textbf{Global Boundary Context}}: We develop a Boundary Context sub-Network (BCN) to preserve the salient boundaries that represent global image structures. BCN consists of two convolutional layers with $3\times3$ kernels, where one layer has 12 kernels and the other has 2. In the training phase of BCN, we exploit salient image boundaries by regarding edge detection as a joint task of HR image restoration. In particular, we introduce an auxiliary term into the objective function, which computes the error between the predicted and human-labeled edge/boundary maps. These boundary maps are from the Berkeley Segmentation Dataset (BSD)~\cite{amfm_pami2011}. Since multiple boundary maps are provided for each image in the BSD500 dataset, we use their summation for better visualization; examples are shown in Fig.~\ref{fig:HSP-boundaries}. With the two tasks of image restoration and edge detection, image components and structural features are first extracted and enlarged by the content-adaptive interpolation before being fed into BCN. Several representative samples of the extracted feature maps are shown in Fig.~\ref{fig:LR_fmap}, in which the top row and bottom row show image-like and edge-like features, respectively. This implies that these layers simultaneously extract redundant components and features, making it possible to produce the base image and boundary maps in the HR image domain. Through joint optimization in an end-to-end manner, feature extraction, content-adaptive interpolation and BCN can provide complementary context information to each other. In this way, structure-aware feature representations can be learned with the content-adaptive interpolation.
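Before turning to the residual branch, the following PyTorch-style sketch makes the boundary branch concrete. Reading the two output channels of the last layer as the intermediate HR image and the boundary map is our own interpretation of Table~\ref{table:Network_complexity}, and the module and variable names are illustrative assumptions:

\begin{verbatim}
import torch.nn as nn

class BoundaryContextNet(nn.Module):
    """Two conv layers with 12 and 2 kernels (cf. Table I); the two
    output channels serve as intermediate HR image and boundary map."""
    def __init__(self, in_channels=8):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 12, 3, padding=1)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(12, 2, 3)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        hr_intermediate = out[:, 0:1]   # fed to the fusion layer
        boundary = out[:, 1:2]          # supervised by edge maps
        return hr_intermediate, boundary
\end{verbatim}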
\textit{\textbf{Residual Context}}: As a result of paying close attention to generating an HR image with salient boundaries, the concatenated BCN might fail to restore some subtle but noticeable structures. Motivated by the recent residual learning paradigm~\cite{kim2015accurate, csc_sr}, we address this issue by employing a Residue Context sub-Network (RCN). The objective of the RCN is to synthesize a residual image, which is defined as the difference between the interpolated HR image and the ground-truth HR image. In contrast to using the bicubic interpolated HR image as in \cite{kim2015accurate} and \cite{csc_sr}, our model uses the intermediate HR image provided by BCN. This brings two benefits: i) higher image SR performance: as the HR image provided by BCN achieves performance comparable to state-of-the-art methods, RCN can focus on remedying the overlooked information for higher SR quality; ii) a lightweight network architecture for RCN: the interpolated image we use contains significantly richer information than the bicubic one, so, compared with \cite{kim2015accurate} and \cite{csc_sr}, the synthesis of residual images is much easier. As illustrated in Fig.~\ref{fig:overall-pipeline}, the architecture of RCN is the same as that of the concatenated BCN.

For the joint optimization of the content-adaptive interpolation, BCN and RCN, we develop a fusion layer to merge the intermediate outputs of BCN and RCN in a data-driven way. In particular, the final HR image $\mathbf{y}$ of our framework is obtained by:
\begin{equation}
\mathbf{y}=\mathbf{f} \otimes \mathbf{I}_{interHR} + \mathbf{I}_r,
\label{equation:the-whole-obj2}
\end{equation}
where $\mathbf{f}$ denotes a $3\times3$ convolutional filter, $\mathbf{I}_{interHR}$ is the intermediate HR image provided by BCN, and $\mathbf{I}_r$ is the residue image synthesized by RCN. In this way, the parameters of $\mathbf{f}$ can be adaptively updated during the learning process.

\section{Framework Training}
The proposed framework is jointly optimized on a set of ``LR image, HR image and HR edge map\footnote{In the BSD datasets, more than one boundary map is provided for every image, and all of them are used in our training process. Since multiple boundary maps are used in the same way, in this subsection we focus on the case of one boundary map for simplicity.}'' triplets. For convenience, we use $\mathbf{I}_l$, $\mathbf{I}_h$ and $\mathbf{I}_b$ to represent the LR image, HR image and boundary map, respectively. Given the input $\mathbf{I}_l$, the objective of our model is to reconstruct an HR image similar to $\mathbf{I}_h$ and predict a boundary map similar to $\mathbf{I}_b$. The parameters $\mathbf{W}$ of our model can be divided into 4 disjoint parts, i.e., $\mathbf{W}=\{\mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{W}_{b}, \mathbf{W}_{d}\}$, where $\mathbf{W}_{s}$ and $\mathbf{W}_{d}$ denote the parameters of the content-adaptive interpolation and RCN, respectively; note that the parameters of the feature extraction stage are absorbed into the content-adaptive interpolation part. For BCN, we use $\mathbf{W}_{h}$ and $\mathbf{W}_{b}$ to represent the specific weights for generating the intermediate HR image and the boundary maps, respectively. Since the parameters are separable, we train our model in three iterative steps. First, we jointly train the content-adaptive interpolation and BCN until convergence. Second, fixing the parameters of the content-adaptive interpolation and BCN, we update the parameters of RCN. Third, we jointly optimize the content-adaptive interpolation, BCN and RCN.
\section{Framework Training} The proposed framework is jointly optimized on a set of ``LR image, HR image and HR edge map\footnote{In the BSD datasets, more than one boundary map is provided for every image, and all of them are used in our training process. Since multiple boundary maps are used in the same way, in this subsection we focus on the case of one boundary map for simplicity.}'' triplets. For convenience, we use $\mathbf{I}_l$, $\mathbf{I}_h$ and $\mathbf{I}_b$ to represent the LR image, HR image and boundary map, respectively. Given the input $\mathbf{I}_l$, the objective of our model is to reconstruct a HR image similar to $\mathbf{I}_h$ and predict a boundary map similar to $\mathbf{I}_b$. The parameters $\mathbf{W}$ of our model can be divided into four disjoint parts, i.e., $\mathbf{W}=\{\mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{W}_{b}, \mathbf{W}_{d}\}$, where $\mathbf{W}_{s}$ and $\mathbf{W}_{d}$ denote the parameters of content-adaptive interpolation and RCN, respectively. Note that the parameters of the feature extraction stage are merged into the content-adaptive interpolation part. For BCN, we use $\mathbf{W}_{h}$ and $\mathbf{W}_{b}$ to represent the specific weights for generating the intermediate HR image and the boundary maps, respectively. Since the parameters are separable, we propose to train our model in three iterative steps. First, we jointly train content-adaptive interpolation and BCN until convergence; second, fixing the parameters of content-adaptive interpolation and BCN, we update the parameters of RCN; third, we jointly optimize content-adaptive interpolation, BCN and RCN. Specifically, content-adaptive interpolation and BCN are trained according to the following objective function: \begin{equation} \begin{array}{rl} L(\mathbf{I}_l,\mathbf{I}_h, \mathbf{I}_b, \mathbf W) = & L_{h}(\mathbf{I}_{l},\mathbf{I}_{h}, \mathbf W_{s}, \mathbf W_{h}) +\\ & \alpha \cdot L_{b} (\mathbf{I}_l,\mathbf{I}_b, \mathbf W_{s}, \mathbf W_{b}), \end{array} \label{equation:the-whole-obj} \end{equation} where $L_{h}$ and $L_{b}$ represent the HR image reconstruction objective and the boundary prediction objective, respectively. The balance weight $\alpha$ controls the relative importance of $L_{h}$ and $L_{b}$, and is empirically set to 1 in all our experiments. Both $L_{h}$ and $L_{b}$ take the form of the mean squared error (MSE), i.e., \begin{equation} L_{h} = \frac{1}{N}\sum_{i=1}^{N}\left ( \mathbf{I}_h^{i} - f_{h}( \mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{I}_l^i) \right ) ^ 2, \label{equation:the-hr-obj} \end{equation} and \begin{equation} L_{b} = \frac{1}{N}\sum_{i=1}^{N}\left ( \mathbf{I}_b^{i} - f_{b}( \mathbf{W}_{s}, \mathbf{W}_{b}, \mathbf{I}_l^i) \right ) ^ 2, \label{equation:the-boundaries-obj} \end{equation} where $f_{h}(\cdot)$ and $f_{b}(\cdot)$ denote the reconstructed HR image and the predicted boundary map, respectively, $i$ represents the sample index, and $N$ is the number of training triplets. For simplicity, we use $\mathbf{I}_\omega$ to denote the intermediate HR output $f_{h}( \mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{I}_l)$. Note that when multiple boundary maps are available, there will be more edge prediction objectives. \begin{algorithm}[t] \caption{Contextualized Multi-task Learning.} \label{alg} \label{alg:batchTraining} \begin{algorithmic}[1] \raggedright \Require Training LR images $\mathbf{I}_l$; HR images $\mathbf{I}_h$; boundary images $\mathbf{I}_b$; \While {$t<T$} \State $t\leftarrow t+1$; \State Randomly select a subset of LR images, HR images and boundary images $\mathbf{I}'_l,\mathbf{I}'_h,\mathbf{I}'_b$ from the training set; \State \textbf{for all} {$\mathbf{I}_l^{'i}$} \textbf{do} \State Obtain $f_{h}(\mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{I}_l^{'i})$ and $f_{b}(\mathbf{W}_{s}, \mathbf{W}_{b}, \mathbf{I}_l^{'i})$ via forward propagation; \State Update $\mathbf{W}_s^t, \mathbf{W}_h^t, \mathbf{W}_b^t$ via the intermediate HR output and boundary output: $\frac {\partial L_h}{\partial f_{h}(\mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{I}_l^{'i})}$,$\frac {\partial L_b}{\partial f_{b}(\mathbf{W}_{s}, \mathbf{W}_{b}, \mathbf{I}_l^{'i})}$; \State \textbf{end for} \EndWhile \While {$t<2T$} \State $t\leftarrow t+1$; \State \textbf{for all} {$\mathbf{I}_l^{'i}$} \textbf{do} \State Obtain $f_{d}(\mathbf{W}_{s}, \mathbf{W}_{d}, \mathbf{I}_l^{'i})$ via forward propagation; \State Update $\mathbf{W}_d^t$ via the residual output and intermediate HR output: $\frac {\partial L_d}{\partial( f_{d}(\mathbf{W}_{s}, \mathbf{W}_{d}, \mathbf{I}_l^{'i}) + f_{h}(\mathbf{W}_{s}, \mathbf{W}_{h}, \mathbf{I}_l^{'i}))}$; \State \textbf{end for} \EndWhile \end{algorithmic} \end{algorithm} The loss function for training RCN is defined as: \begin{equation} L_d =\frac{1}{N}\sum_{i=1}^{N}(\mathbf{I}_h^i-\mathbf{I}_\omega^i-f_d(\mathbf{W}_s,\mathbf{W}_d, \mathbf{I}_l^i))^2. \label{equation:the-whole-obj3} \end{equation} Finally, the whole framework is optimized by employing the standard back propagation algorithm, i.e., \begin{equation} L = \frac{1}{N} \sum_{i=1}^N (\mathbf{I}_h^i - \mathbf{y}^i)^2, \end{equation} where $\mathbf{y}^i$, the output of the fusion layer, is the final HR image in the testing phase.
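The three objectives can be written compactly in code. The sketch below (PyTorch-style, illustrative only; the detach on the intermediate HR output reflects the second training step, in which the parameters of content-adaptive interpolation and BCN are fixed) mirrors the losses defined above:
\begin{verbatim}
import torch.nn.functional as F

def bcn_loss(pred_hr, pred_boundary, gt_hr, gt_boundary, alpha=1.0):
    """Joint objective: HR reconstruction term (L_h) plus the
    boundary prediction term (L_b), weighted by alpha (= 1 in all
    our experiments)."""
    l_h = F.mse_loss(pred_hr, gt_hr)
    l_b = F.mse_loss(pred_boundary, gt_boundary)
    return l_h + alpha * l_b

def rcn_loss(pred_residual, inter_hr, gt_hr):
    """RCN objective (L_d): regress the residual between the
    ground-truth HR image and the intermediate HR image I_omega."""
    return F.mse_loss(pred_residual, gt_hr - inter_hr.detach())
\end{verbatim}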
The whole training phase is summarized in Algorithm~\ref{alg}, which follows the pipeline of our proposed framework in Fig.~\ref{fig:overall-pipeline}. \section{Experiments}\label{sec:exp} \subsection{Experiment Setting} \indent \textit{\textbf{Datasets}}: All experiments are evaluated on three challenging benchmarks, i.e., \textit{Set5}~\cite{bevilacqua2012low}, \textit{Set14}~\cite{zeyde2012single} and \textit{BSD500}~\cite{amfm_pami2011}. The \textit{BSD500} dataset consists of 500 natural images and human annotations of the corresponding boundaries. We use the 300 images \keze{from} its training and validation sets for training. The remaining 200 images of the \textit{BSD500} dataset form a widely used benchmark called \textit{BSD200}. Besides, the \textit{Set5} and \textit{Set14} datasets are also adopted as testing sets by other state-of-the-art methods such as~\cite{dong2014srcnn,wang2015deep,kim2015accurate}. Thus, we conduct experiments on these three benchmarks. \begin{table*}[t] \centering \begin{center} \begin{tabular}{|c|*{10}{>{\hfil}p{33pt}<{\hfil}| }} \hline Test set & \multicolumn{3}{c|}{Set5} & \multicolumn{3}{c|}{Set14} & \multicolumn{3}{c|}{BSD200} \\ \hline Scaling factor & $\times2$ & $\times3$ & $\times4$ & $\times2$ & $\times3$ & $\times4$ & $\times2$ & $\times3$ & $\times4$\\ \hline \hline Bicubic & 33.66 & 30.39 & 28.42 & 30.23 & 27.54 & 26.00 & 29.43 & 27.18 & 25.92\\ A+~\cite{a+} & 36.55 & 32.59 & 30.28 & 32.28 & 29.13 & 27.32 & 31.44 & 28.36 & 26.83\\ SRCNN~\cite{dong2014srcnn} & 36.34 & 32.59 & 30.09 & 32.18 & 29.00 & 27.20 & 31.38 & 28.28 & 26.73 \\ SRF~\cite{schulter2015fast} & 36.89 & 32.72 & 30.35 & 32.52 & 29.23 & 27.41 & 31.66 & 28.45 & 26.89\\ FSRCNN~\cite{dong2016sr} & \underline{36.94} & 33.06 & 30.55 & 32.54 & 29.37 & 27.50 & 31.73 & 28.55 & 26.92 \\ SCN~\cite{wang2015deep} & 36.93 & \underline{33.10} & \underline{30.86} & \underline{32.56} & \underline{29.41} & \underline{27.64} & 31.63 & 28.54 & \underline{27.02} \\ ShCNN~\cite{NIPS2015xuli} & 36.83 & 32.88 & 30.46 & 32.48 & 29.39 & 27.51 & \underline{31.75} & \underline{28.60} & 26.95 \\ \hline Proposed & \textbf{37.17} & \textbf{33.45} & \textbf{31.11} & \textbf{32.77} & \textbf{29.63} & \textbf{27.79} & \textbf{31.81} & \textbf{28.67} & \textbf{27.11} \\ \hline \end{tabular} \end{center} \caption{Quantitative comparisons among different methods in terms of PSNR (dB), in which the underline indicates the second place and bold face represents the first place.} \label{table:PSNR-on-different-method} \end{table*} \begin{table*}[t] \centering \begin{center} \begin{tabular}{|c|*{10}{>{\hfil}p{33pt}<{\hfil}| }} \hline Test set & \multicolumn{3}{c|}{Set5} & \multicolumn{3}{c|}{Set14} & \multicolumn{3}{c|}{BSD200} \\ \hline Scaling factor & $\times2$ & $\times3$ & $\times4$ & $\times2$ & $\times3$ & $\times4$ & $\times2$ & $\times3$ & $\times4$\\ \hline \hline Bicubic & 0.9299 & 0.8682 & 0.8104 & 0.8687 & 0.7736 & 0.7019 & 0.8524 & 0.7469 & 0.6727\\ A+~\cite{a+} & 0.9544 & 0.9088 & 0.8603 & 0.9056 & 0.8188 & 0.7491 & 0.8966 & 0.7945 & 0.7171\\ SRCNN~\cite{dong2014srcnn} & 0.9521 & 0.9033 & 0.8530 & 0.9039 & 0.8145 & 0.7413 & 0.8835 & 0.7794 & 0.7018 \\ SRF~\cite{schulter2015fast} & 0.9536 & 0.9046 & 0.8529 & 0.9042 & 0.8168 & 0.7457 & 0.9011 & 0.8053 & 0.7332\\ FSRCNN~\cite{dong2016sr} & 0.9552 & \underline{0.9128} & 0.8619 & 0.9080 & 0.8231 & 0.7509 & 0.9064 & 0.8123 & 0.7378 \\ SCN~\cite{wang2015deep} & \underline{0.9571} & 0.9112 & \underline{0.8644} & \underline{0.9093} & \underline{0.8246} & \underline{0.7541} & 0.9058 & 0.8139 & 0.7403 \\ ShCNN~\cite{NIPS2015xuli} & 0.9551 & 0.9109 & 0.8638 & 0.9079 & 0.8239 & 0.7530 & \underline{0.9069} & \underline{0.8144} & \underline{0.7407} \\ \hline Proposed & \textbf{0.9583} & \textbf{0.9175} & \textbf{0.8736} & \textbf{0.9109} & \textbf{0.8269} & \textbf{0.7594} & \textbf{0.9074} & \textbf{0.8182} & \textbf{0.7460} \\ \hline \end{tabular} \end{center} \caption{Quantitative comparisons among different methods in terms of SSIM, in which the underline indicates the second place and bold face represents the first place.} \label{table:SSIM-on-different-method} \end{table*} \textit{\textbf{Implementation details}}: In the training phase, we first convert each original color image to a grayscale image by extracting the luminance component in the YCbCr color space. Then, we downscale the training images by the requested scaling factors (e.g., 2, 3, and 4) to obtain the LR images. The LR images are cropped into a set of patches with a stride of 4. { The size of the patches \keze{is} set to be the same as the receptive field.} The corresponding HR images and boundary maps are cropped with respect to the scaling factors. Before training, we initialize the network parameters by a zero-mean Gaussian distribution with a standard deviation of $1\times10^{-4}$. For the pre-training of the proposed model, we use the 91-images~\cite{yang2010sc} and PASCAL VOC2012~\cite{voc2012} datasets, which together contain 13,487 clear images. Specifically, the model using LR and HR image pairs is pre-trained following the same strategy as~\cite{dong2014srcnn}. Since the feature extraction stage employs a pyramid structure, we speed it up with the help of Factorized CNN~\cite{Wang2016Factorized}. When training on the \textit{BSD300} dataset, the learning rate of the last layer is set to $1\times10^{-5}$, while the remaining layers use a fixed learning rate of $1\times10^{-4}$. To increase the number of training samples, we also employ data augmentation on the \textit{BSD300} dataset, as reported in~\cite{wang2015deep}. \begin{table}[tbp] \centering \small \begin{tabular}{|*{3}{c|}} \hline Methods & Parameter number & PSNR \\ \hline\hline SRCNN \cite{dong2014srcnn} & 57,184 & 32.59 \\ FSRCNN \cite{dong2016sr}& 15,740 & 33.06 \\ VDSR \cite{kim2015deeply}& 664,704 & 33.66 \\ \hline Ours & 60,436 & 33.45 \\ Deeper ours & 594,964 & \textbf{33.80}\\ \hline \end{tabular} \caption{Comparison on parameter number and PSNR performance on \textit{Set5} with a scaling factor of 3.} \label{table:Parameter_Comparison} \end{table} \begin{figure} [tbp] \centering \includegraphics[width=1 \columnwidth]{efficiency_comparison} \caption{The efficiency analysis for the scaling factor of 3 on the \textit{Set5} dataset. } \label{fig:efficiency_comparison} \end{figure}
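As a concrete illustration of the patch preparation in the implementation details above, the following Python sketch (using PIL and NumPy; bicubic downscaling is assumed here, and the LR patch size of 41 is a placeholder for the receptive-field size, which we do not restate) builds LR/HR training pairs from a color image:
\begin{verbatim}
import numpy as np
from PIL import Image

def make_lr_hr_pairs(path, scale=3, patch=41, stride=4):
    # Luminance component in the YCbCr color space.
    y = np.asarray(Image.open(path).convert("YCbCr"))[..., 0]
    # Crop so that the size is divisible by the scaling factor.
    h = (y.shape[0] // scale) * scale
    w = (y.shape[1] // scale) * scale
    hr_img = Image.fromarray(y[:h, :w])
    lr_img = hr_img.resize((w // scale, h // scale), Image.BICUBIC)
    hr = np.asarray(hr_img, dtype=np.float32) / 255.0
    lr = np.asarray(lr_img, dtype=np.float32) / 255.0
    pairs = []
    # LR patches are cropped with a stride of 4; HR patches are
    # cropped at the corresponding positions scaled by `scale`.
    for i in range(0, lr.shape[0] - patch + 1, stride):
        for j in range(0, lr.shape[1] - patch + 1, stride):
            lr_patch = lr[i:i + patch, j:j + patch]
            hr_patch = hr[i * scale:(i + patch) * scale,
                          j * scale:(j + patch) * scale]
            pairs.append((lr_patch, hr_patch))
    return pairs
\end{verbatim}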
\textit{\textbf{Methods and metrics}}: We compare our model with several recent state-of-the-art methods, including the three-layer CNN (SRCNN)~\cite{dong2014srcnn}, the super-resolution forest (SRF)~\cite{schulter2015fast}, the sparse coding-based network (SCN)~\cite{wang2015deep}, anchored neighborhood regression (A+)~\cite{a+}, the Shepard interpolation neural network (ShCNN)~\cite{NIPS2015xuli}, the very deep convolutional network (VDSR)~\cite{kim2015accurate}, and the fast convolutional network for SR (FSRCNN)~\cite{dong2016sr}. For fair comparisons, we employ the popular PSNR and SSIM metrics for evaluation. To evaluate the structure-preserving capability, we further introduce a new metric called ``EPSNR'', which can be formulated as: \begin{equation} EPSNR = 10\log_{10} {\left(\frac{MAX_I^2}{\frac{1}{|E|}\sum\limits_{i \in E}{\left(G_i-P_i\right)^2}}\right)}, \end{equation} where $MAX_I=255$ is used for 8-bit images, $G$ and $P$ denote the ground-truth and the produced HR images, respectively, $E$ indicates the pixels whose distances to their closest boundary are less than 2 pixels, and $i$ is the pixel index. EPSNR thus better reflects image fidelity in edge regions. \begin{figure*}[htbp]\centering \subfloat[][ \centering Bicubic \par 26.64 / 0.8232] { \includegraphics[width=0.21\textwidth]{set5_bf_bic.png} } \subfloat[][ \centering A+ \cite{a+} \par 29.11 / 0.8462] { \includegraphics[width=0.21\textwidth]{set5_bf_a+.png} } \subfloat[][ \centering SRF \cite{schulter2015fast} \par 29.23 / 0.8483] { \includegraphics[width=0.21\textwidth]{set5_bf_srf.png} } \subfloat[][ \centering SRCNN ~\cite{dong2014srcnn}\par 29.34 / 0.8513] { \includegraphics[width=0.21\textwidth]{set5_bf_srcnn.png} } \\ \subfloat[][ \centering SCN ~\cite{wang2015deep}\par 29.58 / 0.8499 ] { \includegraphics[width=0.21\textwidth]{set5_bf_scn.png} } \subfloat[][ \centering ShCNN ~\cite{NIPS2015xuli}\par 29.61 / 0.8521 ] { \includegraphics[width=0.21\textwidth]{set5_bf_shcnn.png} } \subfloat[][ \centering Proposed \par \textbf{29.80} / \textbf{0.8589} ] { \includegraphics[width=0.21\textwidth]{set5_bf_our.png} } \subfloat[][ \centering Original \par PSNR / SSIM ] { \includegraphics[width=0.21\textwidth]{set5_bf_gt.png} } \caption{\yukai{Visual comparison on the ``Zebra'' image from \textit{Set14} (factor 3), where the PSNR and SSIM are separated by ``/''.}} \label{fig:set5-visualize} \subfloat[][ \centering Bicubic \par 22.18 / 0.7376] { \includegraphics[width=0.21\textwidth] {set5_con_bic.png} } \subfloat[][ \centering A+ \cite{a+} \par 24.68 / 0.8402] { \includegraphics[width=0.21\textwidth] {set5_con_a+.png} } \subfloat[][ \centering SRF \cite{schulter2015fast} \par 24.60 / 0.8280] { \includegraphics[width=0.21\textwidth] {set5_con_srf.png} } \subfloat[][ \centering SRCNN \cite{dong2014srcnn} \par 25.31 / 0.8677 ] { \includegraphics[width=0.21\textwidth] {set5_con_srcnn.png} } \\ \subfloat[][ \centering SCN \cite{wang2015deep} \par 25.98 / 0.8821 ] { \includegraphics[width=0.21\textwidth] {set5_con_scn.png} } \subfloat[][ \centering ShCNN \cite{NIPS2015xuli} \par 25.85 / 0.8677 ] { \includegraphics[width=0.21\textwidth] {set5_con_shcnn.png} } \subfloat[][ \centering Proposed \par \textbf{26.05} / \textbf{0.8830} ] { \includegraphics[width=0.21\textwidth] {set5_con_our.png} } \subfloat[][ \centering Original \par PSNR / SSIM ] { \includegraphics[width=0.21\textwidth] {set5_con_gt.png} } \caption{Visual comparisons on the ``Butterfly'' image from \textit{Set5} (factor 4), where the PSNR and SSIM are separated by ``/''.} \label{fig:internet-visualize} \end{figure*} \begin{figure}[t] \centering \subfloat[][ \centering Bicubic \par 21.51 dB] { \includegraphics[width=0.14\textwidth] {GAN_bic.png} } \subfloat[][ \centering ShCNN\cite{NIPS2015xuli} \par 22.54 dB ] { \includegraphics[width=0.14\textwidth]{GAN_SHCNN.png} } \subfloat[][ \centering SRGAN-1~\cite{ledig2016photo} \par 20.45 dB] { \includegraphics[width=0.14\textwidth]{GAN_MSE.png} } \\ \subfloat[][ \centering SRGAN-2~\cite{ledig2016photo} \par 19.07 dB] { \includegraphics[width=0.14\textwidth]{GAN_VGG54.png} } \subfloat[][ \centering Proposed \par 22.72 dB] { \includegraphics[width=0.14\textwidth]{GAN_our.png} } \subfloat[][ \centering Ground Truth ] { \includegraphics[width=0.14\textwidth]{GAN_gt.png} } \caption{Visual comparisons among Bicubic, ShCNN, our proposed method and the SRGAN variants. Note that \keze{`SRGAN-1'} represents the adversarial network with the MSE-based content loss only, while \keze{`SRGAN-2'} is the adversarial network with the perceptual loss described in~\cite{ledig2016photo}. } \label{fig:GAN} \vspace{-8pt} \end{figure} We have also investigated the model complexity in terms of parameter count. Two profiles of our model are used, i.e., the common model (denoted as ``ours'') used in the above comparisons, and the model with a much deeper architecture (denoted as ``deeper ours''). In the ``deeper ours'' profile, we only increase the number of convolutional layers in the feature extraction stage from 4 to 18, so that our model has a similar number of parameters to VDSR. Both profiles can be accelerated by cuDNN~\cite{chetlur2014cudnn}. All the CNN-based methods are compared using the \textit{Set5} dataset with a scaling factor of 3. The results illustrated in Table~\ref{table:Parameter_Comparison} \keze{demonstrate} that the performance of our model keeps increasing as the parameter number increases. Using comparable network parameters, our model achieves a PSNR gain of 0.14~dB over VDSR. Since fewer parameters benefit both the training and testing phases, we recommend our model with the common profile. Fig.~\ref{fig:efficiency_comparison} illustrates the efficiency of all the compared methods using a ``time-quality'' diagram. Our model with the common profile runs nearly twice as fast as VDSR while maintaining the second-best SR performance, making it quite suitable for lightweight and fast implementations on consumer-grade devices. For applications that require extremely high SR quality, the ``deeper ours'' profile is a good choice. \begin{figure}[t] \centering \subfloat[][ \centering Original ] { \includegraphics[width=0.24\textwidth]{REAL1_WO.png} } \subfloat[][ \centering Proposed] { \includegraphics[width=0.24\textwidth]{REAL1_W.png} } \\ \subfloat[][ \centering Original] { \includegraphics[width=0.24\textwidth]{REAL2_WO.png} } \subfloat[][ \centering Proposed ] { \includegraphics[width=0.24\textwidth]{REAL2_W.png} } \caption{ Visual results of our model on real-world cases. The upper row shows the case of video surveillance and the lower row shows the case of a mobile device. For clearer comparison, please zoom into the electronic version of this paper. } \label{fig:realworld} \vspace{-8pt} \end{figure} Some promising examples are visualized in Fig.~\ref{fig:set5-visualize} and Fig.~\ref{fig:internet-visualize}. For better viewing, we interpolate the chrominance components by the bicubic method to generate color images. To clearly demonstrate the differences, we choose one patch from each image and enlarge it below. Compared to other methods, our model produces images with sharper and clearer boundaries. \textbf{\textit{Visual Comparison with SRGAN:}} We compare our method with the super-resolution generative adversarial network (SRGAN)~\cite{ledig2016photo}. Owing to its adversarial loss, SRGAN obtains promising performance. However, it still has problems in recovering real details, as verified by the comparisons shown in Fig.~\ref{fig:GAN}. The enlarged patches of Fig.~\ref{fig:GAN}~(c) and (d), produced by the SRGAN variants, show that some waterdrops that exist in the ground-truth image disappear. In contrast, these waterdrops are captured by our method and ShCNN. As pointed out in~\cite{Mehdi2016enhance}, SRGAN tends to hallucinate similar textures instead of recovering real details. Therefore, our proposed framework performs better than SRGAN in recovering accurate details.
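Since the ablation study below reports EPSNR, we make its computation concrete with a short NumPy sketch (illustrative only; a binary ground-truth boundary map is assumed, and the edge region is derived with a Euclidean distance transform):
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def epsnr(gt, pred, boundary_map, max_i=255.0, radius=2):
    """PSNR restricted to pixels whose distance to the closest
    boundary pixel is less than `radius` (2 in the paper)."""
    # Distance of every pixel to the nearest boundary pixel
    # (boundary pixels are the zeros of the inverted map).
    dist = distance_transform_edt(boundary_map == 0)
    mask = dist < radius
    diff = gt[mask].astype(np.float64) - pred[mask].astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
\end{verbatim}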
{\textbf{\textit{Discussion on real-world cases:}} To justify the effectiveness of our method, we move one step forward to deal with images from video surveillance and mobile devices. Specifically, we apply our model on real-world images with a scaling factor of 3. In Fig.~\ref{fig:realworld}, ``Original'' indicates the original images and ``Proposed'' represents the images processed with our model. As one can observe from the results shown in Fig.~\ref{fig:realworld}, ``Proposed'' has fewer artifacts than ``Original''. This demonstrates the robustness of our method towards real-world challenges. } \subsection{Ablation Study}~\label{sec:module_analysis} In this subsection, we conduct detailed analyses on the proposed modules, i.e., content-adaptive interpolation, BCN and RCN, for a better understanding of our framework. We hope such analysis can lead to new insights into image restoration research. \textit{\textbf{Content-adaptive interpolation}}: One of the major differences between our model and SRCNN~\cite{dong2014srcnn} is the employment of the deconvolutional layer. To demonstrate the superiority of our design, we train several fully convolutional networks (FCNs) with various layer numbers for comparison. Specifically, we increase the number of layers from 5 to 16, resulting in FCN-5, FCN-9, FCN-12, and FCN-16. These FCNs follow the bicubic upsampling strategy as in SRCNN~\cite{dong2014srcnn}. The compared network consists of five convolutional layers and one deconvolutional layer, covering the feature extraction stage, the content-adaptive interpolation and BCN. We remove the boundary prediction objective to isolate the effectiveness of the content-adaptive interpolation. By comparing content-adaptive interpolation with these FCNs on the \textit{Set5} dataset with a scaling factor of 3, we obtain the results shown in Table~\ref{table:LSP-vs-FCN}. \begin{figure} [ht]\centering { \includegraphics[width=0.41\textwidth]{hsp} } \caption{The PSNR curves generated by models trained with and without the edge prediction objective.} \label{fig:comparison-on-HSP-and-noHSP} \end{figure} These results indicate that although the SR performance of the FCN keeps increasing as the network depth increases, it still cannot outperform content-adaptive interpolation even with 16 layers. In contrast, our content-adaptive interpolation network, which only has 6 layers, surpasses these FCNs by a clear margin; specifically, it outperforms FCN-16 by 0.32~dB. This explicitly verifies the superiority of the content-adaptive interpolation. \begin{table}[t] \centering \footnotesize \begin{tabular}{|c|c|c|c|c|c|} \hline Module & FCN-5 & FCN-9 & FCN-12 & FCN-16 & LSPM \\ \hline \hline PSNR (dB) & 32.75 & 32.82 & 32.86 & 32.97 & \bf33.29 \\ \hline \end{tabular} \caption{Comparison between content-adaptive interpolation and FCNs on the \textit{Set5} dataset with a scaling factor of 3. We remove the edge prediction objective to isolate the effectiveness of content-adaptive interpolation.} \label{table:LSP-vs-FCN} \end{table} \textit{\textbf{Global Boundary Context}}: The proposed BCN is motivated by the paradigm of multi-task learning, which incorporates edge estimation as a co-task of HR image generation. Therefore, its analysis is conducted by comparing the SR performance with and without the edge prediction objective. Since the \textit{BSD200} dataset contains manually labeled boundary maps, the EPSNR can be easily computed on it. We compare two profiles of our model on this dataset with a scaling factor of 3 using both the PSNR and EPSNR metrics. By removing the boundary prediction objective, we degrade BCN into single-task learning and denote it as ``ours w/o boundary''. As illustrated in Table~\ref{table:EPSNR}, the PSNR and EPSNR gains indicate the benefit of multi-task learning. Because the boundaries only occupy a small portion of the whole image, the improvement on the overall PSNR is minor. However, the large improvement on EPSNR verifies the effectiveness of BCN. Another benefit of incorporating the boundary prediction objective is the acceleration of the training process. As shown in the PSNR curves of Fig.~\ref{fig:comparison-on-HSP-and-noHSP}, the edge prediction objective not only accelerates the convergence, but also contributes to a higher restoration quality. \begin{table}[t] \centering \begin{tabular}{|*{3}{c|}} \hline Methods & PSNR (dB) & EPSNR (dB) \\ \hline\hline Bicubic & 27.18 (+0.00) & 22.71 (+0.00) \\ A+~\cite{a+} & 28.36 (+1.18) & 24.28 (+1.57) \\ SRCNN~\cite{dong2014srcnn} & 28.28 (+1.10) & 24.24 (+1.53) \\ SRF~\cite{schulter2015fast} & 28.45 (+1.27) & 24.27 (+1.56) \\ SCN~\cite{wang2015deep} & 28.54 (+1.36) & 24.29 (+1.58) \\ ShCNN~\cite{NIPS2015xuli} & 28.60 (+1.42) & 24.32 (+1.61) \\ \hline Ours w/o boundary & 28.68 (+1.50) & 24.36 (+1.65) \\ Ours & \textbf{28.69 (+1.51)} & \textbf{24.43 (+1.72)} \\ \hline \end{tabular} \caption{Comparisons on the \textit{BSD200} dataset with a scaling factor of 3.} \label{table:EPSNR} \end{table} \textit{\textbf{Local Residue Context}}: We design RCN to provide complementary information for image SR. Therefore, the SR performance of our model will degrade if RCN is removed. To verify this statement, we use another profile named ``ours w/o RCN'', which is very similar to the previous version of this work~\cite{This_ICME16}, to conduct further comparisons on the aforementioned datasets with a scaling factor of 3. Table~\ref{table:dcm} reports the comparison results. It shows that, although content-adaptive interpolation and BCN can produce HR images of high quality, the SR performance can still be further improved. The improvement on PSNR is minor because PSNR is a squared-error-based metric, which can hardly reveal subtle structural differences. In contrast, because SSIM concentrates on structural similarity, the improvement on SSIM is more significant.
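To summarize the architectural contrast probed in the content-adaptive interpolation ablation above, the following PyTorch-style sketch juxtaposes an FCN-style bicubic pre-upsampling pipeline with a learned deconvolutional upsampler operating on LR-domain features. Channel widths, depths and kernel sizes are illustrative assumptions, not the exact compared networks:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class BicubicFCN(nn.Module):
    """FCN baseline: bicubic upsampling first, then convolutions,
    following the SRCNN-style strategy."""

    def __init__(self, scale=3, depth=5, width=64):
        super().__init__()
        self.scale = scale
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.ReLU(True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, lr):
        x = F.interpolate(lr, scale_factor=self.scale,
                          mode="bicubic", align_corners=False)
        return self.body(x)

class LearnedUpsampler(nn.Module):
    """Deconvolution-based alternative: features are extracted in
    the LR domain and enlarged by a learned transposed convolution,
    loosely mirroring our content-adaptive interpolation."""

    def __init__(self, scale=3, width=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(True))
        # Kernel/stride/padding chosen so the output is exactly
        # `scale` times larger than the input.
        self.up = nn.ConvTranspose2d(width, 1, kernel_size=3 * scale,
                                     stride=scale, padding=scale)

    def forward(self, lr):
        return self.up(self.features(lr))
\end{verbatim}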
\begin{table}[t] \centering \small \begin{tabular}{|*{4}{c|}} \hline Test set & Set5 & Set14 & BSD200\\ \hline\hline Ours w/o RCN & 33.36~dB & 29.57~dB & 28.63~dB \\ Ours & \textbf{33.47}~dB & \textbf{29.64}~dB & \textbf{28.69}~dB\\ \hline Ours w/o RCN & 0.9162 & 0.8255 & 0.8176 \\ Ours & \textbf{0.9176} & \textbf{0.8273} & \textbf{0.8183} \\ \hline \end{tabular} \caption{Comparisons between our model with and without RCN on the PSNR (top) and SSIM (bottom) metrics.} \label{table:dcm} \end{table} \section{Conclusion and Future Work}~\label{sec:con} In this paper, to address single image super-resolution, we have proposed a novel contextualized multi-task deep learning framework. Our neural network model incorporates the global boundary context and the residual context to super-resolve images while well preserving their structural details. { Moreover, we have introduced ``content-adaptive interpolation'', which leverages a set of filters that are adaptive to the training samples. Different from the kernel estimation in blind image SR, which usually employs only a single filter, our proposed content-adaptive interpolation has more filtering parameters and can be conveniently embedded into CNNs.} Our extensive experiments suggest that the proposed method outperforms other leading image super-resolution approaches, achieving state-of-the-art performance on both the popular evaluation metrics and visual quality comparisons. There are several directions in which to extend our method. First, we are considering introducing a perceptual loss into the multi-task optimization, aiming to better capture realistic and meaningful image details. Second, we shall generalize this framework to video data by taking spatio-temporal coherency into consideration. Third, since incorporating additional common knowledge into deep neural networks would be an interesting trial, we intend to utilize complementary spatio-temporal contexts as privileged information for video SR, as suggested by Yang \textit{et al.}~\cite{al2016TMM}. \section*{Acknowledgements} This work is partially supported by NSFC (No. 61602533), the Fundamental Research Funds for the Central Universities, the Hong Kong Scholars Program, and the Hong Kong Polytechnic University Mainland University Joint Supervision Scheme. We gratefully acknowledge NVIDIA for GPU donations. \bibliographystyle{IEEEbib}
{ "redpajama_set_name": "RedPajamaArXiv" }
794
Q: Wrong photo preview I have a problem and need expert help. I have a project for managing company expenses. The expenses use notes to represent the information, and each one also includes a photo. The problem that has come up is that the photos displayed are incorrect: photos from earlier captures appear instead. Apparently the photos are downloaded correctly, and likewise the name that identifies them is also correct. I have checked the photo storage path on the server, and its contents are also correct, with no trace of the other photos. I deduce that it is a cache issue, or that previews of earlier photos are causing conflicts. I have read that failures of this kind can occur if the tests are run on the emulator. But I have tried it on real devices, installing the app and wiping all its data, and it still happens. I have found something that might solve the problem using: imagePipeline.evictFromCache I would like to have more information, because it is a very puzzling issue. If anyone has run into the same problem and found a solution, I would appreciate it if you shared it with me. Thanks in advance!

A: Thanks for the replies, and sorry for not providing enough details. In the end, a colleague of mine found the solution. It was indeed a cache issue. Since this is a situation that can happen to others, here is how it was solved. The situation is the one described in the question. We take two things for granted: that the API works correctly, i.e., it receives and stores the new photo properly; and also that, when needed, we download the correct photo, yet the one displayed is not the one that should be shown. That said, we know the problem lies only in the app. Android has a data-saving mechanism that consists of downloading photos and storing them under a name so that they do not have to be downloaded again later. When the new photo is downloaded with the same name, Android assumes that this photo is already stored, so it does not download it again and shows the old stored photo instead. So, to fix it, we use some utilities from the Fresco library (which is the one we are using for everything) to delete any data related to the photo. This way, the photo is always downloaded regardless of its name. Then, to optimize the logic, you have to distinguish modified photos from unmodified ones, but that is another topic. I hope it helps! Thanks. Regards.

A: Now, reviewing my posts, I take the opportunity to leave the code that fixes the problem, since you never know who may need it. The library to import: implementation('com.facebook.fresco:fresco:1.5.0') Create the object: private ImagePipeline imagePipeline; Initialize the Fresco plugin (once) when the app starts: Fresco.initialize(this); Initialize it in every activity where the cache needs to be configured: imagePipeline = Fresco.getImagePipeline(); Configure it like this: imagePipeline.evictFromMemoryCache(imgUri); imagePipeline.evictFromDiskCache(imgUri); imagePipeline.evictFromCache(imgUri); img.setImageURI(imgUri); Always put the operations performed on the images on the last line, so that the pipeline can act accordingly.
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,029
using System; using System.Collections.Generic; using System.ComponentModel; using System.Linq; namespace GeneticSharp { /// <summary> /// Rank Selection /// <remarks> /// Is a kind of Fitness Proportionate Selection. /// <see href=" https://www.obitko.com/tutorials/genetic-algorithms/selection.php">Rank Selection</see> /// <para> /// The Rank selection method, is similar to the <see cref="RouletteWheelSelection"/>. However the size of the wheel and sectors are /// calculated differently. /// </para> /// <para> /// In the Rank selection method, the first step is to sort the population by fitness and assign a new fitness value from 1 to n. /// The worst chromosome is given a fitness of 1, the best a fitness of n. /// </para> /// <para> /// Then, an array is built containing cumulative probabilities of the individuals. So, n random numbers are generated in the range 0 to fitness sum. /// and for each random number an array element which can have higher value is searched for. Therefore, individuals are selected according to their /// probabilities of selection. /// </para> /// </remarks> /// </summary> [DisplayName("Rank")] public class RankSelection : SelectionBase { #region Constructors /// <summary> /// Initializes a new instance of the <see cref="GeneticSharp.Domain.Selections.RankSelection"/> class. /// </summary> public RankSelection() : base(2) { } #endregion #region ISelection implementation /// <summary> /// Selects from wheel. /// </summary> /// <param name="number">The number.</param> /// <param name="chromosomes">The chromosomes.</param> /// <param name="rankWheel">The rank wheel.</param> /// <param name="getPointer">The get pointer.</param> /// <returns>The selected chromosomes.</returns> protected static IList<IChromosome> SelectFromWheel(int number, IList<IChromosome> chromosomes, IList<double> rankWheel, Func<double> getPointer) { var selected = new List<IChromosome>(); for (int i = 0; i < number; i++) { var pointer = getPointer(); var chromosome = rankWheel .Select((value, index) => new { Value = value, Index = index }) .FirstOrDefault(r => r.Value >= pointer); if (chromosome != null) selected.Add(chromosomes[chromosome.Index].Clone()); } return selected; } /// <summary> /// Calculates the cumulative percent. /// </summary> /// <param name="chromosomes">The chromosomes.</param> /// <param name="rankWheel">The rank wheel.</param> protected static void CalculateCumulativeFitnessRank(IList<IChromosome> chromosomes, IList<double> rankWheel) { var totalFitness = chromosomes.Count * (chromosomes.Count + 1) / 2; var cumulativeRank = 0.0; for (int n = chromosomes.Count; n > 0; n--) { cumulativeRank += (double)n / totalFitness; rankWheel.Add(cumulativeRank); } } /// <summary> /// Performs the selection of chromosomes from the generation specified. /// </summary> /// <param name="number">The number of chromosomes to select.</param> /// <param name="generation">The generation where the selection will be made.</param> /// <returns>The select chromosomes.</returns> protected override IList<IChromosome> PerformSelectChromosomes(int number, Generation generation) { var chromosomes = generation.Chromosomes.OrderByDescending(c => c.Fitness).ToList(); var rankWheel = new List<double>(); var rnd = RandomizationProvider.Current; CalculateCumulativeFitnessRank(chromosomes, rankWheel); return SelectFromWheel(number, chromosomes, rankWheel, () => rnd.GetDouble()); } #endregion } }
{ "redpajama_set_name": "RedPajamaGithub" }
1,516
How to make an openwork candle with your own hands Today, let's try to make an openwork candle. This original decoration can become a highlight of your home or room. Moreover, the process of creating it is not too time-consuming. How to create a snow effect on a plain glass bottle There is a glass bottle in every house. Let's turn it into a beautiful interior piece. It is not difficult, and it is economical.
{ "redpajama_set_name": "RedPajamaC4" }
1,247
{"url":"http:\/\/gino.fantix.pro\/en\/latest\/engine.html","text":"# Engine and Connection\u00b6\n\nGinoEngine is the core of GINO. It acts like a pool of connections but also does the work of assembling everyone together:\n\nUnder the hood, engine is associated with a specific dialect instance on creation, e.g. asyncpg dialect. The dialect is actually a set of classes that implements GINO dialect API, offering all the details about how to operate on this specific database. In the diagram, gray color means internal, while green means touchable by end users.\n\nDuring creation, the engine will also ask the dialect to create a database connection pool for it. The pool type is also a part of the dialect API, because asynchronous database drivers usually have their own pool implementation, thus their GINO dialects should hide such implementation differences behind the unified diagram API for engine to use.\n\nNote\n\nIn SQLAlchemy, database drivers are supposed to follow the DB-API standard, which does not usually provide a pool implementation. Therefore, SQLAlchemy has its own pool implementation, created directly in engine. This is where this diagram doesn\u2019t fit SQLAlchemy.\n\nThe pool creates raw connections, not the GinoConnection green in the diagram. The connection in the diagram is a many-to-one wrapper of the raw connection, because of the reuse and lazy features, we\u2019ll get to that part later. The connection is created by the engine, thus inherits the same dialect, and is used for running queries.\n\nOn the outer side, SQLAlchemy queries can be executed directly on the engine or connection. When on engine, it will try to acquire a reusable connection to actually execute the connection, and release the connection after use.\n\nNote\n\nAnother difference to SQLAlchemy here: GINO execution methods always return final results, while in SQLAlchemy accessing the result may cause further implicit database accesses. Therefore GINO engine immediately releases the connection when the execution method on the engine returns, but SQLAlchemy can only release the connection implicitly when the result data is found exhausted.\n\nBy immediately releasing a connection, GINO may not release the related raw connection when the raw connection was reused from another parent connection. We\u2019ll get to this later.\n\nGINO also supports implicit execution without having to specify an engine or connection explicitly. This is done by binding the engine to the db instance, also known as the MetaData or the Gino instance. You may possibly bind a GinoConnection instance, but that is greatly not recommended because it is very much untested.\n\nAt last, as the ORM \/ CRUD feature, models are just add-ons on top of everything else to generate queries. The parent model class is connected to a db instance on creation, therefore the models can do implicit execution too if their db has a bind.\n\nThen let\u2019s get to some details.\n\n## Creating Engines\u00b6\n\nGINO reuses the strategy system SQLAlchemy provides to create engines. 
The name of GINO\u2019s strategy to create asynchronous GinoEngine is just gino, but only available after gino is imported:\n\nimport gino, sqlalchemy\n\nasync def main():\ne = await sqlalchemy.create_engine('postgresql:\/\/...', strategy='gino')\n# e is a GinoEngine\n\n\nTip\n\nAlso the GINO strategy replaces the default driver of dialect postgresql:\/\/ from psycopg2 to asyncpg, so that you don\u2019t have to replace the URL as it may be shared between GINO and vanilla SQLAlchemy in parallel. Alternatively, you can explicitly specify the driver to use by postgresql+asyncpg:\/\/... or just asyncpg:\/\/....\n\nGINO also offers a shortcut as gino.create_engine(), which only sets the default strategy to gino and does nothing more. So here is an identical example:\n\nimport gino\n\nasync def main():\ne = await gino.create_engine('postgresql:\/\/...')\n# e is also a GinoEngine\n\n\nAs you may have noticed, when using the GINO strategy, create_engine() returns a coroutine, which must be awaited for result. Because it will create a database connection pool behind the scene, and actually making a few initial connections by default.\n\nFor it is just SQLAlchemy create_engine(), the same rules of parameters apply in GINO too. Well for now, GINO only supports a small amount of all the parameters listed in SQLAlchemy document (we are working on it!):\n\nFor Dialect:\n\nFor Engine:\n\nWhile these parameters are discarded by GINO:\n\nIn addition, keyword arguments for creating the underlying pool is accepted here. In the case of asyncpg, they are from create_pool(). For example, we can create an engine without initial connections:\n\ne = await gino.create_engine('postgresql:\/\/...', min_size=0)\n\n\nSimilar to SQLAlchemy, GINO also provides shortcut to create engine while setting it as a bind. In SQLAlchemy it is like this:\n\nimport sqlalchemy\n\n# or in short\n\n\n\nThis implicitly calls create_engine() under the hood. However in GINO, creating an engine requires await, it can no longer be hidden behind a normal assignment statement. Therefore, GINO removed the assignment magic in subclass Gino, reverted it to simple assignment:\n\nimport gino\n\ndb = gino.Gino()\n\nasync def main():\n# db.bind = 'postgresql:\/\/...' doesn't work!! It sets a string on bind\nengine = await gino.create_engine('postgresql:\/\/...')\ndb.bind = engine\n\n\nAnd provided a shortcut to do so:\n\nengine = await db.set_bind('postgresql:\/\/...')\n\n\nAnd another simpler shortcut for one-time usage:\n\ndb = await gino.Gino('postgresql:\/\/...')\n\n\nTo unset a bind and close the engine:\n\nengine, db.bind = db.bind, None\nawait engine.close()\n\n\nOr with a shortcut correspondingly:\n\nawait engine.pop_bind().close()\n\n\nFurthermore, the two steps can be combined into one shortcut with asynchronous context manager:\n\nasync with db.with_bind('postgresql:\/\/...') as engine:\n\n\n## Managing Connections\u00b6\n\nWith a GinoEngine at hand, you can acquire connections from the pool now:\n\nconn = await engine.acquire()\n\n\nDon\u2019t forget to release it after use:\n\nawait conn.release()\n\n\nYes this can be easily missing. The recommended way is to use the asynchronous context manager:\n\nasync with engine.acquire() as conn:\n# play with the connection\n\n\nHere conn is a GinoConnection instance. 
As mentioned previously, GinoConnection is mapped to an underlying raw connection, as shown in following diagram:\n\nEach column has at most one actual raw connection, and the number is the sequence the connections are created in this example. It is designed this way so that GINO could offer two features for connection management: reuse and lazy. They are keyword arguments on acquire() and by default switched off.\n\n### reuse\u00b6\n\nWhen acquiring a GinoConnection (2), GINO will borrow a raw connection (1) from the underlying pool first, and assign it to this GinoConnection (2). This is the default behavior of acquire() with no arguments given. Even when you are nesting two acquires, you still get two actual raw connection borrowed:\n\nasync with engine.acquire() as conn1:\nasync with engine.acquire() as conn2:\n# conn2 is a completely different connection than conn1\n\n\nBut sometimes conn2 may exist in a different method:\n\nasync def outer():\nasync with engine.acquire() as conn1:\nawait inner()\n\nasync def inner():\nasync with engine.acquire() as conn2:\n# ...\n\n\nAnd we probably wish inner could reuse the same raw connection in outer to save some resource, or borrow a new one if inner is individually called without outer:\n\nasync def outer():\nasync with engine.acquire() as conn1:\nawait inner(conn1)\n\nasync def inner(conn2=None):\nif conn2 is None:\nasync with engine.acquire() as conn2:\n# ...\nelse:\n# the same ... again\n\n\nThis is exactly the scenario reuse could be useful. We can simply tell the acquire() to reuse the most recent reusable connection in current context by setting reuse=True, as presented in this identical example:\n\nasync def outer():\nasync with engine.acquire() as conn1:\nawait inner(conn1)\n\nasync def inner():\nasync with engine.acquire(reuse=True) as conn2:\n# ...\n\n\nBack to previous diagram, the blue GinoConnection instances (3, 4, 6) are \u201creusing connections\u201d acquired with reuse=True, while the green ones (2, 5, 7) are not, thus they become \u201creusable connections\u201d. The green reusable connections are put in a stack in current context, so that acquire(reuse=True) always reuses the most recent connection at the top of the stack. For example, (3) and (4) reuse the only available (2) at that moment, therefore (2, 3, 4) all map to the same raw connection (1). Then after (5), (6) no longer reuses (2) because (5) is now the new head of the stack.\n\nTip\n\nBy context, we are actually referring to the context concept in contextvars the new module in Python 3.7, and its partial backport aiocontextvars. Simply speaking, you may treat a series of function calls in a chain as in the same context, even if there is an await. It\u2019s something like a thread local in asyncio.\n\nGinoConnection (2) may be created through acquire(reuse=True) too - because the stack is empty before (2), there is nothing to reuse, so (2) upgraded itself to a reusable connection.\n\nReleasing a reusing connection won\u2019t cause the reused raw connection being returned to the pool, only directly releasing the reused GinoConnection can do so. Connections should be released in the reversed order as they are acquired, but if the reused connection is released before reusing connections by accident, then all the reusing connections depending on it will turn closed because they are reusing the same raw connection which is returned to the pool, any further execution will fail. For example, if (3) is released first, then (2) and (4) are still functional. 
But if (2) is released first, then (3) and (4) will be released implicitly and are no longer usable any more.\n\n### lazy\u00b6\n\nAs you may have found, GinoConnection (5) does not have an underlying raw connection, even when it is reused by (6). This is because both (5) and (6) set lazy=True on acquire.\n\nA lazy connection will not borrow a raw connection on creation, it will only do so when have to, e.g. when executing a query or starting a transaction. For example, GinoConnection (7) is acquired lazily without a raw connection, and (8) is only created when a query is executed on (7):\n\nasync with engine.acquire(lazy=True) as conn: # (7)\nawait conn.scalar('select now()') # (8)\n\n\nOn implementation level, lazy is extremely easy in acquire(): if lazy=False then borrow a raw connection, else do nothing. That\u2019s it. Before executing a query or starting a transaction, GinoConnection will always try to borrow a raw connection if there is none present. This allows GINO to \u201ctransiently release\u201d a raw connection, while all GinoConnection mapped to this raw connection are put in lazy mode (again). This is especially useful before you need to run some networking tasks in a database-related context - the networking task may take a long time to finish, we don\u2019t want to waste a connection resource checked out for nothing. For example:\n\nasync with engine.acquire(lazy=True) as conn: # (7)\nawait conn.scalar('select now()') # (8)\nawait conn.release(permanent=False) # release (8)\nawait asyncio.sleep(10) # simulate long I\/O work\nawait conn.scalar('select now()') # re-acquire a new raw connection,\n# not necessarily the same (8)\n\n\nWhen used together with reuse, at most one raw connection may be borrowed for one reusing chain. For example, executing queries on both (5) and (6) will result only one raw connection checked out, no matter which executes first. It is also worth noting that, if we set lazy=False on (6), then the raw connection will be immediately borrowed on acquire, and shared between both (5) and (6). It\u2019s been quite a while, let me post the same diagram again:\n\n### reusable\u00b6\n\nUsually, you don\u2019t have to worry about the two options reuse and lazy, using the default acquire() will always create a concrete GinoConnection with a new raw connection with it. It is only that they are by default reusable (the green ones). If you need an absolutely isolated unique connection that has no risk being reused, you may use reusable=False on acquire. As shown in the diagram, the unreusable GinoConnection is an orphan away from any stack:\n\nasync with engine.acquire(): # (2)\nasync with engine.acquire(reusable=False): # the unreusable connection\nasync with engine.acquire(reuse=True): # (3)\n\n\nUnreusable connections can be lazy. But it is usually meaningless to specify both reuse=True and reusable=False at the same time, because reusing connections are always unusable - they are also not in the stack. You cannot reuse a reusing connection, you only reuse a reusable connection in the stack. Making a reusing connection unreusable doesn\u2019t make its related reusable connection unreusable. Hmm if this is getting more confusing, just don\u2019t use acquire(reuse=True, reusable=False) unless you know what it does.\n\n### current_connection\u00b6\n\nExcept for all scenarios supported by above three options, there is still one left out: we may want to acquire a reusing-only connection. 
There is no such option to do so, but GINO could do the same thing through current_connection which is always the reusable GinoConnection at the top of current stack, or None if current stack is empty.\n\nTip\n\nThe different between current_connection and acquire(reuse=True) is, the latter always produces a GinoConnection, while the former may not.\n\n## Executing Queries\u00b6\n\nOnce you have a GinoConnection instance, you can start executing queries with it. There are 4 variants of the execute method: all(), first(), scalar() and status(). They are basically the same: accepting the same parameters, calling the same underlying methods. The difference is how they treat the results:\n\n\u2022 all() returns all results in a list, which may be empty when the query has no result, empty but still a list.\n\u2022 first() returns the first result directly, or None if there is no result at all. There is usually some optimization behind the scene to efficiently get only the first result, instead of loading the full result set into memory.\n\u2022 scalar() is similar to first(), it returns the first value of the first result. Quite convenient to just retrieve a scalar value from database, like NOW(), MAX(), COUNT() or whatever generates a single value. None is also returned when there is no result, it is up to you how to distinguish no result and the first value is NULL.\n\u2022 status() executes the query and discard all the query results at all. Instead it returns the execution status line as it is, usually a textual string. Note, there may be no optimization to only return the status without loading the results, so make your query generate nothing if you don\u2019t want any result.\n\nBy \u201cresult\u201d, I meant RowProxy of SQLAlchemy - an immutable row instance with both tuple and dict interfaces. Database values are translated twice before they are eventually stored in a RowProxy: first by the database driver (dialect) from network payload to Python objects (see Type Conversion of how asyncpg does this), second by SQLAlchemy result_processor() depending on the actual type and dialect.\n\nThe arguments taken by these 4 methods are identical to the ones accepted by SQLAlchemy execute() (click to read more), usually a plain string of SQL directly or a SQLAlchemy query clause, followed by query parameters. In the case when multiple dictionaries are given to multiparams, all 4 methods will always return None discarding all results. Likewise, the parameter values are processed twice too: first by bind_processor() then the database driver.\n\nGINO also supports SQLAlchemy execution_options() provided either on engine level, connection level or on queries. At the moment we are working on being compatible with SQLAlchemy execution options. In the mean while, GINO provides several new execution options, for example enabling return_model and providing a model will make all() and first() return ORM model instance(s) instead of RowProxy instance(s). See also execution_options() for more information.\n\nIn addition, GINO has an iterate() method to traverse the query results progressively, instead of loading all the results at once. This method takes the same arguments as the other 4 execute methods do, and follows the same rule of data handling. For now with asyncpg, this creates a server-side cursor.\n\n## Implicit Execution\u00b6\n\nAcquire a GinoConnection and execute queries on it, that is the most explicit way. You can also execute queries on a GinoEngine instance. 
In this case, a connection will be acquired with reuse=True for you implicitly, and released after returning:\n\nawait engine.scalar('select now()')\n\n\nEquals to:\n\nasync with engine.acquire(reuse=True) as conn:\nawait conn.scalar('select now()')\n\n\nThis allows you to easily write connectionless code. For example:\n\nasync def get_now():\nreturn await engine.scalar('select now()')\n\nasync def main():\nasync with engine.acquire():\nnow = await get_now()\nawait engine.status('UPDATE ...')\n\n\nIn this example, main() will take only one raw connection. get_now() can also work alone out of any acquire() context, thanks to reuse.\n\nFurthermore, GINO provides the same query APIs on Gino directly. They are simply delegates to corresponding API methods on the bind. This allows even engine-less programming:\n\ndb = gino.Gino()\n\nasync def get_now():\nreturn await db.scalar('select now()')\n\nasync def main():\nasync with db.with_bind('postgresql:\/\/...'):\nnow = await get_now()\nawait db.status('UPDATE ...')\n\n\nNote\n\nIn this example we didn\u2019t put the two queries in an acquire() block, so they might be executed in two different connections.\n\nAt last, the SQLAlchemy implicit execution on queries also work in GINO, under an extension named gino:\n\nawait users_table.select().gino.all()\n\n\nBy default, the extension GinoExecutor is injected on Executable as a property of name gino at the creation of Gino instance. Therefore, any Executable object has the gino property for implicit execution. Similarly, the execution methods calls the corresponding ones on the bind of the db instance.","date":"2018-09-20 01:44:16","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.1942102015018463, \"perplexity\": 4092.636134579283}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-39\/segments\/1537267156314.26\/warc\/CC-MAIN-20180919235858-20180920015858-00431.warc.gz\"}"}
null
null
{"url":"http:\/\/math.stackexchange.com\/questions\/308075\/prove-that-a-simple-graph-with-2n-vertices-without-triangles-has-at-most-n2","text":"# Prove that a simple graph with $2n$ vertices without triangles has at most $n^2$ lines.\n\nProve that a simple graph with $2n$ vertices without triangles has at most $n^2$ lines.\n\nI've been struggling with this exercise for some time, but I can't come up with a decent proof.\n\n-\nWhat have you tried? Aside: It's clear you can reach $n^2$ because you can take the complete bipartite graph $K_{n,n}$. \u2013\u00a0 Thomas Andrews Feb 19 '13 at 14:08\n@ThomasAndrews Why would you take the complete bipartite graph $K_{n,n}$ ? \u2013\u00a0 Kasper Feb 19 '13 at 14:31\nI called it an aside because it doesn't pertain directly to your actual problem, but $K_{n,n}$ is a graph with $2n$ vertices and $n^2$ edges that does not have any triangles, so the upper bound of $n^2$ is reachable. \u2013\u00a0 Thomas Andrews Feb 19 '13 at 14:35\nUse whatever answer appears here: math.stackexchange.com\/questions\/308094\/\u2026 \u2013\u00a0 Benjamin Dickman Feb 19 '13 at 15:07\n\nHint: Show if two nodes have an edge between them, then the degrees of those two nodes must add up to at most $2n$.\n\nIf you remove those two nodes from the graph, you have a graph with $2(n-1)$ nodes and no triangles. From the first line, how many edges do you remove, at most? Proceed by induction.\n\n-\nFor the first line, if you add the degree of those nods $a,b$ and it's higher then $2n$, then there must be a nod $c$ connected to both $a$ and $b$. So then there is a triangle, right ? \u2013\u00a0 Kasper Feb 19 '13 at 15:10\n@Kasper Yes, that's the gist. \u2013\u00a0 Thomas Andrews Feb 19 '13 at 15:20\nSuppose it is true for $n-1$, then we get $2(n-1)$ nodes and at most $(n-1)^2$ lines. If we now add 2 extra nods, then we can't add more then $1+2(n-1)$ lines to this graph. In total we get $1+2(n-1)+(n-1)^2=(1+(n-1))^2=n^2$ lines. \u2013\u00a0 Kasper Feb 19 '13 at 19:23\nBingo, @Kasper. Exactly. Basically, if two nodes with an edge between them have a sum of degrees less than $2n$, then removing them removes $2n-1$ nodes at most. \u2013\u00a0 Thomas Andrews Feb 19 '13 at 19:26\nThere is a more general theorem, and Wikipedia has a non-inductive proof for it: en.wikipedia.org\/wiki\/Turan%27s_theorem \u2013\u00a0 Thomas Andrews Feb 19 '13 at 19:39\n\nI answered the other question, but here is a totally different kind of argument.\n\nFact: If $G$ has $n$ vertices and $m$ edges, then it has an independent set of size at least $n\/(D+1)$, where $D=2m\/n$ is the average degree.\n\nTo see how this implies your statement, consider switching the edges and non-edges of $G$ to get the graph complement. If this has an independent set of size $3$, $G$ had a triangle. On the other hand, if $G$ has at least $n^2 + 1$ edges, then, for the complement, $D < n - 1$, so $2n\/(D + 1) > 2$.\n\nA slick proof (I've heard it attributed to Ravi Boppana) of the fact is: uniformly shuffle the vertices of $G$ and then select each vertex that appears before all of its neighbors to get a subset $I$. 
This $I$ is an independent set and, since each vertex $v$ is first among its $d_v$ neighbors with probability $(d_v + 1)^{-1}$, the expected size of $I$ is $$\\sum_{i\\in V(G)} \\frac{1}{d_i + 1} \\ge \\frac{n}{D+1}$$ with the inequality coming from symmetry and convexity.\n\n-\n\nSee F.Harary, Graph Theory, 1969, chapt.2, \"Extremal graphs\"\n\n-","date":"2014-12-18 16:53:01","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9122068881988525, \"perplexity\": 243.45778713553506}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-52\/segments\/1418802767274.159\/warc\/CC-MAIN-20141217075247-00123-ip-10-231-17-201.ec2.internal.warc.gz\"}"}
null
null
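A worked version of the induction argument sketched in the hint and comments of the record above (Mantel's theorem), written out in LaTeX. The theorem/proof environment names are illustrative; the argument simply consolidates the steps already given in the thread.

\begin{theorem}[Mantel]
A triangle-free simple graph $G$ on $2n$ vertices has at most $n^2$ edges.
\end{theorem}
\begin{proof}
We induct on $n$. For $n=1$, a simple graph on $2$ vertices has at most $1 = 1^2$ edge.
Assume the claim holds for $n-1$, and let $G$ be triangle-free on $2n$ vertices.
If $G$ has no edges the bound is trivial, so pick an edge $uv$. Since $G$ is
triangle-free, no vertex is adjacent to both $u$ and $v$, hence
$\deg(u) + \deg(v) \le 2n$. The number of edges incident to $u$ or $v$ is therefore
$\deg(u) + \deg(v) - 1 \le 2n - 1$, the edge $uv$ having been counted twice.
Deleting $u$ and $v$ leaves a triangle-free graph on $2(n-1)$ vertices, which by
the induction hypothesis has at most $(n-1)^2$ edges. Therefore
$|E(G)| \le (n-1)^2 + (2n-1) = n^2$. The complete bipartite graph $K_{n,n}$
shows that the bound is attained.
\end{proof}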
\section{Introduction} Maritime history is the study of human activity at sea. It covers a broad thematic element of history, focusing on understanding humankind's various relationships to the oceans, seas, and major waterways of the globe \cite{hattendorf2012maritime}. A large area of research in this field requires the collection and integration of data coming from multiple and diverse historical sources, in order to perform qualitative and quantitative analysis of empirical facts and draw conclusions on possible impact factors \cite{fafalios2021FastCat,petrakis2021}. Consider, for instance, the real use case of the SeaLiT project (ERC Starting Grant in the field of maritime history)\footnote{\url{https://sealitproject.eu/}}, which studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s \cite{delis2020seafaring}. Historians in this project have collected a large number of archival documents of different types and languages, including crew lists, payrolls, sailor registers, naval ship register lists, and employment records, gathered from multiple authorities in different countries (more about this project in Sect.~\ref{subsec:sealit}). Complementary information about the same entity of interest, such as a ship, a port, or a captain, may exist in different archival documents. For example, for the same ship, one source may provide information about its owners, another source may provide construction details and characteristics of the ship (length, width, tonnage, horsepower, etc.), while other sources may provide information about the ship's voyages and crew. Information integration is crucial in this context for performing valid data analysis and drawing safe conclusions, such as finding answers to questions that require combining and aggregating information, like \textit{\q{finding the number of sailors per residence location that arrived at a specific port and who were crew members in ships of a specific type, e.g. Brig}}. Moreover, information integration under a common data model can produce data of high value and long-term validity that can be reused beyond a particular research activity or project, as well as integrated with other datasets by the wider (historical science) community. To this end, this paper describes the construction and use of the \textit{SeaLiT Ontology}. The ontology aims at facilitating a shared understanding of maritime history information by providing a common and extensible semantic framework for information modeling and integration. It uses and extends the CIDOC Conceptual Reference Model (CRM) (ISO 21127:2014)\footnote{\url{https://cidoc-crm.org/}} as a formal ontology of human activity, things and events happening in space and time \cite{doerr2003cidoc}. The ontology was designed considering requirements and knowledge of domain experts (a large group of maritime historians), expressed through research needs, inference processes they follow, and exceptions they make. It was developed in a bottom-up manner by analysing large and heterogeneous amounts of primary data, in particular archival documents of different types and languages gathered from authorities in several countries, including crew lists, payrolls, civil registers, sailor registers, naval ship registers, employment records, censuses, and others. 
All modeling decisions were validated by the domain experts and, in practice, by transforming their data (transcripts) to a rich semantic network based on the SeaLiT Ontology, which enables them (through a user-friendly interface) to find answers to information needs that require combining information from different sources. We describe the methodology and the steps we followed for designing the ontology, and provide its specification, RDFS and OWL implementations, as well as knowledge graphs that make use of the ontology for integrating data transcribed from a large and diverse set of archival documents. We also describe a data exploration application that operates over these knowledge graphs and which currently supports maritime historians in exploring and analysing the integrated data. Table \ref{tab:links} provides the key access links to the SeaLiT Ontology as well as related resources and information. \begin{table}[h] \begin{center} \caption{Key access links and information of the SeaLiT Ontology.} \label{tab:links} \vspace{-2mm} \begin{tabular}{ll} \toprule SeaLiT Ontology Specification & \url{https://zenodo.org/record/6797750} \\ DOI of the SeaLiT Ontology & 10.5281/zenodo.6797750\\ Namespace of the SeaLiT Ontology & \url{http://www.sealitproject.eu/ontology/} \\ SeaLiT Ontology RDFS (Turtle) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.ttl} \\ SeaLiT Ontology RDFS (RDF/XML) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.rdf} \\ SeaLiT Ontology OWL (RDF/XML) & \url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl} \\ \midrule SeaLiT Knowledge Graphs (KGs) & \url{https://zenodo.org/record/6460841} \\ DOI of SeaLiT KGs & 10.5281/zenodo.6460841 \\ ResearchSpace application over the KGs & \url{http://rs.sealitproject.eu/} \\ \midrule License of SeaLiT Ontology \& KGs & Creative Commons Attribution 4.0 \\ \bottomrule \end{tabular} \end{center} \end{table} The rest of this paper is organised as follows: Section~\ref{sec:background} describes the context of this work, provides the required background, and discusses related work. Section~\ref{sec:methodology} details the methodology and principles we have followed for building the ontology. Section~\ref{sec:ontology} presents the ontology, describes an example of how a part of the model was revised several times to incorporate new historical knowledge, and provides its specification as well as an RDFS and an OWL implementation. Section~\ref{sec:application} describes the application of the ontology in a real context. Section~\ref{sec:usage} discusses its usage and sustainability. Finally, Section~\ref{sec:conclusion} concludes the paper and outlines future work. \section{Context, Background and Related Work} \label{sec:background} \subsection{The SeaLiT Project} \label{subsec:sealit} The ontology has been developed in the context of the SeaLiT project\footnote{\url{https://sealitproject.eu/}}, a European project in the field of maritime history (ERC Starting Grant, No 714437). The project studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s. Historians in SeaLiT investigate the maritime labour market, the evolving relations among ship-owners, captains, crew, and local societies, and the development of new business strategies, trade routes, and navigation patterns, during the transitional period from sail to steam. 
The main concepts on which the scientific research focuses are the ships (including various information such as type, usage, dimensions, technology), the people related to the ships (sailors, ship owners, students, relatives) and the historical events/activities related to these (such as voyages, recruitments, payments). The archival sources considered and studied in SeaLiT range from handwritten ship log books, crew lists, payrolls and employment records, to registers of different types such as civil, sailors, students and naval ship registers. These archival sources have been gathered from different authorities in countries of the Mediterranean and the Black Sea, and are written in different languages, including Spanish, Italian, French, Russian, and Greek. The full archival corpus studied in SeaLiT is described on the project's web site.\footnote{\url{https://sealitproject.eu/archival-corpus}} \subsection{The ISO standard CIDOC-CRM} The SeaLiT Ontology uses and extends the CIDOC-CRM (Conceptual Reference Model)\footnote{\url{http://www.cidoc-crm.org/}}, in particular its stable version 7.1.1, which means that each class of the SeaLiT Ontology is a direct subclass or a descendant of a CIDOC-CRM class. CIDOC-CRM is a high-level, event-centric ontology of human activity, things and events happening in spacetime, providing definitions and a formal structure for describing the implicit and explicit concepts and relationships used in cultural heritage documentation \cite{doerr2003cidoc}. It is the international standard (ISO 21127:2014)\footnote{\url{https://www.iso.org/standard/57832.html}} for the controlled exchange of cultural heritage information, intended to be used as a common language for domain experts and implementers to formulate requirements for information systems, providing a way to integrate cultural heritage information from different sources. The considered stable release of CIDOC-CRM (version 7.1.1) consists of 81 classes and 160 unique properties. The highest-level distinction in CIDOC-CRM is represented by the top-level concepts of {\tt E77 Persistent Item} (equivalent to the philosophical notion of endurant), {\tt E2 Temporal Entity} (equivalent to the philosophical notion of perdurant) and, further, the concept of {\tt E92 Spacetime Volume} which describes the entities whose substance has or is an identifiable, confined geometrical extent in the material world that may vary over time. Fig.~\ref{fig:crm1} depicts how the high-level classes of CIDOC-CRM are connected. \begin{figure}[h] \centering \fbox{\includegraphics[width=0.8\textwidth]{figures/crm_mainClasses.png}} \vspace{-2mm} \caption{High level properties and classes of CIDOC-CRM.} \label{fig:crm1} \end{figure} \subsection{Related Work} Over the last few years, methods and technologies of the Semantic Web have started playing a significant and ever-increasing role in historical research. The survey in \cite{merono2015semantic} reviews the state of the art in the application of semantic technologies to historical research, in particular works related to i) knowledge modeling (ontologies, data linking), ii) text processing and mining, iii) search and retrieval, and iv) semantic interoperability (data integration, classification systems). 
As regards ontologies for the modeling of \textit{maritime history} information, the most relevant work is an ongoing project on the ontology management environment OntoME~\cite{beretta2021challenge} that aims to provide a data model for the field of maritime/nautical history.\footnote{\url{https://ontome.net/namespace/66}} The project is a cooperation between the Huygens Institute for the History of the Netherlands, LARHRA and the Data for History consortium. The current (draft) model consists of 13 classes and 12 properties, while it makes use of CIDOC-CRM as well as extensions of CIDOC-CRM. The ontology is unfinished and not for use yet (as of December 15, 2022). \textit{Conflict}\footnote{\url{http://ontologies.michelepasin.org/docs/conflict/index.html}} is an ontology developed in the context of the SAILS project (2010-2013)\footnote{\url{http://sailsproject.cerch.kcl.ac.uk/}} that models concepts useful for describing the First World War. The provided ontology version (0.1) is actually a \textit{taxonomy} consisting of 175 classes, some of which allow modeling information related to maritime history, like the classes {\tt Ship}, {\tt Ship\_journey}, {\tt Ship\_type}, and {\tt Ownership}. Similarly, there are ontologies that could be used for modeling other \textit{parts} of the model, such as \textit{GoodRelations}~\cite{hepp2008goodrelations}, a lightweight ontology for exchanging e-commerce information, for the part that concerns payments for products. We chose to use CIDOC-CRM because it is the standard ontology for cultural heritage documentation, extensively used in the fields of cultural heritage, history and archaeology. It is directly related to the domain of discourse of history, as a discipline that studies the life of humans and societies in the past. This scope, studied from the point of view of maritime historical research, can be represented by the abstraction of reality offered by CIDOC-CRM. As an example, we can directly take advantage of the (direct or inherited) properties of the CIDOC-CRM class {\tt E7 Activity}, such as \textit{\sq{P14 carried out by}}, \textit{\sq{P4 has time-span}}, \textit{\sq{P7 took place at}}, etc., and use them for describing instances of classes of the SeaLiT Ontology that are subclasses of {\tt E7 Activity} (e.g. {\tt Voyage}, {\tt Arrival}, {\tt Recruitment}, etc.). Therefore, using CIDOC-CRM facilitates data integration with relevant (existing or future) datasets that also make use of CIDOC-CRM, and it also enables data sustainability because CIDOC-CRM is a living standard and has a very active community that constantly works on it and improves it. Finally, there is a plethora of ontologies which have been developed as extensions of CIDOC-CRM, e.g. CRMas~\cite{niccolucci2017documenting} for documenting archaeological science, CRMgeo~\cite{hiebel2017crmgeo} for geospatial information, CRMdig~\cite{theodoridou2010modeling} for provenance of digital objects, IAM~\cite{doerr2011factual} for factual argumentation, and others. \section{Design Methodology and Principles} \label{sec:methodology} \subsection{Overall Methodology} The ontology has been created gradually, following a bottom-up strategy \cite{gandon2002distributed}, working with real empirical data and information needs, in particular digitised historical records (transcripts) and corresponding data structures in various forms, as well as research questions provided by a large group of historians. 
The archival material, together with the research questions, defines the modeling requirements. The main characteristics of our strategy are summarised as follows: \begin{itemize} \item Study and analysis of a large and diverse set of archival sources related to maritime history. This material provides historical information about ships, persons (such as sailors, captains, ship owners, students), and relevant activities and events (such as voyages, recruitments, payments, teaching activities). \item Gathering of research questions and corresponding information needs (\textit{competency questions}) for which the considered archival sources can provide answers or important relevant information. \item Lengthy discussions with a large group of maritime historians from different institutions and countries (Spain, Italy, France, Croatia, Greece), for consultation as well as for understanding the inference processes they follow and the exceptions they make. \end{itemize} \begin{table} \begin{center} \caption{Considered archival sources and type of recorded information.} \label{tab:archSources} \scriptsize \begin{tabular}{p{3.9cm}|p{10cm}} \toprule Archival source & Overview of recorded information and example transcript\\ \midrule Crew and displacement list (Roll) & ships (name, type, construction location, construction year, registry location, owners), ports of provenance, arrival ports, destination ports, crew members (name, father's name, birth place, residence location, profession, age), embarkation ports, discharge ports. \textbf{[example transcript: \url{https://tinyurl.com/4ukzezfe}]} \\ \midrule Crew List (Ruoli di Equipaggio) & ships (name, type, construction location, construction year, registry number, registry port, owners), voyages (date from/to, duration, total crew number), destinations, departure ports, arrival ports, crew members (name, residence location, birth year, serial number, profession), embarkation ports, discharge ports. \textbf{[example transcript: \url{https://tinyurl.com/2u35frya}]} \\ \midrule General Spanish Crew List & ships (name, type, tonnage, registry port), ship owners, crew members (name, age, residence location), voyages (date from/to, total crew number), embarkation ports, destinations. \textbf{[example transcript: \url{https://tinyurl.com/3axs6ret}]} \\ \midrule Sailors Register (Libro de registro de marineros) & seafarers (name, father's name, mother's name, birth date, birth place, profession, military service organisation locations). \textbf{[example transcript: \url{https://tinyurl.com/2p8kzm6n}]} \\ \midrule Register of Maritime Personnel & persons (name, father's name, mother's name, birth place, birth date, residence location, marital status, previous profession, military service organisation location). \textbf{[example transcript: \url{https://tinyurl.com/4v6hnwjj}]} \\ \midrule Seagoing Personnel & persons (name, father's name, marital status, birth date, profession, end of service reason, work status type), ships (name), destinations. \textbf{[example transcript: \url{https://tinyurl.com/2x5cu37n}]} \\ \midrule Naval Ship Register List & ships (name, type, tonnage, length, construction location, registration location, owner). \textbf{[example transcript: \url{https://tinyurl.com/bdhx87tr}]} \\ \midrule List of Ships & ships (name, previous name, type, registry port, registry year, construction place, construction year, tonnage, engine construction place, engine manufacturer, nominal power, indicated power, owners). 
\textbf{[example transcript: \url{https://tinyurl.com/2cphfpef}]} \\ \midrule Civil Register & persons (name, profession, origin location, age, sex, marital status, death location, death reason, related persons). \textbf{[example transcript: \url{https://tinyurl.com/bdzeja8n}]} \\ \midrule Maritime Register, La Ciotat & persons (name, birth date, birth place, residence location, profession, service sector), embarkation locations, disembarkation locations, ships (name, type, navigation type), captains, patrons. \textbf{[example transcript: \url{https://tinyurl.com/fkhyyp4a}]} \\ \midrule Students Register & students (origin location, profession, employment company, religion, related persons), courses (title, subject, date from/to, semester, total number of students). \textbf{[example transcript: \url{https://tinyurl.com/mryp6cbb}]} \\ \midrule Census La Ciotat & occupants (name, age, birth year, birth place, nationality, marital status, religion, profession, working organisation, household role, address). \textbf{[example transcript: \url{https://tinyurl.com/4dzfcbtt}]} \\ \midrule Census of the Russian Empire & occupants (name, patronymic, sex, age, marital status, estate, religion, native language, household role, occupation, address). \textbf{[example transcript: \url{https://tinyurl.com/43xczvux}]} \\ \midrule Payroll (of Greek Ships) & ships (name, type, owners), captains, voyages (date from/to, total days, days at sea, days at port, overall total wages, overall pension fund, overall net wage), persons (name, adult/child, literacy, origin location, profession/rank), employments (recruitment date, discharge date, recruitment location, monthly wage, total wage, pension fund, net wage). \textbf{[example transcript: \url{https://tinyurl.com/ztjk4jw7}]} \\ \midrule Payroll (of Russian Steam Navigation and Trading Company) & ships (name, owners), persons (name, patronymic, adult/child, sex, birth date, estate, registration place), recruitments (port, type of document, rank/specialisation, salary per month). \textbf{[example transcript: \url{https://tinyurl.com/y5urjhc9}]} \\ \midrule Employment records (Shipyards of Messageries Maritimes, La Ciotat) & workers (name, sex, birth year, birth place, residence location, marital status, profession, status of service in company, workshop manager). \textbf{[example transcript: \url{https://tinyurl.com/yc3havkc}]} \\ \midrule Logbook & ships (name, type, telegraphic code, tonnage, registry port, owners), captains, departure ports, destination ports, route movements, calendar event types. \textbf{[example transcript: \url{https://tinyurl.com/mrx2re9k}]} \\ \midrule Accounts Book & ships (name, type, owners), voyages, captains, departure ports, destination ports, ports of call, transactions (type, recording location, supplier, mediator, receiver). \textbf{[example transcript: \url{https://tinyurl.com/4uf3bye8}]} \\ \bottomrule \end{tabular} \end{center} \end{table} In more detail, our approach focused on studying and analysing the historical sources from the historians' perspective, following their respective research questions and practices of documentation. In order to achieve that, we had to consult all the data providers (coming from different research teams and countries) for a long period and to do extensive research on their research practices and the historical data for the development and the validation of the model. 
As a result, the model was designed from actual data values, from existing (and used) structured information sources (such as spreadsheets) and historical records (transcripts) that include the original information. The model's concepts were refined several times during the span of the project to accommodate new information coming from new kinds of sources. Table~\ref{tab:archSources} provides the considered archival sources as well as an overview of the recorded information and an example record (transcript) for each source.\footnote{A web application that allows exploring the data in the transcripts of these archival sources is available at: \url{https://catalogues.sealitproject.eu/}} As regards the research questions and information needs provided by the historians, most of them concern aggregated information, such as \textit{number of sailors per origin location that arrived at a specific port}, \textit{average tonnage of ships}, \textit{wage level per country}, \textit{percentages of immigration in relation to the sailors' profession}, etc. Other information needs concern the retrieval of a specific list of entities (e.g. \textit{ship construction places during a specific time period}), comparative information (e.g. \textit{time of sailors' service in relation to the time on land}, \textit{number of women/men in ships}, etc.), or the retrieval of a specific value (e.g. \textit{total number of officers employed by the company in a specific year or span of years}).\footnote{The full list of information needs is available at \url{https://users.ics.forth.gr/~fafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf}} For creating the ontology, we followed a custom engineering methodology~\cite{kotis2020ontology} which nonetheless maintains most of the features supported by existing methodologies, such as HCOME~\cite{kotis2006human} and DILIGENT~\cite{pinto2009ontology}. In particular: \begin{itemize} \item Data-driven / bottom-up processing (our strategy for the development of the ontology) \item Involvement of domain experts (maritime historians in our case) \item Iterative processing (gradual, highly-iterative ontology development) \item Collaborative engineering processing (within a small team of conceptual modeling experts) \item Validation and exploitation (validation by domain experts and application in a real context) \item Detailed versioning (multiple intermediate versions, currently in stable version 1.1) \end{itemize} \subsection{Design Steps and Principles} The basis for the model was CIDOC-CRM since it is a standard suitable for recording historical information relating who, when, where, and what. From an ontological point of view, we followed the steps below: \begin{enumerate} \item We have extended CIDOC-CRM by creating new classes as subclasses of CIDOC-CRM classes and defining properties accordingly (with some of them being subproperties of CIDOC-CRM properties). After extending or revising the model for a given type of archival source and corresponding information needs, we created mappings for transforming the data from the source schema to a semantic network (RDF triples) based on the designed (target) model. This conceptual alignment was an important step in the ontology development process, helping to redesign concepts and finalise the model. 
\item We distinguished the entities included in the existing schemata into those that directly or indirectly imply an \textit{event} and those that imply \textit{objects}, mobile or immobile, and classified them in abstraction levels according to whether they represent individuals or sets of individuals. We realised that most binary relationships acquire substance as temporal entities (e.g. \textit{has met}, \textit{has created}, etc.). This principle helped us to detect hidden events in the data structures. \item We classified the existing relations between the entities according to the abstraction level to which their domain and range entities belong, and created class and property hierarchies accordingly. We did not define the same property twice for different classes, but found the most general (super)class that the property applies to. The discovery of repeating properties for different classes suggested that they rely on a common, more general concept, causal to the ability to have such a relation in the first place. Finding the single most general concept to describe this common generalization allowed the creation of a general class to which the properties can be applied and from which these relations can be inherited by assigning the originally modelled classes as subclasses of the newly created generalization (as in the case of the classes {\em Money for Service} and {\em Legal Object Relationship}). \item We found classes for the relevant properties, and not properties for relevant classes (e.g. \textit{Voyage} for the property \sq{voyages}, \textit{Ship Construction} for \sq{constructed}, etc.). We detected the general classes of which each property is characteristic. In other words, we found the one most specific class that generalizes over all classes for which the property applies as domain or range. \item We defined concepts by finding their identity criteria, by distinguishing what is and what is not an instance of these concepts. We identified classes that exist independently of the property, and not \q{anything that has this property} (e.g. the case of the \textit{Service} concept). \item The set of classes and relationships developed can answer queries of a \textit{global} nature. By global queries we mean those that users would address to more than one database (source) at the same time in order to get a comprehensive answer, in particular including joins across databases. It should also be emphasised that the goal was not to model \sq{everything} but rather to model the necessary and well-understood concepts for this specific domain. \end{enumerate} The ontology was built following these principles. Its design and development were an iterative process with several repetitions of the steps described above. \section{The SeaLiT Ontology} \label{sec:ontology} We first provide an overview of the ontology (Sect.~\ref{subsec:ontOverview}), then we describe an ontology evolution example (Sect.~\ref{subsec:evolution}), and finally we present the specification of the ontology as well as RDFS and OWL implementations (Sect.~\ref{subsec:specAndRdfs}). \subsection{Ontology Overview} \label{subsec:ontOverview} The ontology currently (version 1.1) contains 46 classes, 79 properties and 4 properties of properties, allowing the description of information about \textit{ships}, \textit{ship voyages}, \textit{seafaring people}, \textit{employments} and \textit{payments}, \textit{teaching activities}, as well as a plethora of other related activities and characteristics. 
Appendices \ref{appendix:A} and \ref{appendix:B} provide the full class and property hierarchies, respectively. Fig.~\ref{fig:model_ship} shows how information about a \textit{ship} is modelled.\footnote{The classes whose name starts with the letter 'E' followed by a number are CIDOC-CRM classes (these are in green boxes in the figures). All others are classes of the SeaLiT Ontology (in blue boxes). Accordingly, all properties whose name starts with the letter 'P' followed by a number are properties of CIDOC-CRM, while all others are properties of the SeaLiT Ontology.} A {\tt Ship} (subclass of {\tt E22 Human-Made Object}) is the result of a {\tt Ship Construction} activity (subclass of {\tt E12 Production}) which gave the {\tt Ship Name} (subclass of {\tt E41 Appellation}) to the ship. A ship also has some characteristics, like {\tt Horsepower} and {\tt Tonnage} (subclasses of {\tt E54 Dimension}; this allows providing, apart from the value, the corresponding measurement unit, a note, etc.), and is registered through a {\tt Ship Registration} (subclass of {\tt E7 Activity}) by a {\tt Port of Registry} (subclass of {\tt E74 Group}), with a ship flag of a particular {\tt Country} (subclass of {\tt E53 Place}) and with a particular {\tt Ship ID} (subclass of {\tt E42 Identifier}). Modeling the ship ID as a class allows including additional information about the identifier, such as which authority provided the identifier, when, etc. (by connecting it to the CIDOC-CRM class {\tt E15 Identifier Assignment}). Finally, a ship has one or more {\tt Ship Ownership Phase}s (subclass of {\tt Legal Object Relationship}), each one initialized by a {\tt Ship Registration} and terminated by a {\tt De-flagging} activity. Note here that all classes related to activities (like {\tt Ship Construction}, {\tt Ship Repair}, {\tt De-flagging}, etc.) can make use of the CIDOC-CRM property {\em \sq{P4 has time-span}} for providing temporal information. \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_ship.png}} \vspace{-2mm} \caption{Modelling information about a ship.} \label{fig:model_ship} \end{figure} \begin{figure} \centering \fbox{\includegraphics[width=14cm]{figures/sealitOntology_voyage.png}} \vspace{-2mm} \caption{Modelling information about a ship voyage.} \label{fig:model_voyage} \end{figure} Fig.~\ref{fig:model_voyage} shows how information about a \textit{ship voyage} is modelled in the ontology. First, a {\tt Voyage} (subclass of {\tt E7 Activity}) concerns a particular Ship, navigated by one or more captains ({\tt E39 Actor}), and has a \textit{starting from} place, a \textit{destination} place, and a \textit{finally arriving at} place ({\tt E53 Place}). Then, the main activities during a ship voyage include {\tt Loading} things, {\tt Leaving} from a place, {\tt Passing} by or through a place, {\tt Arrival} at a place, and {\tt Unloading} things. All these activities are linked to an {\tt E52 Time-Span} through the CIDOC-CRM property {\em \sq{P4 has time-span}}. Fig.~\ref{fig:model_payments} shows how the ontology allows describing information about \textit{employments and payments}. {\tt Money for Service} (subclass of {\tt E7 Activity}) is given to an {\tt E39 Actor} for a particular {\tt Service} (subclass of {\tt E7 Activity}).\footnote{We use the term \sq{money} instead of \sq{payment}, because we want to indicate that there was a money transaction, e.g. 
using things).} The class {\tt Money for Service} has two specialisations (subclasses): {\tt Money for Things} and {\tt Money for Labour}, while the class {\tt Employment} is a specialisation of the class {\tt Service}. A {\tt Crew Payment} concerns a particular {\tt Voyage} and is a specialisation of {\tt Money for Labour}. In this context, a {\tt Labour Contract} (subclass of {\tt E29 Design or Procedure}) specifies the conditions of {\tt Money for Labour}. An {\tt Employment} starts with a {\tt Recruitment} (subclass of {\tt E7 Activity}) and ends with a {\tt Discharge} (subclass of {\tt E7 Activity}). \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_payments.png}} \vspace{-2mm} \caption{Modelling information about employments and payments.} \label{fig:model_payments} \end{figure} Fig.~\ref{fig:model_persons} shows how information about \textit{persons} (seagoing people, such as captains, crew members, students, etc.) is modelled in the ontology. A person ({\tt E21 Person}) is registered through a {\tt Civil Registration} activity and receives an identifier ({\tt E42 Identifier}). A person has a first name and last name ({\tt E62 String}), works at an organisation or company ({\tt E74 Group}), has an age ({\tt E60 Number}) at a specific time (the time of the information recording), as well as a set of other properties, in particular a {\tt Religion Status}, a {\tt Literacy Status}, a {\tt Sex Status}, a {\tt Language Capacity}, a {\tt Social Status}, and a {\tt Profession} (all subclasses of {\tt E55 Type}). The use of {\tt E55 Type} as superclass of these properties/qualities (instead of modeling them as temporal entities) is a good solution when the sources (such as a civil register or a census document) do not provide enough temporal information to infer/observe the corresponding event (this is exactly the case with the archival sources of the SeaLiT project). In addition, a {\tt Punishment} (subclass of {\tt E7 Activity}) or {\tt Promotion} (subclass of {\tt E13 Attribute Assignment}) can be given to a person. A {\tt Promotion} is related either to a {\tt Social Status} promotion or to a job/career ({\tt Profession}) promotion. \begin{figure} \centering \fbox{\includegraphics[width=15cm]{figures/sealitOntology_persons.png}} \vspace{-2mm} \caption{Modelling information about persons.} \label{fig:model_persons} \end{figure} Finally, Fig.~\ref{fig:model_teaching} shows how the ontology allows describing information about teaching activities related to seafaring. A {\tt Teaching Unit} is an activity that can be specialised to {\tt Course} or {\tt Section}. It is connected to a {\tt Subject} (subclass of {\tt E55 Type}), the students ({\tt E39 Actor}) who participated in the teaching unit, the number of participating students ({\tt E60 Number}), as well as one or more other teaching units through the CIDOC-CRM property {\em \sq{P9 consists of}}. The latter allows, in particular, describing the information that a course consists of sections. \begin{figure}[t] \centering \fbox{\includegraphics[width=12.5cm]{figures/sealitOntology_teaching.png}} \vspace{-2mm} \caption{Modelling information about teaching activities.} \label{fig:model_teaching} \end{figure} \subsection{Ontology Evolution Example} \label{subsec:evolution} The ontology development process lasted more than two years, including a large number of intermediate versions, before releasing the first \textit{stable} version (1.0). 
In particular, the ontology elements (classes and properties) were revised several times based on (a) new evidence coming from newly-considered archival sources, and (b) new requirements (information needs) by the domain experts (maritime historians). Such new evidence and requirements required either the definition of new elements, such as the creation of a new class or property, or the revision of an existing set of elements that concern a part of the model. Fig.~\ref{fig:evolution} shows how the part of the ontology that concerns \textit{ship ownership} was revised several times during the ontology development process. A first requirement provided by the historians was the ability to find all ships per owner. The analysed archival material (\textit{crew lists}) only provided the name of the owner, where the value was either the name of a person or the name of a company. Based on this evidence, the property {\em \sq{has owner}} was created connecting an instance of {\tt Ship} with an instance of the CIDOC-CRM class {\tt E39 Actor} (v1 in Fig.~\ref{fig:evolution}). Another source (\textit{naval ship register lists}) provided information about ships' previous owners, while a new requirement was the ability to find the number of first owners per ship during a period of time. Based on this, as well as on the fact that the binary relationship \textit{has owner} implies/hides a temporal entity, we defined the class {\tt Ship Ownership Phase}, the property {\em \sq{has phase}} for connecting a ship to a ship ownership phase, the property {\em \sq{in time}} for connecting the ownership phase to an {\tt E52 Time-Span}, while the property {\em \sq{has owner}} was revised for connecting the ship ownership phase with an {\tt E39 Actor} (v2 in Fig.~\ref{fig:evolution}). A ship can have many names during its lifespan, while an owner can own more than one ship with the same name (as shown in \textit{logbooks} and \textit{crew and displacement lists}). According to the historians, ownership usually assigns a name to a ship and a ship changes its name under a new ownership state at a specific time. Based on this historical knowledge, the property {\em \sq{ownership under name}} was created to enable linking the ship ownership phase to a {\tt Ship Name} (v3 in Fig.~\ref{fig:evolution}). Evidence shows that ownership of a ship is a type of information that can be inferred and not directly observed. An ownership phase can be traced by the \textit{ship registration} activity that initiates it and by the \textit{de-flagging} activity that terminates it. The documentation of a ship registration in \textit{Austrian Lloyd's fleet lists}, in particular, includes information about the ship's construction place and date, which together with the name given to the ship after construction constitute safe criteria to identify a ship. 
Based on this, the classes {\tt Ship Registration} (subclass of {\tt E7 Activity}), {\tt De-flagging} (subclass of {\tt E7 Activity}) and {\tt Ship Construction} (subclass of {\tt E12 Production}) were defined, together with the properties {\em \sq{registers}} (for linking a registration activity to a ship), {\em \sq{ownership is initialized by}} (for linking an ownership phase to a registration activity), {\em \sq{de-flagging of}} (for linking a de-flagging activity to a ship), {\em \sq{ownership is terminated by}} (for linking an ownership phase to a de-flagging activity), {\em \sq{constructed}} (for linking a construction activity to a ship), and {\em \sq{under name}} (for linking a construction activity to a ship name) (v4 in Fig.~\ref{fig:evolution}). The ownership of a ship is actually a legal agreement in which an owner holds shares. For example, according to Italian sources (\textit{maritime registers}), the ownership of a ship was structured in 24 parts (\q{carati}). Sometimes only one ship owner possessed all 24 parts. However, much more frequently the 24 parts were distributed among several ship owners. Based on this evidence, a new class {\tt Shareholding} was created as a specialisation (subclass) of {\tt Ship Ownership Phase}, together with the property {\em \sq{of share}} for assigning the number of shares to a shareholding phase (v5 in Fig.~\ref{fig:evolution}). In the last ontology version (see Fig.~\ref{fig:model_ship}), {\tt Ship Ownership Phase} is defined as a specialisation (subclass) of the class {\tt Legal Object Relationship}, together with the class {\tt Legal Document with Temporal Validity} which comprises official documents or legal agreements that are valid for a specific time-span. The more general class {\tt Legal Object Relationship} represents kinds of relationships whose state and time-span are not documented and thus cannot be directly observed. We can only observe the relationship through the events that initialise or terminate the state (starting and terminating events). \begin{figure}[t] \centering \fbox{\includegraphics[width=16.0cm]{figures/ModelPartEvolution.jpg}} \vspace{-5mm} \caption{Ontology evolution example for modeling ship ownership information.} \label{fig:evolution} \end{figure} \subsection{Specification, RDFS and OWL Implementation} \label{subsec:specAndRdfs} The specification of the ontology and its RDFS implementation are available through the Zenodo repository (DOI: {\tt 10.5281/zenodo.6797750})\footnote{\url{https://zenodo.org/record/6797750}}, under a Creative Commons Attribution 4.0 license. The (resolvable) namespace of the ontology pointing to the RDFS implementation is: \url{http://www.sealitproject.eu/ontology/}. The specification document defines the ontology classes and properties. For each class, it provides: i)~its superclasses, ii)~its subclasses (if any), iii)~a scope note (a textual description of the class's intension), iv)~one or more examples of instances of this class, and v)~its properties (if any), each one represented by its name and the range class that it links to. For each property, the specification provides: i)~its domain, ii)~its range, iii)~its superproperties (if any), iv)~its subproperties (if any), v)~a scope note, vi)~one or more examples of instances of this property, and vii)~its properties (if any). If a property has an inverse property, this is provided in parentheses next to the property name. 
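To make the shape of the released files concrete, the following Turtle fragment sketches how a class and a property entry might be rendered in the RDFS implementation; the scope-note texts are abbreviated placeholders, and the exact wording and ordering in the released \texttt{.ttl} file may differ.
\small
\begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
@prefix sealit: <http://www.sealitproject.eu/ontology/> .
@prefix crm:    <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .

sealit:Ship a rdfs:Class ;
    rdfs:subClassOf crm:E22_Human-Made_Object ;
    rdfs:label "Ship" ;
    rdfs:comment "Abbreviated scope note: vessels used for navigation ..." .

sealit:voyages a rdf:Property ;
    rdfs:domain sealit:Ship ;
    rdfs:range sealit:Voyage ;
    rdfs:label "voyages" ;
    rdfs:comment "Abbreviated scope note: associates a ship with a voyage ..." .
\end{Verbatim}
\normalsize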
Scope notes are not formal modelling constructs, but are provided to help explain the intended meaning and application of a class or property. They refer to a conceptualisation common to domain experts (maritime historians) and disambiguate between different possible interpretations. The RDFS implementation provides the scope note of each class or property using {\em \sq{rdfs:comment}}. For producing the class and property URIs, the space character in the name of a class or property is replaced by the underscore character. Inverse properties are provided using {\em \sq{owl:inverseOf}}. The version of the ontology is provided through the property {\em \sq{owl:versionInfo}} and its license through the Dublin Core term {\em \sq{dc:license}}. For the properties pointing to classes that are represented as literals in RDF (seven properties in total, pointing to the CIDOC-CRM classes {\tt E60~Number} or {\tt E62~String}), we define their range as {\tt rdfs:Literal}. We also provide an OWL implementation of the ontology, containing 71 object properties, 7 datatype properties and 1 symmetric property (the property \textit{\sq{related to}}).\footnote{\url{https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl}} Since RDF does not provide a direct way to express properties of properties, we make use of \textit{property classes} (as suggested and implemented by CIDOC-CRM), as a reification method for encoding the four properties of properties defined in the SeaLiT Ontology. Using this method, a class is created for each property having a property. This property class can then be instantiated and used together with the properties {\em \sq{P01~has~domain}} and {\em \sq{P02~has~range}} provided by the RDFS implementation of CIDOC-CRM.\footnote{\url{https://cidoc-crm.org/rdfs/7.1.1/CIDOC_CRM_v7.1.1_PC.rdf}} For example, Fig.~\ref{fig:propOfProp} depicts how the property {\em \sq{in the role of}} of the property {\em \sq{works~at}} is implemented using the idea of property classes. First, the property class {\tt PC~works~at} is provided for representing the property {\em \sq{works~at}}. During data generation/instantiation, an instance of this property class is created pointing to the domain (an instance of {\tt E21~Person}) and the range (an instance of {\tt E74~Group}) of the original property {\em \sq{works~at}} using the properties {\em \sq{P01~has~domain}} and {\em \sq{P02~has~range}}, respectively. Then, we can provide the property of property {\em \sq{in the role of}} by directly linking it to the property class instance. \begin{figure}[h] \centering \fbox{\includegraphics[width=12.5cm]{figures/propOfProp.png}} \vspace{-2mm} \caption{Representing a property of property in RDF using a property class.} \label{fig:propOfProp} \end{figure} \section{Application} \label{sec:application} \subsection{SeaLiT Knowledge Graphs} The SeaLiT Ontology has been used in the context of the SeaLiT project (cf.~Section~\ref{subsec:sealit}) for transforming the data transcribed from a set of disparate, localised information sources of maritime history to a rich and coherent semantic network of integrated data (a \textit{knowledge graph}). The objective of this transformation is the ability to run complex questions over the integrated data, like those provided by the historians that require combining information from more than one source. 
In particular, the original archival documents are collaboratively transcribed and documented by historians in tabular form (similar to spreadsheets) using the FAST CAT system~\cite{fafalios2021FastCat}. In FAST CAT, data from different sources are transcribed as \textit{records} belonging to specific \textit{templates}. A \textit{record} organises the data and metadata of an archival document in a set of tables, while a \textit{template} represents the structure of a single data source, i.e. it defines the data entry tables. Currently, more than 600 records have already been created and filled in FAST CAT by historians of SeaLiT. An example of a record for each different type of source (template) is provided in Table~\ref{tab:archSources}. For transforming the transcribed data to RDF based on the SeaLiT Ontology, schema mappings are created for each distinct FAST CAT template. These mappings define how the data elements of the FAST CAT records (e.g. the columns of a table) are mapped to ontology classes and properties. To create the schema mappings and run the transformations, we make use of the X3ML mapping definition language and framework~\cite{marketakis2017x3ml}. The transformed data (RDF triples) are then ingested into a semantic repository (RDF triplestore) which can be accessed by external applications and services using the SPARQL language and protocol. The ResearchSpace application (described below) operates over such a repository for supporting historians in searching and analysing quantitatively the integrated data. The reader can refer to \cite{fafalios2021FastCat} for more information about the FAST CAT system and the data transcription, curation and transformation processes. The generated knowledge graphs are available through the Zenodo repository (DOI: 10.5281/zenodo.6460841)\footnote{\url{https://zenodo.org/record/6460841}}, under a Creative Commons Attribution 4.0 license. This dataset currently consists of more than 18.5M triples, providing integrated information for about 3,170 ships, 92,240 persons, 935 legal bodies, and 5,530 locations. These numbers might change in a future version since data curation, including instance matching, is still ongoing and new archival documents are still being transcribed in FAST CAT. \subsection{ResearchSpace Application} For supporting historians in exploring the SeaLiT Knowledge Graphs (and thus the integrated data), we make use of ResearchSpace~\cite{oldman2018reshaping}, an open source platform that offers a variety of functionalities, including a \textit{query building} interface that supports users in gradually building complex queries through an intuitive (user-friendly) interface. The results can then be browsed, filtered, or analysed quantitatively through different visualisations, such as bar charts. The application is accessible at: \url{http://rs.sealitproject.eu/}. The query building interface of ResearchSpace has been configured for the case of the SeaLiT Knowledge Graphs. In particular, the following searching categories have been defined: \textit{Ship, Person, Legal Body, Crew Payment, Place, Voyage, Course, Record, Source}. By selecting a category (e.g. \textit{Ship}) the user is shown a list with its connected categories. By selecting a connected category (e.g. \textit{Place}) the user can then select a property connecting them (e.g. \textit{arrived at}) as well as an instance/value (e.g. \textit{Marseille}; thus the user is searching for ships that arrived at Marseille). 
Such a property actually corresponds to a path in the knowledge graph that connects instances of the selected categories. \begin{figure} \centering \fbox{\includegraphics[width=15.5cm]{figures/rs.png}} \vspace{-1mm} \caption{Query building and visualisation of results in the ResearchSpace application.} \label{fig:rs} \end{figure} Fig.~\ref{fig:rs} shows a screen dump of the system. In this example, the user has searched for \textit{persons that were crew members at ships that arrived at Marseille,}\footnote{ResearchSpace link to the query: \url{https://tinyurl.com/2p8ky96e}} and has selected to group the persons by their \textit{residence location} and visualise the result in a bar chart. From the bar chart we see that the majority of persons had \textit{Camogli} as their residence location. This query corresponds to a real information need provided by the historians of SeaLiT. For retrieving the results and creating the chart, ResearchSpace internally translates the user interactions to SPARQL queries that are executed over the SeaLiT Knowledge Graphs. For instance, the SPARQL query below retrieves the persons that were crew members at ships that had \textit{Marseille} as their final destination: \small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt] PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/> PREFIX sealit: <http://www.sealitproject.eu/ontology/> SELECT DISTINCT ?person WHERE { ?ship sealit:voyages ?voyage . ?voyage sealit:finally_arriving_at <https://rs.sealitproject.eu/kb/location/Marseille> ; crm:P14_carried_out_by ?person } \end{Verbatim} \normalsize \noindent For grouping the persons by their residence location and showing a chart, the SPARQL query below is executed for retrieving the relevant data: \small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt] PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/> PREFIX sealit: <http://www.sealitproject.eu/ontology/> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT DISTINCT ?location ?locationName (COUNT(?person) AS ?numOfPersons) WHERE { ?ship sealit:voyages ?voyage . ?voyage sealit:finally_arriving_at <https://rs.sealitproject.eu/kb/location/Marseille> ; crm:P14_carried_out_by ?person . ?person crm:P74_has_current_or_former_residence ?location . ?location rdfs:label ?locationName . } GROUP BY ?location ?locationName ORDER BY ?locationName \end{Verbatim} \normalsize Such queries can also utilise the RDFS inference rules, e.g. those based on the \textit{subClassOf} and \textit{subPropertyOf} relations. An example is the use of the CIDOC CRM property \textit{\sq{P9 consists of}} for getting all voyage-related activities of a particular ship (leaving from a place, arrival at a place, passing by or through a place, loading things, unloading things), as shown in the SPARQL query below: \small \begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt] PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/> PREFIX sealit: <http://www.sealitproject.eu/ontology/> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT DISTINCT ?activity ?activityName WHERE { <SHIP-URI> sealit:voyages ?voyage . ?voyage crm:P9_consists_of ?activity . ?activity rdfs:label ?activityName } \end{Verbatim} \normalsize In this case, we exploit the fact that the property \textit{\sq{P9 consists of}} is a super-property of the properties \textit{\sq{consists of leaving}}, \textit{\sq{consists of arrival}}, \textit{\sq{consists of passing}}, \textit{\sq{consists of loading}}, and \textit{\sq{consists of unloading}}. 
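As a further illustration, the competency question mentioned in the introduction (number of sailors per residence location that arrived at a specific port and served on ships of type \textit{Brig}) could be sketched as follows. The path used for the ship type (\texttt{crm:P2\_has\_type} with an \texttt{rdfs:label} of \q{Brig}) is an assumption made for illustration; the deployed application may use a different path for ship types.
\small
\begin{Verbatim}[frame=lines,numbers=left,numbersep=1pt]
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealit: <http://www.sealitproject.eu/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?location (COUNT(DISTINCT ?person) AS ?numOfSailors) WHERE {
  ?ship sealit:voyages ?voyage ;
        crm:P2_has_type ?shipType .
  ?shipType rdfs:label "Brig" .
  ?voyage sealit:finally_arriving_at <https://rs.sealitproject.eu/kb/location/Marseille> ;
          crm:P14_carried_out_by ?person .
  ?person crm:P74_has_current_or_former_residence ?location .
}
GROUP BY ?location
\end{Verbatim}
\normalsize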
The type of historians' research questions / information needs that can be answered (either directly or indirectly) using the ResearchSpace platform over the integrated data mainly depends on the actual archival material that is transcribed and transformed to RDF based on the SeaLiT Ontology, and less on the ontology itself. Specifically, the ontology was designed considering community requirements and material evidence; therefore, if the data needed to answer an information need (or to find important information related to the information need) exists in the transcripts (and thus in the transformed data), then the question can be answered either fully, or partially through the retrieval of important relevant information. For example, in the case of SeaLiT, there are transcripts (FAST CAT records) containing tables that are not fully filled, either because some archival documents do not provide the corresponding information, or just because historians did not fill in the columns during data transcription (planning to do it at a later stage). In this case, information needs that require this missing information cannot be satisfied. In the future, if new types of information (and corresponding information needs) appear that cannot be modelled by the ontology, the ontology will be extended/revised and a new version will be released. With respect to incomplete information, missing entity attributes (e.g. unknown construction location for a particular ship) are in general very common in historical-archival research, but at the same time important for historians to know about, because missing values can affect the interpretation of quantitative analysis results. Our configuration of ResearchSpace considers missing information by representing it as an \sq{unknown} value, e.g. by showing an \sq{unknown} column in a bar chart. \section{Usage and Sustainability} \label{sec:usage} As already stated, the ontology has been created and used in the context of the SeaLiT project for transforming data transcribed from archival documents of maritime history to a rich semantic network. The integrated data of the semantic network allows a large group of maritime historians to perform quantitative and qualitative analysis of the transcribed material (through the user-friendly interface provided by the ResearchSpace platform) and find important information relevant to their research needs. A continuation of the relevant activities is expected after the end of the SeaLiT project through the close collaboration of the two involved institutions of the Foundation for Research and Technology - Hellas (FORTH): the Institute of Mediterranean Studies (coordinator of SeaLiT) and the Institute of Computer Science (data engineering partner in SeaLiT). In particular, the ontology will be extended as soon as a new type of archival material needs to be transcribed and integrated into the SeaLiT Knowledge Graphs. The long-term sustainability of the ontology is assured through our participation in relevant communities, in particular the CIDOC-CRM SIG\footnote{\url{https://www.cidoc-crm.org/sig-members}} and the Data for History Consortium\footnote{\url{http://dataforhistory.org/members}}, an international consortium aiming at establishing a common method for modelling, curating and managing data in historical research. There is already interest in using (and probably extending) the ontology in the context of other (ongoing) projects in the field of historical/archival research. 
In addition, the part of the model concerning employments and payments is being considered for the creation of a new CIDOC-CRM family model on social transactions and bonds (there are relevant discussions on this in the CIDOC-CRM Special Interest Group; see issues 420 and 557\footnote{\url{https://cidoc-crm.org/issue_summary}}). \section{Conclusion} \label{sec:conclusion} We have presented the construction and use of the SeaLiT Ontology, an extension of CIDOC-CRM for the modeling and integration of data in the field of maritime history. The ontology aims at facilitating a shared understanding of maritime history information, by providing a common and extensible semantic framework (a \textit{common language}) for evidence-based information integration. We provide the specification of the ontology, an RDFS and an OWL implementation, as well as knowledge graphs that make use of the ontology for integrating a large and diverse set of archival documents into a rich semantic network. We have also presented a real-working application (ResearchSpace deployment) that operates on top of the knowledge graphs and which supports maritime historians in exploring and analysing the integrated data through a user-friendly interface. In the near future, we plan to a) investigate possible extensions of the ontology based on new data modeling requirements, b) improve the scope notes of classes and properties in the specification document and add more examples (and then provide a new ontology version), and c) create and make available a JSON-LD context of the ontology for use in Web-based programming environments. \subsection*{Acknowledgements} This work has received funding from the European Union's Horizon 2020 research and innovation programme under i) the Marie Sklodowska-Curie grant agreement No 890861 (Project ReKnow), and ii) the European Research Council (ERC) grant agreement No 714437 (Project SeaLiT). \bibliographystyle{ACM-Reference-Format}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,434
Ragout (from French ragoûter, "to revive the appetite or taste", from goût, "taste") is a dish of stewed meat, poultry, game or fish with vegetables in a gravy. There are two main types of ragout: brown and white. Brown ragout is prepared with meat that is first browned in fat with flour, while white ragout is made from meat without prior browning.

Differences in the meaning of ragout across world cuisines

In a broader sense, ragout can describe any foods that are cut up and stewed as a single dish, or served as an accompaniment in the form of a sauce. Examples include the French ratatouille from Provence and fricassee (fricassée), the Polish bigos, the Italian Bolognese sauce (ragù alla Bolognese) and ragù alla Napoletana, the Hungarian pörkölt, the Ukrainian dushenyna (pot roast), and other dishes popular across Europe.

Example recipes

Vegetable ragout
For 400–500 g of potatoes: 400–500 g of other vegetables, 1 tablespoon of flour, 2 tablespoons of fat.
Depending on the season, various vegetables can be used: carrots, turnips, swede, cabbage (regular or cauliflower), green beans, onions and potatoes. Cut the cleaned, washed vegetables into large cubes; leave small onions whole. Braise the carrots, turnips and swede; boil the cabbage and beans in water; fry the potatoes and onions in fat. Separately, brown the flour in a frying pan, dilute it with the stock from the braised or boiled vegetables, add finely chopped tomatoes and bring to a boil. Pour this sauce over the prepared vegetables combined in one pot, add salt, pepper, a bay leaf, 3–4 cloves and a piece of cinnamon, cover the pot with a lid and simmer for 15–20 minutes. When serving, sprinkle the vegetables with chopped parsley.

Vegetable ragout with beans
For 1 cup of beans: 500 g of vegetables and potatoes, 1 tablespoon of flour, 2 tablespoons of fat.
Prepared in the same way as the plain vegetable ragout, but with boiled beans added.

Ukrainian-style ragout
In Ukraine, ragout (at least under that name) is mainly a dish of urban cuisine. Ragout is almost always understood to mean a meat ragout with vegetables. On the whole, the Ukrainian approach is close to the way ragout is prepared in Irish cuisine: all ingredients are usually taken in equal proportions. In Ukrainian cuisine, ragout is also essentially a seasonal dish. The traditional ingredients of a Ukrainian-style meat-and-vegetable ragout are meat (which may be browned beforehand, as in Polish cooking), potatoes, zucchini and carrots, and less often eggplants, bell peppers, mushrooms and the like. If a single ingredient predominates, or if other ingredients are used (say, tomatoes, onions or various sauces), the result is usually a dish with its own name, distinct from ragout. In other words, the essence of Ukrainian ragout remains meat stewed with vegetables in their own juices.

See also
Irish stew
Phaksha paa
Salteña

Footnotes

Literature
Куховарська книга // Державне видавництво технічної літератури України. — Київ, 1951. — С. 107.

External links
Рагу з карі // Збірник рецептур страв для харчування дітей шкільного віку в організованих освітніх та оздоровчих закладах / за ред. Є. Клопотенко. — Львів: Літопис, 2019. — С. 170. — 284 с.
Печеня «Свинина в горщечку з грибами»

Dishes
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,669
Q: Suddenly PDF export code does not work any more. Is there something wrong with my code? I have code for exporting a PDF to a specific folder on SharePoint. The code always worked fine until recently, and I can't figure out why. In the code I have two strings for the path that I combine. If I cut out part of the path, it works. So it makes me think there is a problem with the path. I checked plenty of times that the path is correct and can't see that it has changed. I get Run-time error 1004: application-defined or object-defined error.

If I change Filename:=newpath3 to Filename:=newpath1, it exports the PDF. So is there something wrong with newpath2? I checked with MsgBox and can't find any mistake in the total path.

Private Sub CommandButton1_Click()
    Dim newpath1 As String
    Dim newpath2 As String
    Dim newpath3 As String

    newpath1 = Left(ActiveWorkbook.Path, 66)
    newpath2 = "99%20Vedlegg%20til%20faktura"
    newpath3 = newpath1 & newpath2

    ActiveSheet.ExportAsFixedFormat _
        Type:=xlTypePDF, _
        Filename:=newpath3 & "/" & "test", _
        Quality:=xlQualityStandard, _
        OpenAfterPublish:=True
End Sub

A: Individual components of a filename (i.e., each subdirectory along the path, and the final filename) are limited to 255 characters, and the total path length is limited to approximately 32,000 characters. However, on Windows you can't exceed the MAX_PATH value (259 characters for files, 248 for folders). See http://msdn.microsoft.com/en-us/library/aa365247.aspx for full details.

My guess is that you've recently moved the file into a new subfolder. It could also be an invalid filename. It's hard to say without knowing what your ActiveWorkbook.Path is.

Here's my PDF export function:

Dim v As Variant
Dim Fname As String
Dim PdfFile As String

'Remove invalid characters from the filename
Fname = "test" 'Build your filename here
For Each v In Array("/", "\", "|", ":", "*", "?", "<", ">", """")
    Fname = Replace(Fname, v, "_")
Next

'Export the active sheet as PDF next to the workbook itself
PdfFile = ActiveWorkbook.Path & "\" & Fname & ".pdf"
With ActiveSheet
    .ExportAsFixedFormat Type:=xlTypePDF, Filename:=PdfFile, Quality:=xlQualityStandard, IncludeDocProperties:=True, IgnorePrintAreas:=False, OpenAfterPublish:=False
End With

This'll output the PDF into the same location as your Excel file.
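One quick way to rule the length limit in or out is to count the characters in the final path. Below is a minimal sketch (in Python, with a made-up SharePoint path standing in for the unknown ActiveWorkbook.Path; VBA's Len function gives you the same count):

MAX_PATH = 260  # classic Windows limit, terminating character included

# Hypothetical stand-ins; substitute your real values
newpath1 = "https://company.sharepoint.com/sites/Finance/Shared%20Documents/"  # ~ Left(ActiveWorkbook.Path, 66)
newpath2 = "99%20Vedlegg%20til%20faktura"
full_path = newpath1 + newpath2 + "/" + "test.pdf"

print(len(full_path), "characters")
if len(full_path) >= MAX_PATH:
    print("Over MAX_PATH - shorten the folder or file name")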
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,884
{"url":"https:\/\/blogs.mathworks.com\/steve\/2007\/03\/23\/pad-values-in-dilation-and-erosion\/","text":"# Pad values in dilation and erosion4\n\nPosted by Steve Eddins,\n\nBlog reader DKS asked recently why values outside the image are assumed to be -Inf when computing dilation. I thought this issue was worth exploring further because it has practical implications for certain computations.\n\nf = [22 23 15 16; 24 25 14 15; 20 18 17 23; 19 16 15 20]\nf =\n\n22 23 15 16\n24 25 14 15\n20 18 17 23\n19 16 15 20\n\n\n\nNow think about computing the erosion with a 3-by-3 flat structuring element. What's the output at the upper left corner? It's the minimum of these 9 values:\n\n?? ?? ??\n?? 22 23\n?? 24 25\n\nSo what do we use for the unknown values outside the image boundary? Suppose we zero pad:\n\n 0 0 0\n0 22 23\n0 24 25\n\nThen the (1,1) output value is 0. In fact, if the image pixels are nonnegative, which is common, there will be a zero-valued one-pixel-wide border all the way around the edge of the output image. This is called a boundary artifact and is undesirable. We could avoid this problem by simply excluding external values in our computation of the minimum. A mathematical equivalent for erosion is to assume external values all equal some constant that is guaranteed to be greater than or equal to all image pixels - like Inf:\n\nInf Inf Inf\nInf 22 23\nInf 24 25\n\nTo avoid boundary artifacts when performing dilation, pad with -Inf:\n\n-Inf -Inf -Inf\n-Inf 22 23\n-Inf 24 25\n\nMost of the time you don't need to explicitly use the Inf values. One type of exception occurs when the structuring element doesn't include the center element. Then sometimes the Inf values can appear in the output.\n\nimdilate(f, [1 0 0])\nans =\n\n23 15 16 -Inf\n25 14 15 -Inf\n18 17 23 -Inf\n16 15 20 -Inf\n\n\n\nSimilarly, most of the time you don't actually need to explicitly pad the image. The exception is when you are computing a sequence of dilations (or erosions). In the Image Processing Toolbox this frequently occurs because imdilate and imerode exploit ''structuring element decomposition.'' That is, a large structuring element is decomposed into two or more smaller structuring elements that are mathematically equivalent to the original.\n\nHere's a contrived example to demonstrate why you need to pad explicitly to make the mathematical equivalence work out.\n\nStructuring element 1 translates an image to the left by 2 pixels.\n\nse1 = [1 0 0 0 0];\n\nStructuring element 2 translates an image to the right by 1 pixel.\n\nse2 = [0 0 1];\n\nYou'd expect the composition of dilation with these two structuring elements to be equivalent to a single dilation with a structuring element that translates an image left by 1 pixel:\n\nse3 = [1 0 0];\n\nLet's try that sequence with no padding:\n\nf1 = imdilate(f, se1)\nf1 =\n\n15 16 -Inf -Inf\n14 15 -Inf -Inf\n17 23 -Inf -Inf\n15 20 -Inf -Inf\n\n\nf2 = imdilate(f1, se2)\nf2 =\n\n-Inf 15 16 -Inf\n-Inf 14 15 -Inf\n-Inf 17 23 -Inf\n-Inf 15 20 -Inf\n\n\n\nNow dilate the original image with se3 and compare:\n\nf3 = imdilate(f, se3)\nf3 =\n\n23 15 16 -Inf\n25 14 15 -Inf\n18 17 23 -Inf\n16 15 20 -Inf\n\n\n\nThey aren't the same. 
To make the results equivalent, you have to pad the original image, perform the dilation sequence, and the crop the result.\n\nfp = padarray(f, [2 2], -Inf)\nfp =\n\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf 22 23 15 16 -Inf -Inf\n-Inf -Inf 24 25 14 15 -Inf -Inf\n-Inf -Inf 20 18 17 23 -Inf -Inf\n-Inf -Inf 19 16 15 20 -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n\n\nfp1 = imdilate(fp, se1)\nfp1 =\n\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n22 23 15 16 -Inf -Inf -Inf -Inf\n24 25 14 15 -Inf -Inf -Inf -Inf\n20 18 17 23 -Inf -Inf -Inf -Inf\n19 16 15 20 -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n\n\nfp2 = imdilate(fp1, se2)\nfp2 =\n\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf 22 23 15 16 -Inf -Inf -Inf\n-Inf 24 25 14 15 -Inf -Inf -Inf\n-Inf 20 18 17 23 -Inf -Inf -Inf\n-Inf 19 16 15 20 -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n-Inf -Inf -Inf -Inf -Inf -Inf -Inf -Inf\n\n\nfp2_cropped = fp2(3:end-2, 3:end-2)\nfp2_cropped =\n\n23 15 16 -Inf\n25 14 15 -Inf\n18 17 23 -Inf\n16 15 20 -Inf\n\n\nisequal(f3, fp2_cropped)\nans =\n\n1\n\n\n\nWhenever imdilate or imerode is called with a decomposed structuring element, the functions compute the minimum padding necessary to avoid boundary artifacts and pad with either -Inf or Inf, respectively.\n\nIt's a classic speed vs. memory tradeoff. Exploiting structuring element decomposition is faster, but storing the intermediate padded arrays takes more memory.\n\nGet the MATLAB code\n\nPublished with MATLAB\u00ae 7.4\n\n7 views (last 30 days) \u00a0| |","date":"2019-11-20 09:22:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4855911433696747, \"perplexity\": 7618.1845263501655}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496670535.9\/warc\/CC-MAIN-20191120083921-20191120111921-00197.warc.gz\"}"}
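The same boundary convention is easy to reproduce outside MATLAB. Below is a minimal NumPy sketch of flat grayscale dilation — my own illustration, not code from the post — in which out-of-image values default to -inf; running it with se = [1 0 0] reproduces the shifted-left result f3 shown above.

import numpy as np

def dilate(img, se, pad_value=-np.inf):
    """Flat grayscale dilation; values outside the image equal `pad_value`."""
    H, W = img.shape
    sr, sc = se.shape
    # Pad so every window stays inside the array; the image sits at the
    # structuring-element origin (MATLAB-style center floor((size+1)/2)).
    padded = np.full((H + sr - 1, W + sc - 1), pad_value, dtype=float)
    padded[sr // 2 : sr // 2 + H, sc // 2 : sc // 2 + W] = img
    out = np.full((H, W), -np.inf)
    se_reflected = se[::-1, ::-1]          # dilation uses the reflected SE
    for i in range(sr):
        for j in range(sc):
            if se_reflected[i, j]:
                out = np.maximum(out, padded[i : i + H, j : j + W])
    return out

f = np.array([[22, 23, 15, 16],
              [24, 25, 14, 15],
              [20, 18, 17, 23],
              [19, 16, 15, 20]], dtype=float)

print(dilate(f, np.array([[1, 0, 0]])))    # rows shift left; last column is -inf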
null
null
Q: Can pLSA model generate topic distribution of unseen documents? I refer to the Wikipedia article and other tutorials on topic modeling, which say that although pLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents, while another tutorial (page 15), which illustrates pLSA and LDA in a geometric way, says: The pLSI model allows a document to possess a distribution over topics that was seen in the training data, thus placing new documents at particular points within the topic simplex. Could I, and should I, use a trained pLSA model to generate topic distributions for unseen (new) documents?

A: pLSA learns the topics from the training set (fitting the model). pLSA doesn't predict new topics from unseen data; it just uses the topics learned during training and returns the document-topic distribution for your unseen test set (transform). In practice this transform is usually done with Hofmann's "folding-in" heuristic: the learned topic-word distributions are held fixed and EM is re-run only over the new document's topic mixture weights.
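A minimal sketch of that fold-in step (Python; the function name, shapes and iteration count are illustrative assumptions, not a standard library API):

import numpy as np

def fold_in(doc_counts, topic_word, n_iter=50):
    """P(topic | new doc), with the trained P(word | topic) held fixed.

    doc_counts: (V,) term counts of the unseen document
    topic_word: (K, V) topic-word probabilities from pLSA training
    """
    K = topic_word.shape[0]
    theta = np.full(K, 1.0 / K)                    # uniform initial mixture
    for _ in range(n_iter):
        # E-step: responsibilities P(topic | doc, word) for each word type
        joint = theta[:, None] * topic_word        # (K, V)
        resp = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
        # M-step: update only the document's topic mixture
        theta = resp @ doc_counts
        theta /= theta.sum()
    return theta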
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,502
Q: Click event using the on() function in jQuery is not working when I target an event with an ID in place of DOCUMENT or BODY I have a jQuery click event that recently stopped working! It's a really big project with hundreds of files, so I could easily have changed it this past week without realizing it, and now it's a problem. Below is the most basic example of the click event in question.

Now I could easily have been using document or body as the very first item last week and then changed it to target an ID that is closer to the actual target for my click event. It seems I was trying to improve performance after reading that I should use an ID that is closer to the item that needs the click event instead of using document or body. With that in mind, I think in the past couple of weeks I probably switched this out, using the ID #taskWrap in place of something like document. Either way, a few days to a week ago it was working fine, and now the click event does not trigger anything when the items are clicked. I set up an alert() for testing and nothing happens anymore.

To test further, I changed my #taskWrap back to document and the click event immediately started working again! So that does seem to be the issue. Below is the code that I have now that does not work! Below that is an image from Chrome Dev Tools that shows a little bit of the DOM structure for the area I am working with. I am hoping someone can show me what I did wrong when trying to target my item using an ID that is closer to the target than document and body are.

$('#taskWrap').on('click', '#taskWrap .status_update', function(e) {
    alert('.status_update class checkbox task item clicked!');
});

I have also tried this, with the same result — the click event does not happen:

$('#taskWrap').on('click', '.status_update', function(e) {
    alert('.status_update class checkbox task item clicked!');
});

This DOES work, but I am trying to optimize by not having to use document or body:

$(document).on('click', '.status_update', function(e) {
    alert('.status_update class checkbox task item clicked!');
});

UPDATE with a more specific question... A Google search for what a descendant is really considered to be in jQuery and the DOM gave this result: "A descendant is a child, grandchild, great-grandchild, and so on. With jQuery you can traverse down the DOM tree to find descendants of an element." So now I'm confused, as I feel like .status_update should be considered a great-great-grandchild of #taskWrap. Am I wrong in this thinking? If so, please help me understand how it is not.

Based on the image below, traversing down the DOM from my #taskWrap DIV to one of my .status_update DIVs looks like this:

#taskWrap > table.detail > tbody > td.task-checkbox > .status_update

Please help me understand where I am wrong, and also what my jQuery should use in place of #taskWrap to target my .status_update DIVs. The image shows the #taskWrap DIV in relation to where the .status_update DIV task items I am clicking on are located...

FINAL UPDATE WITH SOLUTION/ANSWER: It turns out that with my recent restructuring of the code, my click events got moved to a different file and they simply were not wrapped in a document-ready block...

$(document).ready(function() {
    alert('DOM LOADED');
});

This solved my problem, along with the obvious change to this line, as posted in the answers below...

$('#taskWrap').on('click', '#taskWrap .status_update', function(e) {

Changed to...

$('#taskWrap').on('click', '.status_update', function(e) {

A: I believe the problem lies within your descendant selector. Your event handler binding should look like:

$('#taskWrap').on('click', '.status_update', function(e) {
    alert('.status_update class checkbox task item clicked!');
});

http://jsfiddle.net/zvbjn9wv/

This is because there is no #taskWrap descendant within the #taskWrap element.

A: Remove #taskWrap from this line:

$('#taskWrap').on('click', '#taskWrap .status_update', function(e) {
                           ^^^^^^^^^

Correct line should be:

$('#taskWrap').on('click', '.status_update', function(e) {
{ "redpajama_set_name": "RedPajamaStackExchange" }
958
\section{Introduction} In this work, we study secure network communication over a directed acyclic network $\mathcal{G} = (\mathcal{V},\mathcal{E})$ having a single source node $S$, a single terminal node $T$, and a single node $K$, which is capable of generating random ``keys'' independent of the messages generated by $S$. We employ a notion of secure ``wiretap'' communication networks introduced by Cai and Yeung in \cite{cai2002secure} and studied further in, for example \cite{feldman2004capacity,cai2007security,yeung2008optimality,el2012secure,silva2011universal}. Under this notion of security, given a communication scheme over $\mathcal{G}$, we consider an edge $e \in \mathcal{E}$ of the network to be secure in the presence of a wiretap adversary if and only if $I(M;X_e) = 0$, where $M$ denotes the source message and $X_e$ denotes the information communicated on edge $e$.\footnote{Detailed definitions of all concepts discussed here and below appear in Section~\ref{MOD}.} To be secure in the presence of an adversary that wiretaps any size-$z$ subset $\mathcal{W} = \{e_1,\cdots,e_z\} \subset \mathcal{E}$ of edges, we require that $I(M;X_\mathcal{W}) = 0$, where $X_\mathcal{W} = (X_{e_1},\cdots,X_{e_z})$. Given integers $R$ and $z$, we define a secure network code over the network $\mathcal{G}$ to be $(R,z)$-feasible if it allows information to be communicated from the source $S$ to the terminal $T$ at rate $R$ and, in addition, it secures the network against a wiretap adversary that eavesdrops on up to $z$ edges of the network. Our work entails determining, for each $z$, the closure of the set of rates that are $(R,z)$-feasible, thereby deriving the capacity-security region. When $K=S$, the capacity-security region for secure multicast network codes is well understood \cite{cai2002secure, feldman2004capacity} with several follow up works \cite{cai2007security,yeung2008optimality,el2012secure,silva2011universal} that address various methods to alter any given non-secure linear network code into a new code that is secure. In contrast, determining the capacity-security region for secure network codes over a single-source single-terminal network, where every node can generate random keys, is as hard as the problem of characterizing the (non-secure) capacity region of the $k$-unicast problem as shown by \cite{huang2018}. Results of a similar nature are also presented in \cite{chan2014network}. The $k$-unicast problem is a well known open problem in the study of network codes \cite{chan2014network,6293890,langberg2009multiple,jalali2012capacity,4460828}. In this work, we seek to make progress in the apparently difficult generalization from the scenario where only the source can generate random keys to the scenario where all nodes can generate keys by studying the case where only a single node can generate keys but allowing that single node to be arbitrary. Our central result is a characterization of the capacity-security region in the unicast (single-source single-terminal) setting when only a single network node $K \ne S \in \mathcal{V}$ can generate random keys. The remainder of the paper is organized as follows. In Section \ref{MOD}, we present our model and preliminary notation. Our main result, the capacity-security characterization of the networks at hand, appears in Section~\ref{RES}. The characterization is combinatorial in nature and involves different cut-set bounds between the source node, the key generating node, and the terminal node. 
Achievability is proven in Section \ref{TH1_ACH} via a reduction from secure communication over $\mathcal{G}$ to (non-secure) multi-source multicast network coding over a modified network $\mathcal{G}^*$ as shown in Figure \ref{SDAGR}. The converse proof, which is based on cut-set bounds, appears in Section \ref{TH1_CONVa_body}. An additional converse proof, in the more general context of cyclic networks, is presented in Appendix \ref{TH1_CONV}. The proofs of some of our lemmas and claims are presented in Appendix \ref{CS_LEM} and Appendix \ref{CLAIMPROOFS}, respectively. \section{Network Model} \label{MOD} Our system model consists of the following components: \begin{itemize} \item [(a)] A finite directed acyclic graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$. We assume that each edge $e \in \mathcal{E}$ noiselessly transmits one unit of information (i.e., one field element in a given field $\mathbb{F}_q$) per unit time. We use multiple parallel edges to model an edge with the ability to communicate more than one information symbol per unit time. \item [(b)] A source node $S$, which generates a source message vector of length $R$, $M = \irow{M_1 & M_2 & \cdots & M_{R}}^T$, with $M_1, M_2, \cdots, M_{R}$ independently and uniformly distributed over the field $\mathbb{F}_q$ of size $q$. \item [(c)] A terminal node $T \in \mathcal{V}$, which is required to decode all the messages generated by the source $S$ with zero error. \item [(d)] A node $K \in \mathcal{V}$, which generates a random ``key'' vector, $N = \irow{N_1, \cdots, N_{|N|}}^T$, with $N_1, \cdots, N_{|N|}$ independently and uniformly distributed over the field $\mathbb{F}_q$ and $N$ independent of $M$. \item [(e)] An eavesdropper that can access any subset $\mathcal{W} \subset \mathcal{E}$ of edges for which $|\mathcal{W}| \leq z$. \end{itemize} In the following subsections, we introduce our definition of a network code and discuss the notions of topological order and cut sets. \begin{figure}[!t] \centering \subfloat[]{\includegraphics[width=1.55in, height=1.1in]{SDAG.eps} \label{SDAG}} \hspace{2mm} \subfloat[]{\includegraphics[width=1.56in, height=1.1in]{SDAGR.eps} \label{SDAGR}} \vspace{0.0cm} \caption{(a) Network model $\mathcal{G}$, and (b) the modified network $\mathcal{G}^*$ obtained from $\mathcal{G}$ by adding $T^*$ and setting the demands at $T$ and $T^*$ to $(M,N)$.} \end{figure} \subsection{Network Code} \label{NC} We define a scalar linear network code $\mathcal{N}$ for the network $\mathcal{G}$ to be an assignment of a linear encoding function $f_e$ to each edge $e \in \mathcal{E}$ and a linear decoding function $g_T$ to the terminal $T$. For $e \in \mathcal{E}$, we denote the edge message on $e$ by $X_e$, and for any set $\mathcal{A} \subseteq \mathcal{E}$, we define $X_\mathcal{A} = \{X_e: e \in \mathcal{A}\}$. If $e \in \mathcal{E}$ and $e = (u,v)$, then the edge message $X_e$ is a linear combination of the messages carried by the edges in ${\rm In}(u) = \{(w,u): (w,u) \in \mathcal{E}\}$, the incoming edges of $u$. The edge message on $e$ is obtained using local encoding at $u$. We define $X_e$ using the local encoding function $\bar{f}_e$ on $e = (u,v)$ as \begin{align} \label{NC_EQ1a} X_e = \bar{f}_e(X_{{\rm In}(u)}) = \sum_{e' \in {\rm In}(u)}\bar{c}_{e',e}X_{e'}. \end{align} Here, $X_e$ denotes the message on edge $e$; for each edge $e' \in {\rm In}(u)$, $X_{e'}$ denotes the message on edge $e'$, and $\bar{c}_{e',e}$ is the coefficient acting on $X_{e'}$.
If edge $e$ is an outgoing edge of $S$ (or $K$), then $X_e$ is a function of the source messages (or keys) as well. Given such a network code, an adversary that wiretaps any size-$z$ subset of edges $\mathcal{W} \subset \mathcal{E}$ would obtain the information $X_\mathcal{W}$ on the wiretapped edges. A network code is said to be $(R,z)$-feasible if \begin{align} \label{NC_EQ1} g_T(X_{{\rm In}(T)}) &= M\\ \label{NC_EQ2} {\rm I}(M;X_\mathcal{W}) &= 0, \end{align} where $T$ is the terminal node and $M$ is the $R$-dimensional message vector generated by the source $S$. \subsection{Topological Order} \label{NDCOMP} To achieve secure communication over the network $\mathcal{G}$, the source $S$ must ``mix'' the message symbols in $M$ with the (received) random key symbols in $N$. This mixture of messages and keys is communicated to the terminal $T$, which must decode correctly to reconstruct the message $M$. Let $\mathcal{V} = \{v_0,...,v_{n-1}\}$. Since $\mathcal{G}$ is directed and acyclic, we assume, without loss of generality, that the nodes $v_i \in \mathcal{V}$ are indexed according to their topological order in $\mathcal{G}$. This implies that the node $v_i$ receives its incoming information only from nodes $v_0, \cdots, v_{i-1}$. We also assume that the index of $K$ in this topological order is less than that of $S$, which in turn is less than that of the terminal $T$. More specifically, we assume $K=v_0$, $S=v_m$, and $T=v_{n-1}$ for $v_0, v_m, v_{n-1} \in \mathcal{V}$ and $0 < m < n-1$. There is no loss of generality in these assumptions, since otherwise either transmissions on outgoing edges of $S$ cannot be secure or the communication rate $R$ between $S$ and $T$ is zero. This implies that nodes $\{v_0,\dots,v_{m-1}\}$ only transmit, on their outgoing edges, functions of the information generated by $K$, while nodes $\{v_m,\dots,v_{n-1}\}$ may potentially transmit functions of the information generated at both $S$ and $K$. \subsection{The Cut Sets} \label{NPROP} For any pair of nodes $u,v \in \mathcal{V}$, a cut is a set of edges in $\mathcal{E}$ which, when removed, disconnects all paths from $u$ to $v$. The cut with the minimum capacity that separates $u$ and $v$ is denoted as ${\rm mincut}_\mathcal{G}(u,v)$. Since each edge in $\mathcal{E}$ is assumed to be of unit capacity, $|{\rm mincut}_\mathcal{G}(u,v)|$ represents the total capacity of all the edges in ${\rm mincut}_\mathcal{G}(u,v)$. The cuts as defined above may also separate sets of nodes in the network $\mathcal{G}$. For a subset of nodes $\mathcal{A}$, the set ${\rm mincut}_{\mathcal{G}}(\mathcal{A},v)$ is the minimum-capacity cut that separates the set of nodes in $\mathcal{A} \subset \mathcal{V}$ from the node $v \in \mathcal{V}$. For the network $\mathcal{G}$, we use the following notation: \begin{align} C_{K-S} &= |{\rm mincut}_\mathcal{G}(K,S)|\nonumber\\ C_{K-T} &= |{\rm mincut}_\mathcal{G}(K,T)|\nonumber\\ C_{S-T} &= |{\rm mincut}_\mathcal{G}(S,T)|\nonumber\\ C_{KS-T} &= |{\rm mincut}_\mathcal{G}(\{K,S\},T)|\nonumber \end{align} \section{Results} \label{RES} In this work, we prove the following theorem.
\begin{theorem} \label{TH1} Given the directed acyclic network $\mathcal{G}$ and integers $R$ and $z$ such that $R > 0$, there exists an $(R,z)$-feasible network code $\mathcal{N}$ over $\mathcal{G}$ if and only if \begin{align} \label{BND1} &z \leq \min(C_{K-S},C_{K-T})\\ \label{BND2} &R \leq C_{S-T}\\ \label{BND3} &R+z \leq C_{KS-T} \end{align} \end{theorem} The proof of Theorem \ref{TH1} is divided into two parts: the achievability proof, shown in Section \ref{TH1_ACH}, and the converse proof, shown in Section \ref{TH1_CONVa_body}. \section{Proof of Theorem \ref{TH1}: Achievability} \label{TH1_ACH} \begin{proof} For the network $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with source node $S$ and key-generating node $K$, holding $R$ message symbols $M$ and $z$ key symbols $N$, respectively, we set the values of the integers $R$ and $z$ such that they satisfy the bounds (\ref{BND1}), (\ref{BND2}), and (\ref{BND3}). We implement a random linear network code $\mathcal{N}$ over $\mathcal{G}$, over a sufficiently large field $\mathbb{F}_q$, such that, for any edge $e = (u,v) \in \mathcal{E}$, the local encoding coefficients $\{\bar{c}_{e',e}\}_{e' \in {\rm In}(u)}$ associated with edge $e$, as described in (\ref{NC_EQ1a}), are i.i.d. and uniform over $\mathbb{F}_q$. The network code $\mathcal{N}$ is said to be decodable at rate $R$ over the network $\mathcal{G}$ if it satisfies condition (\ref{NC_EQ1}). We consider the following lemma, which we prove in Section \ref{DEC}. \begin{lemma} \label{LEM_DEC} Given integers $R,z$ that satisfy (\ref{BND1})-(\ref{BND3}) of Theorem \ref{TH1}, the random linear network coding scheme $\mathcal{N}$ is decodable at rate $R$ with probability at least $1 - \dfrac{2(|\mathcal{E}|+R+z)^2}{q}$. \end{lemma} We now consider a wiretapping adversary that can eavesdrop on any subset of edges $\mathcal{W} \subset \mathcal{E}$ such that $|\mathcal{W}| = z$. We denote the information gleaned by the adversary as $X_{\mathcal{W}}$, which may be expressed as \begin{align} \label{SEC_EQ9} X_{\mathcal{W}} = \irow{\mathbf{A}_\mathcal{W} & \mathbf{B}_\mathcal{W}}\icol{M \\ N} \end{align} Here, $\mathbf{A}_\mathcal{W}$ and $\mathbf{B}_\mathcal{W}$ are $z \times R$ and $z \times z$ matrices whose rows are global encoding vectors associated with each edge in $\mathcal{W}$, acting on $M$ and $N$, respectively. We consider the network-coded information to be secure if and only if (\ref{NC_EQ2}) holds for any $\mathcal{W} \subset \mathcal{E}$ of size $z$, i.e., the adversary gains no information about the source message symbols $M$ even after wiretapping a $z$-sized subset of edges in the network. In \cite{cai2007security}, Cai and Yeung show that a linear network coding scheme is secure if and only if the following condition holds. \begin{align} \label{SEC_EQ10} {\rm rk}(\irow{\mathbf{A}_\mathcal{W} & \mathbf{B}_\mathcal{W}}) = {\rm rk}(\mathbf{B}_\mathcal{W}) \end{align} Here, ${\rm rk}(.)$ denotes the {\em rank} of a matrix. The following lemma is proven in Section \ref{SEC} by analyzing the matrices $\mathbf{A}_\mathcal{W}$ and $\mathbf{B}_\mathcal{W}$. \begin{lemma} \label{LEM_SEC} Given integers $R,z$ that satisfy (\ref{BND1})-(\ref{BND3}) of Theorem \ref{TH1}, the random linear network coding scheme $\mathcal{N}$ over $\mathcal{G}$ is $z$-secure with probability at least $1 - \dfrac{\binom{|\mathcal{E}|}{z}2z}{q}$ for all wiretap sets $\mathcal{W} \subset \mathcal{E}$ of size $z$. \end{lemma} A network code is said to be $(R,z)$-feasible if it is both decodable at rate $R$ and $z$-secure.
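(Before assembling the lemmas, a brief illustrative aside, not part of the formal argument: the two rank conditions above — decodability at $T$ and (\ref{SEC_EQ10}) — are easy to check empirically. The following minimal Monte Carlo sketch, in Python, uses a toy network with two parallel edges $K \to S$ and three parallel edges $S \to T$, so that $C_{K-S}=C_{K-T}=2$ and $C_{S-T}=C_{KS-T}=3$, with $R=2$, $z=1$, and a small prime field; since only the edges leaving $S$ carry message symbols, only those require the security check.)

\begin{verbatim}
import random

P = 257  # a prime; the lemmas only need the field size q to be large

def rank_gf(M, p=P):
    # Rank over GF(p) via Gaussian elimination
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

ok, TRIALS = 0, 1000
for _ in range(TRIALS):
    # K -> S edges carry b_i * N only, so they are trivially secure
    b = [random.randrange(P) for _ in range(2)]
    rows = []
    for _ in range(3):  # S -> T edges mix the messages with the received keys
        a1, a2, c1, c2 = (random.randrange(P) for _ in range(4))
        rows.append([a1, a2, (c1 * b[0] + c2 * b[1]) % P])  # coeffs on (M1, M2, N)
    decodable = rank_gf(rows) == 3                           # T recovers (M, N)
    secure = all(rank_gf([row]) == rank_gf([[row[2]]])       # Eq. above, z = 1
                 for row in rows)
    ok += decodable and secure
print(ok / TRIALS)  # tends to 1 as P grows, consistent with the two lemmas
\end{verbatim}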
It now follows that, given integers $R$ and $z$ that satisfy (\ref{BND1}), (\ref{BND2}), and (\ref{BND3}), the suggested network code is $(R,z)$-feasible with probability at least $$\Big(1 - \dfrac{2(|\mathcal{E}|+R+z)^2 + \binom{|\mathcal{E}|}{z}2z}{q} \Big),$$ which, for sufficiently large $q$, implies our achievability with high probability. \end{proof} \section{Proof of Theorem \ref{TH1}: Converse} \label{TH1_CONVa_body} \begin{proof} We prove the converse for any (not necessarily linear) $(R,z)$-feasible network code $\mathcal{N}$ over the network $\mathcal{G}$. We start with an $(R,z)$-feasible coding scheme and show that $R$ and $z$ satisfy the bounds of (\ref{BND1}), (\ref{BND2}) and (\ref{BND3}). Here, we give a partial proof in which we only address bound (\ref{BND1}). Proofs of a similar nature apply to the other bounds as well. Details of the converse proof, in the more general context of cyclic networks, appear in Appendix \ref{TH1_CONV}. We denote by $\mathbb{C}_{K-S}$ the minimum cut separating $K$ and $S$, and by $C_{K-S}$ the total capacity of the edges in $\mathbb{C}_{K-S}$. The random variable $X_{K-S}$, over the support set $\mathcal{X}_{K-S}$, represents the information on all edges of $\mathbb{C}_{K-S}$. We denote by $\mathcal{W}$ any subset of $z$ edges in $\mathcal{E}$ that is wiretapped by an eavesdropping adversary. Then $X_\mathcal{W}$ denotes the encoded information on all the edges in $\mathcal{W}$. We denote the set of edges that are incoming to $S$ as ${\rm In}(S)$, and the encoded information on all of the edges in ${\rm In}(S)$ as $X_{{\rm In}(S)}$ with support set $\mathcal{X}_{{\rm In}(S)}$. Similarly for ${\rm Out}(S)$. For the bound $z \leq \min(C_{K-S},C_{K-T})$, we consider two cases. First, assume by contradiction that $z > C_{K-S}$. Specifically, set $z = C_{K-S} + 1$. This implies that the eavesdropping adversary may choose to wiretap all the edges in $\mathbb{C}_{K-S}$ and an edge $e \in {\rm Out}(S)$ to obtain the wiretap set $\mathcal{W} = \mathbb{C}_{K-S} \cup \{e\}$ of size $z$. Then the wiretapped information is $X_\mathcal{W} = (X_{K-S},X_e)$, where $X_e$ is the information on the chosen edge $e$. Note that $X_e = \bar{f}_{e}(X_S)$, where $X_{S} := (M, X_{{\rm In}(S)})$ is the information present at the source $S$. For $z$-security, we require that the mutual information ${\rm I}(M;X_\mathcal{W}) = 0$. Therefore, \begin{align} {\rm I}(M;X_\mathcal{W}) &= {\rm I}(M;X_{K-S}) + {\rm I}(M;X_e|X_{K-S}) = 0,\nonumber \end{align} implying that ${\rm I}(M;X_{K-S}) = 0$ and ${\rm I}(M;X_e|X_{K-S}) = 0$. Thus, we conclude that ${\rm H}(X_e|X_{K-S}) = {\rm H}(X_e|X_{K-S},M)$. Suppose that cut $\mathbb{C}_{K-S}$ partitions $\mathcal{G}$ into disjoint sub-networks $\mathcal{A}$ and $\bar{\mathcal{A}}$, where $\mathcal{A}$ includes the key-generating node $K$. Note that any information communicated through edges in $\bar{\mathcal{A}}$ must be a function of $X_{K-S}$. In addition, ${\rm In}(S) \subset \mathbb{C}_{K-S} \cup \mathcal{E}_{\bar{\mathcal{A}}}$, where $\mathcal{E}_{\bar{\mathcal{A}}}$ denotes the set of edges internal to $\bar{\mathcal{A}}$, implying that all information reaching $S$ is a function of $X_{K-S}$. We conclude, for any edge $e \in {\rm Out}(S)$, that \begin{align} \label{TH_CONV_EQ6a_1a} X_e &= h_e(M,X_{K-S}), \end{align} where $h_e$ is some deterministic function. Equation (\ref{TH_CONV_EQ6a_1a}) implies that ${\rm H}(X_e|X_{K-S},M) = 0$, which in turn implies ${\rm H}(X_e|X_{K-S}) = 0$. This means that to be $z$-secure the information $X_{K-S}$ must completely determine $X_e$ for all $e \in {\rm Out}(S)$.
Therefore, the information $X_{{\rm Out}(S)} := \{X_e\}_{e \in {\rm Out}(S)}$ is also a deterministic function of $X_{K-S}$. As ${\rm I}(M;X_{K-S}) = 0$ shows that $X_{K-S}$ is independent of $M$, it follows that $X_{{\rm Out}(S)}$ is also independent of $M$ and thus ${\rm I}(M;X_{{\rm Out}(S)}) = 0$. This, in turn, implies that the rate realizable by the network code $\mathcal{N}$ is $R=0$ which is a contradiction. A similar proof holds for $z \leq C_{K-T}$, in which we study the set $\mathcal{W} = \mathbb{C}_{K-T} \cup \{e\}$ for any edge $e \in {\rm In}(T)$. \end{proof} \section{Proof of Lemmas} \label{LEMPROOFS} \subsection{Proof of Lemma \ref{LEM_DEC}} \label{DEC} \begin{lemmaproof} We begin by considering the modified network $\mathcal{G}^* = (\mathcal{V}^*,\mathcal{E}^*)$, obtained from $\mathcal{G}$ as shown in Figure \ref{SDAGR}. Specifically, $\mathcal{G}^*$ is obtained from $\mathcal{G}$ by adding a new node $T^*$ and $R+z$ parallel edges from $S$ to $T^*$. As in $\mathcal{G}$, the network $\mathcal{G}^*$ has nodes $S$ and $K$ holding $R$ symbols of $M$ and $z$ symbols of $N$, respectively. Here, the outgoing edges of $S$ include those in the original network $\mathcal{G}$, denoted as ${\rm Out}(S)$, and the additional $R+z$ edges. Both terminals $T$ and $T^*$ want to decode all $R$ symbols of $M$ and $z$ symbols of $N$. A network code, over $\mathcal{G}^*$, that satisfies the demands of terminals $T$ and $T^*$ is a multi-source multicast network code which is $\mathbf{R}$-feasible, where $\mathbf{R} = (R,z)$. We use a random linear multi-source multicast network code $\mathcal{N}^*$ over network $\mathcal{G}^*$ and the finite field $\mathbb{F}_q$. In what follows, we set some notation. \begin{itemize} \item [1.] Let $O_K \triangleq |{\rm Out}(K)|$, $I_S \triangleq |{\rm In}(S)|$ and $O_S \triangleq |{\rm Out}(S)|$. \item [2.] The node $K$ transmits $z$ linear combinations of $N$ through ${\rm Out}(K)$. We express the information on these edges as $X_{{\rm Out}(K)} = \mathbf{B}_KN$. Here, the rows of $\mathbf{B}_K$, which is an $O_K \times z$ matrix, are the local encoding vectors associated with each edge in ${\rm Out}(K)$. The entries of $\mathbf{B}_K$ are i.i.d. and uniform over the field $\mathbb{F}_q$. \item [3.] The message source $S$ receives $I_S$ linear combinations of $N$ through the edges in ${\rm In}(S)$. We express the information on these edges as $X_{{\rm In}(S)} = \mathbf{V}_{In(S)}\mathbf{B}_KN$. $\mathbf{V}_{{\rm In}(S)}$ is an $I_S \times O_K$ matrix, and the rows of $\mathbf{V}_{In(S)}\mathbf{B}_K$ are the global encoding vectors, associated with each edge in ${\rm In}(S)$, acting on $N$. \item [4.] $S$ ``mixes" the received $I_S$ symbols of $X_{{\rm In}(S)}$ with the $R$ symbols of $M$ and transmits the resulting combinations through ${\rm Out}(S)$ and to $T^*$. We express the information on ${\rm Out}(S)$ as \begin{align} X_{{\rm Out}(S)} &= \irow{\mathbf{A_S} & \mathbf{B}_S}\icol{M \\ \mathbf{V}_{In(S)}\mathbf{B}_KN}\nonumber\\ &= \irow{\mathbf{A_S} & \mathbf{B}_S\mathbf{V}_{In(S)}\mathbf{B}_K}\icol{M \\ N}.\nonumber \end{align} Here, the rows of the matrix $\irow{\mathbf{A_S} & \mathbf{B}_S}$ are the local encoding vectors associated with the edges in ${\rm Out}(S)$. $\mathbf{A}_S$ and $\mathbf{B}_S$ are $O_S \times R$ and $O_S \times I_S$ matrices respectively. The entries of $\mathbf{A}_S$ and $\mathbf{B}_S$ are i.i.d. and uniform over $\mathbb{F}_q$. \end{itemize} We now consider the following claims. 
Claim \ref{CL_RED} is proven in Appendix \ref{CLAIMPROOF_CL_RED}. \begin{claim} \label{CL_MC} The multi-source multicast random linear network code $\mathcal{N}^*$, as described above, is $\mathbf{R}$-feasible over the network $\mathcal{G}^*$ with probability at least $ 1 - \dfrac{2(|\mathcal{E}|+R+z)^2}{q}$. \end{claim} \begin{proof}[Proof of Claim \ref{CL_MC}] \label{CLAIMPROOF_CL_MC} Given integers $R$ and $z$, we start by observing the min-cut capacities in $\mathcal{G}^*$ between the subsets of the node set $\{S,K\}$ and each terminal $T$ and $T^*$ as follows. \begin{align} \label{CLAIMPROOF_CL_MC_EQ1} &|{\rm mincut}_{\mathcal{G}^*}(K,T)| = C_{K-T} \geq z\\ \label{CLAIMPROOF_CL_MC_EQ2} &|{\rm mincut}_{\mathcal{G}^*}(K,T^*)| = \min(R+z,C_{K-S}) \geq z \\ \label{CLAIMPROOF_CL_MC_EQ3} &|{\rm mincut}_{\mathcal{G}^*}(S,T^*)| = R+z \geq R\\ \label{CLAIMPROOF_CL_MC_EQ4} &|{\rm mincut}_{\mathcal{G}^*}(S,T)| = C_{S-T} \geq R\\ \label{CLAIMPROOF_CL_MC_EQ5} &|{\rm mincut}_{\mathcal{G}^*}(\{K,S\},T^*)| = R+z \\ \label{CLAIMPROOF_CL_MC_EQ6} &|{\rm mincut}_{\mathcal{G}^*}(\{K,S\},T)| = C_{KS-T} \geq R+z \end{align} From (\ref{CLAIMPROOF_CL_MC_EQ1})-(\ref{CLAIMPROOF_CL_MC_EQ6}), we see that for all source-terminal pairs in $\mathcal{G}^*$, the corresponding Min-Cut Max-Flow bounds are satisfied. Let $L$ be the total number of encoding coefficients employed over all the edges in $\mathcal{E}^*$. We can bound $L$ by $\sum_{e \in \mathcal{E}^*}|\mathcal{E}^*| \leq |\mathcal{E}^*|^2 = (|\mathcal{E}|+R+z)^2$. Using Theorem 8 of \cite{koetter2003algebraic} and Theorem 5.4 of \cite{8187170} (derived from \cite{ho2006random}), we have that the network code $\mathcal{N}^*$ is $\mathbf{R}$-feasible over the network $\mathcal{G}^*$ with probability at least \begin{align} \Big(1 - \dfrac{2}{q}\Big)^L &> 1 - \dfrac{2L}{q} > 1 - \dfrac{2(|\mathcal{E}|+R+z)^2}{q}\nonumber \end{align} This proves the claim. \end{proof} \begin{claim} \label{CL_RED} The $\mathbf{R}$-feasible network code $\mathcal{N}^*$ over $\mathcal{G}^*$, when restricted to $\mathcal{G}$, implies that $\mathcal{N}$ is $R$-decodable over $\mathcal{G}$. \end{claim} From Claim \ref{CL_MC} and Claim \ref{CL_RED}, we have that the network code $\mathcal{N}$ is $R$-decodable over $\mathcal{G}$ with probability at least \begin{align} 1 - \dfrac{2(|\mathcal{E}|+R+z)^2}{q}\nonumber \end{align} This proves the lemma. \end{lemmaproof} \subsection{Proof of Lemma \ref{LEM_SEC}} \label{SEC} \begin{lemmaproof} We use the notation introduced in the proof of Lemma \ref{LEM_DEC}. For any edge $e \in \mathcal{E}$, we express the information on $e$ as, \begin{align} \label{SEC_EQ3a} X_{e} &= u_{e}\icol{X_{{\rm Out}(K)} \\ X_{{\rm Out}(S)}} = u_{e}\begin{bmatrix} \mathbf{0} & \mathbf{B}_K\\ \mathbf{A}_S & \mathbf{B}_S\mathbf{V}_{{\rm In}(S)}\mathbf{B}_K \end{bmatrix}\icol{M \\ N} \end{align} Here, $u_{e}$ is an edge-$e$ encoding vector of dimension $O_K+O_S$, acting on $X_{{\rm Out}(K)}$ and $X_{{\rm Out}(S)}$. We partition $u_e = \irow{u_K & u_S}$ such that the $O_K$-dimensional vector $u_K$ acts on the information from ${\rm Out}(K)$ and the $O_S$-dimensional vector $u_S$ acts on the information from ${\rm Out}(S)$. Thus, we rewrite \eqref{SEC_EQ3a} as follows. 
\begin{align} \label{SEC_EQ3b} X_{e} &= \irow{u_K & u_S}\begin{bmatrix} \mathbf{0} & \mathbf{B}_K\\ \mathbf{A}_S & \mathbf{B}_S\mathbf{V}_{{\rm In}(S)}\mathbf{B}_K \end{bmatrix}\icol{M \\ N} \end{align} We now consider an adversary that wiretaps any subset $\mathcal{W} \subset \mathcal{E}$ of edges such that $|\mathcal{W}| = z$. Then, using \eqref{SEC_EQ3b}, we obtain the information observed by the adversary as follows. \begin{align} \label{SEC_EQ4a} X_{\mathcal{W}} &= \irow{\mathbf{U}_K & \mathbf{U}_S}\icol{X_{{\rm Out}(K)} \\ X_{{\rm Out}(S)}} \end{align} Here, $\irow{\mathbf{U}_K & \mathbf{U}_S}$ is a $z \times (O_K+O_S)$ matrix where $\mathbf{U}_K$ is a $z \times O_K$ matrix and $\mathbf{U}_S$ is a $z \times O_S$ matrix. We assume that $\irow{\mathbf{U}_K & \mathbf{U}_S}$ has full row-rank of $z$, as otherwise, the adversary could simply drop an edge in $\mathcal{W}$ and not lose any information. Using \eqref{SEC_EQ3b}, we rewrite \eqref{SEC_EQ4a} as follows. \begin{align} \label{SEC_EQ4} X_\mathcal{W} &= \begin{bmatrix} \mathbf{U}_S\mathbf{A}_S & \mathbf{U}_K\mathbf{B}_K + \mathbf{U}_S\mathbf{B}_S\mathbf{V}_{{\rm In}(S)}\mathbf{B}_K \end{bmatrix}\icol{M \\ N} \end{align} From \eqref{SEC_EQ9} and \eqref{SEC_EQ4}, we have that \begin{align} \mathbf{A}_\mathcal{W} &= \mathbf{U}_S\mathbf{A}_S \quad \text{and} \quad \mathbf{B}_\mathcal{W} = \begin{bmatrix} \mathbf{U}_K + \mathbf{U}_S\mathbf{B}_S\mathbf{V}_{{\rm In}(S)} \end{bmatrix}\mathbf{B}_K\nonumber \end{align} Let, \begin{align} \label{SEC_EQ6} \mathbf{\Phi} &\triangleq \irow{ \mathbf{U}_K + \mathbf{U}_S\mathbf{B}_S\mathbf{V}_{{\rm In}(S)} }. \end{align} From our decodability proof, we know that ${\rm rk}(\mathbf{V}_{{\rm In}(S)}) = z$, as otherwise, $T^*$ could not have decoded the keys $N$. For the security condition of \eqref{SEC_EQ10} to hold, we show that ${\rm rk}(\mathbf{B}_\mathcal{W}) = {\rm rk}(\mathbf{\Phi}\mathbf{B}_K) = z$. Therefore, we compute the following. \begin{align} \label{SEC_EQ7} &\Pr_{\mathbf{B}_K,\mathbf{B}_S}\{{\rm rk}(\mathbf{B}_\mathcal{W}) = z\} = \nonumber\\ &\quad \quad \Pr_{\mathbf{B}_S}\{{\rm rk}(\mathbf{\Phi}) = z\} \Pr_{\mathbf{B}_K}\{{\rm rk}(\mathbf{\Phi}\mathbf{B}_K) = z|{\rm rk}(\mathbf{\Phi}) = z\}. \end{align} We now consider the following claims proven in Appendix \ref{CLAIMPROOF_SEC_CL_1} and Appendix \ref{CLAIMPROOF_SEC_CL_2}, respectively. \begin{claim} \label{SEC_CL_1} $\Pr_{\mathbf{B}_S}\{{\rm rk}(\mathbf{\Phi}) = z\} > 1 - \dfrac{z}{q}$ \end{claim} \begin{claim} \label{SEC_CL_2} Given an $n \times m$ matrix $\mathbf{A}$ and an $m \times n$ matrix $\mathbf{B}$ such that ${\rm rk}(\mathbf{A}) = n$ and the entries of $\mathbf{B}$ are i.i.d. and uniform over the field $\mathbb{F}_q$, then ${\rm rk}(\mathbf{A}\mathbf{B}) = n$ with probability at least $1 - \dfrac{n}{q}$, over $\mathbf{B}$. \end{claim} Let us consider the following event. \begin{itemize} \item $\mathbb{E}_\mathcal{W}$: The condition of (\ref{SEC_EQ10}) holds for a given wiretap set $\mathcal{W}$ of size $z$. \end{itemize} Using Claim \ref{SEC_CL_1} and Claim \ref{SEC_CL_2} we conclude from \eqref{SEC_EQ7} that \begin{align} \label{SEC_EQ_9} \Pr_{\mathbf{B}_K,\mathbf{B}_S}\{\mathbb{E}_\mathcal{W}\} &> \Big(1 - \dfrac{z}{q}\Big)^2 > 1 - \dfrac{2z}{q} \end{align} Denoting the complementary event of $\mathbb{E}_\mathcal{W}$ by $\Bar{\mathbb{E}}_\mathcal{W}$ and using the union bound over event $\Bar{\mathbb{E}}_\mathcal{W}$ for any $\mathcal{W} \subset \mathcal{E}$ of size $z$, we have the following. 
\begin{align} \Pr\{\bigcup_{\mathcal{W} \subset \mathcal{E}}\Bar{\mathbb{E}}_\mathcal{W}\} \quad & \leq \sum_{\mathcal{W} \subset \mathcal{E}}\dfrac{2z}{q} = \dfrac{\binom{|\mathcal{E}|}{z}2z}{q}.\nonumber \end{align} Namely, the probability, over the i.i.d. entries of $\mathbf{B}_S$ and $\mathbf{B}_K$, that the network code is secure against an adversary with a wiretap set $\mathcal{W}$ of size $z$ is at least $1 - \dfrac{\binom{|\mathcal{E}|}{z}2z}{q}$. This proves the lemma. \end{lemmaproof} \section{Conclusion} \label{CONC} In this paper, we characterize the capacity-security region for single unicast network codes over a directed acyclic network in which only one node, which is not necessarily the source node, can generate random keys. We present a random linear achievability proof and a matching converse proof. Our converse can be extended to cyclic networks as well. (Details appear in Appendix \ref{TH1_CONV}.) Our work establishes an intermediate step between the well-understood problem of characterizing the capacity-security region in which only the source node generates random keys and the problem of characterizing the capacity-security region when every node can generate random keys. Several problems are left open. An extension of our result to the context of multicast network coding is within reach and the subject of future research. It would also be interesting to extend our achievability to single unicast network coding over networks with cycles. Additional possible extensions include the study of single unicast networks in which more than one node can independently generate random keys. \section*{Acknowledgements} Work supported in part by NSF grants CCF-1526771 and CCF-1817241. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
624
SuperBot 2018 is just a few days away! Before the excitement begins, we want to get to know one of our panelists, Bree Glaeser. Tell us about yourself. What should Superbot attendees know about your background? I am a creative strategist with expertise in designing for voice experiences. I have a unique background in design, research, and strategy. Currently, my focus is on the intersection of voice and commerce and human-centric design. I've worked for The Mars Agency for just about 3 years; initially in research and strategy for their sister company, The Strategy Shop, before moving over to the innovation team at Mars, to explore how new technologies are changing the shopping landscape. We saw that The Mars Agency recently launched, what you believe to be, the industry's first voice-powered in-store shopping tool. Can you tell us more about that? As part of our SmartAisle technology development and ongoing efforts to explore how voice-enabled shopping will impact consumer behaviors and purchase decisions, we recently launched the Bottle Genius Powered by SmartAisle℠ skill for Alexa. We believe it's the world's first voice-powered shopping tool in the brick and mortar environment, and it's currently helping customers make confident whiskey choices at NYC's Bottlerocket Wine & Spirit. In fact, for the month of February 2018, YOY sales increased by 14% for the 120 whiskies featured on the Bottle Genius shelf set. This is just the beginning and a great example of how Voice is poised to change the way consumers shop. What prompted The Mars Agency to dive into frontier technology platforms to create custom solutions like Bottle Genius/Smart Aisle? We have a dedicated Innovation Lab for which the sole charge is to leverage emerging technology to make retail and shopping experiences better. We began investigating retail applications for voice assistance as it became increasingly present in consumer homes. As we watch the commerce landscape change, we're paying special attention to the shifts in shopper behaviors and expectations, so that we are always able to help our clients meet those shoppers wherever they are. Mars has a long history of innovation in marketing to shoppers, and because we are still independently owned and operated after 45 years, we have the agility to invest in frontier technology platforms, and in fact a history of doing so. Several startups have come out of Mars in the last 25 years (Triad Retail Media, Prize Logic, Collective Bias, to name a few), and we even operated a small, sidecar venture capital firm for a few years that invested in early stage marketing tech companies. Do you see the potential for SmartAisle to adapt to other types of retail? If so, are there any new projects you can share with us? Absolutely! SmartAisle makes sense anywhere shoppers have to make purchase decisions. We're helping customers eliminate the research step of their shopping journey, and instead conduct it live at a shelf in a matter of minutes, rather than at home or on their mobile phones. SmartAisle, in its current form, is particularly suited for more opaque categories, or categories where trying something new is common. We're starting to have conversations with mass retailers across CPG categories including beauty, health and wellness and home improvement. Do you see any other ways that voice will continue to innovate and transform the retail space? We're of the opinion that voice is truly the interface of the future. 
We think it will trump mobile as the preferred mode or interface for shoppers within the next five years. As technology advances there will be few things voice can't integrate into — we imagine a role for voice at every stage of the consumer's path to purchase, from adding to your shopping list in the car on the way to the store to re-ordering products through voice-enabled packaging, all of it is on the table right now. Bree will be speaking on the Monetization Strategies for Chatbots and Voice panel. You can see her and several other amazing panels on April 3rd at SuperBot 2018!
{ "redpajama_set_name": "RedPajamaC4" }
8,001
\section{Introduction} The standard model for compact radio sources is well established; energy generated close to the central black hole is streaming out in a jet-like structure \citep{b/z77, b/p82}. However, several of its tenets lack a firm physical underpinning. This includes the launching of the jet and the means of energy transport. A central question here is the material constituents of the plasma; i.e., whether it consists of electrons and protons or if there is a significant fraction of electron-positron pairs. Another issue is the process by which the synchrotron emitting particles get accelerated to relativistic energies. Diffusive shock acceleration, second order Fermi acceleration in a turbulent plasma or direct acceleration by the electric field generated in a reconnection process of the magnetic field have all been suggested to be the agent transferring energy to the radiating particles. Since the injection of particles into the acceleration process is likely to be different for these mechanisms, the low energy end of the particle distribution may be one way to distinguish between them. Unfortunately, optical depth effects hide the low energy electrons from direct view. Likewise, the presence of positrons can not be addressed by flux measurements alone, since their emitted flux is identical to that of the electrons. In contrast, both of these aspects of the plasma have a direct bearing on the observed circular polarisation. Although this was realised early on \citep[e.g.,][]{pac73}, the observed low level of circular polarisation made it hard to draw any strong conclusions regarding the plasma properties. However, it was noted that although the circular polarisation varied more rapidly and with larger relative amplitude as compared to either the flux or linear polarisation, it only rarely changed sign \citep{w/d83,kom84}. This suggests the presence of a large scale magnetic field. On the other hand, several theoretical arguments lead one to expect an important role for turbulence; e.g., in the acceleration process \citep{bla18,zhd18}. The connection between the large and small scale properties of the magnetic field is another issue where observations of circular polarisation have the potential to contribute significantly. The low level of observed circular polarisation narrowed down the type of questions that could be addressed. The increased accuracy with which circular polarisation can now be measured has opened up new possibilities \citep{mac00,ray00}. Although VLBI-observations are still challenging \citep{hom09}, spatially resolved studies of circular polarisation along the jet can be made \citep{war98,h/w04}. Furthermore, polarisation can be measured over a wide frequency range \citep{osu13} as well as at high frequencies \citep{agu17a}. In spite of this increase, both qualitatively and quantitatively, of the observations of circular polarisation, no clear understanding of its origin has emerged \citep{vit08,osu13}. Hence, its use as a plasma diagnostic is still limited. However, since the circular polarisation from the synchrotron emission process itself is quite simple, the observed rather complex behaviour suggests that transport effects may play a crucial role. The transfer equation for polarised light in a homogeneous medium has an analytical solution \citep[e.g.,][]{j/o77}. However, it is quite involved and various approximations have been put forth to facilitate comparison to observations. 
The aim of the present paper is threefold: (1) To present an alternative form of the homogeneous solution, which uses the concept of characteristic waves. This is an extension of the discussion in \cite{bjo88}. Such a formulation makes possible a more transparent and physical description of the polarisation of the emergent radiation. Furthermore, it is argued that even when the characteristic waves couple, this form of the solution can account, at least qualitatively, for some of the effects of inhomogeneities. (2) The high frequency observations in the POLAMI survey \citep{thu17,agu17b} and the detailed, wide band observations of \cite{osu13} are discussed. It is shown how they can be given a relatively straightforward explanation; in particular, that both indicate the presence of nearly circular characteristic waves. (3) It is pointed out that some of the approximations in common use have limited validity and, hence, should be applied with care. The outline of the paper is as follows: A short introduction to the transfer equation and the main properties of characteristic waves are given in Section\,\ref{sect2}. The formulation of the transfer equation for a light ray in terms of characteristic waves and its solution are presented in Section\,\ref{sect3}. The results for a homogeneous source are discussed in Section\,\ref{sect4}, where special attention is given to the two limits of nearly circular and nearly linear characteristic waves. Observations are discussed in Section\,\ref{sect5} and the main points of the paper are summarised in Section\,\ref{sect6}. \section{Polarisation transfer in a homogeneous medium} \label{sect2} Plane waves are the solution to Maxwell's equations in a homogeneous medium. Hence, instead of considering the propagation of a general electromagnetic field, it is sufficient to restrict attention to its Fourier components, $ \exp [i(\mathbf{k}\cdot \mathbf{r} - \omega t)]$. Here, $k = 2\pi/\lambda $ and $\omega = 2\pi \nu$, where $\lambda$ and $\nu$ are the wavelength and frequency, respectively. However, the plane waves do not correspond to the physically measurable electric and magnetic fields $\mathbf{E}$ and $\mathbf{B}$ but rather to $\mathbf{D}$ and $\mathbf{H}$, whose relations to the physical fields are determined by the properties of the medium. In a plasma relevant for synchrotron sources it is usually assumed that the permeability plays a negligible role so that $\mathbf{H} = \mathbf{B}$ and that the influence of the plasma can be written in component form as $D_{\rm l} = (\delta_{\rm l,m} + (4\pi i/\omega) \sigma_{\rm l,m})E_{\rm m}$, where $\sigma_{\rm l,m}$ is the dielectric tensor. The indices (l,m) run over all three spatial coordinate (x,y,z), i.e., (l,m = x,y,z) and a repeated index implies summation. Since $\mathbf{D \cdot k} = 0$, one finds \begin{equation} E_{\rm z} + \frac{4\pi i}{\omega} \sigma_{\rm z,m}E_{\rm m} = 0, \label{eq1} \end{equation} where $\mathbf{k}$ has been chosen to lie along the z-axis (see Figure\,\ref{fig1}). Since $| \sigma_{\rm l,m} | \sim c \kappa$, where $\kappa$ is the absorptivity of the plasma, the magnitude ratio between the longitudinal and transverse components of the electric field is $| E_{\rm z} / E_{\rm (x,y)}| \sim \kappa / k$. This is usually a very small number, i.e., the distance over which the radiation is absorbed is much larger than its wavelength. 
The weak anisotropy limit then corresponds to neglecting the longitudinal component of the electric field, in which case the the transfer equation can be written \begin{equation} \left(\frac{\partial}{\partial t} - c\frac{\mathbf{k}}{k}\cdot \frac{\partial}{\partial \mathbf{r}}\right) \left(\frac{\partial}{\partial t} + c\frac{\mathbf{k}}{k}\cdot \frac{\partial}{\partial \mathbf{r}}\right) \mathbf{E} = -4\pi \frac{\partial}{\partial t}\mathbf{J}, \label{eq2} \end{equation} where $J_{\rm l} = \sigma_{\rm l,m} E_{\rm m}$ (l,m = x,y,) is the current induced by the electric field. Since the magnitude of the RHS of equation (\ref{eq2}) is $\sim \omega c\kappa |E|$, it is seen that $kc/\omega \sim 1 + \kappa/ k$. The first operator on the LHS can then be evaluated to give $-2i \omega$, while the second operator corresponds to the comoving derivative in a frame moving with velocity $c$; i.e., $c\,{\rm d}/{\rm d}s$. Hence, without loss of accuracy, Equation (\ref{eq2}) can be written as a first order differential equation \begin{equation} \frac{\rm d}{{\rm d}s}\, E_{\rm l} = -\frac{2\pi}{c}\sigma_{\rm l,m}E_{\rm m}, \label{eq3} \end{equation} where $s$ is the distance along a ray path. The transfer equation in Equation (\ref{eq3}) can be rewritten as \begin{equation} \frac{\rm d}{{\rm d}s}\, E_{\rm l}E_{\rm j}^* = -\frac{2\pi}{c}\left(\sigma_{\rm l,m}E_{\rm m}E_{\rm j}^* + \sigma_{\rm j,m}^* E_{\rm m}^* E_{\rm l}\right), \label{eq4} \end{equation} where $(^*)$ denotes complex conjugate and (l,j,m = x,y). The transfer equation is usually written in terms of the Stokes parameters defined as $I = |E_{\rm x}|^2 + |E_{\rm y}|^2$, $Q = |E_{\rm x}|^2 - |E_{\rm y}|^2$, and $U +iV = 2E_{\rm x}E_{\rm y}^*$. It is straightforward to show that Equation (\ref{eq4}) is equivalent to the standard formulation. In the homogeneous case, Equation (\ref{eq4}) has an analytical solution \citep[e.g.,][]{j/o77}; however, it is rather complex. An alternative to the standard formulation is to start from Equation (\ref{eq3}). The Stokes parameters are then calculated only after the radiation has been transported through the medium rather than at the point of emission. Although the two methods are equivalent, as discussed briefly in \cite{bjo88}, the latter solution is helpful when trying to understand how the physical properties of the plasma affect the polarisation of the emerging radiation. The reason is that the standard solution is expressed in terms of the various plasma parameters (i.e., $\sigma_{\rm l,m}$), while the alternative solution is expressed in terms of the polarisation properties of the two characteristic waves ($K^{1,2}$) and their phase difference ($\Delta k$). \subsection{Characteristic waves}\label{sect2a} The dielectric tensor in Equation (\ref{eq3}) can be written \[ \sigma_{\rm l,m} = \frac{c\kappa}{4\pi}\left( \begin{array}{cc} 1 & \Upsilon_{\rm V} - i\Upsilon_{\rm L}\\ -\Upsilon_{\rm V} - i\Upsilon_{\rm L} & 1 \end{array} \right), \] where $\Upsilon_{\rm V} = \hat{\xi}_{\rm V} + i \xi_{\rm V}$ and $\Upsilon_{\rm L} = \hat{\xi}_{\rm U} + i \xi_{\rm U}$. The notation in this paper follows rather closely the one in \cite{j/o77}, except that in order to avoid confusion with the complex conjugate, ($\,\hat{}$\,) is used instead of ($^*$) to denote parameters accounting for the circular and linear birefringence of the plasma. 
All the $\xi$-parameters are normalised to the absorptivity; e.g., $\xi_{\rm V} = \kappa_{\rm V}/\kappa$ and $\xi_{\rm U} = \kappa_{\rm U}/\kappa$, where $\kappa_{\rm V}$ and $\kappa_{\rm U}$ are the absorption coefficients for the Stokes $V$ and $U$ parameters (see Appendix C). Furthermore, it proves convenient to use $\phi = -\pi /4$ (see Figure\,\ref{fig1}) instead of $\phi = 0$, as done in \cite{j/o77}, since this renders $K^1 = -K^2$ (see below). As a result, the roles played by the Stokes parameters $Q$ and $U$ interchange; e.g., synchrotron emission has no $Q$-component. Since $U+iV = 2E_{\rm x}E_{\rm y}^{*}$, this choice also brings forth the formal similarity between the linear and circular polarisation. The eigenvalues obtained by diagonalising $\sigma_{\rm l,m}$ are given by \begin{equation} \eta^{1,2} = \frac{c\kappa}{4\pi}\left(1 \mp i\sqrt{\Upsilon_{\rm V}^2 +\Upsilon_{\rm L}^2}\right) \label{eq5} \end{equation} and Equation (\ref{eq3}) can be solved directly for the two characteristic waves \begin{equation} \mathbf{E}^{1,2} = \mathbf{E}^{1,2}_{\rm o} \exp\left(-\frac{2\pi}{c}\eta^{1,2} s \right). \label{eq6} \end{equation} Their phase difference can be defined as \begin{eqnarray} \Delta k & = & -\frac{2\pi}{c}(\eta^1 - \eta^2) \nonumber\\ & = & i\kappa \sqrt{\Upsilon_{\rm V}^2 + \Upsilon_{\rm L}^2}. \label{eq7} \end{eqnarray} Likewise, the polarisations of the two characteristic waves, $K^{1,2} \equiv E_{\rm y}^{1,2}/E_{\rm x}^{1,2}$, are obtained as \begin{eqnarray} K^{1,2} & = & \mp \frac{\delta k}{\Upsilon_{\rm V} - i\Upsilon_{\rm L}} \nonumber\\ & = & \pm \sqrt{\frac{1-\rho}{1+\rho}}, \label{eq8} \end{eqnarray} where $\delta k = \Delta k/\kappa$ is the normalised phase difference and \begin{eqnarray} \rho & \equiv & i\frac{\Upsilon_{\rm V}}{\Upsilon_{\rm L}} \nonumber\\ & = & \frac{\hat{\xi}_{\rm V} \xi_{\rm U} - \hat{\xi}_{\rm U} \xi_{\rm V} + i(\xi_{\rm V} \xi_{\rm U} +\hat{\xi}_{\rm V} \hat{\xi}_{\rm U})} {\hat{\xi}_{\rm U}^2 + \xi_{\rm U}^2}. \label{eq9} \end{eqnarray} It should be noted that there is a sign ambiguity in Equations (\ref{eq7}) and (\ref{eq8}) when evaluating the square root. It is shown below that this sign always enters in the product of $K^{1,2}$ and $\Delta k$. Hence, the choice is physically unimportant as long as the same sign convention is used for both. It is sometimes claimed that the characteristic waves are orthogonal \citep[e.g.,][]{k/m98}, which implies that their polarisation vectors would point in opposite directions on the Poincar\'e sphere. The radiative transfer is then approximated as a rotation of the polarisation vector of the emitted radiation around this axis \citep{k/m98,r/b02}. The polarisations of the characteristic waves are orthogonal when $\mathbf{E^1} \cdot \mathbf{E^{2^*}} = 0$ or $1+ K^1K^{2^*} = 0$. Hence, it is seen from Equation (\ref{eq8}) that a necessary condition for the characteristic waves to be orthogonal is $|K^{1,2}| = 1$. Likewise, from Equation (\ref{eq8}) \begin{equation} |K^{1,2}|^4 = 1 - \frac{4\rho_{\rm r}}{1+ |\rho|^2 + 2\rho_{\rm r}}, \label{eq10} \end{equation} where the subscript "r" denotes the real part of $\rho$. In general then, the characteristic waves are not orthogonal and, hence, such a simplification should be used with care. However, one may note that when absorption is neglected, the characteristic waves will be orthogonal, since then $\rho_{\rm r} = 0$ (cf. Equation \ref{eq9}). 
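Since Equations (\ref{eq7})--(\ref{eq10}) are central to what follows, a minimal numerical sketch may be helpful; the Python snippet below computes $\rho$, $\delta k$, and $K^{1,2}$, and verifies the orthogonality measure of Equation (\ref{eq10}). The numerical values of the $\xi$-parameters are assumed for illustration only.
\begin{verbatim}
import numpy as np

def characteristic_waves(xi_V, xi_U, xihat_V, xihat_U):
    # Upsilon_V and Upsilon_L as defined above; all xi-parameters
    # are normalised to the absorptivity kappa.
    Ups_V = xihat_V + 1j * xi_V
    Ups_L = xihat_U + 1j * xi_U
    rho = 1j * Ups_V / Ups_L                  # Equation (9)
    dk = 1j * np.sqrt(Ups_V**2 + Ups_L**2)    # Equation (7), dk = Delta k/kappa
    K2 = -np.sqrt((1.0 - rho) / (1.0 + rho))  # Equation (8); K1 = -K2
    return rho, dk, K2

# Illustrative (assumed) values with strong circular birefringence:
rho, dk, K = characteristic_waves(xi_V=0.05, xi_U=1.0,
                                  xihat_V=30.0, xihat_U=1.0)

# Orthogonality measure, Equation (10); |K| = 1 (orthogonal waves)
# is recovered only when the real part of rho vanishes.
print(abs(K)**4,
      1.0 - 4.0 * rho.real / (1.0 + abs(rho)**2 + 2.0 * rho.real))
\end{verbatim}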
\section{Properties of the transfer equation}\label{sect3} Although the transfer equation is trivial to solve when using characteristic waves (i.e., Equation \ref{eq6}), there are a few aspects of the solution that need to be emphasised. The polarisation properties of radiation are normally given in terms of the Stokes parameters and the emissivity ($\epsilon$) is specified for each one of them. Hence, the initial conditions in Equation (\ref{eq6}), i.e., $\mathbf{E}^{1,2}_{\rm o}$, need to be related to the emissivities of the individual Stokes parameters. This involves two steps: (1) Equation (\ref{eq6}) presupposes $100 \%$ polarised radiation. The emissivities should therefore be divided into two $100 \%$ polarised waves. (2) Each of these waves is then written as a sum of the two characteristic waves. \subsection{Division into two characteristic waves}\label{sect3a} Consider a $100 \%$ polarised wave, which initially has an electric field $\mathbf{E_{\rm o}}$ with polarisation $K_{\rm o} = E_{\rm y,o}/E_{\rm x,o}$. Its division into the two characteristic waves $\mathbf{E}^{1,2}_{\rm o}$ yields \begin{eqnarray} E_{\rm x,o} & = & E^1_{\rm x,o} + E^2_{\rm x,o}\nonumber\\ K_{\rm o}E_{\rm x,o} & = & K^1E^1_{\rm x,o} + K^2E^2_{\rm x,o}, \label{eq11} \end{eqnarray} which can be solved to give \begin{eqnarray} E^1_{\rm x,o} & = & -E_{\rm x,o}\frac{K_{\rm o}-K^2}{K^2 - K^1}\nonumber\\ E^2_{\rm x,o} & = & E_{\rm x,o}\frac{K_{\rm o}-K^1}{K^2 - K^1}. \label{eq12} \end{eqnarray} The connection to the Stokes parameters is obtained from $|E_{\rm x,o}|^2 = (I_{\rm o} + Q_{\rm o})/2$ and $K^{*}_{\rm o} = (U_{\rm o} + iV_{\rm o})/2|E_{\rm x,o}|^2$. Without loss of generality $E_{\rm x,o}$ can be chosen to be real and one finds \begin{eqnarray} E^1_{\rm x,o} & = & \sqrt{\frac{I_{\rm o}}{8(1+q_{\rm o})}}\left(1+q_{\rm o} - \frac{u_{\rm o}- iv_{\rm o}} {K^2}\right)\nonumber\\ E^2_{\rm x,o} & = & \sqrt{\frac{I_{\rm o}}{8(1+q_{\rm o})}}\left(1+q_{\rm o} + \frac{u_{\rm o}- iv_{\rm o}} {K^2}\right), \label{eq13} \end{eqnarray} where $q_{\rm o}=Q_{\rm o}/I_{\rm o}$, $u_{\rm o}=U_{\rm o}/I_{\rm o}$, $v_{\rm o}=V_{\rm o}/I_{\rm o}$, and $K^1 = -K^2$ has been used. As the wave propagates through the plasma its components vary according to $E_{\rm x} = E^1_{\rm x} + E^2_{\rm x}$ and $E_{\rm y} = K^1E^1_{\rm x} + K^2E^2_{\rm x} = K^2(-E^1_{\rm x} + E^2_{\rm x})$, where now $E^{1,2}_{\rm x} = E^{1,2}_{\rm x,o} \exp(-\kappa s/2 \pm \Delta ks/2)$. With $U+iV = 2E_{\rm x}E^{*}_{\rm y}$, it is shown in Appendix A that after travelling a distance $s$, its circular polarisation is \begin{eqnarray} \lefteqn{V = I_{\rm o} \exp(-\kappa s)\left[ v_{\rm o} \left\{ \left(\frac{K_{\rm i}}{|K|}\right)^2 \cosh(\Delta k_{\rm r}s) +\left(\frac{K_{\rm r}}{|K|}\right)^2 \cos(\Delta k_{\rm i}s) \right\}\right.} \hspace{2.4cm} \nonumber\\ & & - u_{\rm o}\frac{K_{\rm i}K_{\rm r}}{|K|^2}\left\{\cosh(\Delta k_{\rm r}s) - \cos(\Delta k_{\rm i} s)\right\} \nonumber\\ & & + \frac{K_{\rm i}}{2}\left\{1 + \frac{1}{|K|^2} + q_{\rm o}\left(1- \frac{1}{|K|^2}\right)\right\} \sinh(\Delta k_{\rm r}s) \nonumber\\ & & + \left. \frac{K_{\rm r}}{2}\left\{1 - \frac{1}{|K|^2} + q_{\rm o}\left(1+ \frac{1}{|K|^2}\right) \right\}\sin(\Delta k_{\rm i}s) \right], \label{eq14} \end{eqnarray} where the subscripts "r" and "i" denote the real and imaginary parts, respectively, of a quantity. Furthermore, to simplify the notation, $K \equiv K^2$ has been introduced. 
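The decomposition in Equation (\ref{eq13}) translates directly into a few lines of code. The Python sketch below describes pure propagation through a homogeneous slab (no emission along the path; all input values are illustrative): it splits an initially polarised wave into the two characteristic waves according to Equation (\ref{eq13}), damps and phases each one individually according to Equation (\ref{eq6}), and recovers the Stokes parameters of the emerging wave; by construction, its $V$ output is the quantity given in Equation (\ref{eq14}).
\begin{verbatim}
import numpy as np

def propagate_stokes(I0, q0, u0, v0, K, dk, tau):
    # Split into the two characteristic waves, Equation (13);
    # here K = K^2, dk = Delta k/kappa and tau = kappa*s.
    A = np.sqrt(I0 / (8.0 * (1.0 + q0)))
    E1 = A * (1.0 + q0 - (u0 - 1j * v0) / K)
    E2 = A * (1.0 + q0 + (u0 - 1j * v0) / K)
    # Each characteristic wave propagates independently:
    E1 *= np.exp(-tau / 2.0 + dk * tau / 2.0)
    E2 *= np.exp(-tau / 2.0 - dk * tau / 2.0)
    # Recombine the field and read off the Stokes parameters:
    Ex = E1 + E2
    Ey = K * (-E1 + E2)               # K^1 = -K^2 has been used
    I = abs(Ex)**2 + abs(Ey)**2
    Q = abs(Ex)**2 - abs(Ey)**2
    UiV = 2.0 * Ex * np.conj(Ey)      # U + iV
    return I, Q, UiV.real, UiV.imag
\end{verbatim}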
There are a number of general features of the circular polarisation that are apparent from Equation (\ref{eq14}), which will also be relevant for a homogeneous source, i.e., when emission occurs along the ray path. The things to note for a synchrotron source are: (1) The resulting value of $V$ depends linearly on the initial Stokes parameters ($v_{\rm o}$, $u_{\rm o}$, and $q_{\rm o}$). Since Stokes parameters are additive, Equation (\ref{eq14}), although derived for a $100\%$ polarised wave, is also valid in general for a partially polarised wave (i.e., $v_{\rm o}^2 + u_{\rm o}^2 +q_{\rm o}^2 < 1$). The same is true for the other Stokes parameters. (2) The first term ($\propto v_{\rm o}$) corresponds to emission, while the second one ($\propto u_{\rm o}$) accounts for the conversion of linear to circular polarisation. This term is $\propto K_{\rm i}K_{\rm r}$, which implies a symmetric behaviour of $V$ for linear ($|K_{\rm i}| \ll 1$) and circular ($|K_{\rm r}| \ll 1$) characteristic waves (see Section \ref{sect4b} for further discussion of this issue). Although not obvious here, it will be shown later that the third and fourth terms correspond to absorption of the $U$ and $V$ parameters. Furthermore, the term $|K|^2 - 1$ explicitly shows the effects of the non-orthogonality of the characteristic waves. (3) It is seen from Equation (\ref{eq8}) that the magnitude of $\rho$ determines the polarisation properties of the characteristic waves; $|\rho| \ll 1$ corresponds to linearly polarised waves (i.e., $|K_{\rm i}| \ll 1$), while $|\rho| \gg 1$ corresponds to circularly polarised waves (i.e., $|K_{\rm r}| \ll 1$). (4) The magnitude of the transfer induced circular polarisation is largest when $|\rho| \sim 1$, since then $|K_{\rm r}| \sim |K_{\rm i}| \sim ||K|^2 -1| \sim 1$. An example of this can be seen in \cite{bjo90}, where the transition from linear to circular characteristic waves was discussed; in particular, it was shown that the degree of circular polarisation can reach several tens of percent, i.e., of the same order as the linear polarisation. It should also be noted that the overall magnitude of the circular polarisation is determined by the polarisation properties of the characteristic waves (i.e., $K$), while its variation with frequency/optical depth is mainly due to their phase difference (i.e., $\Delta k$). (5) It is seen that the sign chosen for $\sqrt{\Upsilon_{\rm V}^2 + \Upsilon_{\rm L}^2}$ is unimportant as long as it applies to both $K^{1,2}$ and $\Delta k$ (cf. Equation \ref{eq8}). \section{Circular polarisation from a homogeneous source}\label{sect4} The circular polarisation from a homogeneous source is obtained by integrating Equation (\ref{eq14}) from $s = 0$ to $s = s_{\rm max}$. This is done in Appendix B. Here, $s_{\rm max}$ is the thickness of the source so that its optical depth is $\tau = \kappa s_{\rm max}$. The observed circular polarisation in compact radio sources is usually of the order of one percent or smaller. Although inhomogeneities along a given sightline can severely affect the polarisation (this will be discussed in a forthcoming paper), the simplest explanation is that the physical conditions are such that either $|\rho| \ll 1$ or $|\rho| \gg 1$ (cf. the discussion in Section \ref{sect3a}). The plasma parameter with the least constrained value in compact radio sources is $\hat{\xi}_{\rm V}$. 
This is due to its sensitivity to two virtually unknown quantities, i.e., the number of low energy electrons (e.g., the low energy cut-off of the relativistic electrons) and the fraction of electron-positron pairs in the plasma (cf. Appendix C). Hence, the two limits of $|\rho|$ likely correspond to the two extremes $|\hat{\xi}_{\rm V}| \ll 1$ and $|\hat{\xi}_{\rm V}| \gg 1$. \subsection{Circular polarisation from nearly circular characteristic waves}\label{sect4a} It is convenient to write $\Delta k_{\rm r} s_{\rm max} \equiv \delta k_{\rm r} \tau$ and $\Delta k_{\rm i} s_{\rm max} \equiv \delta k_{\rm i} \tau$. When $|\Upsilon_{\rm V}| \gg |\Upsilon_{\rm L}|$, $|\rho| \gg 1$ and the characteristic waves are nearly circularly polarised (cf. Equation \ref{eq8}). In most cases this corresponds to $|\hat{\xi}_{\rm V}| \gg 1$. Expanding Equations (\ref{eq7}) and (\ref{eq8}) to lowest order in $|\rho|^{-1}$, one finds that $|K_{\rm r}|$, $|\delta k_{\rm r}|$, $|\delta k_{\rm i}|^{-1}$, and $||K|^2 - 1|$ are all $\sim |\rho|^{-1}$. Furthermore, let $v$ and $u$ denote the normalised $V$ and $U$ emissivities, respectively. Then $|v|$ and $|\xi_{\rm V}|$ are both small (cf. Appendix C). Assuming them to be of the same order of magnitude as $|\rho|^{-1}$, the solution in Appendix B can be expanded to lowest order in $\rho^{-1}$. This yields \begin{eqnarray} V & = &S\left[v\left\{1-\exp(-\tau)\right\} - uK_{\rm r}\left\{1-\exp(-\tau)\right\}\right.\nonumber\\ &+& \left. \delta k_{\rm r}\left\{1-\exp(-\tau)(1+ \tau)\right\} + uK_{\rm r}\frac{\sin(\delta k_{\rm i}\tau)}{\delta k_{\rm i}}\exp(-\tau)\right], \label{eq15} \end{eqnarray} where $S = \epsilon/\kappa$ is the source function. The relevant plasma parameters are $K_{\rm r} = -\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}$, $\delta k_{\rm r} = -(\xi_{\rm V} +\xi_{\rm U}\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}) = -\xi_{\rm V} + \xi_{\rm U} K_{\rm r}$, $\delta k_{\rm i} = \hat{\xi}_{\rm V}$, and $|K|^2 - 1 = -2\xi_{\rm U}/\hat{\xi}_{\rm V}$. With these expressions, it is straightforward to show that Equation (\ref{eq15}) is identical to the solution given in \cite{bjo88}. However, to illuminate the various physical mechanisms influencing the observed circular polarisation, a better representation of the solution is \begin{eqnarray} V &=& S\left[(v - \xi_{\rm V})\left\{1-\exp(-\tau)(1+\tau)\right\} + v\tau \exp(-\tau)\right.\nonumber\\ &-& \left. K_{\rm r}\left\{(u-\xi_{\rm U})\left\{1-\exp(-\tau)(1+\tau)\right\} + u\tau\exp(-\tau)\left(1 - \frac{\sin(\hat{\xi}_{\rm V}\tau)}{\hat{\xi}_{\rm V}\tau}\right)\right\}\right]. \label{eq16} \end{eqnarray} This shows explicitly the similarities between the $V$ and $U$ emissivities/absorptivities. For a thermal distribution of electrons, e.g., a relativistic Maxwellian, $v = \xi_{\rm V}$ and $u=\xi_{\rm U}$ \citep{j/h79}. Hence, it is the non-thermal aspect of the electron distribution which causes the change of sign in the circular polarisation at large optical depths. For a power-law distribution of relativistic electrons both $|v - \xi_{\rm V}|$ and $|u-\xi_{\rm U}|$ are quite a bit smaller than $|v|$ and $|u|$, respectively \citep{j/o77}. Furthermore, for small optical depths, the non-thermal terms both vary as $\tau^2$, while the $v$- and $u$-terms vary as $\tau$ (for the $u$-term, this is valid for $\tau > \hat{\xi}_{\rm V}^{-1}$). Therefore, it is expected that for most electron distributions, the major contributions to the integrated circular polarisation come from the $v$- and $u$-terms. 
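The behaviour of Equation (\ref{eq16}) as a function of optical depth is easy to explore numerically. The Python sketch below evaluates it directly; the values of the normalised emissivities and absorptivities are assumed for illustration only (physically motivated values are the subject of Appendix C).
\begin{verbatim}
import numpy as np

def V_circular_waves(tau, S, v, u, xi_V, xi_U, xihat_V, xihat_U):
    # Equation (16): V from a homogeneous source for nearly circular
    # characteristic waves (|rho| >> 1).
    Kr = -xihat_U / xihat_V                  # real part of K in this limit
    f = 1.0 - np.exp(-tau) * (1.0 + tau)     # weight of the non-thermal terms
    sinc = np.sin(xihat_V * tau) / (xihat_V * tau)
    return S * ((v - xi_V) * f + v * tau * np.exp(-tau)
                - Kr * ((u - xi_U) * f
                        + u * tau * np.exp(-tau) * (1.0 - sinc)))

# Illustrative (assumed) values; the non-thermal terms (v - xi_V,
# u - xi_U) produce the change of sign at large optical depth:
tau = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
print(V_circular_waves(tau, S=1.0, v=0.01, u=0.7,
                       xi_V=0.012, xi_U=0.65, xihat_V=50.0, xihat_U=1.0))
\end{verbatim}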
Although $q=0$ for synchrotron radiation, the $q$-term has been kept in the general solution given in Appendix B. The reason is to illustrate the nature of the conversion of linear to circular polarisation. It is sometimes said \citep{jon88, m/m18} that the conversion acts only on the Stokes parameter $Q$ and, hence, that the conversion in a synchrotron source occurs in two steps: first $U$ is converted to $Q$ through Faraday rotation and then $Q$ is converted to $V$. However, no $q$-term appears in Equation (\ref{eq16}). This implies that even if there were a $Q$-term, its contribution to the circular polarisation would be of order $\hat{\xi}_{\rm V}^{-2}$ and, hence, negligible. Another way of seeing the same thing is to consider the magnitude of the transfer induced circular polarisation, i.e., $|K_{\rm r}|$. Faraday rotation is $\propto \hat{\xi}_{\rm V}$ but $K_{\rm r} \propto \hat{\xi}_{\rm V}^{-1}$; i.e., larger Faraday rotation (larger $Q$) results in smaller circular polarisation. Hence, the name "Faraday conversion" often used for this process may be somewhat of a misnomer, since the conversion of $U$ to $V$ occurs directly without any intermediate steps. It is often assumed that absorption does not affect the conversion of $U$ to $V$ in the optically thin regime \citep[e.g.,][]{war98, ens03, osu13}. The solution to the transfer equation is then given by the Faraday conversion term, $V/I = u\tau_{\rm F}\tau_{\rm C}/6$, where $\tau_{\rm F} = \hat{\xi}_{\rm V}\tau$ and $\tau_{\rm C} = \hat{\xi}_{\rm U}\tau$. However, expanding Equation (\ref{eq16}) for small optical depths and using $I=S\tau$, one finds for the conversion of linear to circular polarisation, $V/I = (\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V})[\tau(u-\xi_{\rm U}) +u(1-\sin(\hat{\xi}_{\rm V}\tau)/\hat{\xi}_{\rm V}\tau)]$. A rapid rise in circular polarisation occurs at $\tau \sim \hat{\xi}_{\rm V}^{-1}$, so that for $\hat{\xi}_{\rm V} \tau > 1$, the leading term is $u\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}$. For $\hat{\xi}_{\rm V} \tau < 1$, the circular polarisation is substantially smaller, since it is determined by higher order terms. Among these is the Faraday conversion term, which is smaller by a factor $(\hat{\xi}_{\rm V} \tau)^2$ as compared to $u\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}$. Furthermore, the contribution from the non-thermal term ($\propto \tau (u-\xi_{\rm U})$) may become significant, since it decreases with decreasing optical depth slower than the Faraday conversion term ($\tau$ vs $\tau^2$). It should also be noted that Equations (\ref{eq15}) and (\ref{eq16}) are correct only to first order in $|\rho|^{-1} \sim |\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}|$. Hence, second order terms may dominate the Faraday conversion term ($|\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}|$ vs $(\hat{\xi}_{\rm V} \tau)^2$). In this frequency range, \cite{j/o77} assumed that the observed circular polarisation is unaffected by transfer effects and, instead, given directly by the emission process. Therefore, it is unlikely that neglect of absorption is a viable approximation even in the optically thin regime (cf., the discussion of the effects of absorption on the non-orthogonality of the characteristic waves at the end of Section \ref{sect2a}). In general then, the Faraday conversion term does not provide a good approximation to the transfer induced circular polarisation. This aspect of the circular polarisation is discussed further in Section \ref{sect4b}. 
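This breakdown of the Faraday conversion approximation is easy to demonstrate numerically. The sketch below compares the full $u$-term of Equation (\ref{eq16}) with $u\tau_{\rm F}\tau_{\rm C}/6$; the parameter values are assumed for illustration only. The two expressions agree while $\hat{\xi}_{\rm V}\tau \ll 1$ but diverge beyond $\tau \sim \hat{\xi}_{\rm V}^{-1}$, where the full term saturates at $\sim u\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}$.
\begin{verbatim}
import numpy as np

# Conversion of linear to circular polarisation for nearly circular
# characteristic waves: the full u-term of Equation (16) versus the
# Faraday conversion approximation u*tau_F*tau_C/6. Values assumed
# for illustration: xihat_V = 50, xihat_U = 1, u = 0.7.
xihat_V, xihat_U, u = 50.0, 1.0, 0.7
for tau in [1e-3, 1e-2, 5e-2, 0.2, 1.0]:
    full = (xihat_U / xihat_V) * u * (1.0 - np.sin(xihat_V * tau)
                                      / (xihat_V * tau))
    faraday = u * (xihat_V * tau) * (xihat_U * tau) / 6.0
    print(f"tau = {tau:6.3f}: full = {full:+.3e}, approx = {faraday:+.3e}")
# The two agree only while xihat_V*tau << 1; beyond tau ~ 1/xihat_V
# the full term saturates while the approximation keeps growing.
\end{verbatim}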
\subsection{Circular polarisation from nearly linear characteristic waves}\label{sect4b} When $|\Upsilon_{\rm V}| \ll |\Upsilon_{\rm L}|$, the characteristic waves are nearly linearly polarised, since $|\rho| \ll 1$. In this limit, $|K_{\rm i}|$ and $||K|^2-1|$ are both $\sim |\rho|$. Since $|\xi_{\rm U}| \sim 1$ and $|\hat{\xi}_{\rm U}|$ cannot be assumed to be large in general, this corresponds in most cases to $|\hat{\xi}_{\rm V}| \ll 1$. The rather simple form of Equations (\ref{eq15}) and (\ref{eq16}) is due mainly to the properties of the phase difference $\delta k$ (i.e., $|\delta k_{\rm r}| \ll 1$ and $|\delta k_{\rm i}| \gg 1$). Here, on the other hand, one finds to lowest order in $\rho$, $\delta k_{\rm r} = -\xi_{\rm U}$ and $\delta k_{\rm i} = \hat{\xi}_{\rm U}$, the magnitudes of which are both expected to be of order unity. This leads to a somewhat more complex expression for $V$. Expanding the solution in Appendix B to lowest order in $\rho$ yields \begin{eqnarray} V &=& S\left[\frac{(v - \xi_{\rm V} - q\hat{\xi}_{\rm U})}{1+\hat{\xi}_{\rm U}^2}\left\{1-\exp(-\tau)\left(\cos(\hat{\xi}_{\rm U}\tau) + \frac{\sin(\hat{\xi}_{\rm U}\tau)}{\hat{\xi}_{\rm U}}\right)\right\} + \frac{v\sin(\hat{\xi}_{\rm U}\tau) \exp(-\tau)}{\hat{\xi}_{\rm U}}\right.\nonumber\\ &+& K_{\rm i}\left\{\frac{(u-\xi_{\rm U})}{1-\xi_{\rm U}^2}\left(1-\exp(-\tau)\left(\cosh(\xi_{\rm U}\tau)+\frac{\sinh(\xi_{\rm U}\tau)}{\xi_{\rm U}}\right)\right) + \frac{u\sinh(\xi_{\rm U}\tau)\exp(-\tau)}{\xi_{\rm U}}\right\}\nonumber \\ &-& \left.K_{\rm i}\left\{\frac{(u - \xi_{\rm U})}{1+\hat{\xi}_{\rm U}^2}\left(1-\exp(-\tau)\left(\cos(\hat{\xi}_{\rm U}\tau) + \frac{\sin(\hat{\xi}_{\rm U}\tau)}{\hat{\xi}_{\rm U}}\right)\right) + \frac{u\sin(\hat{\xi}_{\rm U}\tau) \exp(-\tau)}{\hat{\xi}_{\rm U}}\right\}\right], \nonumber\\ \label{eq17} \end{eqnarray} where $K_{\rm i} = (\hat{\xi}_{\rm V}\hat{\xi}_{\rm U} + \xi_{\rm V}\xi_{\rm U})/(\hat{\xi}_{\rm U}^2 + \xi_{\rm U}^2)$ and $(|K|^2 - 1)/2 = (\xi_{\rm V}\hat{\xi}_{\rm U} - \hat{\xi}_{\rm V}\xi_{\rm U})/(\hat{\xi}_{\rm U}^2 + \xi_{\rm U}^2)$ (which leads to $\hat{\xi}_{\rm U}(|K|^2 - 1)/2 = \xi_{\rm V} - \xi_{\rm U} K_{\rm i}$) have been used. The structures of Equations (\ref{eq16}) and (\ref{eq17}) are rather similar. Although $|\delta k| \sim 1$ makes the variation of $V$ with $\tau$ more involved, their basic properties remain the same; e.g., the non-thermal terms (i.e., $v-\xi_{\rm V}$ and $u-\xi_{\rm U}$) are small compared to the $v$- and $u$-terms. They only become important at large optical depths, where they cause a change of sign. Furthermore, the amplitude of the conversion of linear to circular polarisation is determined by $K_{\rm i}$ rather than $K_{\rm r}$. This formal similarity between Equations (\ref{eq16}) and (\ref{eq17}) is due to the symmetric expressions of $\delta k$ and $K$ in the two limits (cf. Equations \ref{eq7} and \ref{eq8}). It is seen in Appendix B that the two limits can be related by just interchanging $\Upsilon_{\rm L}$ and $\Upsilon_{\rm V}$. Hence, several of the main properties, which derive from the birefringence of the plasma, can be obtained by interchanging $\hat{\xi}_{\rm U}$ and $\hat{\xi}_{\rm V}$. One example is the term giving the major contribution to the circular polarisation from conversion of linear polarisation. 
In Equation (\ref{eq16}) it is $\propto 1-\sin(\hat{\xi}_{\rm V}\tau)/\hat{\xi}_{\rm V}\tau$, while in Equation (\ref{eq17}) the corresponding expression is $\propto \sinh(\xi_{\rm U}\tau)/\xi_{\rm U}\tau - \sin(\hat{\xi}_{\rm U}\tau)/\hat{\xi}_{\rm U}\tau$. For nearly circular characteristic waves, this term causes a rapid rise in circular polarisation at $\tau \sim |\hat{\xi}_{\rm V}|^{-1}$ (cf. the discussion in Section \ref{sect4a}). Likewise, for nearly linear characteristic waves, this increase occurs at $\tau \sim |\hat{\xi}_{\rm U}|^{-1}$. The major difference is the values of $|\hat{\xi}_{\rm V}|$ and $|\hat{\xi}_{\rm U}|$ in the two limits. As already mentioned, they are expected to be quite different; while $|\hat{\xi}_{\rm V}| \gg 1$ for circular characteristic waves, $|\hat{\xi}_{\rm U}|$ may not be much larger than unity for linear characteristic waves. Therefore, the observed circular polarisation is expected to come from, on average, larger optical depths for nearly linear as compared to nearly circular characteristic waves. The transition between these two limits was discussed in \cite{bjo90}. It was shown that the optical depth where the circular polarisation peaks decreases smoothly from $\tau >1$ to $\tau <1$ as the characteristic waves change from linear to circular \citep[see also][for the latter limit]{j/o77}. Physically, this can be understood as follows: When there are few low energy electrons, e.g., a power-law distribution of electrons with a low energy cut-off close to the energy corresponding to the synchrotron self-absorption frequency, $|\hat{\xi}_{\rm V}|\ll|\hat{\xi}_{\rm U}|$ and the characteristic waves are linearly polarised. As the low energy cut-off decreases, the magnitudes of both $\hat{\xi}_{\rm V}$ and $\hat{\xi}_{\rm U}$ increase. The value of $\hat{\xi}_{\rm V}$ increases much faster than that of $\hat{\xi}_{\rm U}$, which causes the characteristic waves to change from linear to circular. At the same time, as the value of $\hat{\xi}_{\rm U}$ increases, so does the relative contribution to $V$ from the optically thin part of the spectrum (i.e., corresponding to $\tau \gtrsim |\hat{\xi}_{\rm U}|^{-1}$). This shows that the frequency distribution of the circular polarisation is expected to be quite different for nearly linear and nearly circular characteristic waves. Basically, this is due to the very different values of $\delta k_{\rm i}$ in the two cases; i.e., it is a consequence of the increase in circular polarisation at $\tau \sim |\delta k_{\rm i}|^{-1}$ together with an increasing value of $|\delta k_{\rm i}|$ as the characteristic waves change from nearly linear to nearly circular (cf. Equations \ref{eq7} and \ref{eq8}). The use of this property to distinguish observationally between linear and circular characteristic waves is discussed further in Section \ref{sect5}. For nearly circular characteristic waves, the frequency range where $1> \tau> |\hat{\xi}_{\rm V}|^{-1}$ should be rather large. On the other hand, the corresponding frequency range for nearly linear characteristic waves is expected to be much smaller. Hence, $\tau< |\hat{\xi}_{\rm U}|^{-1}$ may dominate the optically thin region. 
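For completeness, Equation (\ref{eq17}) can be evaluated in the same manner as Equation (\ref{eq16}); the Python sketch below does this directly (all input values are assumed for illustration, and $|\xi_{\rm U}| \neq 1$ is required by the $1-\xi_{\rm U}^2$ denominator) and exhibits the rise of the conversion term around $\tau \sim |\hat{\xi}_{\rm U}|^{-1}$ discussed above.
\begin{verbatim}
import numpy as np

def V_linear_waves(tau, S, v, u, q, xi_V, xi_U, xihat_V, xihat_U):
    # Equation (17): V from a homogeneous source for nearly linear
    # characteristic waves (|rho| << 1); requires |xi_U| != 1.
    Ki = (xihat_V * xihat_U + xi_V * xi_U) / (xihat_U**2 + xi_U**2)
    osc = 1.0 - np.exp(-tau) * (np.cos(xihat_U * tau)
                                + np.sin(xihat_U * tau) / xihat_U)
    hyp = 1.0 - np.exp(-tau) * (np.cosh(xi_U * tau)
                                + np.sinh(xi_U * tau) / xi_U)
    term1 = ((v - xi_V - q * xihat_U) / (1.0 + xihat_U**2) * osc
             + v * np.sin(xihat_U * tau) * np.exp(-tau) / xihat_U)
    term2 = Ki * ((u - xi_U) / (1.0 - xi_U**2) * hyp
                  + u * np.sinh(xi_U * tau) * np.exp(-tau) / xi_U)
    term3 = Ki * ((u - xi_U) / (1.0 + xihat_U**2) * osc
                  + u * np.sin(xihat_U * tau) * np.exp(-tau) / xihat_U)
    return S * (term1 + term2 - term3)

# Illustrative (assumed) values; note the rise around tau ~ 1/xihat_U:
tau = np.array([0.3, 1.0, 3.0, 10.0])
print(V_linear_waves(tau, S=1.0, v=0.01, u=0.7, q=0.0,
                     xi_V=0.012, xi_U=0.65, xihat_V=0.05, xihat_U=1.5))
\end{verbatim}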
Expanding Equation (\ref{eq17}) to lowest order in $\hat{\xi}_{\rm U}\tau$ gives \begin{eqnarray} \frac{V}{I} &=& v(1-\tau) +(v-\xi_{\rm V})\frac{\tau}{2} + K_{\rm i}u(\xi_{\rm U}^2 +\hat{\xi}_{\rm U}^2)\frac{\tau^2}{6} - q\hat{\xi}_{\rm U}\frac{\tau}{2}\nonumber \\ &=& v(1-\tau) +(v-\xi_{\rm V})\frac{\tau}{2} +u\xi_{\rm V}\xi_{\rm U}\frac{\tau^2}{6} + u\hat{\xi}_{\rm V}\hat{\xi}_{\rm U}\frac{\tau^2}{6} - q\hat{\xi}_{\rm U}\frac{\tau}{2}. \label{eq18} \end{eqnarray} It is seen that the Faraday conversion term appears in Equation (\ref{eq18}), $u\hat{\xi}_{\rm V}\hat{\xi}_{\rm U}\tau^2/6 = u\tau_{\rm F}\tau_{\rm C}/6$. Since $\tau_{\rm F}\tau_{\rm C} = (\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V})(\hat{\xi}_{\rm V}\tau)^2 = (\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U})(\hat{\xi}_{\rm U}\tau)^2$, this appearance is another example of the symmetry between nearly circular and nearly linear characteristic waves. If this term were the dominant one, the name "Faraday conversion" would be appropriate in this limit. However, as discussed already in Section \ref{sect4a}, this is unlikely, since (1) it is really a third order term in the sense that both $\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U}$ and $\hat{\xi}_{\rm U}\tau$ are much smaller than unity and (2) keeping also second order terms in the expansion parameter (i.e., $\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U}$) could give contributions to $V$ larger than the Faraday conversion term. In contrast to circular characteristic waves, Equation (\ref{eq17}) shows that a non-synchrotron $q$-term can affect the circular polarisation. Formally, the $q$-term is similar to the Faraday conversion term, since, roughly, $U\hat{\xi}_{\rm V}\tau/2$ is the $Q$-value produced by Faraday rotation of the synchrotron $U$-emission. Such an additional source of linearly polarised emission can give a significant contribution to $V$, since there are no restrictions on the value of $q$; for example, as shown by \cite{hod82}, this term can dominate the observed circular polarisation in inhomogeneous sources. \section{Discussion}\label{sect5} The degree of circular polarisation observed in compact radio sources varies but it is rarely larger than $\sim 1\,\%$. This is roughly consistent with the level expected directly from the synchrotron emission process (see Appendix C). However, as discussed in the Introduction, there are reasons to believe that the observed circular polarisation is also affected by transport effects. This opens up a way to gain more detailed information about the source properties than is possible from the flux alone. As mentioned in Section \ref{sect4}, the rather low level of circular polarisation makes it likely that the characteristic waves are either nearly linearly or nearly circularly polarised. It is important to be able to distinguish between the two, since this has implications for some of the most central issues regarding the properties of compact radio sources; e.g., the presence of electron-positron pairs and the acceleration process of the radiating particles. The flat spectrum of compact radio sources has been called a "cosmic conspiracy" by \cite{cot80}. \cite{b/k79} showed that a class of models in which relativistic electrons stream out in a jet with constant opening angle could account for the observations under two conditions: (1) The adiabatic losses of the electrons are compensated by a continuous re-acceleration so that their low energy cut-off stays constant. (2) The strength of the magnetic field varies inversely with radius. 
This leads to a constant brightness temperature along the jet. Such an inhomogeneous jet has a self-similar structure which implies no change of polarisation with frequency, since the parameters in the transfer equation stay constant along the jet. Inhomogeneous sources can, roughly, be divided into two classes: (1) The source is homogeneous along each sightline but the source properties (e.g., the optical depth) vary between different sightlines. In its original form, the Blandford/K\"{o}nigl-jets belong to this class. (2) The source properties vary along a given sightline, e.g., due to turbulence. The first class of sources can be seen as a superposition of many homogeneous sources. In practice, the polarisation properties are obtained by integrating the results in Section \ref{sect4} over the appropriate range of parameter values. On the other hand, the polarisation properties of the second class of sources are more complicated to calculate, since, here, the characteristic waves do not propagate individually, i.e., they couple. In the present paper, it is assumed that the first kind of source model is sufficient to discuss the polarisation properties of compact radio sources. The effects of coupling of the characteristic waves will be treated in a forthcoming paper. Although the neglect of coupling could be seen as a serious limitation to the validity of the conclusions, this may not be so, at least not qualitatively. The reason is the following: It was mentioned in Section \ref{sect3a} that the amplitude of the circular polarisation is determined mainly by the polarisation properties of the characteristic waves ($K$), while its variation with frequency/optical depth is determined in large part by their phase difference ($\Delta k$). It is seen from Equation (\ref{eq3}) that the accumulated phase difference along a ray path does not depend on whether the medium is homogeneous or not. The coupling between the characteristic waves is due to variation of the local value of $K$ along the ray path. As a result, the coupling is expected to affect mainly the amplitude of the circular polarisation and less its frequency/optical depth dependence. This can be seen explicitly in \cite{bjo90}, where the circular polarisation from a homogeneous medium is compared to that emerging from a medium in which coupling is important. Therefore, the discussion below focuses on the frequency/optical depth dependence of the circular polarisation as a way to distinguish between nearly circular and nearly linear characteristic waves. For flat spectrum radio sources, a substantial frequency dependence of the polarisation is expected only in the region around the spectral turn-over, where the emission becomes optically thin. This occurs normally at rather large frequencies ($\sim 100\,$GHz) and it has only recently become possible to obtain high quality observations of the circular polarisation in this range for a fair number of sources \citep{thu17}. However, not all compact radio sources conform to the standard Blandford/K\"{o}nigl-jet model. Gigahertz-Peaked Spectrum sources are a class of objects which have lower turn-over frequencies ($\sim\,$few GHz) as well as a spectrum declining towards lower frequencies. It is clear that these sources are inhomogeneous, since, normally, their spectra are quite a bit flatter than the characteristic $\nu^{5/2}$-spectrum of homogeneous sources. Hence, they are expected to show frequency dependent polarisation also in the optically thick part of the spectrum. 
A good example of such a source is PKS B2126-158 \citep{osu13}. \subsection{The POLAMI survey}\label{sect5a} In the POLAMI survey a large number of compact radio sources have been observed multiple times at 3 and 1.3\,mm \citep{agu17a}. The spectral index ($\alpha$) indicates that the flux is mostly optically thin radiation. However, there is a tendency for the spectrum to flatten when the flux increases \citep{agu17b}. This suggests that the turn-over frequency (i.e., $\tau \sim 1$) is, on average, close to 3\,mm. This sample is then a good starting point for a discussion of the origin of the circular polarisation. An important finding is that the maximum amplitude of circular polarisation is higher at 1.3\,mm (2.6\,\%) as compared to 3\,mm (2.0\,\%) \citep{thu17}. Furthermore, both of these values are, in turn, substantially larger than those found by others at longer wavelengths (i.e., optically thick frequencies). There are two implications from these observations which both suggest the presence of nearly circular characteristic waves. As shown in Section \ref{sect4}, the observed peak of the degree of circular polarisation in the optically thin regime is consistent with nearly circular characteristic waves but hard to reconcile with nearly linear characteristic waves. Also, in an inhomogeneous source, the polarisation at optically thick frequencies corresponds to an average over a range of optical depths. The sign change of the circular polarisation at large optical depths (due to the $u-\xi_{\rm U}$ term) is similar for both circular and linear characteristic waves. However, the relative contribution to the circular polarisation from this non-thermal term is larger for nearly circular characteristic waves as compared to the nearly linear ones, since $K_{\rm r} \propto \nu^{-1}$, while $K_{\rm i}$ scales roughly as $\nu$ (cf. Equations \ref{eq16} and \ref{eq17}). Hence, the relative increase of the circular polarisation between the optically thick and thin parts of the spectrum should be larger for nearly circular characteristic waves as compared to nearly linear ones. In the standard jet model, the spread in optical depth in the azimuthal direction is rather small and results in an averaging of possible rapid variations on small scales; cf. the integration over a thin shell done in \cite{j/o77}. Hence, observations of an unresolved source are determined mainly by the radial variations of the jet properties. For nearly circular characteristic waves, the circular birefringence is large (i.e., $|\hat{\xi}_{\rm V}|\gg 1$) and the main contribution to the linearly polarised flux comes from small optical depths, $\tau \sim |\hat{\xi}_{\rm V}|^{-1}$. Let $R_{\rm o}$ be the radius where $\tau = 1$ for some frequency $\nu$ and $\hat{\xi}_{\rm V, o}$ the corresponding value of $\hat{\xi}_{\rm V}$. With $B\propto R^{-1}$, the radial variation of the optical depth is $\tau = (R/R_{\rm o})^{-(5+2\alpha)/2}$ and $\hat{\xi}_{\rm V}= \hat{\xi}_{\rm V,o}(R/R_{\rm o})^{(1+2\alpha)/2}$. The radius where most of the linearly polarised flux is emitted ($R_{\rm L}$) is then obtained from $\tau |\hat{\xi}_{\rm V}|\sim 1$ as $R_{\rm L} \sim R_{\rm o} \hat{\xi}_{\rm V,o}^{1/2}$. Furthermore, the corresponding Stokes parameters are $U_{\rm L} \sim |Q_{\rm L}| \sim uS_{\rm o} \hat{\xi}_{\rm V,o}^{-\alpha/2}$, where $S_{\rm o}$ is the source function at $R_{\rm o}$. 
Since, in this limit, $U_{\rm L} \sim |Q_{\rm L}|$, small variations of the radial jet properties could give rise to rather large variations in the polarisation angle; in particular, this may account for the observed lack of a preferred polarisation angle in many sources \citep{agu17b}. If conversion from linear polarisation contributes significantly to the observed circular polarisation, one deduces $|\hat{\xi}_{\rm V,o}/\hat{\xi}_{\rm U,o}|\sim 10^2$ (cf. Equation \ref{eq16}). Assuming $|\hat{\xi}_{\rm U,o}| \sim 1$, one finds $R_{\rm L} \sim 10 R_{\rm o}$, i.e., the linearly polarised flux comes from a radius much larger than that of either the total or the circularly polarised flux. In line with observations, this implies that variations in linear polarisation should have a longer time scale than, and correlate only weakly with, those in circular polarisation or total flux. Furthermore, with $\alpha \approx 1$, the depolarisation would also be $\sim10$, which shows that Faraday rotation could be responsible for a large fraction of the observed depolarisation of the linear flux. The total and circularly polarised fluxes come, roughly, from the same region (i.e., $\tau \sim 1$). However, their sensitivities to changes in the various plasma parameters are very different. Not only does the circular polarisation vary more rapidly with optical depth than the total flux but, most importantly, the circular polarisation is sensitive to variations in plasma parameters that leave the total flux unaffected. As an example, for nearly circular characteristic waves, the magnitude of the circular polarisation due to conversion from linear polarisation is $\sim\hat{\xi}_{\rm U}/\hat{\xi}_{\rm V} \propto \gamma_{\rm i}^3/\ln \gamma_{\rm i}$ (Appendix C), where $\gamma_{\rm i}$ is the Lorentz factor at the lower cut-off in the energy distribution of the relativistic electrons. The more rapid and uncorrelated variations of the circular flux as compared to the total flux observed by POLAMI could then come from small changes in $\gamma_{\rm i}$. \subsection{PKS B2126-158} \label{5b} \cite{osu13} have presented high quality, multi-frequency polarisation measurements of PKS B2126-158, which has a turn-over frequency at 5.7 GHz. The source is inhomogeneous, since it has an inverted spectrum below this frequency (roughly $\propto \nu$). This makes it an ideal object for frequency dependent polarisation studies; in particular, in contrast to the flat spectrum sources, frequency dependent polarisation is expected in the optically thick part of the spectrum. The circular polarisation peaks at a frequency above the turn-over frequency, indicating nearly circular characteristic waves. Several of its properties are as expected for a homogeneous source with $|\hat{\xi}_{\rm V}|\gg 1$ \citep[see][]{j/o77}; for example, a broad minimum in the degree of linear polarisation coincides with the maximum in circular polarisation and there is a clear indication of a $\sim 90^{\circ}$ swing in the polarisation angle in the optically thick part of the spectrum (i.e., $Q$ changes sign). However, there are two aspects of the observations which do not fit with a homogeneous source, namely, the lack of a sign change of the circular polarisation in the optically thick part of the spectrum and the apparently smooth $\sim 90^{\circ}$ swing in the polarisation angle rather than an abrupt flip. 
In order to see how these can be accounted for by an inhomogeneous source structure, a few of its properties need to be considered. The range of optical depths in an inhomogeneous source, which contributes to the flux at a given frequency, depends on the slope in the optically thick part of the spectrum. For a flat spectrum, the polarisation is independent of frequency and no change of sign is observed in either $Q$ or $V$. As the spectrum becomes more inverted, the relative importance of the large optical depths increases. Hence, for some value of the slope, sign changes will be observed for $Q$ and/or $V$. For $\tau |\hat{\xi}_{\rm V}| > 1$, it can be shown that in a homogeneous source $Q = S[\xi_{\rm U}(1-\exp(-\tau)) - u]/\hat{\xi}_{\rm V}$. As compared to the circular polarisation in Equation (\ref{eq16}), there are two differences: (1) The sign change in $Q$ occurs at smaller optical depth than the corresponding change for $V$. (2) Since $Q \propto \hat{\xi}_{\rm V}^{-1}$ and $V \propto \hat{\xi}_{\rm U}/\hat{\xi}_{\rm V}$, the amplitude of $Q$ decreases with frequency somewhat faster than that of $V$ ($\nu^{-1.2}$ vs $\nu^{-1}$, where $\alpha = 0.7$ has been used). Hence, the relative importance of large optical depths is larger for $Q$ than for $V$. Both of these effects cause the sign change in $Q$ to occur at a higher frequency than for $V$. In a forthcoming paper, it will be discussed how the observed change of sign in $Q$ but not in $V$ can be made consistent with the observed spectrum. In general then, sign changes in $V$ and $Q$ in inhomogeneous sources depend on the slope in the optically thick part of the spectrum. Observations with high spatial resolution may resolve some of the inhomogeneities and, hence, make it more likely to find such sign changes. This could be the case for the VLBA-observations of NGC\,1275 (3C\,84), where the sign of the circular polarisation changed between the optically thick and thin parts of the source \citep{h/w04}. Unfortunately, no linear polarisation was detected so the expected concurrent sign change in $Q$ could not be established. In contrast to the circular polarisation, the value of $Q$ is determined by contributions from two very different regions in the jet. The optically thin emission comes from radii much larger than that at $\tau \sim 1$, which emits $Q$-flux with opposite sign. As the spectrum becomes increasingly inverted, the relative contribution to $Q$ from the optically thin emission goes down. As mentioned above, $U \sim |Q|$ for this component even when the total $Q$ changes sign and, hence, the value of $U$ will be non-negligible. This causes the $90^{\circ}$ flip in position angle observed in a homogeneous source to be replaced by a smooth $90^{\circ}$ swing in an inhomogeneous source. \subsection{Observational implications} \label{sect5c} Both the POLAMI sample and the detailed observations of PKS B2126-158 are most straightforwardly understood for characteristic waves which are nearly circularly polarised. This conclusion rests on the observed frequency distribution of the circular polarisation and implies $|\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U}| \gg 1$. Its actual value is harder to estimate, since, as discussed above, the magnitude of the circular polarisation may be seriously affected by inhomogeneities along various lines of sight. 
However, the properties of the linear polarisation in the POLAMI sample can be accounted for by a value of $|\hat{\xi}_{\rm V}|$ consistent with only minor contributions from inhomogeneities. Assuming this to be the case, the value of $|\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U}|\sim 10^2$ can be used to constrain the properties of the synchrotron plasma. In addition to the magnetic field direction, when the frequency dependence of the transfer coefficients is normalised to the turn-over frequency, there are two free parameters (see Appendix C); namely, $\gamma_{\rm min}$ and the number of electron-positron pairs ($n_{\rm p}$) relative to the excess number of electrons ($n_{\rm exc}$). With $|\hat{\xi}_{\rm V}/\hat{\xi}_{\rm U}|\sim 10^2$, one finds $(\gamma_{\rm min}^3/\ln \gamma_{\rm min})(1+2n_{\rm p}/n_{\rm exc}) \sim 10^2$ (Equation \ref{c4}). Although the presence of nearly circular characteristic waves by itself is enough to show that $\gamma_{\rm min}$ is much below that corresponding to the turn-over frequency (i.e., $\gamma_{\rm min} \ll \gamma_{\rm abs} \approx 10^2$, see Appendix C), observations allow a fair fraction of electron-positron pairs. An upper limit from the relativistic particles is obtained for $\gamma_{\rm min} \sim 1$, i.e., the particles are injected into the acceleration process with trans-relativistic energies. This gives $n_{\rm p}/n_{\rm exc}\lesssim 10^2$. The emission coefficient for the circular polarisation depends on $n_{\rm exc}/n_{\rm p}$ but not $\gamma_{\rm min}$. Hence, the degeneracy between the two can be broken by direct observation of the circular polarisation intrinsic to the synchrotron process. However, this may require observations in the frequency range corresponding to $|\hat{\xi}_{\rm V}|\tau \lesssim 1$ (see also below). The conversion of linear to circular polarisation is often described by the Faraday conversion term $u\tau_{\rm F}\tau_{\rm C}/6$, which has a very steep frequency dependence ($\propto \nu^{-5}$). Since observations indicate a more modest frequency dependence of the circular polarisation, this has limited more detailed modelling of the source properties \citep[e.g.,][]{osu13,thu17}. However, it was shown in Section \ref{sect4} that this term is unlikely to significantly affect the observed circular polarisation. Instead, as argued above, the use of the full solution to the transfer equation allows a rather direct interpretation of the observations. Nearly circular characteristic waves imply large Faraday depths over a wide range of frequencies. The apparent lack of observed Faraday rotation has been used to argue, instead, that the characteristic waves are linearly polarised \citep{war77}. Although, in the standard jet model, polarisation in the flat, optically thick part of the spectrum should be constant, Faraday rotation is expected in the optically thin part. However, even here, the polarisation angle should remain constant until the transition to the Faraday thin regime occurs (i.e., $|\hat{\xi}_{\rm V}|\tau \sim 1$). With the source parameters deduced above from the observed circular polarisation (e.g., $|\hat{\xi}_{\rm V}| \sim 10^2$), this transition takes place at a frequency a factor $|\hat{\xi}_{\rm V}|^{1/2} \sim 10$ larger than the turn-over frequency. Accurate polarisation measurements may be hard to obtain at such frequencies. 
Furthermore, the change in position angle should be smaller than for a homogeneous source. Since $U \sim |Q|$, the change in position angle is expected to be $\sim \pi/8$ rather than $\sim \pi/4$ for a homogeneous source. \section{Conclusions}\label{sect6} The transfer equation of polarised light in a homogeneous medium can be solved analytically. However, the standard solution is complex and observations are usually discussed in terms of various approximations. The main conclusions in the present paper are: 1) The use of characteristic waves allows an alternative way of expressing the transfer equation. The solution is more compact and transparent regarding the physical mechanisms determining the emerging polarisation than in the standard formulation. 2) The frequency dependence of the circular polarisation is a direct way of establishing the properties of the characteristic waves. 3) High quality observations of circular polarisation in compact radio sources indicate that the characteristic waves are nearly circularly polarised. This provides, for example, an upper limit to the fraction of electron-positron pairs. 4) Several of the approximations in common use have limited applicability; for example, it is shown that the Faraday conversion term is unlikely to have a significant impact on the observed circular polarisation. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
5,502
{"url":"https:\/\/wiki.kidzsearch.com\/wiki\/Isle_of_Man","text":"kidzsearch.com > wiki\n\n# Isle of Man\n\nIsle of Man\nEllan Vannin or Mannin\n Coat of arms\nMotto:\u00a0Quocunque Jeceris Stabit\u00a0\u00a0(Latin)\n\"Whithersoever you throw it, it will stand\"[1]\nAnthem:\u00a0O Land of Our Birth\nArrane Ashoonagh dy Vannin\u00a0\u00a0(Manx)\nRoyal anthemGod Save the Queen\nLocation of \u00a0Isle of Man\u00a0\u00a0(red)\n\nin the Irish Sea (Manx Sea) between England\u00a0\u00b7 Scotland\u00a0\u00b7 Wales and Northern Ireland\u00a0\u00a0(dark grey)\n\nCapital\nand largest city\nDouglas (Doolish)\n54\u00b009\u2032N 4\u00b029\u2032W\ufeff \/ \ufeff54.15\u00b0N 4.483\u00b0W\nOfficial languages\nDemonym Manx\nGovernment Constitutional Monarchy\nBritish Crown Dependencya\n-\u00a0 Lord of Mann Elizabeth II\n-\u00a0 Chief Minister Allan Bell\nLegislature Tynwald\n-\u00a0 Upper house Legislative Council\n-\u00a0 Lower house House of Keys\nStatus\n-\u00a0 Lordship of Mann revested in British crown\n1765\nArea\n-\u00a0 Total 572\u00a0km2 (196th)\n221\u00a0sq\u00a0mi\n-\u00a0 Water\u00a0(%) 0\nPopulation\n-\u00a0 estimate 84,655 (202nd)\n-\u00a0 Density 148\/km2 (77th)\n362.4\/sq\u00a0mi\nGDP\u00a0(PPP) 2010\u00a0estimate\n-\u00a0 Total $2.113 billion (162nd) - Per capita$35,000 (27th)\nGini41[2]\nmedium\nHDI (2010)0.849[3]\nvery high\u00a0\u00b7 14th\nCurrency Official currency is the Manx pound. The Pound sterling is also used. (GBP)\nTime zone GMT (UTC+0)\n-\u00a0 Summer\u00a0(DST) \u00a0(UTC+1)\nDrives on the left\nCalling code +44b\nISO 3166 code IM\nInternet TLD .im\na. Parliamentary democracy under constitutional monarchy.\nb. +44 1624 (landline) area code\n+44 7524 \/ 7624 \/ 7924 (mobile)\n\nThe Isle of Man (Manx: Ellan Vannin) is an island in the Irish Sea, off the coast of Great Britain (of which it is a crown dependency). Douglas is the capital city. It also has a flag with a red background and 3 armoured legs joined together - \"whichever way you throw us, we always land on our feet\".\n\nIt has a Parliament called Tynwald. It is the longest running parliament in the world.[source?]\n\n## Government\n\nThe Isle of Man is a Crown dependency. Foreign affairs, defence, and good government are handled by the British government, but in all other matters the island is independent. The Isle of Man Government is the executive and proposes laws to the legislature, Tynwald. Laws passed by Tynwald are given royal approval by the Lieutenant Governor unless the British Minister of Justice says they do not help the good government of the island.\n\n## Geography\n\nMap of the Isle of Man.\n\nThe Isle of Man is an island in the Irish Sea, it is northwest of the European continent. It is between the United Kingdom and Ireland. The island is 22\u00a0km wide and 52\u00a0km long, it has a total area of 572\u00a0km\u00b2.[4] The Isle of Man has a total of 160\u00a0km of coastline, it has no important bodies of water. Apart from the island itself, the Isle of Man also includes some nearby islands. The most important of these islands are called Calf of Man, St Patrick's Isle and St Michael's Isle.\n\nThe island's terrain is varied, it has mountains in the north and south. A valley is more or less in the center of the island, between the cities of Douglas and Peel. The northern part of the island is very flat. Snaefell is the Isle of Man's highest mountain, it measures 621 meters above sea level. 
It is said that you can see Scotland, England, Ireland and Wales from the top of mount Snaefell.[5]\n\n## Weather\n\nThe Isle of Man has a usually mild weather. Summers are cool and winters are mild and rainy. Rainfall is similar to that of the other British Isles. Elevated parts of the Isle of Man get more rainfall, especially mount Snaefell. The northern and southern parts of the island are not as rainy as the rest.[6]\n\nThe island's weather is normally cool. The highest temperature ever registered is 28.9 \u00b0C, in Ronaldsway. The Isle of Man is not very sunny, but it is less cloudy than other parts of the British Isles; strong winds around the island help keep clouds in constant movement.[6]\n\n## Geology\n\nGeological fault at Niarbyl, Isle of Man. The narrow white diagonal line near centre of picture is the only remaining visible sign of the Iapetus Ocean.\n\nThe geology of Man is notable for the Iapetus Suture, which runs almost unseen right through the rocks of the island. The suture is the remnant of a once huge ocean, the Iapetus Ocean, which was lost about 420 million years ago as three continents came together.\n\n## Environment\n\nThe Isle of Man became separated from Ireland and the British Isles about 8500 years ago. The short period of time between the melting of glaciers and the rise of sea level allowed a small number of species to colonize the island by land. The island was heavily deforested in the Middle Ages, this weakened the island's environment. Some land is now protected by the government to help preserve its wildlife. Curraghs Wildlife Park, in the wetlands, is home to many species of animals and plants. The island is also home to a large number of bird species.\n\n### Plant life\n\nA lot of the Isle of Man's plant life, or flora, is composed of shrubs (bushes) and other short plants. Several species of grass and moss also live there. Mosses on the island contribute to the formation of peat. There is peat in the island's wet areas. The island has heavily deforested in the Middle Ages. Common trees on the island include ashes, elms, pines, willows, and hawthorns. There are also other trees on the island, as well as many species of flowering plants. Bogs are home to ferns and orchids.[7][8][9]\n\n### Animal life\n\nThe Isle of Man is home to a large number of bird and insect species. Many species live in \"curraghs\" (wetlands in the northeast of the island). Curraghs are protected by the Government of the Isle of Man. Curraghs Wildlife Park is in these wetlands, it is both a zoo and a protected area. During the winter, curraghs are the second largest nesting ground of the Hen Herrier in Europe. The Peregrine Falcon, Merlin, European Robin, Willow Warbler, Song Thrush, Dunnock, Swan and a subspecies of Winter Wren possibly native to the island also nest in the curraghs. The Chough is also in the Isle of Man, it is more common than in other parts of Europe.[10] Some farming methods have decreased the bird population of the island. The Northern Lapwing is now rarely found, and the Yellowhammer is now extinct on the island.[11]\n\n18 species of butterfly and 250 species of moth also live on the Isle of Man. Most of them live in the wetlands during different seasons.[12]\n\n## References\n\n1. \"Island Facts\". Isle of Man Public Services (www.gov.im). Retrieved 15 September 2011.\n2. \"Income inequalities\". The Poverty Site. Retrieved 21 April 2011.\n3. \"Human Development Report 2010\". United Nations. p.\u00a0143 ff.. Retrieved 21 April 2011.\n4. 
Geography: Physical Geography. Isle of Man Public Services (2010). Retrieved 3 June 2010.\n5. Travelling on the Snaefell Mountain Railway. Isle of Man Guide. Retrieved 30 August 2009.\n6. Climate. Isle of Man Guide. Retrieved 3 June 2010.\n7. Creatures Great and Small - Cretooryn Mooarey's Beggey. Isle of Man Government (2010). Retrieved 3 June 2010.\n8. Tree Gallery. Isle of Man Woodland Trust. Retrieved 3 June 2010.\n9. Nature Trail. Isle of Man Government (2010). Retrieved 3 June 2010.\n10. Manx Wildlife. Isle of Man Government (2010). Retrieved 3 June 2010.\n11. Manx Bird life prepares to celebrate twelfth birthday. BBC (16 February 2010). Retrieved 3 June 2010.\n12. The Butterfly Trail. Isle of Man Government (2010). Retrieved 3 June 2010.","date":"2017-07-22 16:49:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2539443373680115, \"perplexity\": 11107.878353960197}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-30\/segments\/1500549424088.27\/warc\/CC-MAIN-20170722162708-20170722182708-00177.warc.gz\"}"}
null
null
{"url":"https:\/\/www.electro-tech-online.com\/threads\/led-voltage-meter-lm339-design-question.103616\/","text":"# LED voltage meter LM339 design question\n\n#### namezero111111\n\n##### New Member\nHello folks,\n\nI just registered for this forum.\n\nI am trying to design a circuit to determine lead-acid battery charge level.\nI determined that at 77*F, a 100% charge would be 12.63V, and a 50% charge would be 12.00v.\nI also have data for every 10%.\nI designed the circuit as indicated in the attached file, and have tested it in LTspice, which indicates that the circuit would work (see the second picture).\n\nHowever, I believe that this circuit is very difficult to built (it seems cumbersome). I have read about the LM3914 (?) that can be used to drive a bar display, but I don't think it could measure such small voltage changes that are also non-linear.\nSo I was wondering, if I were to build this, is there any better way than using odd resistors like 6579 or 7353 ohms? I'd have to be very close to those values so that the bar display is reasonably accurate.\n\nI've also seen designs where the voltage divider on the measured side is \"cascaded\" instead of every comparator having its own little voltage divider. If I were to make such a design, how would I calculate the values of the resistors required?\n\nDo you see any problem with the design as is? I might be worried about the low current in the diodes as indicated on the diagram, would that be a problem?\n\n-namezero\n\n#### Attachments\n\n\u2022 180.4 KB Views: 444\n\u2022 186.8 KB Views: 832\n\u2022 9.1 KB Views: 239\nLast edited:\n\n#### ericgibbs\n\n##### Well-Known Member\nhi,\nThe LM3914 can be 'offset' such that it works over a limited range of input voltage.\neg: can be set for 11V thru 13V, so that the 10 LED's represent a 2Volt input span.\n\nIs this what you have in mind.?\n\nEDIT:\n\nCheck the polarity of the LED's in your schematic, they are the wrong way around.\n\nLast edited:\n\n#### namezero111111\n\n##### New Member\nIndeed they were. I didn't know the LM339 couldn't source current. I found that in another thread. That way around it seems to work. But now the the lights come on when the voltage is BELOW the threshold, not above it.\n\nWhen I swap the +\/- inputs on the 339, LTspice shows some dramatic noise where is oscillates back and forth on the transition, even if I put a resistor for hysteris in there.\n\nThe reason I was using the 339 and not the 3914 is that this way I can learn a lot more about what is going on instead of using a microcontroller and having no idea. I'm also trying to advance myself a little in the field of simple electronics : )\n\nAnd by the way, the way you drew the circuit looks so much cleaner!!\n\nSo I guess my only question remains, how would one better approximate the values of the resistors?\nMy idea was to use a math program called Derive6 and ruthe voltage divider formula through it.\n(i.e. 5.0 = 12.36 * R1\/(R1+R2) and then create a table with R1 in 100 ohm steps or so and looks for a value where I can match R2.\nBut I am sure there is a better solution out there.\n\nThank you again!\n\n-namezero\n\n#### ericgibbs\n\n##### Well-Known Member\nSo I guess my only question remains, how would one better approximate the values of the resistors?\nMy idea was to use a math program called Derive6 and ruthe voltage divider formula through it.\n(i.e. 
5.0 = 12.36 * R1\/(R1+R2) and then create a table with R1 in 100 ohm steps or so and looks for a value where I can match R2.\nBut I am sure there is a better solution out there.\n\nThank you again!\n\n-namezero\nhi,\nThe actual LM3914 uses a resistive chain to create the individual comparator reference voltages.\n\nBy driving the chain with different voltages it effects the individual switching points.\n\nWhy dont you explore that method.?\n\n#### namezero111111\n\n##### New Member\nI want to. I am still a little confused about the resistor chain, but am looking into it right now actually.\nMy confusion arises from the fact that R1 for each successive 339 increases, so R2 must increase, too.\n\nI found a document that says that V_i=i\/N * V_ref for equal resistors.\nI am currently looking into that, and the diagram here: here.\n\nI need the voltages to be 12.6 12.5 12.4 12.24 12.12 12.00 11.9 11.8 volts, so there is a slight divergence from a linear relationship in the middle, and I am trying to figure out the math for the resistors in such a chain.\n\nThank you!\n\n-namezero\n\n#### ericgibbs\n\n##### Well-Known Member\nI want to. I am still a little confused about the resistor chain, but am looking into it right now actually.\nMy confusion arises from the fact that R1 for each successive 339 increases, so R2 must increase, too.\n\nI found a document that says that V_i=i\/N * V_ref for equal resistors.\nI am currently looking into that, and the diagram here: here.\n\nI need the voltages to be 12.6 12.5 12.4 12.24 12.12 12.00 11.9 11.8 volts, so there is a slight divergence from a linear relationship in the middle, and I am trying to figure out the math for the resistors in such a chain.\n\nThank you!\n\n-namezero\nhi,\nLooking at your voltage list it appears to have three groupings of steps.\nWhy not use three Vrefs to drive three separate resistor chains, one for each group.?\n\nCode:\n[B]12.6 12.5 12.4 0.1v 12.24 12.12 12.00 [\/B] .12v 11.9 11.8 0.1v [\/B]\n\nLast edited:\n\n#### MikeMl\n\n##### Well-Known Member\nAttached is a similar circuit which required non-equally spaced trip points. However, look at the schematic. You might pick up a couple of pointers about how to add a little hysteresis at each trip point. Note how I used a behavioral voltage source to create a non-linear function.\n\n#### Attachments\n\n\u2022 86.1 KB Views: 506\n\n#### namezero111111\n\n##### New Member\nThank you so much!\n\nI solved the problem by placing one 100 ohm resistor in the chain and 62 ohms otherwise.\nThe plot for all voltages is almost identical to the one I had before with separate dividers.\n\nI only have one problem now, that is when the diodes switch on and off I get hysteris problems over a few milliseconds. 
I tried adding a 220k resistor from the output to the + terminal of the comparator, but that had no effect.\nIs that something that only occurs in simulation but would make no difference in real life?\nIt just bothers me because it causes the simulation to run very slowly.\n\nWhat do you think of this circuit now?\n\n-namezero\n\n#### Attachments\n\n\u2022 223.5 KB Views: 183\n\u2022 428.7 KB Views: 252\n\u2022 6.5 KB Views: 154\n\n#### ericgibbs\n\n##### Well-Known Member\nThank you so much!\n\nI solved the problem by placing one 100 ohm resistor in the chain and 62 ohms otherwise.\nThe plot for all voltages is almost identical to the one I had before with separate dividers.\n\nI only have one problem now, that is when the diodes switch on and off I get hysteris problems over a few milliseconds. I tried adding a 220k resistor from the output to the + terminal of the comparator, but that had no effect.\nIs that something that only occurs in simulation but would make no difference in real life?\nIt just bothers me because it causes the simulation to run very slowly.\n\nWhat do you think of this circuit now?\n\n-namezero\nhi,\nYou have the INV and NI inputs crossed over, you cannot apply hysteresis from the outputs to the NI inputs because you have them decoupled to 0V via that power rail.\n\nEDIT:\nThis is a simple example of an equally spaced Vgap, note the comp input sense\n\nUse the method as shown by MikeL\n\nLast edited:\n\n#### namezero111111\n\n##### New Member\nI see. So where I have it connected to + I should have connected it to - and vice versa, right?\n\nThe reason I did it this way is because otherwise the diodes light up when they shouldn't and vice versa.\nI had it the other way around before.\n\nHm maybe I am still confused as to the operation of comparators. Non-inverting is the + input right?\nI am a little new to this : ) Sorry if I ask irrelevant questions.\n\n#### namezero111111\n\n##### New Member\nThank you! Everything works, I just built the circuit.\nI had never used a computer program before, to check circuits, I didn't know they existed for free.\nFirst time I built a circuit and it worked right away!! This is much more fun!\n\nThank you!\n\nNow I will add a timer so you can push a button and the voltmeter stays on for let's say 20 seconds!\n\nLast edited:","date":"2019-04-23 06:12:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5131425857543945, \"perplexity\": 1217.8504130239778}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578593360.66\/warc\/CC-MAIN-20190423054942-20190423080942-00149.warc.gz\"}"}
null
null
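A quick numerical check of the resistor-chain idea worked out in the thread above. This is a minimal sketch, not the thread's final schematic: the reference-rail voltages (12.7 V / 11.7 V) and the exact chain (one 100 Ω resistor among 62 Ω ones) are assumptions based on what the posts describe.

```python
# Sketch: tap voltages of a comparator reference chain, compared against the
# battery-voltage trip points listed in the thread. All component values and
# the 12.7 V / 11.7 V reference rails are illustrative assumptions.

def chain_taps(v_top, v_bottom, resistors):
    """Node voltages between successive resistors, listed from top to bottom."""
    total = sum(resistors)
    taps, drop = [], 0.0
    for r in resistors[:-1]:
        drop += r
        taps.append(v_top - (v_top - v_bottom) * drop / total)
    return taps

chain = [62, 62, 62, 100, 62, 62, 62, 62, 62]   # one 100-ohm step, as in the thread
targets = [12.6, 12.5, 12.4, 12.24, 12.12, 12.00, 11.9, 11.8]

for tap, target in zip(chain_taps(12.7, 11.7, chain), targets):
    print(f"tap = {tap:.3f} V  (target {target:.2f} V)")
```

With these assumed rails the taps land within a few hundredths of a volt of the targets, which is consistent with the thread's remark that the chain's plot was "almost identical" to the one built from separate dividers.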
Q: Getting database information to another class in Android I basically used this tutorial on how to make a SQLite database, which is fine. I now have a new class named camera, where I want to get all the (or a single) phone numbers that are in the database, but I have no idea how to call for the database in the new class. I tried to look at other examples, but they use a DatabaseHelper instead of the Handler here, and the code is constructed in a bit of a different way, which got me confused. My Question is: how do I call for my database in the new camera class and get all the information back I need (phone numbers)? I don't know if this is important, but I want to open the database at a OnClickView event DatabaseHandler.java public class DatabaseHandler extends SQLiteOpenHelper { // All Static variables // Database Version private static final int DATABASE_VERSION = 1; // Database Name private static final String DATABASE_NAME = "contactsManager"; // Contacts table name private static final String TABLE_CONTACTS = "contacts"; // Contacts Table Columns names private static final String KEY_ID = "id"; private static final String KEY_NAME = "name"; private static final String KEY_PH_NO = "phone_number"; public DatabaseHandler(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } // Creating Tables @Override public void onCreate(SQLiteDatabase db) { String CREATE_CONTACTS_TABLE = "CREATE TABLE " + TABLE_CONTACTS + "(" + KEY_ID + " INTEGER PRIMARY KEY," + KEY_NAME + " TEXT," + KEY_PH_NO + " TEXT" + ")"; db.execSQL(CREATE_CONTACTS_TABLE); } // Upgrading database @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // Drop older table if existed db.execSQL("DROP TABLE IF EXISTS " + TABLE_CONTACTS); // Create tables again onCreate(db); } /** * All CRUD(Create, Read, Update, Delete) Operations */ // Adding new contact void addContact(Contact contact) { SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(KEY_NAME, contact.getName()); // Contact Name values.put(KEY_PH_NO, contact.getPhoneNumber()); // Contact Phone // Inserting Row db.insert(TABLE_CONTACTS, null, values); db.close(); // Closing database connection } // Getting single contact Contact getContact(int id) { SQLiteDatabase db = this.getReadableDatabase(); Cursor cursor = db.query(TABLE_CONTACTS, new String[] { KEY_ID, KEY_NAME, KEY_PH_NO }, KEY_ID + "=?", new String[] { String.valueOf(id) }, null, null, null, null); if (cursor != null) cursor.moveToFirst(); Contact contact = new Contact(Integer.parseInt(cursor.getString(0)), cursor.getString(1), cursor.getString(2)); // return contact return contact; } // Getting All Contacts public List<Contact> getAllContacts() { List<Contact> contactList = new ArrayList<Contact>(); // Select All Query String selectQuery = "SELECT * FROM " + TABLE_CONTACTS; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(selectQuery, null); // looping through all rows and adding to list if (cursor.moveToFirst()) { do { Contact contact = new Contact(); contact.setID(Integer.parseInt(cursor.getString(0))); contact.setName(cursor.getString(1)); contact.setPhoneNumber(cursor.getString(2)); // Adding contact to list contactList.add(contact); } while (cursor.moveToNext()); } // return contact list return contactList; } // Updating single contact public int updateContact(Contact contact) { SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); 
values.put(KEY_NAME, contact.getName()); values.put(KEY_PH_NO, contact.getPhoneNumber()); // updating row return db.update(TABLE_CONTACTS, values, KEY_ID + " = ?", new String[] { String.valueOf(contact.getID()) }); } // Deleting single contact public void deleteContact(Contact contact) { SQLiteDatabase db = this.getWritableDatabase(); db.delete(TABLE_CONTACTS, KEY_ID + " = ?", new String[] { String.valueOf(contact.getID()) }); db.close(); } // Getting contacts Count public int getContactsCount() { String countQuery = "SELECT * FROM " + TABLE_CONTACTS; SQLiteDatabase db = this.getReadableDatabase(); Cursor cursor = db.rawQuery(countQuery, null); int count = cursor.getCount(); // read the count before closing the cursor cursor.close(); // return count return count; } The camera class where I want to call for the database in the OnClickView public class camera extends Activity { protected Dialog mSplashDialog; private Button foto1; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); populateBtn(); } private void populateBtn() { foto1 = (Button) this.findViewById(R.id.button2); foto1.setOnClickListener(new View.OnClickListener() { @SuppressLint("SimpleDateFormat") @Override public void onClick(View v) { // here is where I want to call for the database (to get a single, or all the phone numbers) } } The contact class Contact.java package com.androidhive.androidsqlite; public class Contact { //private variables int _id; String _name; String _phone_number; // Empty constructor public Contact(){ } // constructor public Contact(int id, String name, String _phone_number){ this._id = id; this._name = name; this._phone_number = _phone_number; } // constructor public Contact(String name, String _phone_number){ this._name = name; this._phone_number = _phone_number; } // getting ID public int getID(){ return this._id; } // setting id public void setID(int id){ this._id = id; } // getting name public String getName(){ return this._name; } // setting name public void setName(String name){ this._name = name; } // getting phone number public String getPhoneNumber(){ return this._phone_number; } // setting phone number public void setPhoneNumber(String phone_number){ this._phone_number = phone_number; } } A: from your camera class: DatabaseHandler myDatabase = new DatabaseHandler(this); List<Contact> allContacts = myDatabase.getAllContacts(); // do something with your contacts... Hope this helps. Just to note that the DatabaseHandler expects a Context in the constructor, so, as long as your class camera extends from Activity, it should work. A: Just create an instance of your DatabaseHandler class and use its methods. To get all contacts: DatabaseHandler db = new DatabaseHandler(this); List<Contact> contacts = db.getAllContacts(); for (Contact contact : contacts) { // use your contact methods... } To get one contact: DatabaseHandler db = new DatabaseHandler(this); Contact contact = db.getContact(your_id); // use your contact methods... Use this only if you're in an Activity class. You'll need a valid context otherwise. Update: The OnClickListener is an inner class. You'll need to use its enclosing class as a valid context. Use: private void populateBtn() { foto1 = (Button) this.findViewById(R.id.button2); foto1.setOnClickListener(new View.OnClickListener() { @SuppressLint("SimpleDateFormat") @Override public void onClick(View v) { DatabaseHandler db = new DatabaseHandler(camera.this); List<Contact> contacts = db.getAllContacts(); for (Contact contact : contacts) { // use your contact methods...
} } } } A: // Reading all contacts Log.d("Reading: ", "Reading all contacts.."); List<CompanysContact> contacts = db.getAllContacts(); for (CompanysContact cn : contacts) { String log = "Id: "+cn.getID()+" ,Name: " + cn.getName() + " ,Phone: " + cn.getPhoneNumber() ; // Writing Contacts to log Log.d("Name: ", log); }
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,148
{"url":"https:\/\/www.physicsforums.com\/threads\/calculate-the-angular-momentum-of-a-solid-uniform-sphere.353858\/","text":"# Calculate the angular momentum of a solid uniform sphere\n\n1. Nov 11, 2009\n\n### raiderIV\n\n1. The problem statement, all variables and given\/known data\nCalculate the angular momentum of a solid uniform sphere with a radius of 0.120m and a mass of 14.0kg if it is rotating at 6.00rad\/s about an axis through its center.\n\n2. Relevant equations\nAngular Momentum = I * w\n\n3. The attempt at a solution\nWhen using the formulas above i am obtaining the answer 0.1008 however, that is not the correct answer. Any Ideas?\n\nEdit: I also know that the answer needs to be in kg * m2\/s\n\n2. Nov 11, 2009\n\n### stanton\n\nalways take a careful look at question and gather all the information and equations you know to solve this problem. Maybe you can change a little bit of equations or connect them together and you will get it.\nL = I*\u03c9\nPlug values given:\n0.5mr\u00b2*\u03c9 = 0.5*14*.12\u00b2*6.00= ____ kg\u2219m^2\/s\nyup. looks good to me! hope this helps.\nAnd by the way, just guessing, do you also need to calculate the KE for that?\n\nLast edited: Nov 11, 2009\n3. Nov 11, 2009\n\n### raiderIV\n\nYup... I see what i did wrong... in every calculation i did i was thinking cylinder for I instead of sphere. Thank you. And yes I do have to find the KE which turns out to be 1.45J\n\nThanks again for the help,\n~John\n\n4. Nov 11, 2009\n\n### stanton\n\nWait. I revised my answer because there were some error. Please refer to that. Sorry about that...\n[0.5mr\u00b2*\u03c9 = 0.5*14*.12\u00b2*6.00= ____ kg\u2219m^2\/s","date":"2017-08-21 20:51:36","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8161503076553345, \"perplexity\": 860.9314379504319}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886109525.95\/warc\/CC-MAIN-20170821191703-20170821211703-00442.warc.gz\"}"}
null
null
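For reference, the computation the thread above converges on, written out with the solid-sphere moment of inertia (the early working mistakenly used the cylinder formula $\frac{1}{2}mr^2$):

$$I = \frac{2}{5}mr^2 = \frac{2}{5}(14.0\,\text{kg})(0.120\,\text{m})^2 \approx 0.0806\ \text{kg}\cdot\text{m}^2,$$
$$L = I\omega \approx 0.0806 \times 6.00 \approx 0.484\ \text{kg}\cdot\text{m}^2/\text{s},$$
$$KE = \tfrac{1}{2}I\omega^2 \approx \tfrac{1}{2}(0.0806)(6.00)^2 \approx 1.45\ \text{J},$$

consistent with the 1.45 J quoted at the end of the thread.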
Nelson apartment building fills with smoke Nelson Fire Rescue says someone's burning food filled all three floors of the Terrace Apartments with smoke on Thursday afternoon. May. 13, 2016 3:00 p.m. The Terrace Apartments emptied for 45 minutes after a fire alarm was set off. Fire crews arrived shortly before 4 p.m. to find light smoke in the hallways and an evacuation underway, with the assistance of the Nelson Police Department. "It was suspected due to the amount of smoke and the odour experienced outside that it was likely the result of unattended cooking, but crews weren't able to confirm that it wasn't still burning," Assistant Chief Michael Daloise said in a news release. As they searched for the source of the smoke, fire crews encountered people who had not yet left the building. "The fire appears to have originated as a result of burnt food that was discovered, and affected all three floors with the odour," Daloise said. However, the occupant wasn't around to discuss the circumstances. Firefighters plan to follow up with their investigation. There were no injuries. Daloise said once you evacuate a building, you should not leave the area in case you are able to provide information to fire crews. "This information could be critical to life safety," he said.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,448
Q: Find the range of $k$. Let $a, b, c$ be the sides of a triangle where $a\neq c$ and $k \in \mathbb{R}$. If the roots of the equation $x^2+ 2(a + b +c)x + 3k(ab + bc + ca) = 0$ are real, then find the interval in which $k$ lies. I have used the fact that the equation has real roots, but how do I use the fact that $a, b, c$ are the sides of a triangle? A: Since $a,b,c$ are sides of a triangle, $$|a-b|<c, \quad |b-c|<a, \quad |c-a|<b$$ Squaring each inequality and adding the three results, we get $$a^2+b^2+c^2-2ab-2bc-2ca < 0$$ i.e. $$\dfrac{a^2+b^2+c^2}{ab+bc+ca} < 2$$ From the discriminant condition for real roots, $4(a+b+c)^2 \geq 12k(ab+bc+ca)$, we have $$k \leq \dfrac{(a+b+c)^2}{3(ab+bc+ca)} = \dfrac{a^2+b^2+c^2}{3(ab+bc+ca)} + \dfrac{2}{3}$$ and hence $$k < \dfrac{4}{3}$$ A: $$(2(a+b+c))^2\ge 4(1)\big(3k(ab+bc+ca)\big)$$ (since a quadratic $ax^2+bx+c=0$ has real roots only if $b^2\ge 4ac$) $$k\le\frac{4(a+b+c)^2}{12(ab+bc+ca)}=\frac{(a+b+c)^2}{3(ab+bc+ca)}$$
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,664
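A quick sanity check of the bound derived in the exchange above (a worked example, not part of the original thread): for an equilateral triangle $a=b=c$ the discriminant condition gives $4(3a)^2 \geq 12k\cdot 3a^2$, i.e. $k \leq 1$, comfortably inside the interval, while a nearly degenerate triangle such as $a=1$, $b=0.02$, $c=1.01$ gives $$k \leq \frac{(2.03)^2}{3(0.02+0.0202+1.01)} \approx 1.31,$$ approaching but never reaching the supremum $\frac{4}{3} \approx 1.33$.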
{"url":"http:\/\/openstudy.com\/updates\/55a07b2ae4b05670bbb4f623","text":"## ganeshie8 one year ago Evaluate the sum $\\large 1-\\frac{2^3}{1!}+\\frac{3^3}{2!}-\\frac{4^3}{3!}+\\cdots$\n\n1. anonymous\n\nRight, or this$\\sum_{n=1}^\\infty (-1)^{n-1}\\frac{n^3}{(n-1)!}$same difference (or sum, :P)\n\n2. ganeshie8\n3. anonymous\n\nDefinitely reminiscent of the power series for $$e^{-x}$$: $\\sum_{n=0}^\\infty (-1)^{n}\\frac{x^n}{n!}$ Shifting the index, $e^{-x}=\\sum_{n=1}^\\infty (-1)^{n-1}\\frac{x^{n-1}}{(n-1)!}$ Let's say $$x=1$$, then $\\frac{1}{e}=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{(n-1)!}$ Add and subtract $$n^3$$ in the numerator: \\begin{align*} \\frac{1}{e}&=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n^3+1-n^3)}{(n-1)!}\\\\\\\\ &=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}n^3}{(n-1)!}-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n^3-1)}{(n-1)!}\\\\\\\\ &=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}n^3}{(n-1)!}-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n^2+n+1)}{(n-2)!} \\end{align*} I'm thinking write $$n^2+n+1$$ as a quadratic in $$n-2$$, so $n^2+n+1=(n-2)^2+5(n-2)-13$ \\begin{align*} \\frac{1}{e}&=S-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}((n-2)^2+5(n-2)-13)}{(n-2)!}\\\\\\\\ &=S-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n-2)^2}{(n-2)!}-5\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n-2)}{(n-2)!}\\\\&\\quad\\quad+13\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{(n-2)!}\\\\\\\\ &=S-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n-2)}{(n-3)!}-\\color{red}{5\\sum_{n=1}^\\infty \\frac{(-1)^{n-3}}{(n-3)!}}-\\color{blue}{13\\sum_{n=1}^\\infty \\frac{(-1)^{n-2}}{(n-2)!}}\\\\\\\\ \\frac{1}{e}+\\color{red}{\\frac{5}{e}}+\\color{blue}{\\frac{13}{e}}&=S-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n-2)}{(n-3)!} \\end{align*} Hmm, there's a mistake somewhere... W|A is saying the sum is negative.\n\n4. anonymous\n\n^ where $$S$$ is what we're looking to find.\n\n5. anonymous\n\nLooks like you can derive by differentiating $$e^{-x}$$.\n\n6. anonymous\n\nAh yeah that's much more efficient.\n\n7. anonymous\n\nOh, $$-13$$ should be $$+7$$. That fixes everything.\n\n8. geerky42\n\n@SmthsAndGiggles I think few series are problematic? For example, for series in last step, just check first term n=1, and you would have negative fractional (-2)! in denominator.\n\n9. ganeshie8\n\nafter cancelling, the index also shifts, so that shouldn't be a problem i think\n\n10. anonymous\n\nI recall seeing $$n!$$ defined to be $$0$$ if $$n$$ is negative in some contexts, so we could use that to our advantage, but that's kind of a cheap trick.\n\n11. ganeshie8\n\nthat issue can be avoided, if we simply shift the index right \\begin{align*} \\frac{1}{e}&=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n^3+1-n^3)}{(n-1)!}\\\\\\\\ &=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}n^3}{(n-1)!}-\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}(n^3-1)}{(n-1)!}\\\\\\\\ &=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}n^3}{(n-1)!}-\\sum_{n=\\color{red}{2}}^\\infty \\frac{(-1)^{n-1}(n^2+n+1)}{(n-2)!} \\end{align*}\n\n12. ganeshie8\n\nwe can simply shift the index because n=1 produces 0 anyways\n\n13. anonymous\n\n$e^{-x}=\\sum_{n=0}^{\\infty}(-1)^{n}\\frac{x^n}{n!}$Differentiate and:$-e^{-x}=\\sum_{n=1}^{\\infty}(-1)^{n}\\frac{nx^{n-1}}{(n-1)!}$Divide both sides by $$-1$$ and you get: $e^{-x}=\\sum_{n=1}^{\\infty}(-1)^{n-1}\\frac{nx^{n-1}}{(n-1)!}$Let $$m = n-1$$ and so $$n=m+1$$:$e^{-x}=\\sum_{m=0}^{\\infty}(-1)^{m}\\frac{(m+1)x^m}{m!}= \\sum_{n=0}^{\\infty}(-1)^{n}\\frac{x^n}{n!}$\n\n14. 
anonymous\n\nconsider $$(n+1)^3=A-Bn+Cn(n-1)-Dn(n-1)(n-2)\\\\n^3+3n^2+3n+1=A-Bn+C(n^2-n)-D(n^3+n^2+2n)\\\\n^3+3n^2+3n+1=A+(B-C+2D)n+(C+D)n^2+Dn^3$$so we have $$A=1,D=-1$$ and $$C=2,B=-3$$\n\n15. anonymous\n\nnow consider $$1+-1+2+-3=-1$$ so we get $$-1\/e$$ Q.E.D.\n\n16. ganeshie8\n\nwow! how did that work\n\n17. ganeshie8\n\nlooks pretty close to @SithsAndGiggles method but also looks more clever!\n\n18. anonymous\n\nDefinitely less taxing than copy\/pasting sums over and over :)\n\n19. ganeshie8\n\ngotcha! both your methods are identical, its only that oldrin.bataku has managed to keep it in short form by writing (n+1)^3 as 1+7n+6n(n-1)+n(n-1)(n-2) one shot :)\n\n20. anonymous\n\n\\begin{align*}e^{-x}&=\\sum_{n=0}^\\infty\\frac{(-1)^nx^n}{n!}\\\\-e^{-x}&=\\sum_{n=0}^\\infty\\frac{(-1)^nnx^{n-1}}{n!}\\\\e^{-x}&=\\sum_{n=0}^\\infty\\frac{(-1)^nn(n-1)x^{n-2}}{n!}\\\\-e^{-x}&=\\sum_{n=0}^\\infty\\frac{(-1)^nn(n-1)(n-2)x^{n-3}}{n!}\\end{align*}so plug in $$x=1$$:$$e^{-x}=\\sum_{n=0}^\\infty\\frac{(-1)^n}{n!}\\\\-e^{-x}=\\sum_{n=0}^\\infty\\frac{(-1)^n n}{n!}\\\\e^{-x}=\\sum_{n=0}^\\infty\\frac{(-1)^nn(n-1)}{n!}\\\\-e^{-x}=\\sum_{n=0}^\\infty\\frac{(-1)^n n(n-1)(n-2)}{n!}$$\n\n21. anonymous\n\nso it follows if $$(n+1)^3=1+3n+2n(n-1)+n(n-1)(n-2)$$ then we have: $$\\sum_{n=0}^\\infty\\frac{(-1)^n (n+1)^3}{n!}\\\\\\quad =1\\cdot\\sum_{n=0}^\\infty\\frac{(-1)^n}{n!}+3\\cdot\\sum_{n=0}^\\infty\\frac{(-1)^nn}{n!}+2\\cdot\\sum_{n=0}^\\infty\\frac{(-1)^nn(n-1)}{n!}\\\\\\quad\\quad \\quad +1\\cdot\\sum_{n=0}^\\infty\\frac{(-1)^nn(n-1)(n-2)}{n!}\\\\\\quad=1\\cdot e^{-1}+3\\cdot(-e^{-1})+2\\cdot e^{-1}+1\\cdot(-e^{-1})\\\\\\quad =(1-3+2-1)\\cdot\\frac1e\\\\\\quad=-\\frac1e$$\n\n22. geerky42\n\nYou know how an Italian chef kisses his fingers and says something in Italian that translates to \"A masterpiece\" after tasting their own dish? That's how I feel about this.\n\n23. anonymous\n\nI didn't need to write it out, though, because the behavior of $$e^x$$ under differentiation is easy enough to see in your head and I recognized that derivatives $$e^{-x}$$ alternate in sign; then it just needed $$x=1$$ -- not that bad\n\n24. ganeshie8\n\n@oldrin.bataku this may not affect the solution, but when i solved the system of equations i get $(n+1)^3= 1+7n+6n(n-1)+n(n-1)(n-2)$\n\n25. ganeshie8\n\nthis wont affect because (1-7+6-1) end up being -1 :)\n\n26. ganeshie8\n\nthats pretty cool actually!! thnks for introducing the special trick xD\n\n27. anonymous\n\noops, you're probably right: $$(n+1)^3=A-Bn+Cn(n-1)-Dn(n-1)(n-2)$$ so we get $$n=0,1,2,3$$: $$1=A\\\\8=A-B\\\\27=A-2B+2C\\\\64=A-3B+6C-6D$$... which gives $$A=1,B=-7,C=6,D=-1$$, indeed\n\n28. anonymous\n\nbut yeah, I'm not sure if it's a popular trick of any sort as I've never seen it before, it just seemed kinda self-evident when I thought about how to do this problem\n\n29. ganeshie8\n\nguess i can mimic the same to any power https:\/\/www.wolframalpha.com\/input\/?i=%5Csum_%7Bn%3D1%7D%5E%5Cinfty+%28-1%29%5E%7Bn-1%7D%5Cfrac%7Bn%5E5%7D%7B%28n-1%29%21%7D just need to write (n+1)^5 as linear combinations of earlier products and add\/subtract 1\/e's ! im loving this method !\n\n30. anonymous\n\nyep, you just have to find a way to write the polynomial in terms of $$(n)_i$$ where $$(n)_k=n(n-1)(n-2)\\cdots(n-k+1)$$\n\n31. 
anonymous\n\nin fact, I bet you can use divided differences to find the coefficients: consider differences of (n+1)^3 for $$n=0,1,2,3$$: 1 7 8 12 19 6 27 18 37 64 so our coefficients are \\begin{align*}1\/0!&=1\\\\7\/1!&=7\\\\12\/2!&=6\\\\6\/3!&=1\\end{align*}\n\n32. ganeshie8\n\ninteresting way to pull finite differences into this xD im still trying to make sense why it is giving the coefficients..\n\n33. anonymous\n\nfor $$(n+1)^5$$ we have for $$n=0,1,2,3,4,5$$ 1 31 32 180 211 390 243 570 360 781 750 120 1024 1320 480 2101 1230 3125 2550 4651 7776 so 1, 31, 90, 65, 15, 1 so 1 - 31 + 90 - 65 + 15 - 1 = 9 so i predict the sum should be 9\/e\n\n34. anonymous\n35. anonymous\n\nit works because repeated differences: https:\/\/en.wikipedia.org\/wiki\/Finite_difference#Newton.27s_series\n\n36. ganeshie8\n\nif it were not alternating, then i bet the sum is 203e because 1 + 31 + 90 + 65 + 15 + 1 = 203\n\n37. ganeshie8\n\ntrivially im right too xD http:\/\/www.wolframalpha.com\/input\/?i=sum_%7Bn%3D0%7D%5Einfty++%28n%2B1%29%5E5%2Fn%21\n\n38. anonymous\n\nyep, that is correct\n\n39. ganeshie8\n\nguess il need to review repeated differences, i remember studying finite differences sometime back but don't seem to understand much...\n\n40. anonymous\n\ninterestingly enough, there's also way to do it when expanding in terms of complementary Bell numbers: $$\\frac1e\\tilde B_n=\\sum_{k=0}^\\infty\\frac{(-1)^n k^n}{k!}$$ so we observe for $$(n+1)^3=1+3n+3n^2+n^3$$ gives us $$\\frac1e(\\tilde B_0+3\\tilde B_1+3\\tilde B_2+\\tilde B_3)=\\frac1e(1-2+0+1)=-\\frac1e$$\n\n41. anonymous\n42. anonymous\n\nthe coefficients for our expansion in terms of falling factorials is actually just the Stirling numbers of the second kind: http:\/\/mathworld.wolfram.com\/StirlingNumberoftheSecondKind.html see 1,7,6,1\n\n43. anonymous\n\nin fact there's an even easier way to do it in terms of complementary Bell numbers: $$\\sum_{n=0}^\\infty\\frac{(-1)^n (n+1)^k}{n!}=-\\sum_{n=0}^\\infty\\frac{(-1)^{n+1}(n+1)^{k+1}}{(n+1)!}=-\\tilde B_{k+1}$$\n\n44. anonymous\n\noops, $$-\\frac{\\tilde B_{k+1}}e$$ i mean :-)\n\n45. 
anonymous\n\n@ganeshie8 here, this is why repeated differences works -- falling factorials obey a rule like the power rule of differentiation: https:\/\/en.wikipedia.org\/wiki\/Pochhammer_symbol#Relation_to_umbral_calculus","date":"2017-01-20 16:29:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 2, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7847645282745361, \"perplexity\": 1723.6225222785813}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560280835.60\/warc\/CC-MAIN-20170116095120-00172-ip-10-171-10-70.ec2.internal.warc.gz\"}"}
null
null
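The closed form $-1/e$ reached in the thread above is easy to confirm numerically; this short check is independent of the derivations in the thread:

```python
import math

# Partial sums of 1 - 2^3/1! + 3^3/2! - 4^3/3! + ...
total = 0.0
for n in range(1, 30):
    total += (-1) ** (n - 1) * n**3 / math.factorial(n - 1)

print(total)        # -0.36787944...
print(-1 / math.e)  # -0.36787944... (agreement to machine precision)
```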
The Pontifical Scots College, Rome The Pontifical Scots College, Rome was founded on 5th December 1600 by Pope Clement VIII. It provided an education for young Scots Catholic men who, due to the laws against Catholics, could not receive a Catholic education at home. During the centuries that followed, the college sent a steady supply of priests to Scotland, being closed only when the French invaded Rome in 1798 and again during the Second World War. For two hundred years Jesuits and Italian secular clergy directed the College, but since 1800 the Rectors have all been Scots secular priests. At first the college was sited in a little house in what is known today as Via del Tritone, opposite the church of S. Maria in Costantinopoli. In 1604 it was transferred to Via Felice, now called Via delle Quattro Fontane, and there it remained till 1962. The Church of St. Andrew of the Scots was built beside the college and, although no longer in the possession of the college, Mass is still regularly celebrated there. The present college building on the Via Cassia was opened in 1964 by Pope Paul VI and has since been visited by Pope John Paul II. As well as a house for students for the priesthood, the Scots College has been a temporary home for many other Scots, such as the Bishops during the Second Vatican Council and other meetings, the several groups of priests who have taken part in theology refresher courses and, more recently, groups of pilgrims who come during the summer vacation. It has been at the centre of celebrations for the creation of three Scots Cardinals, Cardinal Gray, Cardinal Winning and Cardinal O'Brien, and it was visited by many pilgrims who came from Scotland for the Canonisation of St John Ogilvie. It also frequently hosts groups of pilgrims from Scotland staying in the city for major events or Holy Years, such as the Jubilee of 2000, the Year of Faith in 2012-2013 and the Jubilee Year of Mercy 2015-2016. This year there are around twenty students currently studying in the College in Rome, of the roughly thirty-five studying for the dioceses of Scotland in total. Website: http://www.scotscollege.org/home.aspx Twitter: @ScotsCollegeIT
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,970
{"url":"http:\/\/insightsoftwareconsortium.github.io\/SimpleITK-Notebooks\/Python_html\/69_x-ray-panorama.html","text":"# Creating a Lower Limb Panoramic X-ray\n\nMeasurement of knee alignment is useful for diagnosis of arthritic conditions and for planning and evaluation of surgical interventions. Alignment is measured by the hip-knee-ankle ($HKA$) angle in standing, load bearing, x-ray images. The angle is defined by the femoral and tibial mechanical axes. The femoral axis is defined by the center of the femur head and the mid condylar point. The tibial axis is defined by the center of the tibial plateau to the center of the tibial plafond.\n\nThe three stances defined by the $HKA$ angle are:\n\n1. Neutral alignment, $HKA=0^o$.\n2. Varus, bow-legged, $HKA<0^o$.\n3. Valgus, knock-kneed, $HKA>0^o$.\n\n1. T. D. Cooke et al., \"Frontal plane knee alignment: a call for standardized measurement\", J Rheumatol. 2007.\n2. A. F. Kamath et al., \"What is Varus or Valgus Knee Alignment?: A Call for a Uniform Radiographic Classification\", Clin Orthop Relat Res. 2010.\n\nFor a robust estimate of the $HKA$ angle we would like to use a single image that contains the anatomy from the femoral head down to the ankle. Acquisition of such an image with standard x-ray imaging devices is not possible. It is achievable by acquiring multiple partially overlapping images and aligning, registering, them to the same coordinate system. The subject of this notebook.\n\nThis notebook is based in part on the work described in: \"A marker-free registration method for standing X-ray panorama reconstruction for hip-knee-ankle axis deformity assessment\", Y. K. Ben-Zikri, Z. Yaniv, K. Baum, C. A. Linte, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, DOI:10.1080\/21681163.2018.1537859.\n\nIn\u00a0[1]:\nimport SimpleITK as sitk\nimport numpy as np\nimport os.path\nimport copy\n\n%matplotlib notebook\nimport gui\nimport matplotlib.pyplot as plt\n\n\n\n# Fetch all of the data associated with this example.\n\nFetching leg_panorama\/readme.txt","date":"2020-10-30 06:53:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6240817904472351, \"perplexity\": 9026.787835360421}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107909746.93\/warc\/CC-MAIN-20201030063319-20201030093319-00036.warc.gz\"}"}
null
null
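The notebook above goes on to build the panorama by registering the overlapping acquisitions. As a rough illustration of the kind of pairwise alignment involved, here is a sketch; the file names, metric, optimizer settings and the translation-only motion model are my assumptions, not the notebook's actual pipeline.

```python
import SimpleITK as sitk

def register_pair(fixed, moving):
    """Estimate a translation aligning 'moving' to 'fixed' (illustrative settings)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()),
                            inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)

# Hypothetical file names; the real data comes from the fetched archive.
hip = sitk.ReadImage("hip.mha", sitk.sitkFloat32)
knee = sitk.ReadImage("knee.mha", sitk.sitkFloat32)

tx = register_pair(hip, knee)
# Resample the knee image into the hip image's coordinate frame.
knee_aligned = sitk.Resample(knee, hip, tx, sitk.sitkLinear, 0.0)
```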
{"url":"http:\/\/mathhelpforum.com\/calculus\/83240-antiderrivative.html","text":"1. ## antiderrivative of...\n\nIm having trouble finding the antiderrivative of r\/[sqrt(4+r^2)]\n\nthe answer is [sqrt(4+r^2)]...but how can that be?\n\n2. Originally Posted by johntuan\nIm having trouble finding the antiderrivative of r\/[sqrt(4+r^2)]\n\nthe answer is [sqrt(4+r^2)]...but how can that be?\n\n$\\int \\frac{r}{\\sqrt{4+r^2}}dr$\n\nlet $u=4+r^2 \\implies du=2rdr \\iff \\frac{du}{2}=rdr$\n\n$\\int \\frac{r}{\\sqrt{4+r^2}}dr=\\frac{1}{2}\\int \\frac{1}{u^{1\/2}}du=...$\n\n3. but then wouldnt the answer be over 2?\n\nu^(1\/2)\/2\n\nbut the answer doesnt have a denominator:s\n\n4. Not quite\n\n$\\frac{1}{2}\\int \\frac{1}{u^{1\/2}}du=\\frac{1}{2}\\int u^{-1\/2}du=\\frac{\\frac{1}{2}}{\\frac{1}{2}}u^{1\/2}=\\sqrt{u}=\\sqrt{4+r^2}$","date":"2017-08-23 07:27:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 4, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8927950263023376, \"perplexity\": 14454.519592057715}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886117874.26\/warc\/CC-MAIN-20170823055231-20170823075231-00430.warc.gz\"}"}
null
null
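The substitution result in the thread above is quickly confirmed by differentiating: by the chain rule, $$\frac{d}{dr}\sqrt{4+r^2}=\frac{2r}{2\sqrt{4+r^2}}=\frac{r}{\sqrt{4+r^2}},$$ which is exactly the original integrand (the arbitrary constant $+C$ is omitted throughout the thread).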
package testing

import (
	"sync"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"

	"knative.dev/pkg/kmeta"
	"knative.dev/pkg/tracker"
)

// NullTracker implements Tracker
//
// Alias is preserved for backwards compatibility
type NullTracker = FakeTracker

// FakeTracker implements Tracker.
type FakeTracker struct {
	sync.Mutex
	references map[tracker.Reference]map[types.NamespacedName]struct{}
}

var _ tracker.Interface = (*FakeTracker)(nil)

// OnChanged implements OnChanged.
func (*FakeTracker) OnChanged(interface{}) {}

// GetObservers implements GetObservers.
func (n *FakeTracker) GetObservers(obj interface{}) []types.NamespacedName {
	item, err := kmeta.DeletionHandlingAccessor(obj)
	if err != nil {
		return nil
	}

	or := kmeta.ObjectReference(item)
	ref := tracker.Reference{
		APIVersion: or.APIVersion,
		Kind:       or.Kind,
		Namespace:  or.Namespace,
		Name:       or.Name,
	}

	n.Lock()
	defer n.Unlock()

	keys := make([]types.NamespacedName, 0, len(n.references[ref]))
	for key := range n.references[ref] {
		keys = append(keys, key)
	}
	return keys
}

// OnDeletedObserver implements OnDeletedObserver.
func (n *FakeTracker) OnDeletedObserver(obj interface{}) {
	item, err := kmeta.DeletionHandlingAccessor(obj)
	if err != nil {
		return
	}
	key := types.NamespacedName{Namespace: item.GetNamespace(), Name: item.GetName()}

	n.Lock()
	defer n.Unlock()

	for ref, objs := range n.references {
		delete(objs, key)
		if len(objs) == 0 {
			delete(n.references, ref)
		}
	}
}

// Track implements tracker.Interface.
func (n *FakeTracker) Track(ref corev1.ObjectReference, obj interface{}) error {
	return n.TrackReference(tracker.Reference{
		APIVersion: ref.APIVersion,
		Kind:       ref.Kind,
		Namespace:  ref.Namespace,
		Name:       ref.Name,
	}, obj)
}

// TrackReference implements tracker.Interface.
func (n *FakeTracker) TrackReference(ref tracker.Reference, obj interface{}) error {
	item, err := kmeta.DeletionHandlingAccessor(obj)
	if err != nil {
		return err
	}
	key := types.NamespacedName{Namespace: item.GetNamespace(), Name: item.GetName()}

	n.Lock()
	defer n.Unlock()

	if n.references == nil {
		n.references = make(map[tracker.Reference]map[types.NamespacedName]struct{}, 1)
	}
	objs := n.references[ref]
	if objs == nil {
		objs = make(map[types.NamespacedName]struct{}, 1)
	}
	objs[key] = struct{}{}
	n.references[ref] = objs

	return nil
}

// References returns the list of objects being tracked
func (n *FakeTracker) References() []tracker.Reference {
	n.Lock()
	defer n.Unlock()

	refs := make([]tracker.Reference, 0, len(n.references))
	for ref := range n.references {
		refs = append(refs, ref)
	}
	return refs
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,053
"Building the world's most liveable city together". The Auckland Design Manual (ADM) provides an online resource for everyone involved in design, building and development to either share their great design stories with others, or to seek inspiration, tools and best practice advice from those who have already been successful. Auckland's planning rulebook, the Proposed Auckland Unitary Plan will articulate the rules for the future growth, whilst the ADM illustrates how to achieve the quality outcomes sought by the Unitary Plan. We were engaged initially to develop a visual strategy for managing the huge quantities of information across this platform. At that stage we came up with a visual style based around an instruction manual. The icons used throughout the site are taken from this. We then worked through a very collaborative process with the design team at the council to work out the best approach to the information design, creating the mega-menus and clear hierarchies within that. We also developed the logo currently being used, which keeps the word mark close to the council brand guidelines (and other council logos) while also giving it a unique look of it's own to help differentiate and reflect the values of good design important throughout the ADM. The overall design for the site we established takes shapes from an abstracted cityscape and used them small on the header and in bigger abstract shapes across the backgrounds to create a unifying style. We also created colour themes for the different levels of information to help give the user a sense of where they are in the (very large) site.
{ "redpajama_set_name": "RedPajamaC4" }
5,419
12/9/2009 - Can the U.S. Withdraw from Afghanistan and Iraq? 4/15/2008 - Is the "War on Terror" Creating Terrorism? 6/21/2007 - Living With a Nuclear Iran and North Korea? 5/17/2006 - What Should the U.S. Do about China? 6/17/2004 - The Future of Iraq: Democracy or Quagmire?
{ "redpajama_set_name": "RedPajamaC4" }
2,288
class MailAddressImporter < EntityImporter
  # An import is valid only if every parsed mail address passes model validation.
  def valid?
    @valid ||= mail_addresses.all?(&:valid?)
  end

  # Collect validation messages for any invalid records.
  def errors
    ImporterErrors.messages_for(mail_addresses)
  end

  # Persist all parsed records.
  def import
    mail_addresses.each(&:save)
  end

  protected

  # Build MailAddress records from the CSV rows via the presenter.
  def mail_addresses
    @mail_addresses ||= csv_entries.map(&:to_hash).map do |p|
      MailAddressPresenter.new(p).to_mail_address
    end
  end

  def self.required_headers
    %w(id location_id attention address_1 address_2 city state_province postal_code country)
  end
end
{ "redpajama_set_name": "RedPajamaGithub" }
563
\section{Introduction} A Costas array is an $N\times N$ array of dots with the properties that one dot appears in each row and column, and that no two of the $N(N-1)/2$ line segments connecting dots have the same slope and length. It is clear that a permutation $f$ of $\{1,2,\ldots,N\}$, from the columns to the rows (i.e.\ to each column $x$ we assign exactly one row $f(x)$), gives a Costas array if and only if for $x \neq y$ and $k \neq 0$ such that $1 \leq x, y, x+k, y+k \leq N$, then $f(x+k)-f(x) \neq f(y+k)-f(y)$. Costas arrays were first considered by Costas \cite{costas1975} as permutation matrices with ambiguity functions taking only the values 0 and (possibly) 1, applied to the processing of radar and sonar signals. The use of Costas arrays in radar is summarized in \cite[\S 5.2]{levanon}. Costas arrays are also used in the design of optical orthogonal codes for code division multiple access (CDMA) networks \cite{maric1995}, and in the construction of low-density parity-check (LDPC) codes \cite{chae2004}. Let us briefly recall some known constructions on Costas arrays. One can find more details in the survey papers of Golomb and Taylor \cite{MR674209,golomb1984}, Drakakis \cite{drakakis}, Golomb and Gong \cite{GolombGong}. In the following, $p$ is taken to be a prime and $q$ a prime power. The known general constructions for $N \times N$ Costas arrays are the Welch construction for $N=p-1$ and $N=p-2$, the Lempel construction for $N=q-2$, and the Golomb construction for $N=q-2$, $N=q-3$. Moreover, if $q=2^k$, $k\geq 3$, the Golomb construction works for $N=q-4$. The validity of the Welch and Lempel constructions is proved by Golomb in \cite{MR749508}. The Golomb constructions for $N=q-3$ and $N=2^k-4$ depend on the existence of (not necessarily distinct) primitive elements $\alpha$ and $\beta$ in $\mathbb{F}_q$ such that $\alpha+\beta=1$. The existence of primitive elements $\alpha$ and $\beta$ in $\mathbb{F}_q$ such that $\alpha+\beta=1$ was proved by Moreno and Sotero in \cite{moreno1990}. (Cohen and Mullen give a proof with less computational checking in \cite{MR1209243}; more recently, Cohen, Oliveira e Silva, and Trudgian proved \cite{COT1} that, for all $q>61$, every non-zero element in $\mathbb{F}_{q}$ can be written as a linear combination of two primitive roots of $\mathbb{F}_{q}$.) \begin{comment} The total number of $N \times N$ Costas arrays have been enumerated for $N$ from 1 to 26 \cite{beard}. Let $c_N$ denote the number of $N \times N$ Costas arrays. It is known (cf.\ Drakakis \cite{drakakis}) that for all sufficiently large $N$, \[ c_N \leq N!\bigg(\frac{40(N-1)}{N(N-2)}+\frac{9N^2-45N+60}{(N-3)(N-4)(N-5)}\bigg), \] and so $c_N=o(N!)$. Silverman, Vickers and Mooney \cite{silverman} give probabilistic estimates on the number of Costas arrays. However it is not known whether $C_N > 0$ for all positive integers $N$. The smallest such $N$'s are $32$ and $33$. It is challenging problem to find more construction methods to obtain some information on the numbers of Costas arrays. \end{comment} Among these algebraic constructions over finite fields, there are the $T_4$ variant of the Lempel construction for $N=q-4$ when there is a primitive element $\alpha$ in $\mathbb{F}_q$ such that $\alpha^2+\alpha=1$, and the $G_4$ variant of the Golomb construction for $N=q-4$ when there are two primitive elements $\alpha$ and $\beta$ such that $\alpha + \beta = 1$ and $\alpha^{2} + \beta^{-1} = 1$. 
Through the study of primitive elements of finite fields, Golomb proved in \cite{golomb1992} that $q$ must be either 4, 5 or 9, or a prime $p \equiv \pm 1 \pmod{10}$ in order for the $T_4$ construction to apply. Note that this is a necessary but not sufficient condition (for example $p=29$). In the same paper, Golomb also proved that the values of $q$ such that the $G_4$ construction occurs are precisely $q = 4, 5, 9$, and those primes $p$ for which the $T_4$ construction occurs and which satisfy either $p \equiv 1 \pmod{20}$ or $p \equiv 9 \pmod{20}$. In this paper, we connect the $T_4$ and $G_{4}$ constructions with the concept of Fibonacci primitive roots. We show, in Theorems \ref{t1} and \ref{t2}, that under the Extended Riemann Hypothesis (ERH) there are infinitely many primes such that $T_4$ and $G_4$ can apply. We conclude with some observations and questions about trinomials of primitive roots. \section{Fibonacci primitive roots}\label{section:FPR} The $T_{4}$ construction requires a primitive root $\alpha$ such that \begin{equation}\label{dog} \alpha^{2} + \alpha = 1. \end{equation} To investigate the nature of solutions to (\ref{dog}) we recall the notion of a \textit{Fibonacci primitive root}, or \textit{FPR}. We say that $g$ is a FPR modulo $p$ if $g^{2} \equiv g + 1 \pmod p$. Shanks and Taylor \cite{ShanksTaylor} proved a similar statement to that which we give below. \begin{lem}\label{onlyLem} If $g$ is a FPR modulo $p$, then $g-1$ is a primitive root modulo $p$ that satisfies (\ref{dog}), and vice versa. \end{lem} \begin{proof} It is clear that $g$ satisfies $g^{2} \equiv g + 1 \pmod p$ if and only if $g-1$ satisfies (\ref{dog}): all that remains is to check that $g$ and $g-1$ are primitive. Suppose first that $g$ is a FPR modulo $p$. Then, since $g(g-1) \equiv 1\equiv g^{p-1}$, we have \begin{equation*}\label{cat} (g-1)^{n} \equiv g^{p-n-1} \pmod p, \end{equation*} Note that, as $n$ increases from $1$ to $p-1$, $g^{p-n-1}$ generates $\mathbb{F}_{p}$, since $g$ is primitive. Hence $g-1$ is a primitive root modulo $p$. The converse is similarly proved. \end{proof} Let $F(x)$ denote the number of primes $p\leq x$ that have at least one FPR. Shanks \cite{ShanksFPR} conjectured that under ERH, $F(x) \sim C \pi(x)$, where $\pi(x)$ is the prime counting function, and where $C \approx 0.2657\ldots $. Lenstra \cite{LenstraArtin} proved Shanks' conjecture; a proof also appears in Sander \cite{Sander}. We therefore have \begin{thm}\label{t1} Let $T(x)$ be the number of primes $p\leq x$ for which $p$ satisfies the $T_{4}$ construction. Then, under the Extended Riemann Hypothesis \begin{equation*}\label{t1:e1} T(x) \sim \frac{27}{38} \pi(x) \prod_{p=2}^{\infty} \left( 1- \frac{1}{p(p-1)}\right) \sim (0.2657\ldots) \pi(x). \end{equation*} \end{thm} Unconditionally, it seems difficult to show that there are infinitely many primes that have a FPR. Phong \cite{Phong} has proved some results about a slightly more general class of primitive roots. For our purposes, \cite[Cor.\ 3]{Phong} implies that if $p\equiv 1, 9 \pmod{10}$ such that $\frac{1}{2}(p-1)$ is prime then there exists (exactly) one FPR modulo $p$. This does not appear, at least to the authors, to make the problem any easier! We turn now to the $G_{4}$ construction, which requires two primitive roots $\alpha, \beta$ such that \begin{equation*}\label{dog2} \alpha + \beta = 1, \quad \alpha^{2} + \beta^{-1} = 1. 
\end{equation*}
Since we require that $p \equiv 1, 9 \pmod{20}$ we are compelled to ask: how many of these primes have a FPR? We can follow the methods used in \cite[\S 8]{LenstraArtin}, and also examine Shanks's discussion in \cite[p.\ 167]{ShanksFPR}. Since we are now only concerned with $p\equiv 1, 9 \pmod{20}$ we find that the asymptotic density should be $\frac{9}{38}A$, where $A = \prod_{p=2}^{\infty} \left( 1- \frac{1}{p(p-1)}\right) \approx 0.3739558138$ is Artin's constant. This leads us to
\begin{thm}\label{t2}
Let $G(x)$ be the number of primes $p\leq x$ for which $p$ satisfies the $G_{4}$ construction. Then, under the Extended Riemann Hypothesis
\begin{equation*}\label{t2:e1}
G(x) \sim \frac{9}{38} \pi(x) \prod_{p=2}^{\infty} \left( 1- \frac{1}{p(p-1)}\right) \sim (0.08856\ldots) \pi(x).
\end{equation*}
\end{thm}
\section{Conclusion}
One can show that, for $p>7$ there can be no primitive root $\alpha$ modulo $p$ that satisfies $\alpha + \alpha^{-1} \equiv 1 \pmod p$. (Suppose there were: then $\alpha^{2} + 1 \equiv \alpha \pmod{p}$ so that $\alpha^{3}+ \alpha^{2} + 1 \equiv \alpha^{2} \pmod{p}$ whence $\alpha^{3} \equiv -1\pmod{p}$. Hence $\alpha^{6} \equiv 1\pmod{p}$ --- a contradiction for $p>7$.) From this, it follows that $x^{p-2} + x - 1$ is never primitive over $\mathbb{F}_p$ for $p>7$. Consider the following question: given $1\leq i\leq j \leq p-2$, let $d(i, j)$ denote the density of primes for which there is a primitive root $\alpha$ satisfying $\alpha^{i} + \alpha^{j} \equiv 1 \pmod p$. The above comments show that $d(1, p-2) = 0$; Theorem \ref{t1} shows that under ERH, $d(1, 2) \approx 0.2657$. What can be said about $d(i, j)$ for other prescribed pairs $(i, j)$? In the case $i=j$, we have $2 \alpha^i \equiv 1 \pmod{p}$ and thus $\alpha^i \equiv 2^{-1} \equiv \frac{p+1}{2} \pmod{p}$. In particular, if $(i, p-1) =1$ then it is equivalent to ask for the density of primes such that $\frac{p+1}{2}$ is a primitive root modulo $p$. We have not been able to find a reference for this in the literature, though computational evidence seems to suggest that this value should be close to Artin's constant $0.37395\ldots$. When $i \neq j$, it is easy to see that $d(2, \frac{p-1}{2} + 1) = d(1,2)$. Therefore, under ERH the trinomial $x^{\frac{p-1}{2}+1} + x^2 - 1$ is primitive over $\mathbb{F}_p$ for infinitely many primes $p$. More generally, we can show that for $p > 3i$ there does not exist a primitive root $\alpha$ such that $\alpha^{\frac{p-1}{2} +i} + \alpha^{\frac{p-1}{2} +2i} \equiv 1 \pmod{p}$, and thus $d(\frac{p-1}{2} +i, \frac{p-1}{2} + 2i) =0$. Similarly, $d(i, 2i+\frac{p-1}{2}) =0$. Indeed, if $\alpha^i - \alpha^{2i} \equiv 1 \pmod{p}$ for a primitive $\alpha$, we obtain $\alpha^{3i}\equiv \alpha^{2i} - \alpha^i \equiv -1\pmod{p}$. Hence we can show that if $p > 6i$ there is no primitive element $\alpha$ such that $\alpha^i + \alpha^{2i+\frac{p-1}{2}} \equiv 1 \pmod{p}$. Using the same arguments as before, we can also show that $d(i, p-1-i) =0$ for any fixed $i$.
\begin{comment}
\subsection{DeLeon}
DeLeon \cite{DeLeon} proved that there exists an FPR if and only if $p\equiv \pm 1 \pmod{10}$ and $A(p) = p-1$, where $A(p)$ is the period of the Fibonacci numbers modulo $p$.
De Leon also considers the case of $p\equiv 11, 19 \pmod{20}$ and shows that, if a FPR does exist, it must be of a certain form.
Specifically, with $\mathcal{F}_{n}$ denoting the $n$th Fibonacci number, De Leon shows that, if $p \equiv 11, 19 \pmod{20}$ has a FPR $g$, then we must have
\begin{equation*}
g \equiv - \frac{1 + \mathcal{F}_{(p-1)/2}}{\mathcal{F}_{(p-1)/2}} \pmod p.
\end{equation*}

\subsection{Conjecture G}
Cohen, Oliveira e Silva, and Trudgian recently proved \cite{COT1} that, for all $q>61$, every non-zero element in $\mathbb{F}_{q}$ can be written as a linear combination of two primitive roots of $\mathbb{F}_{q}$. Initially I thought this had no implications for Costas arrays; I thought that Cohen and Mullen \cite{MR1209243} had proved the last of Golomb's conjectures (concerning sums of primitive roots) that had a direct application to Costas arrays. However, I received the following reply from Konstantinos Drakakis.
\begin{quote}
I also believe that the generalized conjecture you and your coauthors have proved [that is, in \cite{COT1}] does not have immediate applications in Costas arrays. But let us analyze this a bit further: you showed that for any $a,b,c$ you can find $g_m, g_n$ such that
\begin{equation*}
a = b g_n + c g_m.
\end{equation*}
But you can express $b=g_n^{i-1}$ and $c=g_m^{j-1}$ so that $a=g_n^i +g_m^j$. So, for every $i, j, a$ you can find $g_m, g_n$ so that this condition holds. In particular, if $a=1$, this shows that for any $(i,j)$ there will be a Golomb array with a dot at that point. I can't recall if this is known or not. If $a$ is not $1$, the interpretation for Costas arrays becomes less obvious.
\end{quote}
Can we say something about the distribution of primitive trinomials? When we have $a=bg_n + cg_m$, we have $a = bg^i + cg^j$ for some primitive element $g$. So $bx^i + cx^j - a$ is a primitive trinomial when we prescribe the coefficients $a, b, c$. Can we say anything about $i$ and $j$, or about how far apart $i$ and $j$ are? On the other hand, can we say more about the connection between $b, c$ and $g_n, g_m$? I do not think Drakakis's claim holds for arbitrary $(i, j)$.
\end{comment}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,600
## Protocol.Network.SignedCertificateTimestamp.status property

Validation status.

<b>Signature:</b>

```typescript
status: string;
```
{ "redpajama_set_name": "RedPajamaGithub" }
8,791
Q: Break a date range into hours per day for each job

Yesterday I had asked for an efficient way to break a date range into hours per day and received an answer at the following link... Is there an efficient way to break a date range into hours per day?

Now I need to go a step further and generate the same thing for each job in a list. I have a table with the following sample information...

    +-------+-------------------------+-------------------------+
    | JobID | StartDate               | EndDate                 |
    +-------+-------------------------+-------------------------+
    | 1     | 2015-01-27 07:32:35.000 | 2015-01-28 14:39:35.000 |
    | 2     | 2015-01-27 07:32:35.000 | 2015-01-29 16:39:35.000 |
    | 3     | 2015-03-02 09:46:25.000 | 2015-03-05 17:24:15.000 |
    +-------+-------------------------+-------------------------+

And I need to get a list like the following...

    +-------+------------+-------+
    | JobID | Date       | Hours |
    +-------+------------+-------+
    | 1     | 2015-01-27 | 16.47 |
    | 1     | 2015-01-28 | 14.65 |
    | 2     | 2015-01-27 | 16.47 |
    | 2     | 2015-01-28 | 24.00 |
    | 2     | 2015-01-29 | 16.65 |
    | 3     | 2015-03-02 | 14.23 |
    | 3     | 2015-03-03 | 24.00 |
    | 3     | 2015-03-04 | 24.00 |
    | 3     | 2015-03-05 | 17.40 |
    +-------+------------+-------+

Can the recursive CTE (from the link I included) be modified to include a JobID?

Thanks, Carl

A: Here is what I came up with for a solution...

    DECLARE @testTable TABLE (JobID INT, startdate DATETIME, enddate DATETIME);
    INSERT INTO @testTable VALUES (1,'2015-01-27 07:32:35.000','2015-01-28 14:39:35.000');
    INSERT INTO @testTable VALUES (2,'2015-01-27 07:32:35.000','2015-01-29 16:39:35.000');
    -- single-day job, to exercise the edge case handled by the CASE in the anchor below
    INSERT INTO @testTable VALUES (3,'2015-03-02 09:46:25.000','2015-03-02 17:24:15.000');

    WITH cte AS
    (
        -- anchor row: hours from the start time to the following midnight,
        -- capped at enddate so single-day jobs are not overcounted
        SELECT JobID,
               CAST(startdate AS DATE) AS startdate,
               DATEDIFF(minute, startdate,
                        CASE WHEN DATEADD(DAY, 1, CAST(startdate AS DATE)) > enddate
                             THEN enddate
                             ELSE DATEADD(DAY, 1, CAST(startdate AS DATE)) END) / 60.0 AS hours,
               enddate
        FROM @testTable
        UNION ALL
        -- recursive rows: advance one day at a time, capping the final day at enddate
        SELECT JobID,
               DATEADD(DAY, 1, startdate),
               DATEDIFF(minute, DATEADD(DAY, 1, startdate),
                        CASE WHEN DATEADD(DAY, 2, startdate) > enddate
                             THEN enddate
                             ELSE DATEADD(DAY, 2, startdate) END) / 60.0,
               enddate
        FROM cte
        WHERE startdate <> CAST(enddate AS DATE)
    )
    SELECT *
    FROM cte
    ORDER BY JobID, startdate
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,997
Direct Relief Delivers Emergency Medical Aid to Peru, Briefs Peruvian Consul General on Flood Response

Ambassador Liliana Tamara Cino de Silva, Consul General of Peru in Los Angeles, looks over shipments with Latin America Program Manager Cydney Justman at the Goleta warehouse Friday. (Photo by Lara Cooper/Direct Relief)

By Lara Cooper, March 24, 2017, 6:18 pm

As deadly flooding persists in Peru, Direct Relief mobilized a second shipment of more than 23,000 pounds of medicines and medical aid to the country. Intense flooding has pounded the country since the beginning of the year. More than 80 people have been killed as a result and 110,000 have been displaced from their homes as the floodwaters have risen.

On Friday, Ambassador Liliana Tamara Cino de Silva, Consul General of Peru in Los Angeles, visited Direct Relief headquarters to view firsthand how the organization is responding to the crisis in Peru. During Cino de Silva's visit, forklifts whizzed through the warehouse, lifting brightly wrapped emergency aid pallets into a truck for transport. The shipments have been expedited and should arrive in Lima, Peru, by Sunday. Direct Relief invited the ambassador to its Goleta headquarters to learn more about the response in Peru and how to assist in the future.

The shipment sent out Thursday contained critical items like antibiotics, wound care supplies, insect repellant, personal hygiene products and other requested medical goods. These items will go directly to healthcare partners helping those impacted by the floods. Thursday's shipment will be distributed primarily to communities throughout coastal Peru that have been devastated by the flooding. The shipment, worth $984,000, is the latest to go out in the effort to aid flood victims in the country. Earlier this month, Direct Relief sent 12,700 pounds of medical aid.

Cholera often follows natural disasters, like flooding, due to compromised water supplies and the prevalence of stagnant water. The shipment that went out Thursday includes two cholera kits, each of which contain enough medical supplies to treat 100 patients. Seventy portable water purification systems are also being sent. Because many of Peru's hospitals and clinics have been destroyed in the flooding, 13 medical tents are also part of the shipment, which will create a temporary space for health services to be administered. Emergency medical backpacks, which contain essential first aid items for first responders, are also included in the shipment.

Direct Relief has been providing ongoing medical aid to Peru for the past 47 years. The first emergency situation that Direct Relief responded to in Peru was the Great Peruvian Earthquake, which struck northern Peru in 1970 and was responsible for killing over 66,000 people. Since then, Direct Relief has been an active and steady player in the emergency response arena, coordinating the delivery of situation-specific aid to partner facilities and organizations throughout Peru. Since 2009, Direct Relief has donated $42 million in medical goods to its partners to support their ongoing healthcare work.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,264
Nvidia GeForce RTX 4090 Graphics Can Reach 2750 MHz

By Roberto Silva, Jul 5, 2022

Information regarding what Nvidia is preparing for its new line of GeForce RTX 40 graphics cards continues to arrive regularly, and expectations are already high, leaving potential consumers of these chips eager for their arrival. Most recently, industry sources indicate that the high-end model, namely the GeForce RTX 4090, will be able to reach a clock speed of 2,750 MHz.

After it became known that Nvidia has cut part of the production of its next-generation graphics cards, due to estimates of a drop in PC sales, revelations of possible characteristics and specifications of the new GeForce RTX 40 chips from the North American manufacturer keep coming.

GeForce RTX 4090 can reach 2750 MHz

Once again, it is the popular and reliable leaker @kopite7kimi who comes forward with new information about the next high-end graphics card from the new GeForce RTX 40 line. According to what has been revealed, the GeForce RTX 4090 model will arrive with an AD102-300-A1 graphics chip with 16,384 CUDA cores and will be able to reach a base clock of 2,235 MHz and a boost clock of 2,520 MHz. The leaker's publication further mentions that the maximum achieved by this GPU is 2,750 MHz.

Details also show that the flagship graphics card will feature 24 GB of GDDR6X RAM at 21 Gbps, with a 384-bit memory interface. As for the TDP, it will be 450 W.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
713
\section{Introduction}
Our ability to understand, forecast and control dynamical systems depends crucially on our knowledge of their underlying equations. Recently, data-driven methods have been proposed to uncover unknown governing equations and to increase our forecasting capabilities \cite{BruntonKutz,ChampionEtAl18,SchmidtLipson09,GuimeraEtAl20,UdrescuTegmark20,BiggioEtAl21}. A particularly attractive and easy-to-implement method, proposed by Brunton et al. \cite{BruntonEtAl16}, is {\em{Sparse Identification of Nonlinear Dynamics}}, or SINDy for short. The problem addressed by SINDy is the following: given observations ${\bf{x}}_n\in \mathbb{R}^d$ sampled at (not necessarily equidistant) times $t_n$ which were generated by a dynamical system of the form
\begin{align*}
\dot {\bf{x}} = \mathcal{F}({\bf{x}}),
\end{align*}
where the dot signifies the time derivative, find an approximation to this dynamical system using only the data. SINDy approaches this question by assuming that the vector field $\mathcal{F}({\bf{x}})$ lies in the span of a given (potentially very large) library of functions such as simple polynomials. This reduces the problem to linear regression on a library of nonlinear functions. In line with the parsimony principle, SINDy imposes a sparsity constraint, leading to a sparse approximation of $\mathcal{F}({\bf{x}})$ in terms of the members of the function library. If the system is only partially observed, the method of SINDy can be extended to the reconstructed phase space using Takens' embedding theorem; it is then known as the Hankel alternative view of Koopman (HAVOK) analysis \cite{BruntonEtAl17}. SINDy has been successfully applied to systems appearing in a wide range of scientific disciplines, including fluid dynamics, plasma physics and nonlinear optics \cite{LoiseauBrunton18,DamEtAl17,SorokinaEtAl16}.

Many dynamical systems involve a delayed feedback response and are modelled by delay-differential equations (DDEs). Examples range from the natural world to engineering, with applications in, for example, population dynamics \cite{Cushing77,Gourley00}, biological regulatory systems \cite{GlassEtAl21}, cardiac dynamics \cite{GottwaldKramer06,Gottwald08}, climate dynamics \cite{SuarezSchopf88,KeaneEtAl17}, mechanical vibration \cite{WangEtAl19} and optical systems \cite{TerrienEtAl21}, to name just a few.

In this paper, we extend the framework of SINDy to dynamical systems which are described by DDEs, where, in addition to the sparse subset of the library of nonlinear functions and their associated coefficients, the delay has to be determined as a parameter. We achieve this by employing a bilevel optimization: for each fixed delay time a sparse SINDy model is constructed, and the delay is then chosen to minimize the error with which the associated SINDy model reproduces the observations. Particular emphasis will be given to dealing with noisy data. We shall first test the proposed SINDy-delay methodology on a one-dimensional toy model with artificially noisy data before considering a challenging problem involving biological data of gene expression in the bacterium {\em{Pseudomonas aeruginosa}} subject to the influence of zinc.

\textit{P. aeruginosa} is an opportunistic pathogen capable of causing acute infections in hospitals, in particular in immunocompromised patients, in cystic fibrosis patients and in severe burn victims \cite{Kerr09,Jurado-Martin21}.
Therefore, it belongs to the Priority 1 category for research into antibiotic resistance as determined by the World Health Organization \cite{noauthor_who_2017}. \textit{P. aeruginosa} has a large genome, coding for 5570 open reading frames, of which 72 are predicted to be involved in two-component systems (TCSs) \cite{stover_complete_2000}. TCSs are crucial biological building blocks. They are composed of a sensor protein which, in response to a stimulus, activates a cognate transcriptional regulator by phosphorylation, allowing for a rapid adaptation to environmental changes. \textit{P. aeruginosa} has one of the highest numbers of putative TCSs among bacteria, contributing to the ubiquity of this micro-organism \cite{alm2006}. For instance, the CzcRS TCS promotes resistance to high concentrations as well as to large fluctuations of the concentration of trace metals such as zinc. This is advantageous for the bacteria in an infectious context \cite{Perron04,Dieppois12}, since, to counter the multiplication of bacteria, the host uses nutritional immunity strategies, scavenging essential nutrients including zinc, iron and manganese \cite{kehl-fie_nutritional_2010, capdevila2016,lonergan2019}. Conversely, during phagocytosis, macrophages deliver a toxic amount of zinc and copper into the phagolysosome, leading to the death of the invading organism \cite{Djoko15,stafford2013,gao2018}. Thus, the success of an infection depends largely on the capacity of a pathogen to survive in zinc-deficient as well as in zinc-excess environments, and to switch from one of these extreme conditions to the other. \textit{P. aeruginosa} has a whole arsenal of highly effective systems for regulating the entry and exit of the metal. Moreover, zinc was shown to exacerbate the bacterium's pathogenicity, enhancing virulence factor production and rendering this micro-organism more resistant to antibiotics, especially those belonging to the carbapenem family, a last-resort anti-pseudomonas class of compounds \cite{Perron04,Dieppois12}.

In order to better understand zinc homeostasis in \textit{P. aeruginosa}, we derive a mathematical delay differential equation model focusing on the dynamics of its two main zinc export machineries. To address this challenging question, we use the proposed SINDy-delay methodology applied to experimental data.

The paper is organised as follows. In Section~\ref{sec:SINDy} we introduce an extension of SINDy to find parsimonious models for delay-differential equations. In Section~\ref{sec:ENSO} we illustrate the effectiveness of our method in the context of a known toy model DDE; in particular, we show that the DDE is recovered well even for short noise-contaminated observations. In Section~\ref{sec:bio} we then apply our method to experimental gene expression data of the bacterium \textit{P. aeruginosa} under various concentrations of zinc. We conclude in Section~\ref{sec:conclusion} and discuss biological implications of the discovered DDE describing the bacterium's zinc regulation system.
\section{Sparse Identification of Nonlinear Dynamics with delay for noisy data (SINDy-delay)}
\label{sec:SINDy}
Consider a $d$-dimensional dynamical system with delay time $\tau$,
\begin{align}
\label{eq0}
\dot {\bf{x}} = \mathcal{F}({\bf{x}}(t),{\bf{x}}(t-\tau)),
\end{align}
where ${\bf{x}}(t)\in\mathbb{R}^d$, which is probed at times $t_n$, $n=\ldots, -1, 0,1,2,3,\ldots$, by observations
\begin{align*}
\boldsymbol{\chi}_n = {\bf{x}}_n +\boldsymbol{ \Gamma} \boldsymbol{\eta}_n,
\end{align*}
with measurement error covariance matrix $\boldsymbol{\Gamma}^2 \in \mathbb{R}^{d\times d}$ and independent normally distributed noise ${\boldsymbol{\eta}}_n\sim {\mathcal{N}}(0,\operatorname{Id})$. For simplicity we assume throughout $\boldsymbol{ \Gamma} = \gamma \operatorname{Id}$. The aim is to find a parsimonious approximation of the vector field $\mathcal{F}({\bf{x}}(t),{\bf{x}}(t-\tau))$ as a linear combination of nonlinear functions selected from a library $\mathcal{R}$ of cardinality $N_{\mathcal{R}}$. In particular, the $k$th component is expressed as a linear combination of all the library functions $\theta_{j} \in \mathcal{R}$, $ j=1,\dots,N_{\mathcal{R}}$, of the form
\begin{align}
\mathcal{F}_k({\bf{x}}(t),{\bf{x}}(t-\tau))=\sum_{j=1}^{N_{\mathcal{R}}} \xi_{k}^{j} \theta_{j}({\bf{x}}(t),{\bf{x}}(t-\tau)) + \epsilon_k({\bf{x}}(t),{\bf{x}}(t-\tau)),
\label{e.eps}
\end{align}
for all components $k=1,\ldots,d$. Simple nonlinear regression would amount to determining the coefficients $\xi_{k}^{j}$ using the method of least squares to minimize the mismatch $\epsilon_k$. In SINDy, rather, a sparsity constraint is invoked, seeking a parsimonious model with as many of the coefficients $\xi_{k}^{j}$ as possible being zero while still ensuring fidelity of the approximation (\ref{e.eps}) with respect to the data.

To describe how SINDy finds such an approximation, let us assume for the moment that observations are taken at equidistant times $t_n=n\Delta t$ with constant sampling time $\Delta t$. To account for the delay we form the observation vector
\begin{align*}
\hat {\boldsymbol{\chi}}_n^{(s)} = \begin{pmatrix*}[l] {\boldsymbol{\chi}}_n\\ {\boldsymbol{\chi}}_{n-s} \end{pmatrix*} \in \mathbb{R}^{2d},
\end{align*}
for $n=1,\dots,N$, where the positive integer $s$ is related to a delay time $\tau=s\Delta t$. Following the exposition in Brunton et al. \cite{BruntonEtAl17} and Brunton and Kutz \cite{BruntonKutz}, we collect the observation vectors $\hat{\boldsymbol{\chi}}^{(s)}_n$ in a data matrix
\begin{align*}
\boldsymbol{X}^T = \begin{pmatrix*}[l] {\hat \bchi^{(s)}}_1 & {\hat \bchi^{(s)}}_2&\dots&{\hat \bchi^{(s)}}_{N} \end{pmatrix*} \in \mathbb{R}^{2d \times N}.
\end{align*}
Similarly we define the matrix consisting of the derivatives of the observations ${\boldsymbol{\chi}}_n$ at the observation times,
\begin{align}\label{eq:Xdot}
\boldsymbol{\dot X}^T = \begin{pmatrix*}[l] {\dot { {\boldsymbol{\chi}}}}_1 & {\dot {{\boldsymbol{\chi}}}}_2&\dots&{\dot {{\boldsymbol{\chi}}}}_{N} \end{pmatrix*} \in \mathbb{R}^{d \times N}.
\end{align}
Note that we only consider the time derivatives of ${\boldsymbol{\chi}}_n$ and not those of ${\boldsymbol{\chi}}_{n-s}$, which would be redundant information. Typically one does not have access to the actual derivatives $\dot {\boldsymbol{\chi}}_n$ but only to the variables ${\boldsymbol{\chi}}_n$ themselves. For noise-free, finely sampled observations with $\Delta t\ll 1$, finite differencing can be employed to approximate the derivatives.
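For concreteness, this construction can be sketched in a few lines of code (we use Python with NumPy purely for illustration; the paper itself uses Matlab for the simulations in Section~\ref{sec:ENSO}, and all variable names here are our own):
\begin{verbatim}
import numpy as np

def delay_data_matrices(chi, s, dt):
    # chi: array of shape (N_total, d) holding the observations chi_n.
    # Stack each observation with its s-step delayed copy (tau = s*dt)
    # and estimate the derivatives by centred finite differences.
    idx = np.arange(s + 1, len(chi) - 1)             # valid interior indices
    X = np.hstack([chi[idx], chi[idx - s]])          # rows (chi_n, chi_{n-s})
    Xdot = (chi[idx + 1] - chi[idx - 1]) / (2 * dt)  # naive derivative estimate
    return X, Xdot
\end{verbatim}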
For noisy observations, however, estimating derivatives via finite differencing leads to an amplification of the noise. Denoising methods such as total-variation regularization are required \cite{Chartrand11}. Here we propose to use simple polynomial regression for denoising, as discussed in the following remark.
\begin{remark}
\label{rem:noise}
(Denoising procedure for computing the derivative matrix \eqref{eq:Xdot})
For computing each $\dot {\boldsymbol{\chi}}_n$, $n=1,\ldots, N$, in \eqref{eq:Xdot}, we use polynomial regression. We define for the corresponding observation time $t_n$ a temporal window $[t_n-\delta,t_n+\delta]$ with $\delta = r\Delta t$ and fit a $3$rd order polynomial through the $2r+1$ observations ${\boldsymbol{\chi}}_{n-r},{\boldsymbol{\chi}}_{n-r+1},\ldots,{\boldsymbol{\chi}}_{n+r}$ lying within this time window. Choosing a sufficiently large temporal window, containing more data points than the degree of the regression polynomial, allows for noise reduction. The derivatives $\dot {\boldsymbol{\chi}}_n$ can then be determined analytically from the fitted polynomials at each time $t_n$. This denoising procedure can easily be adapted to handle non-equidistantly sampled observations, which may be the situation in experimental data.
\end{remark}
At the heart of SINDy lies the choice of a suitably large library $\mathcal{R}$. A natural choice is the set of monomials in $x_k(t),x_k(t-\tau)$, $k=1,\ldots,d$, up to a fixed degree $M$, $\mathcal{R}=\{1,x_1(t),x_2(t),x_1(t-\tau),x_2(t-\tau),\ldots \}$, with cardinality $N_{\mathcal{R}} = \binom{2d+M}{M}$. Given a library $\mathcal{R}$, the associated library matrix $\boldsymbol{\Theta} ({\boldsymbol{X}})\in \mathbb{R}^{N \times N_{\mathcal{R}}}$ is constructed from the data by evaluating all functions $\theta_{j}({\bf{x}}(t),{\bf{x}}(t-\tau))$ of the library $\mathcal{R}$ at the observation times $t=t_1,\ldots, t_N$. When considering the library consisting of monomials of up to order $M$, the library matrix becomes
\begin{align*}
\boldsymbol{\Theta}({\boldsymbol{X}}) = \begin{pmatrix} \boldsymbol{1} & {\boldsymbol{X}} & {\boldsymbol{X}}^2 & {\boldsymbol{X}}^3 & \ldots & {\boldsymbol{X}}^M \end{pmatrix} ,
\end{align*}
where the matrices ${\boldsymbol{X}}^m \in \mathbb{R}^{ N \times \binom{2d+m-1}{m}}$ consist of rows whose entries comprise all possible monomials of degree $m$ in the $d$-dimensional variables ${\boldsymbol{\chi}}_n$ and ${\boldsymbol{\chi}}_{n-s}$. For simplicity, we will later in the numerical experiments exclude any products between ${\boldsymbol{\chi}}_n$ and ${\boldsymbol{\chi}}_{n-s}$. This reduces the number of columns of each ${\boldsymbol{X}}^m$, $m\geq 1$, to $2\binom{d+m-1}{m}$ and the overall number of columns of $\boldsymbol{\Theta}({\boldsymbol{X}})$ to $N_{\mathcal{R}} = 2\binom{d+M}{M}-1$.

In SINDy the minimization of the error $\epsilon_k$ made by the approximation (\ref{e.eps}) is achieved by an $\ell_1$-regularized regression problem. Defining first the $\ell_2$-cost function
\begin{align}
C(\Xi) = \sum_{k=1}^d \| {\dot{X}}_k - \boldsymbol{\Theta}({\boldsymbol{X}}) \xi_k\|_2^2,
\label{e.C}
\end{align}
where $\dot X_k \in \mathbb{R}^N$ denotes the $k$th column of $\dot {\boldsymbol{X}}$ and $\Xi=\{\xi_k\}_{k=1,\ldots,d}$ is the coefficient matrix consisting of column vectors $\xi_k\in \mathbb{R}^{N_{\mathcal{R}}}$ which collect the coefficients associated with the library functions for the $k$th component of the state variable (cf. (\ref{e.eps})).
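The derivative estimates $\dot X_k$ entering this cost function can be obtained from noisy data by the windowed polynomial regression of Remark~\ref{rem:noise}; a minimal sketch (one state component at a time, boundary points omitted, and all names our own) reads:
\begin{verbatim}
import numpy as np

def derivative_by_local_polyfit(t, chi_k, r, deg=3):
    # Fit a polynomial of degree `deg` to the 2r+1 observations in the
    # window [t_n - r*dt, t_n + r*dt] and differentiate it analytically
    # at t_n, as in the denoising remark above.
    N = len(t)
    dchi = np.full(N, np.nan)        # boundary points left undefined
    for n in range(r, N - r):
        win = slice(n - r, n + r + 1)
        p = np.polyfit(t[win], chi_k[win], deg)
        dchi[n] = np.polyval(np.polyder(p), t[n])
    return dchi
\end{verbatim}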
To promote sparsity of the coefficients, the cost function is minimized under an $\ell_1$-sparsity constraint according to
\begin{align}
\boldsymbol{\xi_k} = \argmin_{\xi_k\in \mathbb{R}^{N_{\mathcal{R}}}} \| {\dot{{\boldsymbol{X}}}}_k - \boldsymbol{\Theta}( {\boldsymbol{X}}) \xi_k\|_2+ \lambda \|\xi_k\|_1,
\label{e.cost}
\end{align}
where the regularization parameter $\lambda$ controls the sparsity. Rather than using a sequential thresholded least-squares algorithm to approximate the solution of the optimisation problem (\ref{e.cost}), as suggested in \cite{BruntonEtAl16}, we promote sparsity here by the following sequential procedure. Define $\xi^q_k$ to be the coefficient of the $q$th library function $\theta_q$ which is associated with the $k$th component of the vector field. For each $q=1,\dots,N_{\mathcal{R}}$ calculate the least-squares solution $\xi_k^{q} \in \mathbb{R}^{N_\mathcal{R}}$ corresponding to the minimization of the cost function $C(\Xi)$ with the hard sparsity constraint
\begin{align*}
\xi_{k}^q=0.
\end{align*}
For each of the $N_{\mathcal{R}}$ solutions $\xi_k^{q}$ record the associated minimized cost $C(\Xi)$, and select the value
\begin{equation}
\label{eq:qstar}
q^*=\argmin_{q=1,\ldots,N_{\mathcal{R}}} \min_{\xi_k} \{C(\Xi)\ ; \ \xi_{k}^q=0 \}
\end{equation}
corresponding to the hard sparsity constraint $\xi_{k}^{q^*}=0$ which leads to the smallest increase in the minimum of the cost $C(\Xi)$. We then set $\xi_{k}^{q^*}=0$, i.e. we exclude $\theta_{q^\star}$ from the library $\mathcal{R}$ for the $k$th state variable. Algorithmically this amounts to deleting the $q^\star$th column of $\boldsymbol{\Theta}( {\boldsymbol{X}})$ when seeking solutions of (\ref{e.cost}). This process of eliminating coefficients $\xi_k^q$ is then repeated for the remaining library functions in $\mathcal{R}$ (and the corresponding columns of $\boldsymbol{\Theta}( {\boldsymbol{X}})$) until a significantly large change of the cost $C(\Xi)$ has been accrued, suggesting that removing any of the remaining functions would lead to a strong increase of the cost function, thereby deteriorating the accuracy of the SINDy model.
\begin{remark}
\label{rem:sparsity}
(Promoting sparsity to approximate solutions of (\ref{e.cost}))
Promoting sparsity by invoking \eqref{eq:qstar} avoids having to set a cutoff value $p$ such that coefficients with $|\xi_k^j|\leq p$ are removed, as proposed in Brunton et al. \cite{BruntonEtAl16}. Instead, the degree of sparsity is determined visually by plotting the cost function against an increasing number of removals. This does not require the data $X$ to be normalized in a pre-processing step and can be applied to situations in which the variables exhibit widely varying ranges. We shall encounter such a situation for the experimental data in Section~\ref{sec:bio}.
\end{remark}
The above procedure is applied to each of the components $k=1,\ldots,d$, with each component having its own subset of library functions selected. Collecting the typically sparse output vectors $\xi_k^*$, $k=1,\ldots,d$, in the matrix $\Xi^*=(\xi_1^*,\ldots, \xi_d^*)$, the approximate SINDy delay differential equation model for arbitrary fixed delay time $\tau_s=s\Delta t$ is given by
\begin{equation}
\label{e.SINDy}
\dot x_k(t;\tau_s) = \sum_{j=1}^{N_{\mathcal{R}}} (\xi^j_k)^* \theta_{j}({\bf{x}}(t),{\bf{x}}(t-\tau_s)),\quad k=1,\ldots,d.
\end{equation}
Up to here this is standard SINDy, as described in Brunton et al.
\cite{BruntonEtAl16}, except for the proposed alternative method of denoising via local polynomial regression of the data points, described in Remark \ref{rem:noise}, and for the modified algorithm to approximate solutions to the optimization problem (\ref{e.cost}), described in Remark \ref{rem:sparsity}.

\begin{algorithm}[tb]
\hrule
\smallskip
\textbf{Input:} {Observational data $\chi_n$, $n=\ldots,-1,0,1,2,\ldots$, and nonlinear function library set $\mathcal{R}=\{\theta_j\ ;\ j=1,\dots, N_{\mathcal{R}} \}$}.\\
\textbf{Output:} {Delay time $\tau^*$ and coefficient matrix $\Xi^*$ determining the delay differential equation model \eqref{eq_S}}.\\[2mm]
Compute the derivative matrix $\boldsymbol{\dot X}$ in \eqref{eq:Xdot}, with denoising procedure (see Remark~\ref{rem:noise})\;
\For{all delay times $\tau_s=s\Delta t$, $s\in\{0,1,2,\ldots\}$ }{
Compute the data matrix $\boldsymbol{X}$ and the associated library matrix $\Theta(\boldsymbol{X})$.\\
\For{$k\in\{1,\ldots,d\}$ }{
Initialize the list $Q\subset\{1,\ldots,N_{\mathcal{R}}\}$ of indices of coefficients forced to vanish ($\xi_k^j=0$) to achieve the sparsity constraint of the SINDy methodology, setting $Q=\emptyset$\;
\While{$C=\min_{\xi_k \in \mathbb{R}^{N_\mathcal{R}}} \{ \| {\dot{X}}_k - \boldsymbol{\Theta}({\boldsymbol{X}}) \xi_k\|_2 \ ;\ \xi_k^j =0\mbox{ for all }j\in Q\}$ does not increase significantly (see Remark~\ref{rem:sparsity})}{
Compute $\displaystyle (q^*,\xi_k^*)=\argmin_{q\in\{1,\ldots,N_{\mathcal{R}}\}\setminus Q,\ \xi_k \in \mathbb{R}^{N_{\mathcal{R}}}}\{\| {\dot{X}}_k - \boldsymbol{\Theta}({\boldsymbol{X}}) \xi_k\|_2\ ; \ \xi_{k}^j=0 \mbox{ for all } j\in Q \cup \{q\}\} $.\\
Set $Q=Q\cup \{q^*\}$.
}
}
Keep $\Xi^*=\{\xi_1^*,\ldots,\xi_d^*\}$ and the corresponding error $\mathcal{E}(\tau_s)$ in \eqref{e.ell2}.
}
Save the optimal delay time $\tau^*$ given in \eqref{e.cost2}, and the corresponding coefficient matrix $\Xi^*$.
\smallskip
\hrule
\smallskip
\caption{SINDy algorithm for dynamical systems with temporal delay. \label{algo1}}
\end{algorithm}

To account for a delay we extended the nonlinear library $\{\theta_k\}_{k=1,\dots,{N_{\mathcal{R}}}}$ to include delay terms ${\bf{x}}(t-\tau)$, which fits within the standard SINDy methodology for a fixed delay time parameter $\tau_s=s\Delta t$. To estimate the delay time $\tau=s\Delta t$ of the dynamical system \eqref{e.SINDy} which best matches the data $\boldsymbol{\chi}_n$, an additional optimization procedure is employed: consider a range of delay times $\tau_s = s\Delta t$ with integer $s\in \{0,1,2,\ldots\}$. For each $\tau_s$ we perform the above procedure to obtain the SINDy model (\ref{e.SINDy}). We then compute the reconstruction error $\mathcal{E}(\tau_s)$ for fixed delay time parameter $\tau_s$ as the $\ell_2$-error between the solution of the SINDy model (\ref{e.SINDy}) and the observations ${\boldsymbol{\chi}}_n$,
\begin{align}
\mathcal{E}(\tau_s)=\frac{1}{Z}\sum_{n=1}^N \|{\bf{x}}(t_n; \tau_s) - {\boldsymbol{\chi}}_n\|^2.
\label{e.ell2}
\end{align}
We set the normalization constant to $Z=\sum_{j=1}^N \|{\boldsymbol{\chi}}_j\|^2$. The optimal delay time is estimated as the solution of
\begin{align}
\tau^\star = \argmin_{\tau_s} \mathcal{E}(\tau_s).
\label{e.cost2}
\end{align}
This finally yields the SINDy-delay differential equation model
\begin{align}
\label{eq_S}
\dot {\bf{x}}(t)^T = \boldsymbol{\Theta}( {\bf{x}}(t)^T,{\bf{x}}(t-\tau^*)^T)\Xi^*.
\end{align}
We summarize the extension of the SINDy methodology to systems involving temporal delays in Algorithm \ref{algo1}.
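To make the structure of Algorithm~\ref{algo1} concrete, the following condensed sketch illustrates the nested optimization (Python/NumPy; \texttt{build\_library} and \texttt{simulate\_dde} are placeholders for the library construction and a DDE solver, and the 10\% stopping threshold mirrors the criterion used in the experiments below):
\begin{verbatim}
import numpy as np

def fit_cost(Theta, cols, b):
    coef, *_ = np.linalg.lstsq(Theta[:, cols], b, rcond=None)
    return coef, np.sum((Theta[:, cols] @ coef - b) ** 2)

def greedy_sparse_fit(Theta, xdot_k, tol=0.10):
    # Backward elimination: repeatedly zero the coefficient whose removal
    # increases the least-squares cost the least; stop just before the
    # normalized cost C/C(0) jumps by more than `tol`.
    C0 = np.sum(xdot_k ** 2)               # cost with every term removed
    active = list(range(Theta.shape[1]))
    coef, cost = fit_cost(Theta, active, xdot_k)
    while len(active) > 1:
        trials = [fit_cost(Theta, [j for j in active if j != q], xdot_k)[1]
                  for q in active]
        q = int(np.argmin(trials))
        if (trials[q] - cost) / C0 > tol:
            break                          # keep the current sparse model
        del active[q]
        coef, cost = fit_cost(Theta, active, xdot_k)
    xi = np.zeros(Theta.shape[1])
    xi[active] = coef
    return xi

def sindy_delay(chi, t, dt, s_grid, build_library, simulate_dde):
    # Outer loop over candidate delays tau_s = s*dt; keep the delay which
    # minimizes the reconstruction error of the fitted model.
    best = (np.inf, None, None)
    for s in s_grid:
        Theta, Xdot = build_library(chi, t, s)
        Xi = np.column_stack([greedy_sparse_fit(Theta, Xdot[:, k])
                              for k in range(Xdot.shape[1])])
        x_model = simulate_dde(Xi, s * dt, t)
        err = np.sum((x_model - chi) ** 2) / np.sum(chi ** 2)
        if err < best[0]:
            best = (err, s * dt, Xi)
    return best        # (minimal error, tau_star, Xi_star)
\end{verbatim}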
In the next Section we show how the SINDy-delay method performs for artificial data obtained from a simple one-dimensional DDE.

\section{Application to a toy model}
\label{sec:ENSO}

\begin{table}[btp]
\begin{center}
\small
\begin{tabular}{ccccccccccc}
& $N$& $\Delta t$& noise $\gamma$ & observations & $\tau^\star$ & $x(t)$ & $x(t-\tau)$ & $x^3(t)$ & error $\mathcal{E}(\tau^\star)$\\
\hline
&--& -- & --& -- & $7$ & $1$ & $-0.75$ & $-1$ & -- \\
\hline
(a) & $4000$ & $0.025$ & $0$ & $x,\dot{x}$ & $7.00$ &$1.00$ &$-0.75$ &$-1.00$& $1.36\cdot10^{-3}$\\
(b) & $4000$ & $0.025$ & $0$ & $x$ & $7.00$ &$0.99$ &$-0.75$ &$-0.99$ &$1.80\cdot10^{-3}$\\
(c) & $200$ & $0.25$ & $0.02$ & $x$ & $7.00$ &$0.84$ &$-0.71$ &$-0.88$ &$2.56\cdot10^{-2}$\\
(d) & $4000$ & $0.025$ & $0.02$ & $x$ & $7.025$ &$0.92$ &$-0.74$ &$-0.94$ &$1.94\cdot10^{-2}$\\
\hline
\end{tabular}
\end{center}
\caption{Results of the SINDy-delay method (Algorithm \ref{algo1}) applied to data obtained from the toy model (\ref{e.ENSO}). The first row shows the true delay time and coefficients of the DDE (\ref{e.ENSO}) used to generate the observations. Rows (a)--(d) present results for the different scenarios described in the main text: we report the estimated delay times $\tau^\star$ and coefficients, as well as the associated reconstruction error $\mathcal{E}(\tau^\star)$ (cf.\ \eqref{e.ell2}) between the data and the SINDy differential equation model \eqref{eq_S}, for varying data length $N$, sampling time $\Delta t$ and noise level $\gamma$. The columns for the monomials $x(t), x(t-\tau),x(t)^3$ display the corresponding estimated coefficients $\xi_{1}^j$ (the coefficients of the remaining monomials were estimated to be $0$ in all experiments). Experiments in which only $x$ is observed required the polynomial regression described in Remark~\ref{rem:noise}.}
\label{tab:para}
\end{table}

To illustrate how the SINDy-delay method is able to determine an underlying DDE together with the delay time parameter from noisy observations, we consider the following one-dimensional DDE,
\begin{align}
\dot x(t) = x(t)-x(t)^3 - \alpha x(t-\tau).
\label{e.ENSO}
\end{align}
This DDE was introduced as a toy model in the context of climate science to describe, for example, the El Ni\~no -- Southern Oscillation (ENSO) phenomenon, where $x(t)$ denotes a sea-surface temperature anomaly at time $t$ \cite{SuarezSchopf88}. We choose in the following the parameter value $\alpha=0.75$ and a delay time of $\tau=7$. The initial solution on the time interval $[-\tau,0]$ is chosen to be the stable periodic solution of \eqref{e.ENSO} with $x(0)=1$. For the set of nonlinear library functions $\mathcal{R}$, we consider all monomials up to cubic degree, $\mathcal{R}=\{1,x(t),x(t)^2,x(t)^3,x(t-\tau),x(t-\tau)^2,x(t-\tau)^3\}$, excluding products of $x(t)$ and $x(t-\tau)$. We simulate the DDE (\ref{e.ENSO}) using the Matlab dde23 integrator with absolute and relative tolerances of $10^{-8}$ to produce time series of $N$ observations sampled at equidistant times with sampling time $\Delta t$ \cite{MATLAB:2019}. We present results for several scenarios of increasing difficulty. In particular, we investigate how the accuracy of the method depends on the amount of data available as well as on the level of noise. In the following we restrict the delays $\tau_s$ to $\tau_s = k \Delta t$ for $k=1,\dots,8.5/\Delta t$, so that we sample from the interval $[0, 8.5]$.
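As an aside, readers without access to Matlab's dde23 can generate qualitatively similar training data with a simple fixed-step integrator; the following is a sketch only (we use a constant history here instead of the stable periodic solution employed in the paper, and the step size is our choice):
\begin{verbatim}
import numpy as np

def simulate_toy_dde(alpha=0.75, tau=7.0, T=100.0, h=0.005, x0=1.0):
    # Explicit Euler for x'(t) = x - x^3 - alpha*x(t - tau) with a
    # constant history x(t) = x0 on [-tau, 0].
    n_tau = int(round(tau / h))
    n_tot = int(round(T / h))
    x = np.empty(n_tot + n_tau + 1)
    x[: n_tau + 1] = x0                    # history buffer on [-tau, 0]
    for n in range(n_tau, n_tau + n_tot):
        x[n + 1] = x[n] + h * (x[n] - x[n] ** 3 - alpha * x[n - n_tau])
    t = np.linspace(0.0, T, n_tot + 1)
    return t, x[n_tau:]
\end{verbatim}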
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig1a}
\caption{noiseless observations of $x$ and $\dot{x}$ ($N=4\,000$).}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig1b}
\caption{noisy observations of $x$ ($N=200$, $\gamma=0.02$).}
\end{subfigure}
\caption{Cost function $C(\Xi)$ (normalized by $C(0)$) against removed monomials for fixed delay. Results are shown for the true delay time $\tau_s=7$ (open circles, online blue) and for the non-optimal delay time $\tau_s=6$ (diamonds, online red). The error increases after iteration 4 of the SINDy algorithm, as soon as the terms $x,x^3,x(t-\tau)$ actually present in the underlying DDE model start to be removed from the library.}
\label{fig:remove}
\end{figure}

We begin with the ideal situation of noiseless observations of both the state ${\bf{x}}$ and the derivative $\dot{\bf{x}}$. We consider $N=4\,000$ observations sampled at equidistant times with sampling interval $\Delta t=2.5\cdot 10^{-2}$. Figure~\ref{fig:remove}(a) shows the increase of the normalized cost function $C(\Xi)$ upon removal of members of the library $\boldsymbol{\Theta}({\boldsymbol{X}})$ for fixed delay time $\tau_s$. The normalization is with respect to $C(0)$, the value when all library functions are removed, i.e. the error encountered for a rough model with constant solution $x(t)=x(0)$. The member of the library $\mathcal{R}$ to be removed at each iteration is chosen as the one leading to the least increase of the normalized cost function. This iterative process terminates with the remaining terms as output, just before the normalized cost function increases by more than 10$\%$. We present results for the delay time $\tau_s=7$ (blue curve with open circles), corresponding to the true delay time, and for a non-optimal delay time $\tau_s=6$ (red curve with diamonds). We indicate for both delay times the library functions which are removed at each iteration. For the correct delay time $\tau_s=7$, as expected, we observe a jump in the cost function as soon as one of the terms appearing in the DDE (\ref{e.ENSO}) (i.e. $x(t)$, $x(t)^3$, $x(t-\tau)$) is removed. For the non-optimal delay time $\tau_s=6$ we also see, as expected, a jump, but the selected terms $x(t)$, $x(t)^3$, $x(t-\tau)^3$ do not correspond to the actual terms appearing in (\ref{e.ENSO}). We also observe a significantly lower value of the cost function for the (optimal) delay time $\tau_s=7$ compared to the non-optimal delay time $\tau_s=6$ at iteration numbers for which none of the selected monomials have been removed. In Figure~\ref{fig:L2error} (open circles, online blue), we show how the optimal delay time $\tau^\star$ is determined by inspecting the reconstruction error $\mathcal{E}(\tau_s)$ (cf. (\ref{e.ell2})). The reconstruction error has a clear minimum at $\tau^\star=7$.

We consider next the more challenging case in which only noiseless observations $x(t_n)$ are available ($N=4\,000$) and the derivative matrix $\dot {\boldsymbol{X}}$ has to be estimated in a post-processing step using the polynomial smoothing described in Remark~\ref{rem:noise}. We perform the polynomial regression with $r=25$, corresponding to $\delta = r\Delta t=0.625$. The SINDy algorithm recovers the coefficients and the delay time close to their true values, as seen in row (b) of Table~\ref{tab:para}.\\
We now test the method in the difficult case of short noise-contaminated data with $N=200$.
The variable $x(t)$ is sampled at observation intervals of $\Delta t=0.25$ and the observations are contaminated with observational noise with $\gamma=0.02$. To estimate the derivatives from the data, polynomial regression is employed with $r=5$ and $\delta = r\Delta t=1.25$. Note the smaller value of $r$ compared to the noiseless case considered above, accounting for the ten-times larger sampling time used here. Figures~\ref{fig:remove}(b) and \ref{fig:L2error} (diamonds, online red) show that, remarkably, SINDy identifies the correct members of the library and provides an excellent estimate of the delay time with $\tau^\star = 7$. The estimated parameters for the SINDy model (\ref{e.SINDy}) are reported in row (c) of Table~\ref{tab:para}, and, unsurprisingly, deviate more strongly from the true values, with a reconstruction error $\mathcal{E}(\tau^\star=7)$ of 2.56\%. If a longer time series with $N=4\,000$ is used to train the SINDy model, the error is reduced to 1.94\% (Table~\ref{tab:para}, row (d)), indicating that the limiting factor is the noise rather than the length of the time series. Figure~\ref{fig:remove}(b) shows again the normalized cost function $C(\Xi)$ upon removal of members of the library for fixed delay time $\tau_s$. We show results for the true delay time $\tau_s=7$ and for a non-optimal delay time $\tau_s=6$. The increase of the cost function upon removal of terms appearing in the true model is less pronounced than in the ideal case of noiseless observations (cf. Figure~\ref{fig:remove}(a)). We remark that the value of the cost function at iteration numbers before the removal of the monomials of the DDE (\ref{e.ENSO}) is significantly larger than in the ideal noiseless case. Figure~\ref{fig:L2error} shows the reconstruction error $\mathcal{E}(\tau_s)$ as a function of the delay time $\tau_s$. As in the noiseless case, a clear minimum is observed at $\tau_s=7$, corresponding to the delay of the true model. The perfect accuracy of the delay estimate $\tau^\star=7$ is due to the coarse sampling time $\Delta t=0.25$, implying that the next closest delay values used for the optimization are $\tau_s=6.75$ and $\tau_s=7.25$, which both lead to a significantly larger value of the reconstruction error $\mathcal{E}(\tau_s)$.\\
In Figure~\ref{fig:comparison} we display the trajectories obtained from simulating the estimated SINDy model (\ref{e.SINDy}) and compare them to the trajectory of the (noiseless) true model (\ref{e.ENSO}), both initialized with the same initial condition as the true solution of (\ref{e.ENSO}). Note that the initial value $x(0)=1$ was not part of the (noisy) observations used for training. Remarkably, the SINDy algorithm recovers the true solution over the observed time window even in the noise-contaminated case, with trajectories which are hardly discernible by eye. For longer times we will, however, observe increasing phase errors in the noisy case, for which the true coefficients are not recovered exactly (not shown for brevity). The same SINDy model run with a close but non-optimal delay time of $\tau=6$, however, leads to strong phase and amplitude errors of the SINDy-DDE model, as seen in Figures~\ref{fig:comparison}(b) and~\ref{fig:comparison}(d).
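For completeness, the set-up of this noisy experiment (row (c) of Table~\ref{tab:para}) can be reproduced along the following lines, reusing the sketches given earlier (the random seed and the integration horizon are our choices):
\begin{verbatim}
rng = np.random.default_rng(0)
t, x = simulate_toy_dde(T=50.0, h=0.005)       # fine reference solution
step = int(0.25 / 0.005)                       # subsample to Delta t = 0.25
t_obs = t[::step][:200]
chi = x[::step][:200] + 0.02 * rng.standard_normal(200)  # gamma = 0.02
dchi = derivative_by_local_polyfit(t_obs, chi, r=5)      # delta = 1.25
\end{verbatim}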
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\columnwidth]{fig2}
\caption{Reconstruction error $\mathcal{E}(\tau_s)$ as a function of the delay time $\tau_s$, showing a clear minimum at the true delay time $\tau=7$, for the ideal case with $N=4\,000$ noiseless observations of $x$ and $\dot{x}$ (open circles; online blue) and for the case of $N=200$ noisy observations of $x$ with noise level $\gamma=0.02$ (diamonds; online red).}
\label{fig:L2error}
\end{figure}

The results presented for the simple one-dimensional toy model (\ref{e.ENSO}) suggest that the SINDy-delay Algorithm~\ref{algo1} described in Section~\ref{sec:SINDy} is able to recover the dynamics of a DDE from relatively short data records contaminated by moderate measurement noise, at least on the time scale covered by the observations. Indeed, the reconstruction error is only $2.56$\% (see Table \ref{tab:para}) with only $N=200$ data measurements, compared to standard applications of SINDy which use, for instance, $N=10^5$ data measurements for the Lorenz attractor example in \cite[Appendix 4.2]{BruntonEtAl16}. In the next Section we show how to use the SINDy-delay method to uncover the dynamics from a set of biological experiments where the underlying dynamical system is not known.

\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig3a}
\caption{noiseless observations of $x$ and $\dot{x}$\\ \tab $N=4\,000$, optimal $\tau=7$.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig3b}
\caption{noiseless observations of $x$ and $\dot{x}$\\ $N=4\,000$, non-optimal $\tau=6$.}
\end{subfigure}\\
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig3c}
\caption{noisy observations of $x$\\ $N=200$, $\gamma=0.02$, optimal $\tau=7$.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig3d}
\caption{noisy observations of $x$\\ $N=200$, $\gamma=0.02$, non-optimal $\tau=6$.}
\end{subfigure}
\caption{Comparison of the trajectories obtained from the SINDy model (\ref{e.SINDy}) (continuous curves; online blue) and from the true model (\ref{e.ENSO}) (dashed curves; online red) for optimal and non-optimal delay times, and for noiseless and noisy data points, respectively.}
\label{fig:comparison}
\end{figure}

\section{Application to experimental data of gene expressions in {\em{Pseudomonas aeruginosa}}}
\label{sec:bio}

\begin{figure}[tbp]
\centering
\includegraphics[width=1\columnwidth]{fig4}
\caption{ Sketch of the biological model of the two-component system (TCS) for the regulation of zinc in \textit{Pseudomonas aeruginosa}. \textbf{A)} Representation of the bacterium \textit{Pseudomonas aeruginosa}. The square marks the location of the two membranes in which the transport systems visible in \textbf{B} are integrated. \textbf{B)} Schematic representation of the two-step dynamical response of the proteins CadA (blue) and CzcCBA (red) after zinc induction, adapted from \cite{ducret_czccba_2020}. As soon as the metal enters the cell, CadA is rapidly expressed by CadR, leading in a second phase to the induction of CzcCBA via the CzcRS TCS. \textbf{C)} The delay differential equation describing the dynamics of the Cad (blue) and Czc (red) systems after addition of 2 mM Zn, obtained by the SINDy-delay method.
}
\label{fig:biomodel}
\end{figure}

Zinc is an essential element in most living organisms and its proper dosage is vital for their survival. In bacteria, zinc is typically bound to proteins and is responsible for both structural and functional roles of those proteins \cite{blencowe2003}. Too low zinc concentrations impede the biological functioning of these proteins. Equally, if zinc is present in excess, it becomes toxic, mainly through nonspecific binding compromising the cellular integrity \cite{blencowe2014}. Therefore, the intracellular zinc concentration must be tightly regulated. This balance of cellular concentration (also called homeostasis) is finely controlled by zinc import and export systems and their regulators.

Several strategies have evolved in \textit{P. aeruginosa} to mitigate strong fluctuations of environmental zinc concentrations. In particular, numerous systems composed of transmembrane complexes act to maintain zinc homeostasis \cite{pederick2015,ducret2021}. Like all Gram-negative bacteria, \textit{P.~aeruginosa} possesses two membranes separated by a particular compartment, the periplasm, as illustrated in Figure~\ref{fig:biomodel}. Several complexes are involved in the uptake of zinc in two stages: the first allows transport of zinc from the outside into the periplasm, the second allows transport from the periplasm into the cytoplasm. In the presence of zinc excess, the associated import transporters are repressed, giving way to export systems operating in the reverse direction. The most effective transporter is the efflux pump CzcCBA, which expels metal from the periplasm or the cytoplasm directly out of the cell (cf. Figure~\ref{fig:biomodel}) \cite{nies1989,goldberg1999}. The P-type ATPase CadA, on the other hand, expels zinc from the cytoplasm to the periplasm \cite{lee2001}. (We follow here the standard convention that protein names start with a capital letter whereas the names of their associated genes are written in lower case and in italics.) Other export systems have been described in this bacterium, such as CzcD or YiiP, but they do not appear to play a major role in zinc resistance \cite{salusso2017,ducret_czccba_2020}.

The expression of the protein CadA is regulated by CadR, which belongs to a family of transcriptional regulators known to be constitutively located on the promoter sequences of their target genes \cite{brown2003}. This configuration provides a fast response as follows: when the cytoplasmic zinc concentration reaches a critical value, CadR binds the metal and immediately induces \textit{cadA} transcription \cite{ducret_czccba_2020}. Conversely, the efflux pump CzcCBA is activated by the CzcRS TCS: in the presence of a high periplasmic concentration of zinc, the CzcS sensor activates the CzcR regulator, which in turn binds the DNA, promoting the activation of its own transcription and of the \textit{czcCBA} efflux pump, while also repressing \textit{oprD} porin transcription~\cite{ducret_czccba_2020}. OprD is the entry route for carbapenem antibiotics. Therefore, in the presence of zinc, CzcR renders the bacterium resistant to both the metal and the antibiotics. Interestingly, the CadA P-type ATPase appeared to be a key component for a full and timely induction of CzcCBA, suggesting a hierarchical expression of the zinc export systems \cite{ducret_czccba_2020}, as shown schematically in Figure~\ref{fig:biomodel}. In a zinc-deficient medium, all import systems are expressed. Consequently, zinc accumulates rapidly in the cytoplasm during a metal boost.
This results in the closure of the uptake machineries and, at the same time, the fast induction of CadA, which begins to expel zinc from the cytoplasm to the periplasm, leading subsequently to the activation of the CzcRS TCS and therefore of CzcCBA. This promotes a strong expulsion of zinc, which in turn decreases CadR activity and hence CadA expression. To better characterize and model this regulatory system, we seek a simplified two-dimensional differential equation system describing the dynamical induction of the two agents CadA and CzcCBA. The following subsection describes the experimental set-up employed to obtain measurements for CadA and CzcCBA.

\subsection{Experimental design and results}
\label{subsec:bio_mod}
We used the transcriptional fusions \textit{cadA::gfp} and \textit{czcCBA::gfp} described in \cite{ducret_czccba_2020}. To this end, the gene encoding the green fluorescent protein (GFP) was fused to the regulatory sequences of the \textit{cadA} or \textit{czcC} genes, respectively. To investigate the interaction between the proteins CadA and CzcCBA, we consider a wild-type (wt) strain of \textit{P. aeruginosa} as well as mutants in which either CadA is not expressed ($\Delta$\textit{cadA}) or CzcCBA is not expressed ($\Delta$\textit{czcA}). Strains were independently grown in a zinc-deficient M-LB medium, as described in \cite{ducret_czccba_2020}, for 2 hours 30 minutes before the addition of different concentrations of zinc (in the form of ZnCl\textsubscript{2}). We performed experiments for zinc concentrations of 0.5, 1, 1.25, 1.5, 1.75, 2, 2.25 and 2.5 mM. The fluorescence of \textit{cadA::gfp} was measured for the wt and the $\Delta$\textit{czcA} strains. Similarly, the fluorescence of \textit{czcCBA::gfp} was measured in the wt and the $\Delta$\textit{cadA} strains. Fluorescence values were monitored every 5 minutes for 160 minutes and normalized by the optical density at 600 nm (OD600), a standard methodology which permits the estimation of bacterial concentrations. This amounts to a short time series of 33 measurements per experiment. Each experiment was conducted three times and we report the averages over those three experiments. In the following, time $t = 0$ corresponds to the moment of the metal addition. For ease of exposition, fluorescence measurements are shifted to start with a value of $0$ at time $t = 0$.

Figure~\ref{fig:fusion2mM} shows the fusion measurements for the wild type and the two mutants after adding 2 mM of ZnCl\textsubscript{2}. In agreement with previous work \cite{ducret_czccba_2020}, in the wt strain (see Figure~\ref{fig:fusion2mM}a) the CadA induction drops when CzcCBA begins to be expressed, i.e. several minutes after the addition of zinc. In the $\Delta$\textit{czcA} mutant, however, we observe a continuous induction of CadA during the time of the experiments (see Figure~\ref{fig:fusion2mM}b). The fusion results also reveal a later induction of CzcCBA in the $\Delta$\textit{cadA} strain compared to the wt strain (see Figure~\ref{fig:fusion2mM}c).
\subsection{SINDy-delay method to uncover the CadA and CzcCBA system dynamics}

\begin{figure}[tbp]
\begin{subfigure}[b]{0.3\textwidth}
\raggedleft
\includegraphics[height=4.5cm]{fig5a}
\caption{wt}
\end{subfigure}\quad\quad\quad
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=4.3cm]{fig5b}
\caption{$\Delta$\textit{czcA}}
\end{subfigure}\quad
\begin{subfigure}[b]{0.3\textwidth}
\raggedright
\includegraphics[height=4.3cm]{fig5c}
\caption{$\Delta$\textit{cadA}}
\end{subfigure}
\caption{ Fluorescence measurements after addition of 2 mM ZnCl\textsubscript{2} compared to the corresponding mathematical delay differential equation (DDE) model. The fluorescence intensity over time is shown for the wt, $\Delta cadA$ and $\Delta czcA$ strains, containing the \textit{cadA::gfp} (open circles; online blue) or \textit{czcA::gfp} (diamonds; online red) fusions. The values are normalized by the optical density (OD600). Standard deviations of three independent measurements are shown. The solutions of the SINDy-selected model are shown as solid lines. }
\label{fig:fusion2mM}
\end{figure}

The dynamics and induction intensity of the CadA and CzcCBA systems depend on several factors, including the intracellular (periplasmic and/or cytoplasmic) concentration of zinc, as well as the response velocity and the metal sensitivity of their respective regulators. Moreover, experimental data obtained from transcriptional fusions are only proxies, depending on GFP synthesis and its stability. For simplicity, we ignore these complex interactions and instead consider only two ``boxes'', one comprising all the variables involved in CadA expression (blue box in Figure~\ref{fig:biomodel}) and one comprising those responsible for CzcCBA expression (red box in Figure~\ref{fig:biomodel}). This simplification implies a mathematical model with only the CadA and CzcCBA expressions as dependent variables. We assume that the zinc concentration remains constant during the induction experiment.

For the wild-type bacteria (wt) we seek a model of the form
\begin{align}
\label{e.frawark1a}
\dot x_{wt}(t) &= f(x_{wt},y_{wt}), \qquad \dot y_{wt}(t+\tau_{wt}) = g(x_{wt},y_{wt}),
\end{align}
where $x(t)$ represents the fluorescence from \textit{cadA::gfp} while $y(t)$ represents that from \textit{czcA::gfp}. For the mutant $\Delta$\textit{czcA}, which lacks expression of \textit{czcA}, the dynamics is obtained by setting $y=0$ in the above model for the wild type. We obtain
\begin{align}
\dot x_{\Delta\textit{cz}}(t) &= f(x_{\Delta\textit{cz}},0) .
\label{e.frawark1b}
\end{align}
Similarly, for the mutant $\Delta$\textit{cadA}, which lacks expression of \textit{cadA}, the dynamics is obtained by setting $x=0$ in the above model for the wild type, and we obtain
\begin{align}
\dot y_{\Delta\textit{ca}}(t+\tau_{\Delta\textit{ca}}) &= g(0,y_{\Delta\textit{ca}}).
\label{e.frawark1c}
\end{align}
We allowed here for a delay time $\tau_{\Delta\textit{ca}} \neq \tau_{wt}$, accounting for the possibility that the delay may depend on the presence of the various agents involved in the regulatory process. We also assume that $x(t)=y(t)=0$ for $t<0$, which corresponds to the natural assumption that neither CadA ($x$) nor CzcCBA ($y$) is produced before zinc, which activates their expression, has been added to the growth medium (see Figure~\ref{fig:biomodel}).
To determine the model \eqref{e.frawark1a}-\eqref{e.frawark1c}, we apply the SINDy-delay method presented in Section~\ref{sec:SINDy} to the fluorescence measurements of the expression kinetics experiments described in Section~\ref{subsec:bio_mod}. To estimate the functions $f$ and $g$ in \eqref{e.frawark1a} as well as the delay times from the experimental data, we consider a library consisting of all monomials up to cubic order to approximate \eqref{e.frawark1a}-\eqref{e.frawark1c}. To obtain a parsimonious model with only a few terms selected from the library, we apply the sparsity constraints as detailed in Section \ref{sec:SINDy} and search for the delay times $\tau_{\Delta\textit{ca}}$ and $\tau_{wt}$ in the set $\{0,5,\ldots,160\}$ in units of minutes: we remove the less meaningful terms of the library and stop the process when the minimum of the normalized cost function $C(\Xi)/C(0)$ (cf. equation (\ref{e.C})) increases by more than 10 percent. Optimal delay times are found by minimizing the reconstruction error $\mathcal{E}(\tau_{wt},\tau_{\Delta ca})$ corresponding to (\ref{e.ell2}). This process is applied to all experiments with the various zinc concentrations.

In Figure~\ref{fig:2mMprocess} we show results for the 2 mM zinc induction. Figure~\ref{fig:2mMprocess}(a) shows the increase of the normalized cost function upon removal of library terms for both the $x$ and $y$ components. Figure~\ref{fig:2mMprocess}(b) shows the reconstruction error $\mathcal{E}(\tau_{wt},\tau_{\Delta ca})$, with a minimum error of $7.8\%$ attained for $\tau_{wt}=30$ and $\tau_{\Delta\textit{ca}}=70$ minutes. In particular, we obtain from the SINDy-delay methodology the following delay differential equation model for a concentration of 2 mM of zinc,
\begin{align}
\dot x_{wt}(t) &= 201-9.08\cdot10^{-3}\, y_{wt}, \label{eq:supermodelx}\\
\dot y_{wt}(t+\tau_{wt}) &= 117+4.28\cdot 10^{-7}\, x_{wt}^{2} \label{eq:supermodely}
\end{align}
and
\begin{align}
\dot x_{\Delta cz} &=201, \nonumber\\
\dot y_{\Delta ca}(t+\tau_{\Delta ca}) &=117 \label{eq:supermodel3}
\end{align}
with $\tau_{wt}=30$ and $\tau_{\Delta ca}=70$ minutes. In Figure~\ref{fig:fusion2mM}, solutions of the DDE model \eqref{eq:supermodelx}-\eqref{eq:supermodely} and of \eqref{eq:supermodel3} are plotted and compared with the experimental data for 2 mM ZnCl\textsubscript{2}, showing a high degree of similarity with a reconstruction error of 7.85\%. The complete results for all zinc concentrations tested (from 0.5 to 2.5 mM) are shown in Table \ref{tab:parabio}. We remark that the coefficients of the linear and quadratic terms in \eqref{eq:supermodelx} and \eqref{eq:supermodely} are of the order of $10^{-2}$ and $10^{-7}$, respectively; although these coefficients are small, their presence is crucial. Such small coefficients are hard to detect when employing standard thresholding procedures. This illustrates the advantage of our method to promote sparsity outlined in Remark~\ref{rem:sparsity}.

\begin{figure}[tbp]
\begin{subfigure}[b]{0.5\textwidth}
\raggedleft
\includegraphics[width=1\linewidth]{fig7a}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=1\linewidth]{fig7b}
\caption{}
\end{subfigure}
\caption{(a) Cost function $C(\Xi)$ (normalized by $C(0)$) against removed monomials for fixed optimal delays, $\tau_{wt}=30$ (open circles; online blue) and $\tau_{\Delta ca}=70$ (diamonds; online red). The vertical black line indicates the iteration number where the process is stopped.
At iteration 5 the cost function has increased by more than 10\% for both components. (b) Reconstruction error $\mathcal{E}(\tau_{wt},\tau_{\Delta ca})$ showing a minimum value equal to $7.8\cdot10^{-2}$ for $\tau_{wt}=30$ and $\tau_{\Delta ca}=70$, indicated by a red cross.} \label{fig:2mMprocess} \end{figure} \begin{table}[btp] \begin{center} \renewcommand{\arraystretch}{1.3} \scalebox{0.6}{ \begin{tabular}{|c|cc|ccccccc|ccccc|c|} \hline
Zn &\multicolumn{2}{|c|}{Delays} & \multicolumn{7}{|c|}{Coefficients for function $x(t)$ (\textit{cadA::gfp})} & \multicolumn{5}{|c|}{Coefficients for function $y(t)$ (\textit{czcA::gfp})}&Error \\ \hline
$[mM]$ & $\tau_{wt}$& $\tau_{\Delta\textit{ca}}$ & $1$ & $x$ &$y$&$x^2$&$xy$ & $y^2$ & $xy^2$& $1$& $x$&$y$&$x^2$ & $xy$ & $\mathcal{E}(\tau_{wt},\tau_{\Delta ca})$ \\ \hline
0.5 &20 &25 &370 &-3.51$\cdot10^{-2}$ & -3.70$\cdot10^{-2}$ &1.14$\cdot10^{-6}$ &0 & 1.69$\cdot10^{-6}$&0&118&1.61$\cdot10^{-2}$&0&0&-1.53$\cdot10^{-6}$&0.0328\\
1& 25 & 35 & 317& 0& 0&-2.16$\cdot10^{-7}$& -3.55$\cdot10^{-6}$&0&1.14$\cdot10^{-10}$ &180&5.30$\cdot10^{-3}$ &-6.51$\cdot10^{-3}$&0&0&0.0711\\
1.25&30&45&316&0&0&-1.74$\cdot10^{-7}$&-2.56$\cdot10^{-6}$&0&7.90$\cdot10^{-11}$&162&0&-5.18$\cdot10^{-3}$&3.36$\cdot10^{-7}$&0&0.0666\\ \hdashline
{1.5}& 25& {55}& 216&0& -1.13$\cdot10^{-2}$&0&0&0&0&122&0&0&2.35$\cdot10^{-7}$&0& 0.11\\
{1.75}& 30& {65}& 209&0& -1.04$\cdot10^{-2}$&0&0&0&0&112&0&0&3.46$\cdot10^{-7}$& 0& 0.0865\\
{2}& 30& {70}& 201&0&-9.08$\cdot10^{-3}$&0&0&0&0&117&0&0&4.28$\cdot10^{-7}$&0& 0.0785\\
{2.25}& 35& {85}& 200&0&-8.48$\cdot10^{-3}$&0&0&0&0&134&0&0& 4.10$\cdot10^{-7}$&0&0.0730\\ \hdashline
2.5& 45& 95& 200&0&-6.54$\cdot10^{-3}$&0&0&0&0&129& 1.04$\cdot10^{-2}$&0&0&0& 0.0613\\ \hline
\end{tabular}} \end{center} \caption{Results of the SINDy-delay method for the various zinc concentrations. Terms from the library which were not selected for any zinc concentration are not shown. } \label{tab:parabio} \end{table}% \renewcommand{\arraystretch}{1}% \paragraph{DDE model accuracy and consistency} We observe in Table~\ref{tab:parabio} that for all zinc concentrations the SINDy-delay method yields DDE models with reconstruction errors smaller than 11\%. The SINDy model matches the experimental data very well and is biologically consistent for all ZnCl\textsubscript{2} concentrations. This is notable given the very short length of the experimental time series, with $N=33$ data points. Remarkably, for moderate ZnCl\textsubscript{2} concentrations between 1.5 mM and 2.25 mM (emphasized by dashed lines in Table~\ref{tab:parabio}), a unified SINDy DDE model arises which benefits from the sparsity feature, with only the terms $1$, $y$ and $x^2$ selected, allowing for a biologically consistent interpretation of the terms. Importantly, the signs of the associated coefficients are consistent with the biological model: the coefficient associated with the linear term in $y$ in (\ref{eq:supermodelx}), which describes the influence of $y$ (CzcCBA) on $x$ (CadA), is negative, in agreement with the biological model where CzcCBA represses CadA. Similarly, the coefficient associated with the $x^2$ term in (\ref{eq:supermodely}), which describes the influence of $x$ (CadA) on $y$ (CzcCBA), is positive, in agreement with the biological model where CadA accelerates the expression of CzcCBA (cf.~\eqref{e.frawark1a}). 
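As a quick sanity check, the mutant model \eqref{eq:supermodel3} can be integrated in closed form: with the zero history $x=y=0$ for $t<0$, and under the natural reading that $y_{\Delta ca}$ remains at zero until the delay has elapsed (this worked example is an illustration added here, not part of the reported analysis), one obtains the linear ramps
\begin{align*}
x_{\Delta cz}(t) = 201\, t,
\qquad
y_{\Delta ca}(t) =
\begin{cases}
0, & t < \tau_{\Delta ca},\\
117\,(t-\tau_{\Delta ca}), & t \ge \tau_{\Delta ca},
\end{cases}
\end{align*}
so that the fitted delay $\tau_{\Delta ca}=70$ minutes is directly visible in Figure~\ref{fig:fusion2mM}(c) as the onset time of the \textit{czcA::gfp} signal.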
For low ZnCl\textsubscript{2} concentrations smaller than 1.25 mM the SINDy models are not as sparse, involving more terms than for the moderate concentrations (for instance, up to five functions $1,x,y,x^2,y^2$ for $x$ (CadA) at 0.5 mM of zinc), while for the highest considered ZnCl\textsubscript{2} concentration, a different term, $x$, is selected in place of $x^2$. We also remark that the SINDy model is likely to describe the response of \textit{P. aeruginosa} to a boost in zinc only over the duration of the experiment. Indeed, the SINDy models in Table~\ref{tab:parabio} exhibit unphysical negative CadA and CzcA concentrations for all considered ZnCl\textsubscript{2} concentrations if integrated beyond roughly 800 minutes (not shown here), far past the 160 minutes covered by the experiments. This suggests that the simplified two-box model may be insufficient to capture the impact of the induced stress for longer times, and additional components or mechanisms need to be included in the modelling. \paragraph{CadA is essential for maintaining a rapid expression of CzcCBA} Consider the range of zinc concentrations from 1.5 to 2.25 mM, emphasized with dashed lines in Table~\ref{tab:parabio}. A remarkable observation is that the coefficients computed from the SINDy-delay method are only weakly sensitive to the applied zinc concentration, with the exception of the delay time $\tau_{\Delta ca}$, which increases linearly with the zinc concentration, as shown in Figure~\ref{fig:delaycomp}. This linear increase of the delay time $\tau_{\Delta ca}$ in the absence of CadA indicates that the protein CadA is particularly necessary for a rapid zinc response, and suggests that the positive effect of CadA on the efflux pump becomes all the more important as the zinc concentration increases. The OD600 measurement counts cells independently of whether they are alive or dead. Through colony counting and quantification of cell viability at concentrations of 1.25 mM and 2 mM ZnCl\textsubscript{2} after 160 min of incubation (not displayed here for brevity), we observed that the same number of living cells is detected; hence this difference in delay under different zinc concentrations cannot be attributed to a differential mortality between the $\Delta cadA$ and the wt strains. Biologically, this could reflect a reasonable mechanism whereby the bacterium reacts as quickly as possible to a stress regardless of its intensity. \begin{figure}[tbp] \centering \includegraphics[width=0.5\linewidth]{fig8Zn} \caption{Estimated delay times $\tau_{wt}, \tau_{\Delta ca}$ (in minutes) as a function of the zinc concentration (in mM). Remarkably, we observe a linear increase of the delay time as a function of the zinc concentration, $\tau_{\Delta ca} = \alpha \cdot [\mathrm{ZnCl_2}]$ with $\alpha = 37.1$ min mM\textsuperscript{-1} (linear regression shown as dashed line with slope $36$).} \label{fig:delaycomp} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we extended the SINDy methodology introduced in \cite{BruntonEtAl16} to the case of delay differential equations, with a focus on short and noise-contaminated data. To construct the temporal derivatives from noisy measurements we employed a simple denoising procedure based on polynomial regression (Remark~\ref{rem:noise}). We further introduced a stopping criterion to promote sparsity which avoids having to introduce sensitive threshold parameters (Remark~\ref{rem:sparsity}). 
To estimate the temporal delay we applied a bilevel optimization whereby the standard SINDy method is first applied for a range of fixed delay times, and the optimal delay time is subsequently determined as the one yielding the minimal reconstruction error. We showed that our method is able to reliably uncover the DDE from noisy data obtained from a known toy model. Applying the SINDy-delay methodology to model the dynamics of the {\em Pseudomonas aeruginosa} zinc response from a limited amount of measurements highlighted the subtle interactions between the Cad and Czc regulatory systems. In particular, the SINDy DDE model revealed the importance of CadA for CzcCBA induction in minimizing the time required for the bacterium to respond effectively to a sudden zinc excess. The compatibility between the results of the SINDy DDE models and the biological data supports the hypothesis that the dynamical mechanism of resistance to moderate boosts of zinc can be explained by the interaction of only two systems, namely CadA and CzcCBA. Our results motivate further investigations of these dynamics. The present work covers the 160 minutes following the metal induction and illustrates only the initial establishment of resistance. Additional experimental data over longer times, which would require continuous cultures in a chemostat and a more sensitive method to monitor the \textit{cadA} and \textit{czcCBA} transcriptional expressions, would make it possible to compare these mathematical predictions with the biological situation. \paragraph{Acknowledgments} This work was partially supported by the UNIGE-USyd strategic Partnership Collaboration Awards (PCA) 2019-2023, the Swiss National Science Foundation, project No. 31003A\_179336 for K.P. and projects No. 200020\_184614 and No. 200020\_192129 for G.V. \bibliographystyle{abbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,537
AUSTRALIA, "Anzac legend"

Greece's Presidential Guard Evzones led the Allies and Greek contingent at today's Anzac Day march and Wreath Laying Ceremony at the Shrine of Remembrance in Melbourne, hosted by the Victorian Returned Services League.

Anzac Day, 25 April, is one of Australia's most important national occasions. It marks the anniversary of the first major military action fought by Australian and New Zealand forces during the First World War. It is also the day on which we remember all Australians who have served and died in war and on operational service.

When war broke out in 1914, Australia had been a federated nation for only 13 years, and its government was eager to establish a reputation among the nations of the world. When Britain declared war in August 1914, Australia was automatically placed on the side of the Commonwealth. In 1915 Australian and New Zealand soldiers formed part of the expedition that set out to capture the Gallipoli peninsula in order to open the Dardanelles to the allied navies. The ultimate objective was to capture Constantinople, the capital of the Ottoman Empire, an ally of Germany. The Australian and New Zealand forces landed on Gallipoli on 25 April, meeting fierce resistance from the Ottoman Turkish defenders. What had been planned as a bold stroke to knock Turkey out of the war quickly became a stalemate, and the campaign dragged on for eight months. At the end of 1915, the allied forces were evacuated from the peninsula, with both sides having suffered heavy casualties and endured great hardships. More than 8,000 Australian soldiers had died in the campaign. Gallipoli had a profound impact on Australians at home, and 25 April soon became the day on which Australians remembered the sacrifice of those who died in the war. Although the Gallipoli campaign failed in its military objectives, the actions of Australian and New Zealand forces during the campaign left a powerful legacy. What became known as the "Anzac legend" became an important part of the identity of both nations, shaping the ways in which they viewed both their past and their future.

ANZACS IN GREECE

Today we also remember the Anzacs who fought in the Battle of Greece and Crete. Beginning on 6 April 1941, the Battle of Greece was one of the first engagements of the Australian Army against the Nazis in World War II. Many of the Anzacs of Greece and Crete (e.g. Anzac Constantine Aroney) had also fought in the first Anzac campaign at Gallipoli decades earlier and are rare "Dual Anzacs". Some of the Anzacs of Greece and Crete (e.g. Anzac James Zampelis) came from Greek Australian migrant families. The Greece and Crete Campaign included Australia's highest-ranked Indigenous Australian soldier, Captain Reginald Saunders, who was supported and saved by the Cretan people for nearly a year until his escape. Their human bonds are an important Australian story and will empower Indigenous Australians and contribute to reconciliation.

Of the 1,686 Anzacs, 646 Australians are buried or memorialized in Greece in Phaleron, Athens, Rhodes and Suda Bay in Crete. Over 50 percent of the deceased Australians have never been found or are unidentified and are memorialized at the Athens Memorial. 8,900 Anzac prisoners of war were captured in the Battle of Crete and Greece, representing 83% of the Australian soldiers captured by the Nazis in World War II. The Anzacs had come from one of the newest democracies to fight in Greece, the birthplace of Democracy, for freedom and liberty. 
Some 11 per cent of the Greek general population and 80 per cent of the Jewish population perished as a result of the Nazi invasion.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
624
Supporting homeless people in Milton Keynes

Milton Keynes worship band Testament have recorded a song to support the work of Open Door among homeless people in the city ... Band member Mike Baldwin takes up the story:

"Homelessness is something that most of us either only glimpse out of the corners of our eyes as we rush around or we deliberately turn our heads away if we encounter it head on with no escape. It's uncomfortable.

"The fact that, in our technically advanced society, there are vulnerable members who aren't catered for, who don't fit, is embarrassing. In some things we are so damn clever, in others we remain in the dark ages.

"There are many reasons for ending up on the streets or someone else's sofa. Teenage pregnancy resulting in the daughter being kicked out, the arrival of a new step-parent leading to arguments, long-term illness resulting in financial debt (how many months could you survive without your income?) are all common reasons.

"In Milton Keynes we started a soup run in 2007 that now runs 365 evenings a year. And while it's conducted by a couple of churches, many of the volunteers aren't churchgoers or Christians. We minister to around 70 polite and articulate clients each evening; like the majority seen in Famous, Rich and Homeless on BBC 1.

"We also work closely with Open Door MK, a charity that supports the homeless and vulnerably housed. They run drop-in centres a couple of times a week as well as organise support services to try and plug people back into 'the system'. But not everyone fits into the system, so their work is quite tricky.

"We know of eight clients who have died over the years, some on the streets. 'Little Tony' died recently round the side of Christ the Cornerstone church. He was 34 and the second client we'd met in 2007.

"He used to drink and dabble with drugs and everyone connected with social services, MK hospital, paramedics, the police, YMCA and so on all knew 'Little Tony'. He was kind, soft and vulnerable and was offered support but wasn't able to take advantage of it due to his addictions. So he fell through the net and slept on the streets.

"Testament is an MK-based worship band and one of their tracks See Me Now (from the album Go and Tell) highlights the world of 'Little Tony'. You can watch the video at https://www.youtube.com/watch?v=DXo6ebtyQ0o and all the proceeds from Amazon and iTunes downloads go to Open Door.

"So far the video has had more than 800 views but we would like to boost this and the track downloads. Please take a couple of minutes to watch the video and, perhaps, download the track to ensure around 80p goes to Open Door MK."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,261
A message from EEB Chair Dr. Keith Clay

It has been as challenging a year for EEB as it has been for everyone at Tulane and beyond. But EEB entered 2020 on an upswing, and despite the adversity of a global pandemic, our momentum has continued.

In 2018, I came to Tulane as the new EEB Chair after David Heins, who had served as chair for 19 years, retired. The department has been enriched by David's leadership during his tenure as chair, especially in the wake of Hurricane Katrina, which saw EEB recover and thrive.

EEB has continued to grow and diversify our faculty, students, research fields and curriculum. In addition to myself, EEB has hired three new tenure-track faculty members in the past three years: plant evolutionary biologist Katie Ferris, animal physiologist Alex Gunderson and disease ecologist Hannah Frank. In addition, we hired a new Professor of Practice, Jelagat Cheruiyot, in 2017.

EEB faculty, undergraduates and graduate students have received a number of new federal research grants as well as prestigious awards and fellowships. For example, Jordan Karubian was named Duren Professor by Newcomb-Tulane College and Inaugural Scholar-in-Residence by the Center for Public Service at Tulane. And a variety of new courses have been added to our curriculum, reflecting the influx of new faculty with their new interests and expertise.

I'm excited about the future of EEB, and how the new classes and research programs of our faculty will help us recruit better qualified and more diverse undergraduate majors and graduate students. While 2020 represented a bump in the road, EEB is set to accelerate forward in 2021 and beyond!
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,733
Q: Regarding buoyant force acting on a cone in an accelerated container

While solving questions related to fluid mechanics, I came across this particular question: The correct options are (A), (B) and (C). I was only able to mark option (A) with certainty. I have some conceptual doubts here:

1. What exactly is the significance of $a = g$? (I thought of considering a pseudo force, and hence assumed the cone to be slanted due to it. But the question got complicated.) Does it really influence the answers in any way, i.e., will the answers differ if the container wasn't accelerating, or even moving at all?

2. The second option has an obvious typo (dimensional error), but even if the force in the option were $(\pi r^3 \rho g)/3$, it still doesn't match my answer. I was able to find the relation between the height ($h$) of the part of the cone in liquid 1 and $r$ as $h^3 = r^3/2$. So, $F = (\pi r^3 \rho g)/6$ (I have considered only the buoyant force). Is there any other force applied by liquid 1 which I might be missing here?

Could anyone please clarify both of my doubts?

A: Ignoring the indicated acceleration, I agreed with statement A. Assuming that the upper liquid could only push down on the upper section, I integrated the pressure times the horizontal component of surface elements to get a total force of $\pi \rho g r^3/3$. (A poorly written question.)
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,888
Q: XSLT - how to group by and show output in the same row

Only partial output is showing with the current code. Can you please help me resolve this?

My scenario is to display results based on location. I need to output two fields for each location, and I am using grouping to achieve this. The hours field is the same for all the location and payrate combinations, but the employee count field is different. That is causing the issue.

My XML code:

<wd:Report_Data xmlns:wd="urn:com.workday.report/TEST_PB">
    <wd:Report_Entry>
        <wd:location>Canada</wd:location>
        <wd:payRate>Salary</wd:payRate>
        <wd:salcount>10</wd:salcount>
        <wd:hours>250</wd:hours>
    </wd:Report_Entry>
    <wd:Report_Entry>
        <wd:location>Canada</wd:location>
        <wd:payRate>Hourly</wd:payRate>
        <wd:hrlycount>3</wd:hrlycount>
        <wd:hours>120</wd:hours>
    </wd:Report_Entry>
    <wd:Report_Entry>
        <wd:location>Canada</wd:location>
        <wd:payRate>CWR</wd:payRate>
        <wd:cwrcount>2</wd:cwrcount>
        <wd:hours>100</wd:hours>
    </wd:Report_Entry>
    <wd:Report_Entry>
        <wd:location>USA</wd:location>
        <wd:payRate>Salary</wd:payRate>
        <wd:salcount>7</wd:salcount>
        <wd:hours>200</wd:hours>
    </wd:Report_Entry>
    <wd:Report_Entry>
        <wd:location>USA</wd:location>
        <wd:payRate>Hourly</wd:payRate>
        <wd:hrlycount>5</wd:hrlycount>
        <wd:hours>500</wd:hours>
    </wd:Report_Entry>
    <wd:Report_Entry>
        <wd:location>USA</wd:location>
        <wd:payRate>CWR</wd:payRate>
        <wd:cwrcount>10</wd:cwrcount>
        <wd:hours>700</wd:hours>
    </wd:Report_Entry>
</wd:Report_Data>

My XSL code:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wd="urn:com.workday.report/TEST_PB" exclude-result-prefixes="xs" version="2.0" xmlns:functx="http://www.functx.com">
    <xsl:output method="text" indent="no"/>
    <xsl:strip-space elements="*"/>
    <xsl:variable name="NEWLINE" select="'&#xa;'"/>
    <xsl:variable name="COMMA" select="','"/>
    <xsl:template match="/">
        <xsl:call-template name="DetailRecords"/>
    </xsl:template>
    <xsl:template name="DetailRecords">
        <xsl:for-each-group select="wd:Report_Data/wd:Report_Entry" group-by="wd:location">
            <xsl:sort select="wd:location"/>
            <xsl:value-of select="concat(wd:location,$COMMA)"/>
            <xsl:if test="wd:payRate = 'Salary'">
                <xsl:value-of select="current-group()/wd:hours" separator="," />
            </xsl:if>
            <xsl:if test="wd:payRate = 'Hourly'">
                <xsl:value-of select="current-group()/wd:hours" separator="," />
            </xsl:if>
            <xsl:if test="wd:payRate = 'CWR'">
                <xsl:value-of select="current-group()/wd:hours" separator="," />
            </xsl:if>
            <xsl:if test="wd:payRate = 'Salary'">
                <xsl:value-of select="current-group()/wd:salcount" separator="," />
            </xsl:if>
            <xsl:if test="wd:payRate = 'Hourly'">
                <xsl:value-of select="current-group()/wd:hrlycount" separator="," />
            </xsl:if>
            <xsl:if test="wd:payRate = 'CWR'">
                <xsl:value-of select=" wd:cwrcount" separator="," />
            </xsl:if>
            <xsl:value-of select="$NEWLINE"/>
        </xsl:for-each-group>
    </xsl:template>
</xsl:stylesheet>

***Current Output:***
Canada,250,120,10010
USA,200,500,7007

***Expected Output:***
Canada,250,10,120,3,100,2
USA,200,7,500,5,700,10

A: I think you want e.g. 
(XSLT 3) <xsl:for-each-group select="wd:Report_Data/wd:Report_Entry" group-by="wd:location"> <xsl:sort select="wd:location"/> <xsl:value-of select="current-grouping-key(), current-group()!(wd:hours, wd:salcount, wd:hrlycount, wd:cwrcount)" separator=","/> <xsl:value-of select="$NEWLINE"/> </xsl:for-each-group> or (XSLT 2) <xsl:for-each-group select="wd:Report_Data/wd:Report_Entry" group-by="wd:location"> <xsl:sort select="wd:location"/> <xsl:value-of select="current-grouping-key(), for $e in current-group() return ($e/wd:hours, $e/wd:salcount, $e/wd:hrlycount, $e/wd:cwrcount)" separator=","/> <xsl:value-of select="$NEWLINE"/> </xsl:for-each-group> I am not sure why the question is tagged as xslt-1.0 if it uses xsl:for-each-group introduced in XSLT 2.0
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,259
I realize that one may not automatically envision a clear link to environmental sustainability when they think of the city of Dallas, Texas, but I am here to let you know that times are changing and Dallas has stepped up its sustainability game! On the 20th Anniversary of Health Care Without Harm, CleanMed is excited to partner with Dallas and the Omni hotel for our largest conference event ever, and we wanted to give you a glimpse of some of the good work the city and hotel are putting forth to make sure we have an incredible experience.

The Dallas Bikeway System allows for the implementation of a 1,296-mile network incorporating new and existing pathways for citizens preferring alternate forms of transportation.

City Forestry programs promote tree planting projects and cultivate foresters with basic knowledge of tree skills to act as advocates for Dallas' urban forests.

In 2011, DFW ranked 10th among the nation's 100 largest metropolitan areas in green job creation. In addition, the city of Dallas is among the top purchasers of green energy, ranks #3 on the EPA's list of 'Top 20 local government partners' and #15 on the 'National Top 50' list.

Dallas is the first city in the United States to be ISO 14001 certified across all major operations. ISO 14001 is an international environmental standard which sets environmental goals for organizations and communities to exceed environmental compliance, and continually improve and reduce the impact of their operations on the environment.

Our "Harvest for Change" dinner on May 18 will feature local, sustainable, and organic food from nearby farms.

The Omni Dallas roof materials feature solar reflective indexes to help with heat island effects.

The hotel features systems for lighting and thermal control. Examples include guest rooms equipped with a key switching system where guests put the room key into a slot to turn on lights and have control of the heating, ventilating and air conditioning (HVAC) system. When removed, the HVAC system sets back to a preset temperature and the lights automatically shut off.

The indoor air quality management plan for the Omni will reduce Indoor Air Quality (IAQ) problems resulting from construction, and the team is using low Volatile Organic Compound (VOC) products such as paints, adhesives and carpet systems.

For more information about Dallas initiatives please visit http://greendallas.net/. We look forward to welcoming you May 17-19 at the beautiful LEED Certified Omni Dallas Hotel for CleanMed.
{ "redpajama_set_name": "RedPajamaC4" }
2,227
{"url":"https:\/\/lesca.me\/archives\/working-on-ubunutu-compiling-burining-at80s52.html","text":"# [Ubuntu]\u5728ubuntu\u4e0a\u7f16\u8bd1\u3001\u70e7\u5199AT80S52 (Working on Ubunutu: compiling, burining for AT80S52)\n\nIn this article, I will use sdcc to cross-compile source file of 8051, and burn its output(hex file) over usbasp with avrdude.\n\n[cpp]\n\/* Filename: test.c\n* Description: sample source for sdcc\n* Author: Lesca FANG\n* Date: Mar 7, 2011\n*\/\n\n#include <8052.h>\ntypedef unsigned int size_t;\n\n#define LED P0_0\n\nvoid delay(size_t t)\n{\nwhile(t\u2013);\n}\n\nvoid main()\n{\nwhile(1)\n{\nLED = 0;\ndelay(10000);\nLED = 1;\ndelay(10000);\n}\n}\n\n[\/cpp]\n\nIf you haven\u2019t install sdcc yet, run the following command:\n\n$sudo apt-get install sdcc \u5982\u679c\u5df2\u7ecf\u6210\u529f\u5b89\u88c5\uff0c\u5219\u8fd0\u884c\u4e0b\u9762\u7684\u547d\u4ee4\uff1a If that\u2019s done, run this command: $ sdcc -mmcs51 test.c\n\nWhat we need here is .ihx(Intel hex) file.\n\navrdude is supposed to burn for AVR chips, so it\u2019s not possible to burn directly to 8051 chips. But after some configuration, impossible is noting.\n\nWe have three different configration files for 8051 chips. They are respectively suit for some types. Check what you need and then copy them to \/etc\/avrdude.conf.\n\nFor AT89S51\n\n#------------------------------------------------------------\n# AT89S51\n#------------------------------------------------------------\npart\nid = \"8052\";\ndesc = \"AT89S51\";\nsignature = 0x1E 0x51 0x06;\nchip_erase_delay = 500000;\npgm_enable = \"1 0 1 0 1 1 0 0 0 1 0 1 0 0 1 1\",\n\"x x x x x x x x x x x x x x x x\";\n\nchip_erase = \"1 0 1 0 1 1 0 0 1 0 0 x x x x x\",\n\"x x x x x x x x x x x x x x x x\";\n\ntimeout = 200;\nstabdelay = 100;\ncmdexedelay = 25;\nsynchloops = 32;\nbytedelay = 0;\npollindex = 3;\npollvalue = 0x53;\npredelay = 1;\npostdelay = 1;\npollmethod = 0;\n\nmemory \"flash\"\nsize = 4096;\npaged = no;\nmin_write_delay = 4000;\nmax_write_delay = 9000;\nread = \" 0 0 1 0 0 0 0 0\",\n\" x x x a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" o o o o o o o o\";\n\nwrite = \" 0 1 0 0 0 0 0 0\",\n\" x x x a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" i i i i i i i i\";\nmode = 0x21;\ndelay = 12;\n;\n\nmemory \"signature\"\nsize = 3;\nread = \"0 0 1 0 1 0 0 0 x x x 0 0 0 a1 a0\",\n\"0 0 0 0 0 0 0 0 o o o o o o o o\";\n;\n;\n\n\nFor AT89S52\n\n#------------------------------------------------------------\n# AT89S52\n#------------------------------------------------------------\npart\nid = \"8052\";\ndesc = \"AT89S52\";\nsignature = 0x1E 0x52 0x06;\nchip_erase_delay = 500000;\npgm_enable = \"1 0 1 0 1 1 0 0 0 1 0 1 0 0 1 1\",\n\"x x x x x x x x x x x x x x x x\";\n\nchip_erase = \"1 0 1 0 1 1 0 0 1 0 0 x x x x x\",\n\"x x x x x x x x x x x x x x x x\";\n\ntimeout = 200;\nstabdelay = 100;\ncmdexedelay = 25;\nsynchloops = 32;\nbytedelay = 0;\npollindex = 3;\npollvalue = 0x53;\npredelay = 1;\npostdelay = 1;\npollmethod = 0;\n\nmemory \"flash\"\nsize = 8192;\npaged = no;\nmin_write_delay = 4000;\nmax_write_delay = 9000;\nread = \" 0 0 1 0 0 0 0 0\",\n\" x x x a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" o o o o o o o o\";\n\nwrite = \" 0 1 0 0 0 0 0 0\",\n\" x x x a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" i i i i i i i i\";\nmode = 0x21;\ndelay = 12;\n;\n\nmemory \"signature\"\nsize = 3;\nread = \"0 0 1 0 1 0 0 0 x x x 0 0 0 a1 a0\",\n\"0 0 0 0 0 0 0 0 o o o o o o o o\";\n;\n;\n\n\nFor 
AT89S8253\n\n#------------------------------------------------------------\n# AT89S8253\n#------------------------------------------------------------\npart\nid = \"8253\";\ndesc = \"AT89S8253\";\nchip_erase_delay = 20000;\npgm_enable = \"1 0 1 0 1 1 0 0 0 1 0 1 0 0 1 1\",\n\"x x x x x x x x x x x x x x x x\";\n\nchip_erase = \"1 0 1 0 1 1 0 0 1 0 0 x x x x x\",\n\"x x x x x x x x x x x x x x x x\";\n\ntimeout = 200;\nstabdelay = 100;\ncmdexedelay = 25;\nsynchloops = 32;\nbytedelay = 0;\npollindex = 3;\npollvalue = 0x53;\npredelay = 1;\npostdelay = 1;\npollmethod = 0;\n\nmemory \"flash\"\nsize = 12288;\npaged = no;\nmin_write_delay = 4000;\nmax_write_delay = 9000;\nread = \" 0 0 1 0 0 0 0 0\",\n\" x x a13 a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" o o o o o o o o\";\n\nwrite = \" 0 1 0 0 0 0 0 0\",\n\" x x a13 a12 a11 a10 a9 a8\",\n\" a7 a6 a5 a4 a3 a2 a1 a0\",\n\" i i i i i i i i\";\nmode = 0x21;\ndelay = 12;\n;\n\nmemory \"signature\"\nsize = 2;\nread = \"0 0 1 0 1 0 0 0 x x x x x x x x\",\n\"x x 1 1 0 0 0 a0 o o o o o o o o\";\n;\n;\n\n\nAfter configuring, we can now burn the ROM. Still remember the output file that we need? Yes! It\u2019s test.ihx :\n\n$sudo avrdude -p 8052 -c usbasp -e -U flash:w:'.\/test.ihx' avrdude: warning: cannot set sck period. please check for usbasp firmware update. avrdude: AVR device initialized and ready to accept instructions Reading | ################################################## | 100% 0.02s avrdude: Device signature = 0x1e5206 avrdude: erasing chip avrdude: warning: cannot set sck period. please check for usbasp firmware update. avrdude: reading input file \".\/test.ihx\" avrdude: input file .\/test.ihx auto detected as Intel Hex avrdude: writing flash (140 bytes): Writing | ################################################## | 100% 2.07s avrdude: 140 bytes of flash written avrdude: verifying flash memory against .\/test.ihx: avrdude: load data flash data from input file .\/test.ihx: avrdude: input file .\/test.ihx auto detected as Intel Hex avrdude: input file .\/test.ihx contains 140 bytes avrdude: reading on-chip flash data: Reading | ################################################## | 100% 0.70s avrdude: verifying ... avrdude: 140 bytes of flash verified avrdude: safemode: Fuses OK avrdude done. Thank you. \u9009\u9879\u8bf4\u660e(Options)\uff1a -p specifies the type of the MCU connected to the programmer. -c specifies the default programmer -e causes a chip erase to be executed. -U memtype:op:filename The op field specifies what operation to perform: r read device memory and write to the specified file w read data from the specified file and write to the device memory v read data from both the device and the specified file and perform a verify \u6211\u73b0\u5728\u6240\u80fd\u505a\u7684\u5c31\u662f\u795d\u4f60\u597d\u8fd0\uff0c\u56e0\u4e3a\u5982\u679c\u6ca1\u6709\u6210\u529f\u5f88\u6709\u53ef\u80fd\u662f\u4ee5\u4e0b\u539f\u56e0\u9020\u6210\u7684\u3002 What I can do now is saying \u201cGood luck\u201d to you, \u2018cos a failure can be caused by the following reasons. \u9519\u8bef\u6392\u89e3 What\u2019s wrong? \u2022 \u4f60\u7684\u4e0b\u8f7d\u7ebf\u6ca1\u6709\u88ab\u7cfb\u7edf\u6b63\u786e\u8bc6\u522b Your usbasp is not identified by your system \u2022 lsusb\u770b\u770b\u6709\u6ca1\u6709\u8fd9\u884c\uff0c\u5982\u679c\u6ca1\u6709\u5f88\u6709\u53ef\u80fd\u662f\u4f60\u7684usbasp\u6709\u95ee\u9898\u3002 Check this line with lsusb. If no similar here, it may be your usbasp\u2019s responsibility. 
$ lsusb\nBus 008 Device 002: ID 16c0:05dc VOTI USBasp AVR Programmer\n\n\u2022 \u7cfb\u7edf\u53d1\u73b0\u4e86usbasp\u53ef\u662f\u8fd8\u662f\u4e0d\u884c\u00a0\u00a0System has found the usbasp device but it still won\u2019t work\n\u2022 \u8fd9\u5f88\u6709\u53ef\u80fd\u662f\u4f60\u7684usbasp\u5185\u90e8ROM\u4e0d\u6b63\u786e\u6216\u8005\u4f7f\u7528\u7684\u4e0d\u662f\u6807\u51c6usbasp\u534f\u8bae\u9020\u6210\u7684\u3002\u5efa\u8bae\u8d2d\u4e70\u5546\u4e1a\u7248\u7684usbasp\u4e0b\u8f7d\u7ebf\uff0c\u6bd4\u8f83\u7a33\u5b9a\u3002\nThis is probobaly because of a incorrect usbasp ROM or non-standard usbasp protocal. You may use a commercial usbasp which can be more reliable.\n\nQ&A\n\n\u2022 \u4e3a\u4ec0\u4e48avrdude\u5982\u6b64\u4e4b\u6162\uff1f Why avrdude is so slow?\n\u2022 \u8fd9\u4e3b\u8981\u662f\u56e0\u4e3a\u5b83\u6bcf\u6b21\u53ea\u5728USB\u5305\u4e2d\u653e\u7f6e\u4e00\u5b57\u8282\u3002\u6211\u4e5f\u8bd5\u8fc7\u4f7f\u7528\u9875\u9762\u6a21\u5f0f\uff0c\u4f46\u662f\u8fd9\u4e0d\u80fd\u9002\u7528\u4e8e\u5927\u6587\u4ef6\u7684\u4e0b\u8f7d\uff0c\u800c\u4e14\u6bcf\u6b21\u8bfb\u56de\u6821\u68c0\u90fd\u5931\u8d25\nThis is because it only reads one byte for every USB packet. I have tried to change it to page mode, but it doesn\u2019t work for big hex files and cannot read-back verifring.\n\n\u2022 \u4e3a\u4ec0\u4e48\u8981root\u6743\u9650\u8fd0\u884cavrdude\uff1f Why run avrdude as root?\n\u2022 \u5728Ubuntu\u4e0a\uff0c\u53ea\u6709\u4f7f\u7528root\u6743\u9650\u624d\u80fd\u6253\u5f00usbasp\u6240\u5728\u7684USB\u8bbe\u5907\u3002\nOnly run as root can it open the USB device linked with usbasp on Ubuntu.\n\n\u2022 \u547d\u4ee4\u592a\u957f\u4e86\uff1fToo long command line?\n\u2022 \u8bd5\u8bd5\u8fd9\u4e2a\uff0c\u4fdd\u5b58\u540e\u8981\u6539\u4e3a\u53ef\u6267\u884c\u3002\nTry this batch file, save it, change mode to executive.\n[bash]\n#! \/bin\/bash\nmode=flash:w:PWD\/1\n#echo mode\nsudo avrdude -p 8052 -c usbasp -e -U\nmode\n[\/bash]\n\u8fd9\u4e2a\u811a\u672c\u53ea\u6709\u4e00\u4e2a\u53c2\u6570\u2014\u2014\u5f85\u4e0b\u8f7d\u6587\u4ef6\u540d\u3002","date":"2022-01-25 04:03:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4881216883659363, \"perplexity\": 1602.2566393340924}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320304760.30\/warc\/CC-MAIN-20220125035839-20220125065839-00468.warc.gz\"}"}
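One possible way to avoid running avrdude as root (this workaround is my addition, not part of the original article) is a udev rule that opens up access to the programmer. The vendor and product IDs below are taken from the lsusb output shown above; the rule file name is arbitrary:

$ echo 'SUBSYSTEM=="usb", ATTR{idVendor}=="16c0", ATTR{idProduct}=="05dc", MODE="0666"' | sudo tee /etc/udev/rules.d/99-usbasp.rules
$ sudo udevadm control --reload-rules

Unplug and replug the programmer afterwards so the rule takes effect.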
null
null
#ifndef CTSAUDIO_TASKSEQUENTIAL_H
#define CTSAUDIO_TASKSEQUENTIAL_H

#include <utils/String8.h>
#include <list>
#include "TaskGeneric.h"

class TaskAsync;

class TaskSequential: public TaskGeneric {
public:
    TaskSequential();
    virtual ~TaskSequential();
    virtual TaskGeneric::ExecutionResult run();
    virtual bool parseAttribute(const android::String8& name, const android::String8& value);

    /**
     * Queue async task for asynchronous execution (= call complete later)
     * If the task is already queued, it will not be queued again, but will just return true.
     */
    bool queueAsyncTask(TaskAsync* task);

private:
    /**
     * Run all async tasks queued (= call complete) and dequeue them.
     * Execution will be continued even for error, and the 1st error result will be returned.
     */
    TaskGeneric::ExecutionResult runAsyncTasksQueued();

private:
    int mRepeatCount;                  // number of times the child tasks are repeated
    android::String8 mIndexName;       // name of the loop-index variable exposed to child tasks
    int mRepeatIndex;                  // current iteration index while run() is looping
    std::list<TaskAsync*> mAsyncTasks; // tasks queued via queueAsyncTask(), completed in runAsyncTasksQueued()
};

#endif // CTSAUDIO_TASKSEQUENTIAL_H
{ "redpajama_set_name": "RedPajamaGithub" }
1,684
Least number not in the irregular table of k! mod p for k = 1..p-1 for primes p.

3, 4, 3, 4, 4, 3, 10, 7, 3, 3, 3, 5, 3, 8, 3, 3, 10, 4, 7, 3, 3, 7, 8, 4, 5, 3, 3, 3, 4, 3, 3, 4, 5, 9, 5, 4, 4, 4, 3, 5, 7, 3, 3, 4, 3, 4, 8, 4, 3, 5, 7, 4, 3, 3, 3, 3, 3, 5, 5, 12, 3, 4, 7, 9, 5, 5, 7, 3, 3, 4, 3, 3, 4, 3, 3, 3, 4, 7, 3, 5, 5, 7, 9, 5, 3, 4, 7

Note that 1 and 2 are always in the table k! mod p.

T. D. Noe, Plot of terms 3 to 1000
T. D. Noe, Table of terms 3 to 1000
Tim Trudgian, There are no socialist primes less than 10^9, Integers 14 (2014), A63.

(Mma) nn = 100; f = Table[n!, {n, Prime[nn]}]; Table[s = Table[Mod[f[[n]], p], {n, p - 1}]; Complement[Range[p - 1], Union[s]][[1]], {p, Prime[Range[3, nn]]}]

Cf. S000447, S000452.
nonn
T. D. Noe, Jan 12 2015
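A quick cross-check of the terms (this sketch is an addition to the entry; it assumes SymPy is available for the prime() helper, and that the data starts at the 3rd prime, p = 5, consistent with the Mathematica code above):

(Python)
from sympy import prime  # any prime generator would do

def least_missing(p):
    """Least m >= 1 not of the form k! mod p for k = 1..p-1."""
    seen, f = set(), 1
    for k in range(1, p):
        f = f * k % p  # build k! mod p incrementally
        seen.add(f)
    m = 1
    while m in seen:
        m += 1
    return m

print([least_missing(prime(i)) for i in range(3, 12)])
# -> [3, 4, 3, 4, 4, 3, 10, 7, 3], matching the data above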
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,856
Q: Changing the content of an iframe

I have a test page with a menu and, further down, an iframe that shows me a sales table. That sales table is in another .html file in the same directory as the index page. My idea is that when the user clicks a menu item, the content of the iframe changes. For example, when they click Usuarios, the iframe content becomes the users table, and when they click Ventas, the content becomes the sales table. The usuarios.html and ventas.html tables are in the same directory as the index. This is my index code:

<!doctype html>
<html lang="en">
<head>
    <title>Title</title>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
</head>
<body>
    <nav class="navbar navbar-expand-lg navbar-light bg-light">
        <div class="collapse navbar-collapse" id="navbarNav">
            <ul class="navbar-nav">
                <li class="nav-item active">
                    <a class="nav-link" id="ventas" href="#">Ventas <span class="sr-only">(current)</span></a>
                </li>
                <li class="nav-item">
                    <a class="nav-link" id="usuarios" href="#">Usuarios</a>
                </li>
            </ul>
        </div>
    </nav>
    <div class="embed-responsive embed-responsive-16by9">
        <iframe id="iframe" class="embed-responsive-item" src="ventas.html" allowfullscreen></iframe>
    </div>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>

Any help? I have read that it could be done with jQuery or JavaScript, but I'm not very good at those languages.

A: The first thing you should do is capture the different menu links in an array, using the querySelectorAll() method and passing it a dot followed by the class name (.nav-link).

Then, assuming your different HTML files have the same names as the ids of the menu links, create a method that replaces the value of the iframe's "src" attribute with the id of the clicked link.

Finally, you must loop over the array of links and add an eventListener to each one with the iframe-replacement method.

I hope this helps!

const links = document.querySelectorAll('.nav-link'); // note the leading dot to select by class

let iframe = document.getElementById('iframe');

function changeIframeContent(src) {
  iframe.setAttribute('src', `${src}.html`);
}

links.forEach( link => {
  // wrap the call in a function so it runs on click, not immediately
  link.addEventListener('click', () => changeIframeContent(link.id));
})
<!doctype html>
<html lang="en">
<head>
    <title>Title</title>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
</head>
<body>
    <nav class="navbar navbar-expand-lg navbar-light bg-light">
        <div class="collapse navbar-collapse" id="navbarNav">
            <ul class="navbar-nav">
                <li class="nav-item active">
                    <a class="nav-link" id="ventas" href="#">Ventas <span class="sr-only">(current)</span></a>
                </li>
                <li class="nav-item">
                    <a class="nav-link" id="usuarios" href="#">Usuarios</a>
                </li>
            </ul>
        </div>
    </nav>
    <div class="embed-responsive embed-responsive-16by9">
        <iframe id="iframe" class="embed-responsive-item" src="ventas.html" allowfullscreen></iframe>
    </div>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>

A: Since you are using Bootstrap, I think the best solution is to use the tabs or navs listed in the Bootstrap library itself.

<!doctype html>
<html lang="en">
<head>
    <title>Title</title>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
</head>
<body>
    <nav class="navbar navbar-expand-lg navbar-light bg-light">
        <div class="collapse navbar-collapse" id="navbarNav">
            <ul class="navbar-nav nav-tabs" id="myTab" role="tablist">
                <li class="nav-item active" role="presentation">
                    <a class="nav-link" id="ventas-tab" data-toggle="tab" href="#ventas" role="tab" aria-controls="ventas" aria-selected="true">Ventas <span class="sr-only">(current)</span></a>
                </li>
                <li class="nav-item" role="presentation">
                    <a class="nav-link" id="usuarios-tab" data-toggle="tab" href="#usuarios" role="tab" aria-controls="usuarios" aria-selected="false">Usuarios</a>
                </li>
            </ul>
        </div>
    </nav>
    <div class="tab-content" id="myTabContent">
        <div class="tab-pane fade show active" id="ventas" role="tabpanel" aria-labelledby="ventas-tab">
            <div class="embed-responsive embed-responsive-16by9">
                <iframe id="iframe" class="embed-responsive-item" src="//getbootstrap.com/docs/4.5/components/navs/" allowfullscreen></iframe>
            </div>
        </div>
        <div class="tab-pane fade" id="usuarios" role="tabpanel" aria-labelledby="usuarios-tab">
            <div class="embed-responsive embed-responsive-16by9">
                <iframe id="iframe" class="embed-responsive-item" src="//getbootstrap.com/docs/4.5/components/alerts/" allowfullscreen></iframe>
            </div>
        </div>
    </div>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>

Please note that I am using practically your same code; however, I have followed the navs in the Bootstrap library and have also changed the content of your iframes, "ventas" and "usuarios", to real links that show the official Bootstrap website. This is so you can see a working example; you will only have to modify:

<div class="embed-responsive embed-responsive-16by9"> <iframe id="iframe" class="embed-responsive-item" src="//getbootstrap.com/docs/4.5/components/navs/" allowfullscreen></iframe> </div>

to:

<div class="embed-responsive embed-responsive-16by9"> <iframe id="iframe" class="embed-responsive-item" src="ventas.html" allowfullscreen></iframe> </div>

and do the same with "usuarios". I hope you have understood the explanation, and if you have additional questions, I will be glad to help you resolve them.
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,574
SEQUIM BEACON

Meet Charlie Bush --- Lowell Rathbun 1-21-21

Charlie Bush has had a distinguished career serving the City of Sequim. He has an impressive education, holding a BA from Wittenberg University and a graduate degree from Syracuse University. He also holds certificates from the Harvard Kennedy School and the University of Virginia. Mr. Bush has been in public service for 23 years, starting in Glendale, AZ, and served the cities of Bellingham, Prosser, and Issaquah before coming to Sequim. He has served on the boards of at least 4 different public service organizations, including the Association of Washington Cities.

The city of Sequim has been happy with Mr. Bush's performance during his 6 years of service for the city. Just 6 months after he was hired, in 2016, he was awarded a pay increase by the city council. In 2019, Mr. Bush received another pay increase and received accolades from the mayor and other council members. During his tenure, Mr. Bush oversaw many improvements to the city of Sequim. Some of these improvements include the following: He moved the city to a robust financial position where it now enjoys balanced budgets; he oversaw improvements to Carrie Blake Park; and he oversaw the rebuilding of Fir Street. Significantly, he, assisted by the city attorney, pushed for increased funding for human services, and he even spent a night in the cold, filming for the public what it is like to be homeless. He kept the city on an even keel, adhering to the rule of law, responsive to public concerns and guiding city staff through a very turbulent period surrounding the MAT clinic. And finally, Mr. Bush has brought the city through a Covid-19 public emergency, and the city has dispensed over $400,000 in funds to aid local businesses hurt by the pandemic.

It seems incomprehensible that a man of Mr. Bush's caliber and dedication should be forced to resign at such a critical point in the city's affairs. Read the full story below, complete with references.

Mr. Bush was probably born in or around 1976. He graduated from Penfield High School, presumably in New York, with a Regents diploma in 1993. He earned a BA in Political Science from Wittenberg University, a top-ranked liberal arts college in Springfield, OH. In 1998, he earned a Master's in Public Administration from the Maxwell School of Citizenship and Public Affairs at Syracuse University. He also holds a Professional Certificate from the Harvard Kennedy School, and another professional certificate in public service from the University of Virginia.

Career Before Sequim

He started his career with the city of Glendale, AZ. After a year, he served for 3 years at the city of Phoenix, AZ, leaving as a Project Manager. He spent 4 years as an Intergovernmental Management Analyst in the Greater Seattle area and was a board member of the Washington City County Management Association for 2 years. From 2008 to 2012, he served as the City Administrator of the city of Prosser, WA, before taking a position with the city of Issaquah, WA, for 3 years, serving as the Deputy Chief Administrator and then as the Director of Development Services. During that time he served as a board member of both the Washington City County Management Association and the Association of Washington Cities. In August 2015, Mr. Bush became the City Manager of Sequim, WA. In Issaquah, Mr. Bush managed about 161 employees and a total budget of $63 million. He also directed $250 million in construction spending. 
While with the city of Sequim, he has also served as a board member of the MRSC and the Alliance for Innovation.

His Accomplishments in Sequim

On June 24, 2015, the Sequim City Council chose Mr. Bush out of six finalists to be the new City Manager. His initial salary was set at $120,000 plus benefits. Mayor Candace Pratt was excited to choose him for his youth and energy. Council member Ted Miller said that Mr. Bush was their first choice. After taking the oath of office on August 24, 2015, Mr. Bush lost no time in setting his priorities for the city: a possible proposal for the Metropolitan Parks District, forming the 2016 budget and finding safer parking for the Albert Haller Playfields in what is now Carrie Blake Park. Part of Mr. Bush's agenda for the city budget process at the time was to move the city to a more sustainable funding system.

2016 -- Raise & Credential

It wasn't long before the city council was quite satisfied with Mr. Bush's performance as the new city manager and unanimously awarded him a pay increase on February 8, 2016. The council gave him a rating of 8.7 out of 10 in his performance review. Mayor Dennis Smith stated that he was very pleased with the level of communication with Mr. Bush and how his management had been received by city staff. Ted Miller and Bob Lake said they were pleased with his performance as well. Later that year, Mr. Bush received a prestigious Credentialed Manager designation from the International City/County Management Association. In order to receive this credential, a member must have significant experience as a senior management executive, have earned a degree and have demonstrated commitment to high standards of integrity, lifelong learning and professional development. Finishing out the year of 2016, Mr. Bush appointed and oversaw the swearing-in of Sequim's new police chief, Sheri Crain, after the previous chief, Bill Dickinson, had retired. Chief Crain had been with the Sequim Police Department for 26 years, working her way up until she finally became the Chief.

2017 and 2018 -- Advocating for human services and spending a night in the winter cold

The year 2017 must have been a quiet one in Sequim, as there are few news stories to be found concerning Mr. Bush in that year. However, in January of that year, a report surfaced about the removal of racist notebook papers at Sequim High School. Mr. Bush was supported by the city council, which voted to open a dialogue about discrimination and hate crimes. Mike Flynn of Sequim CommunityPlus said that a racist poster with disturbing images had appeared in the high school.

In April of 2018, Mr. Bush worked with the Sequim city council and Clallam County Fire District 3 in an effort to partner in bringing an emergency medical facility to the City of Sequim. As the city grows, the need for a faster emergency medical response in the area has become more pressing. In that same month, Mr. Bush qualified for a $16,000 full tuition scholarship to attend the Harvard Senior Executive program that summer. After completing the three-week program, Mr. Bush was awarded a professional certification as a senior executive in state and local government.

Later in that year, the city of Sequim began to be concerned with providing human services. Mr. Bush and city attorney Kristina Nelson-Gross gave a public workshop about human services on November 14 at the Sequim Civic Center. Some video of that meeting can be found on the city's YouTube web site. 
https://www.youtube.com/watch?v=mzxs1L1KlPc&t=8s

It was agreed that there were gaps in the human services: physical health, mental health, food insecurity, substance abuse disorder and sheltering. The city agreed to allocate $75,000 a year to provide human services and later signed a contract with a new organization, the Sequim Health and Housing Collaborative (SHHC). This funding continues to this day. Mr. Bush advocated at the time that a one-stop-shop storefront center for assisting people with substance abuse disorders be set up somewhere in the Sequim downtown area. Mr. Bush and Ms. Nelson-Gross also pushed for improvements in sheltering and feeding the homeless population of Sequim.

Mr. Bush put his body where his mouth was in advocating for aid for the homeless. In the winter of 2018, Mr. Bush and several associates suited up and spent an entire night outdoors, trying to experience what it is like to be homeless. That night they took the bus to Serenity House in PA, then came back, hung around Safeway and stayed for a while in the bandshell in Carrie Blake Park, all the while filming it for the public. Watch the video here: https://www.youtube.com/watch?v=vSsAbHTje48&t=7s

2019 -- The MAT Clinic & Another Pay Raise

In 2019, the excitement began with the announcement of plans to build a MAT clinic in town. This resulted in a lot of activity, particularly by Save Our Sequim and other opponents of the MAT. A great deal of public pressure began to descend on Mr. Bush, the city staff and the city council. This has been documented elsewhere on this web site. Late in the year, the city council still resoundingly approved of Mr. Bush's performance and unanimously agreed to give him a 3.5% merit and cost of living pay raise on October 26. Mayor Dennis Smith stated that Mr. Bush's overall evaluation was excellent, with numerous accolades. Mr. Smith stated that "we are indeed fortunate to have an individual such as Charlie Bush as the Sequim City Manager." They appreciated his "thoughtfulness, enthusiasm, creativity, forward-thinking, collaborative leadership style, integrity, and commitment to both the city and the community."

2020 -- The MAT Clinic, Resignation & Reinstatement & Covid-19

In 2020, the excitement continued to ramp up, especially around the issue of the MAT clinic. Early in the year, on February 10, Mr. Bush, unaware of the impending Covid-19 outbreak, announced that he intended to resign his position, effective April 17. He indicated his intent to do some hiking on the Appalachian Trail before resuming his career elsewhere. At that announcement, Deputy Mayor Ted Miller said that he regretted seeing Mr. Bush leave. It should be noted that some members of the community had called for his resignation over the MAT clinic controversy. Notably, on the day that Mr. Bush officially resigned, Bob Bilow, a local retired attorney, in a public comment in the city council, denounced Mr. Bush to his face, accusing him of plotting to approve the MAT clinic behind the public's back, and demanded that the city council fire him on the spot. In the succeeding city council meeting on February 24, the city council voted unanimously to name assistant city manager Charisse Deschenes to replace Mr. Bush on an interim basis. There were issues with this process. Brandon Janisse did not participate in the secret executive session called by Mayor Armacost. 
Evidently there was a private meeting about the city manager process that he and other council members were not involved in. Mr. Janisse felt that an ad hoc committee of three city councilors was involved in choosing the next city manager while other council members were left out of the process. Janisse also said that "There was no rush for what just happened."

In the meantime, Covid-19 intervened, and hiking on the Appalachian Trail apparently was not in the cards. Two city council meetings later, the city council voted unanimously (5-0; there was a vacancy on the council due to the resignation of Jennifer States, and Troy Tenneson abstained) to reinstate Charlie Bush to his old position as City Manager. Mr. Bush was already involved in transitioning the city to an emergency footing due to the Covid-19 outbreak. Mayor Armacost stated in that meeting that he felt fortunate to have Mr. Bush back and said the city "will benefit from Charlie's leadership during this challenging time in our community." During the summer, Mr. Bush and the city council agreed to allocate money to help small businesses in Sequim that had been hurt by the Covid-19 epidemic. By the end of the year, the city, under Mr. Bush's leadership, had donated more than $400,000 in grants to aid small business owners in Sequim.

In September of 2020, a controversy broke out. On August 27, 2020, Mayor Armacost, who had just returned from the Sturgis Motorcycle Rally in Sturgis, SD, during the Covid-19 outbreak, made some comments about the QAnon conspiracy theory and advocated to the public on behalf of that theory. "It makes you think for yourself," he said at one point. The mayor also expressed his displeasure at the actions that Mr. Bush and Ms. Nelson-Gross had taken to sue Parkwood Homes and SOS to recover court costs the city had incurred in defending itself in a lawsuit brought by Parkwood and SOS. (See the story reported on the Sequim Beacon https://www.sequimbeacon.org/city-council.html .) On September 9, both Mayor Armacost and Mr. Bush released statements saying that it was "inappropriate" for the mayor to speak about his support for the QAnon conspiracy theory during an official city-sponsored event, which "Coffee With the Mayor" is. Mr. Bush stated that this was the first time since he began working for the city that a mayor had commented on national politics that have nothing to do with the City of Sequim. At the time, the Washington State Public Disclosure Commission (PDC) had received a complaint regarding the mayor's conduct, which was "under review and a determination has not yet been made".

2021 -- The Monday Surprise

Then the surprise came at the city council meeting on January 11, 2021, during which a visibly irritated Mayor Armacost insisted on his right as the "presiding officer" to hold an 80-minute secret meeting to discuss the resignation of Charlie Bush. After the meeting, the council voted 4-2 to authorize the mayor to negotiate Mr. Bush's resignation as the Sequim city manager. This story has been amply reported elsewhere. ( https://www.sequimgazette.com/news/city-council-agrees-to-city-manager-bushs-resignation/ , https://www.sequimbeacon.org/city-council.html ) A new grassroots organization, the Sequim Good Governance League, has sprung up, circulating a petition to retain Mr. Bush, as well as scheduling a rally in his support on the evening of the upcoming city council meeting on January 25. 
( https://www.sequimgazette.com/news/sequim-group-forms-to-seek-city-council-transparency-open-dialogue/ ) https://www.sequimwa.gov/directory.aspx?eid=2 1_21_20 https://www.sequimgazette.com/news/sequim-group-forms-to-seek-city-council-transparency-open-dialogue/ 1_20_21 https://www.sequimgazette.com/news/details-sparse-on-call-for-bush-resignation/ 1_20_21 https://www.sequimgazette.com/news/city-council-agrees-to-city-manager-bushs-resignation/ 1_12_21 https://www.sequimgazette.com/news/mayor-city-say-coffee-sessions-is-no-place-for-personal-opinions/ 9_15_20 https://www.sequimgazette.com/news/city-prioritizes-fir-street-paving-for-pandemic-emergency-response/ 4_1_20 https://www.sequimgazette.com/news/sequim-city-council-reinstates-city-manager/ 3_25_20 https://www.sequimgazette.com/news/sequim-to-appoint-interim-city-manager-deschenes-following-bushs-departure/ 3_3_20 https://www.sequimgazette.com/news/sequim-city-manager-bush-resigns-to-pursue-hiking-goal/ 2_11_20 https://www.sequimgazette.com/news/sequim-city-manager-receives-positive-review-pay-increase/ 11_6_19 https://www.youtube.com/watch?v=vSsAbHTje48 1_3_19 https://www.peninsuladailynews.com/news/sequim-considers-gaps-solutions-to-communitys-health/ 11_22_18 https://www.peninsuladailynews.com/news/sequim-city-manager-selected-for-harvard-senior-executive-program/ 4_15_18 https://www.sequimgazette.com/news/city-fire-district-look-to-bring-urgent-care-to-sequim/ 4_11_18 https://www.peninsuladailynews.com/news/sequim-council-supports-dialogue-on-discrimination/ 1_26_17 https://www.sequimgazette.com/news/crain-sworn-in-as-sequim-police-chief/ 12_21_16 https://www.sequimgazette.com/news/bush-earns-credentialed-manager-designation/ 5_25_16 https://www.sequimgazette.com/news/councilors-give-city-manager-high-marks-and-a-pay-raise/ 2_10_16 https://www.sequimgazette.com/news/new-city-manager-sets-priorities/ 9_4_15 https://www.sequimgazette.com/news/sequim-chooses-new-city-manager/ 6_24_15 https://www.linkedin.com/public-profile/in/charlie-bush-b05916a?challengeId=AQGP66zd7etF3QAAAXcm6PmID3a0rJVMexRUR7V5HAf-ZUSVsy_zTdkfkVW9TBhGNP8vouxXErNJnugc9wjz2YLb3AENAkJrIQ&submissionId=d8733756-785d-5c16-5119-8990025bf216 linkedin resume
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,034
Q: Using find() in a simple std::map derived class: can not explain compile error I quite often use simple map lookups where I want to avoid exception handling... Therefore I first wrote a "helper" function, then a simple derived class. However, I can NOT explain the compile error when using "find()" (and "end()") directly (1); it works when using "this->" (2) or "map<K, T>::" (3). Can anyone explain? I hope the code/intention is clear enough. Thanks, Gabriel

#include <iostream>
#include <map>
using namespace std;

//convenient helper
template <typename K, typename T>
class map_ex: public map<K, T> {
public:
    bool try_get(const K k, T &v) {
        auto f = find(k);                //1: not found ??!
        //auto f = this->find(k);        //2: works
        //auto f = map<K, T>::find(k);   //3: works
        bool found = f != this->end();
        if (found)
            v = f->second;
        return found;
    };
};

int main() {
    map_ex<string, string> m;
    m["test"] = "1";
    string v;
    m.try_get("test", v);
    std::cout << v << std::endl;
    return 0;
}
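A: This is the standard "two-phase name lookup" rule for class templates, not anything specific to std::map. Inside map_ex<K, T> the base class map<K, T> is a dependent type, so unqualified names such as find and end are not looked up in it when the template is parsed; the compiler therefore reports that find cannot be found (or tries an unrelated std::find and fails overload resolution). Writing this->find(k) or map<K, T>::find(k) makes the name dependent, which defers the lookup to instantiation time, where the base's members are visible; that is why variants 2 and 3 compile. A third option is a using-declaration, shown in this minimal sketch (same class and member names as in the question):

template <typename K, typename T>
class map_ex: public map<K, T> {
public:
    using map<K, T>::find;   // make the dependent base's find visible
    using map<K, T>::end;    // ...and end
    bool try_get(const K k, T &v) {
        auto f = find(k);    // now resolves to map<K, T>::find
        if (f == end())
            return false;
        v = f->second;
        return true;
    }
};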
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,689
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package br.faetec.model.dao;

import br.faetec.model.dto.UnidadeDTO;
import java.util.List;
import javax.persistence.Query;
import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;

/**
 * Implements the methods for manipulating the unidade table in the database.
 *
 * @author Antonio Cassiano
 */
public class UnidadeDAO {

    /**
     * Retrieves all records from the unidade table.
     *
     * @author Antonio Cassiano
     * @return List - list filled with unidades.
     */
    public List listar() {
        Query query = Database.manager.createNamedQuery("UnidadeDTO.findAll");
        query.setHint(QueryHints.MAINTAIN_CACHE, HintValues.FALSE); // avoids returning cached query results
        List lista = query.getResultList();
        return lista;
    }

    /**
     * Retrieves one record from the unidade table, based on the primary key value.
     *
     * @author Antonio Cassiano
     * @param unidadeDTO
     * @return UnidadeDTO - filled with the selected record.
     */
    public UnidadeDTO selecionar(UnidadeDTO unidadeDTO) {
        return (UnidadeDTO) Database.manager.find(UnidadeDTO.class, unidadeDTO.getIdUnidade());
    }

    /**
     * Saves a record in the unidade table if IdUnidade equals zero; otherwise updates the record.
     *
     * @author Antonio Cassiano
     * @param unidadeDTO
     */
    public void gravar(UnidadeDTO unidadeDTO) {
        Database.manager.getTransaction().begin();
        if (unidadeDTO.getIdUnidade() == 0) {
            Database.manager.persist(unidadeDTO); // insert
        } else {
            Database.manager.merge(unidadeDTO); // update
        }
        Database.manager.getTransaction().commit();
    }

    /**
     * Deletes a record from the unidade table, based on the primary key value.
     *
     * @author Antonio Cassiano
     * @param unidadeDTO
     */
    public void excluir(UnidadeDTO unidadeDTO) {
        Database.manager.getTransaction().begin();
        Database.manager.remove(Database.manager.find(UnidadeDTO.class, unidadeDTO.getIdUnidade()));
        Database.manager.getTransaction().commit();
    }
}
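/*
 * Illustrative usage sketch (hypothetical; the setter below is assumed to exist
 * on UnidadeDTO, and Database.manager must have been initialized beforehand):
 *
 *   UnidadeDAO dao = new UnidadeDAO();
 *   UnidadeDTO unidade = new UnidadeDTO();
 *   unidade.setIdUnidade(0);      // id == 0, so gravar() persists a new row
 *   dao.gravar(unidade);          // insert
 *   List unidades = dao.listar(); // read back all rows, bypassing the cache
 */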
{ "redpajama_set_name": "RedPajamaGithub" }
5,351
Maurice Arnold Renaud (24 July 1861 – 16 October 1933) was a cultured French operatic baritone. He enjoyed an international reputation for the superlative quality of his singing and the brilliance of his acting. Early years Renaud was born in Bordeaux as Arnaud Maurice Croneau. He studied for a year at the Paris Conservatoire, and then at the Brussels Conservatoire under Joseph Cornelis and Henri Warnots. He made his début at the Théâtre Royal de la Monnaie, Brussels, in 1883 and remained with that company until 1890, singing in the premières of Ernest Reyer's Sigurd in 1884 and of his Salammbô in 1890, opposite Rose Caron in both. He would also reappear at the Monnaie in the period 1908-14. In October 1890 he joined the Opéra-Comique, making his début as Karnak in Lalo's Le roi d'Ys, and also singing title roles in Don Giovanni and Der fliegende Holländer and Scarpia in Tosca. The following year he moved to the Opéra, making his début as Nelusko in Meyerbeer's L'Africaine. He continued to appear at the Opéra regularly until 1914. International career Renaud's London début occurred during the Diamond Jubilee Gala at Covent Garden in June 1897. He sang in the Second Act of Tannhäuser with Emma Eames and Ernest van Dyck and in the Fourth Act of Les Huguenots with Albert Alvarez and Pol Plançon. Further performances at Covent Garden in 1897 included Don Giovanni with Ada Adini, Zélie de Lussan, and Marcel Journet. Renaud performed regularly in London until 1904 and made occasional appearances thereafter. Casts for these performances were often extraordinary: Carmen with Emma Calvé, Emma Eames, and Albert Saléza; Don Giovanni with Lilli Lehmann, Lillian Nordica or Emmy Destinn, Suzanne Adams, Zélie de Lussan and Edouard de Reszke; Manon with Mary Garden; Rigoletto with Nellie Melba or Selma Kurz, Enrico Caruso, and Marcel Journet. Renaud toured extensively, appearing in Saint Petersburg, Berlin, and Monte Carlo, where he sang in the premières of Massenet's Le jongleur de Notre-Dame (1902) and Chérubin (1905). In 1902 he sang Méphistophélès in Raoul Gunsbourg's staging of Berlioz' La damnation de Faust, both in Monte Carlo and at La Scala with Toscanini conducting. Renaud in New York Maurice Grau, general manager of the Metropolitan Opera, signed a contract with Renaud, but various conflicts prevented the baritone from making his début at New York's premier opera house before the turn of the century. When Heinrich Conried succeeded Grau, he reneged on the contract with Renaud. The artist sued and won a substantial settlement. In 1906, Oscar Hammerstein I signed Renaud for the Manhattan Opera House, at the urging of Nellie Melba, who loved his striking good looks and elegant Jean de Reszke-like persona. It is ironic, then, that Renaud's greatest triumphs at the Manhattan company would be associated with Mary Garden, a lady not known for her interest in male pulchritude. Renaud's début there was in a memorable December 1906 Rigoletto with Melba and Alessandro Bonci as the duke. Then in November 1907 Mary Garden made her début at the Manhattan in Massenet's Thaïs, with Renaud as Athanaël. W. J. Henderson wrote that "His Athanaël has never been rivaled. No one else succeeded in creating the same impression of intensity." Renaud's greatest parts at the Manhattan included Don Giovanni, Scarpia, Germont, Hérode in Hérodiade, and the three villains in The Tales of Hoffmann.
After Hammerstein was bought out in 1910, Renaud joined the Met, making his début as Rigoletto on 25 November opposite Melba and Florencio Constantino. He sang with the company for two seasons, making his final appearance in March 1912 as Valentin in Gounod's Faust. Later years Maurice Renaud occasionally performed with the Boston and Chicago-Philadelphia companies during his final years in America. On 21 November 1910, he appeared as Scarpia with Carmen Melis, later Renata Tebaldi's teacher, prompting the Boston critic Horatio Parker to write, "...this was as vivid and racking a performance of Tosca since it first came to the stage!" In his final London performances in 1911 at Hammerstein's London Opera House he sang in Hérodiade, Rigoletto, Tales of Hoffmann and Nouguès' Quo Vadis. During the Great War, Renaud gave concerts for the troops and was wounded at the front when he and others in a trench took an artillery hit. He was left an invalid. After the War he was awarded the Légion d'honneur by the French government. In April 1919, after appearing at a Paris Opéra gala, Renaud finally retired. He appeared in a silent film in 1920. He died in Paris. Records Maurice Renaud made 52 extant records, 45 of them for The Gramophone Company (the forerunner of EMI) and seven for Pathé. Issued between 1901 and 1908, many of them duplicate (or even triplicate) the same favourite pieces, meaning that he actually recorded only 16 arias and five songs. As the duplications were issued, earlier versions were deleted, so that some of these items are now, over 100 years later, exquisitely rare. With one exception, everything is sung in French. There are no duets or ensembles. Regrettably, arias from many of his most celebrated operatic roles were not committed to disc; but what he did record is sufficient to demonstrate his greatness as a singer and interpretive artist. Reissues on modern format The Complete Gramophone Recordings 1901 - 1908 [plus one Pathé issue] Marston The Baritones Vol. I - 'The French School' Symposium The Harold Wayne Collection Vol. 8 Symposium Souvenirs of Rare French Operas IRCC Reyer - Sigurd: Excerpts by Various Artists Malibran Covent Garden on Record Vol. I 1870 - 1904 Pearl Appreciation Maurice Renaud was a handsome man, trim and erect, with regular features, deep-set eyes, wavy chestnut-coloured locks and a magisterial handlebar mustache that completed the picture of virile magnetism. He was a fine figure of a singer, a convincing actor on stage, praised by all the most exacting critics on two continents. He was very much a baryton-noble in the tradition of such legendary Paris Opéra singers as Jean-Baptiste Faure (painted so masterfully by Degas) and Jean Lassalle. His voice was a luxury item of great beauty and almost ideal richness and weight for any rôle in the French operatic repertory. To Italian and German parts he brought an elegance and nobility nurtured in the school of dramatic declamation of the Académie nationale de musique, related to that of the Comédie Française and the whole historic conception of tragic and heroic performance in French literary theater. He was also a first-rate bel canto master, utterly accomplished in matters of vocal production and breathing. This combination of declamatory and vocal command gave his singing a unique authority and brilliance. It can be stated with confidence that very, very few artists have stood on his level. As the noted New York critic Henry Krehbiel wrote: "Where Renaud sits, there is the head of the table."
References Kutsch, Karl-Josef & Riemens, Leo, editors: Großes Sängerlexikon. Basel, Saur, 2000 Scott, Michael: The Record of Singing to 1914 (Volume I). London, Duckworth, 1977 Rosenthal, Harold & Warrack, John: The Concise Oxford Dictionary of Opera (Second Edition). Oxford University Press, 1980 Mouchon, Jean-Pierre: "Maurice Renaud: Le Protée de l'art lyrique", volume I, biography, discography (Saint-Denis, France, Édilivre, 2018); volume II, chronology, bibliography, index (Association internationale de chant lyrique TITTA RUFFO, Marseilles, France, 2018, or Saint-Denis, Édilivre, 2018) French operatic baritones 1933 deaths 1861 births Musicians from Bordeaux Royal Conservatory of Brussels alumni Conservatoire de Paris alumni Chevaliers of the Légion d'honneur
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,527
José Luis Rosales (born 1 March 1943) is a Salvadoran sport shooter and Olympian. He competed at the 1972 Summer Olympics, where he appeared in one event. He placed 56th in the 25 metre rapid fire pistol (out of 62 shooters). Olympic results References Bibliography Salvadoran sport shooters Salvadoran Olympians Competitors at the 1972 Summer Olympics 1943 births
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,809
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Color palette for a sliding menu; the first two entries are the pressed
         and released states of a menu item (judging by the resource names). -->
    <color name="sliding_menu_item_down">#ff39393b</color>
    <color name="sliding_menu_item_release">#00000000</color>
    <color name="sliding_menu_background">#ff2c2c2e</color>
    <color name="sliding_menu_body_background">#fff5f5f5</color>
</resources>
{ "redpajama_set_name": "RedPajamaGithub" }
5,379
{"url":"https:\/\/www.researcher-app.com\/paper\/1636460","text":"3 years ago\n\n# Sharp endpoint $L^p$ estimates for the Schrodinger groups.\n\nPeng Chen, Xuan Thinh Duong, Ji Li, Lixin Yan\n\nLet $L$ be a non-negative self-adjoint operator acting on $L^2(X)$ where $X$ is a homogeneous space with a dimension $n$. Suppose that the heat operator $e^{-tL}$ satisfies the generalized Gaussian $(p_0, p'_0)$-estimates of order $m$ for some $1\\leq p_0 < 2$ and $m\\geq 2$. In this paper we prove sharp endpoint $L^p$-Sobolev bounds for the Schr\\\"odinger groups $e^{itL}$ that for every $p\\in (p_0, p'_0)$, there exists a constant $C=C(n,p)>0$ independent of $t$ such that for $s= n\\big|{1\/2}-{1\/p}\\big|$\n\n\\begin{eqnarray*}\n\n\\left\\| (I+L)^{-{s}}e^{itL} f\\right\\|_{p} \\leq C(1+|t|)^{s}\\|f\\|_{p}, \\ \\ \\ \\ \\ t\\in{\\mathbb R} \\end{eqnarray*} and\n\n\\begin{eqnarray*}\n\n\\left\\| I_{s}(t)(L) f\\right\\|_{p} \\leq C \\|f\\|_{p}, \\ \\ \\ \\ t\\in {\\mathbb R}\\backslash\\{0\\}, \\end{eqnarray*} where $I_{s}(t)(L)$ is the Riesz means for the Schr\\\"odinger group defined by $I_{s}(t)(L)=t^{-s} \\int_0^t (t-\\lambda)^{s-1} e^{-i\\lambda L} d\\lambda$ for $t>0$, and $I_{s}(t)(L)={\\overline I}_{s}(-t)(L)$ for $t<0$. As a consequence, the above estimates hold for all $1<p<\\infty$ when the heat kernel of $L$ satisfies a Gaussian upper bound. This extends the classical results due to Feffermann and Stein, Miyachi, and Sj\\\"ostrand for the Laplacian on the Euclidean spaces ${\\mathbb R}^n$.\n\nPublisher URL: http:\/\/arxiv.org\/abs\/1811.03326\n\nDOI: arXiv:1811.03326v2\n\nYou might also like\nDiscover & Discuss Important Research\n\nKeeping up-to-date with research can feel impossible, with papers being published faster than you'll ever be able to read them. That's where Researcher comes in: we're simplifying discovery and making important discussions happen. With over 19,000 sources, including peer-reviewed journals, preprints, blogs, universities, podcasts and Live events across 10 research areas, you'll never miss what's important to you. It's like social media, but better. Oh, and we should mention - it's free.\n\nResearcher displays publicly available abstracts and doesn\u2019t host any full article content. If the content is open access, we will direct clicks from the abstracts to the publisher website and display the PDF copy on our platform. 
Clicks to view the full text will be directed to the publisher website, where only users with subscriptions or access through their institution are able to view the full article.","date":"2022-10-03 18:32:41","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.890285074710846, \"perplexity\": 1081.8506948607326}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030337428.0\/warc\/CC-MAIN-20221003164901-20221003194901-00106.warc.gz\"}"}
null
null
Q: Disable PUSH notification (opt-out) from SDK We want to create a setting inside our app that lets our clients enable/disable PUSH notification permissions for messages sent by MC (commercial PUSH). We need this because we also send transactional PUSH notifications using another tool, and we must allow our clients to enable/disable each type of PUSH notification separately. We have checked the documentation and we see the following functions: a. Android: PushMessageManager - disablePush b. iOS: sfmc_setPushEnable Are these the correct way to disable notifications on the specific device? Thanks in advance. A: The docs say yes. Trust the docs.
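For reference, a minimal sketch of the opt-out toggle on Android using the method names cited in the question (illustrative only; exact class names and signatures vary across Marketing Cloud SDK versions, and the iOS side would call the sfmc_setPushEnable API analogously):

// Java; assumes `sdk` is an initialized Marketing Cloud SDK instance
// and `marketingPushAllowed` reflects the user's in-app setting.
if (marketingPushAllowed) {
    sdk.getPushMessageManager().enablePush();   // opt back in to MC push
} else {
    sdk.getPushMessageManager().disablePush();  // opt out of MC (commercial) push only
}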
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,587
\section*{Introduction} Gamma-ray bursts (GRBs) continue to confound astrophysicists nearly a quarter century after their discovery \cite{KSO:739}. Before the launch of CGRO, most scientists thought that GRBs came from magnetic neutron stars residing in a thick disk (having a scale height of up to $\sim$ 2 kpc) in the Milky Way \cite{Hig:Lin:90,Hard:91}. The data gathered by BATSE showed the existence of a rollover in the cumulative brightness distribution of GRBs and that the sky distribution of even faint GRBs is consistent with isotropy \cite{Meegan:92,Briggs:95}. This rules out a thick Galactic disk source population. Consequently, the primary impact of the BATSE results has been to intensify debate about whether the bursts are Galactic or cosmological in origin. Galactic models attribute the bursts primarily to high-velocity neutron stars in an extended Galactic halo, which must reach one fourth or more of the distance to M31 ($d_{\rm M31} \sim 690$ kpc) in order to avoid any discernible anisotropy \cite{Hak:94,Hartmann:94}. Cosmological models place the GRB sources at distances $d \sim 1 - 3$ Gpc, corresponding to redshifts $z\sim 0.3 - 1$. A source population at such large distances naturally produces an isotropic distribution of bursts on the sky, and the expansion of the universe or source evolution can reproduce the observed rollover in the cumulative brightness distribution \cite{Fenimore:93}. Recent studies \cite{LyneLori:94,Frail:94} have revolutionized our understanding of the birth velocities of radio pulsars. They show that a substantial fraction of neutron stars have velocities that are high enough to produce an extended halo around the Milky Way like that required by Galactic halo models of GRBs \cite{LiDer:92}. Podsiadlowski, Rees, and Ruderman \cite{Pods:94} have carried out pioneering calculations of the spatial distribution expected for high-velocity neutron stars born in the Galactic disk. They consider the effects of a non-spherical halo potential and of M31, but neglect the effects of the Galactic disk, which we find is also important. \section*{Models} We have calculated detailed models of the spatial distribution expected for a population of high-velocity neutron stars born in the Galactic disk and moving in a Galactic potential that includes the bulge, disk, and a dark matter halo. We use the mass distribution and potential given by Kuijken and Gilmore \cite{KG:89} which includes a disk, a bulge, and a dark matter halo. The densities of the disk and of the halo are \begin{eqnarray} \rho_D= \rho_D^0 \exp\left({-r\over r_d}\right)\exp\left({-z\over z_d}\right), & {}~~~~~~ & \rho_H=\rho_H^0 \left[ 1 + \left({r\over r_c}\right)^2 \right]^{-1}. \end{eqnarray} The circular velocity $v_c$ and the Galactic disk lead to characteristic angular anisotropies as a function of burst brightness which provide a signature, and therefore a test, of high-velocity neutron star models. Prolate or oblate dark matter halos also produce other angular anisotropies as a function of burst brightness which may provide a signature of such models \cite{Pods:94}. We assume that neutron stars are born with the circular velocity $v_c \approx 220~\hbox{km~s$^{-1}$}$ of the Galactic disk. Given that current knowledge of the distribution of initial kick velocities is uncertain, we adopt a Green-function approach: we calculate the spatial distribution of neutron stars for a set of kick velocities (e.g., $v_{\rm kick} = 200, 400,..., 1400$~\hbox{km~s$^{-1}$}). 
We follow the resulting orbits for up to $3 \times 10^9$ years. In our initial calculations, we assume that the bursts are standard candles, i.e., that the burst luminosity function is $\Phi(L) = \delta(L-L_0)$. We parameterize the burst-active phase by a turn-on age $\delta t$ and a duration $\Delta t$, and assume that the rate of bursting is constant throughout the burst-active phase. The high-velocity neutron star model then has four parameters: $v_{\rm kick}$, $\delta t$, $\Delta t$, and the BATSE sampling depth $d_{\rm max}$. \begin{figure}[th] \begin{center} \begin{tabular}{lr} {\psfig{file=plo2cth.ps,width=5.cm,angle=-90}} & {\psfig{file=plo2sb2.ps,width=5.cm,angle=-90}} \\ {\psfig{file=plo2pf2.ps,width=5.cm,angle=-90}} & {\psfig{file=brightD.ps,width=5.cm,angle=-90}} \\ \end{tabular} \end{center} \caption{Comparison of a Galactic halo model in which neutron stars are born with a kick velocity of $1000$~\hbox{km~s$^{-1}$}\ and have a burst-active phase lasting $\Delta t = 500$ million years with a carefully-selected sample of 285 bursts from the BATSE 2B catalogue. Panels (a) and (b) show the contours in the ($\delta t$, $d_{\rm max}$)-plane along which the Galactic dipole and quadrupole moments of the model differ from those of the data by $\pm$ 1$\sigma$ (solid lines), $\pm$ 2$\sigma$ (dashed line), and $\pm$ 3$\sigma$ (short-dashed line) where $\sigma$ is the model variance; the thin line in panel (a) shows the contour where the dipole moment for the model equals that for the data. Panel (c) shows the contours in the ($\delta t$, $d_{\rm max}$)-plane along which 32\%, 5\%, and $4 \times 10^{-3}$ of simulations of the cumulative distribution of 285 bursts drawn from the peak flux distribution of the model have KS deviations $D$ larger than that of the data. Panel (d) compares the brightness distribution of the model shown in (a) - (c), taking $\delta t=30$~Myrs and $d_{\rm max}=200$~kpc, to the BATSE plus PVO data. } \vspace{-5mm} \end{figure} \section*{Comparison between models and data} We compare the models with a carefully-selected data set that is self-consistent. We use only bursts that trigger on the 1024~ms timescale because we require that all bursts lie above the counts threshold in one trigger timescale; the 1024~ms timescale yields the largest sampling depth, and therefore imposes the strongest constraint on models, of the three BATSE trigger timescales. We adopt $F_{\rm pk}^{1024}$, the peak flux in 1024~ms, as our measure of burst brightness. We therefore include only bursts which have a measured $F_{\rm pk}^{1024}$ and $t_{90} > 1024$~ms. We consider only bursts with $F_{\rm pk}^{1024} \ge 0.35$~photons~cm$^{-2}$~s$^{-1}$ in order to avoid threshold effects \cite{Fenimore:93,ZandtFen:94}. We also exclude overwriting bursts, because the threshold is much higher for these bursts, and MAXBC bursts, because they have unknown positional errors. The 2B catalogue contains 285 bursts satisfying the above criteria. This set of bursts has Galactic dipole and quadrupole moments $\langle \cos \theta \rangle =0.056 \pm 0.034$, and $\langle \sin^2 b-{1\over 3} \rangle = -0.033 \pm 0.017$, compared to the values $\langle \cos \theta \rangle =-0.013$, and $\langle \sin^2 b-{1\over 3} \rangle = -0.005$ expected for a uniform sky distribution, taking into account the BATSE sky exposure.
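The quoted uncertainties follow from the standard result that, for $N$ bursts drawn from an isotropic distribution (neglecting the small sky-exposure correction), the sample moments have standard deviations \begin{equation} \sigma_{\langle \cos \theta \rangle} = \frac{1}{\sqrt{3N}} \simeq 0.034, \qquad \sigma_{\langle \sin^2 b - {1\over 3} \rangle} = \frac{2}{3\sqrt{5N}} \simeq 0.017, \end{equation} where the numerical values correspond to $N=285$.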
As a first step in testing the viability of Galactic halo models, we have compared the Galactic dipole and quadrupole moments, $\langle \cos \theta \rangle$ and $\langle \sin^2 b - 1/3 \rangle$, of the angular distribution of bursts for the model with those for the above set of bursts, using $\chi^2$. We have also compared the peak flux distribution for the model with that for the above set of bursts, using the KS test. \begin{figure}[t] \centerline{\psfig{file=map.285.ps,width=11cm,angle=-90}} \caption{Sky distribution of 285 bursts drawn randomly from the angular distribution expected for the model illustrated in Figure~1 when $\delta t=30$~Myrs, and $d_{\rm max}= 200$~kpc.} \end{figure} These comparisons do {\it not} provide estimates of model parameters (i.e., they do not yield parameter confidence regions), but are meant only to be a rough ``goodness-of-fit" guide to models which should be tested using a more rigorous approach like the maximum likelihood method. As an illustrative example, we show in Figure~1 the results for a Galactic halo model in which neutron stars are born with a single kick velocity of $1000$~\hbox{km~s$^{-1}$}\ and the burst-active phase has an initial burst rate $r \propto t^2$ and lasts $\Delta t = 500$ million years. Figure 2 shows the sky distribution of 285 bursts drawn randomly from the angular distribution expected for the model with $\delta t=30$~Myrs, and $d_{\rm max}= 200$~kpc. Comparisons of this kind show that the high-velocity neutron star model can reproduce the peak flux and angular distributions of the bursts in the BATSE 2B catalogue for neutron star kick velocities $v_{\rm kick} \mathrel{\mathpalette\simov >} 800$~\hbox{km~s$^{-1}$}, burst turn-on ages $\delta t \mathrel{\mathpalette\simov >} 10$~million years, and BATSE sampling depths $100$~kpc $\mathrel{\mathpalette\simov <} d_{\rm max} \mathrel{\mathpalette\simov <} 400$~kpc. Moreover, comparisons of this kind show that there is a large region of parameter space in which these models can reproduce the angular distribution of the bursts in the preliminary BATSE~3B catalogue. In high-velocity neutron star models, the slope of the cumulative peak flux distribution for the brightest BATSE bursts and the PVO bursts reflects the space density of the relatively small fraction of burst sources in the solar neighborhood. The nearness of the observed slope of the cumulative peak flux distribution of these bursts to $-3/2$, the value expected for a uniform spatial distribution of sources which emit bursts that are ``standard candles," must be considered a coincidence in the high-velocity neutron star model. However, a spread in neutron star kick velocities, in neutron star ages at which bursting behavior begins, or in the burst luminosity function tends to produce a cumulative peak flux distribution with a slope of $-3/2$; beaming of bursts along the direction of motion of the source or evolution of the rate of bursting as a function of age also tends to produce a slope of $-3/2$. We find that there are many combinations of these factors which successfully reproduce the slope of the BATSE plus PVO peak flux distribution.
For example, a model in which the burst luminosity function is a log normal distribution with a FWHM of a factor of $\mathrel{\mathpalette\simov <} 10$ and the burst-active phase has an abrupt (``heaviside function") turn-on, one in which the kick velocities are distributed between $800$ and $ 1200$~km~s$^{-1}$ and the burst-active phase initially has a burst rate $r = (t/\delta t)$, or one in which the burst-active phase initially has a burst rate $r = {1 \over 2} (t/\delta t)^2$ work equally well. Figure~1d compares the peak flux distribution for the last model and the BATSE+PVO peak flux distribution. M31 provides a strong constraint on the BATSE sampling distance $d_{\rm max}$ \cite{Hak:94}. We have investigated the effects of M31 within the framework of the high-velocity neutron star model described above by including the distortion of the Galactic halo potential due to M31 and the burst sources emanating from M31. We find that for such models M31 imposes a limit on the BATSE sampling distance $d_{\rm max} \mathrel{\mathpalette\simov <} 400$~kpc, even if the bursting activity of neutron stars lasts for more than $10^9$ years \cite{BulCopLam:95c}.
{ "redpajama_set_name": "RedPajamaArXiv" }
683
Q: how to compare the current record to the next in the same table in terms of 'datetime' in MySQL? Problem: I'm going to explain this problem using the Sakila sample database and its data so it is easier for you. Ok, so my question is how can I compare the current record to the next in the same table in terms of 'datetime'. This is what the table looks like:

payment_id  customer_id  staff_id  rental_id  amount  payment_date         last_update
1           1            1         76         2.99    25/05/2005 11:30:37  15/02/2006 22:12:30
2           1            1         573        0.99    28/05/2005 10:35:23  15/02/2006 22:12:30
3           1            1         1185       5.99    15/06/2005 00:54:12  15/02/2006 22:12:30
4           1            2         1422       0.99    15/06/2005 18:02:53  15/02/2006 22:12:30
5           1            2         1476       9.99    15/06/2005 21:08:46  15/02/2006 22:12:30

Using the above explanation in this sample, for each 'staff_id', how can I compare the current row with the next (using 'payment_date' for current and next), so it brings only the pairs of records where the amount of the current record is the same as the next (something like current.amount = next.amount)? This means that each record should be compared to the next of the same 'staff_id', and so on. I'm currently using this query, which does the job, but it takes forever. I know it works correctly because I set LIMIT 3 and it brought the correct ones (you can test it as well if you have the Sakila sample database):

SELECT *
FROM payment a
JOIN payment b
  ON a.staff_id = b.staff_id
 AND a.payment_date > b.payment_date
 AND a.amount = b.amount
LEFT JOIN payment c
  ON a.staff_id = c.staff_id
 AND c.payment_date < a.payment_date
 AND c.payment_date > b.payment_date
WHERE c.payment_id IS NULL
LIMIT 3;

Could you please help me?
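A: If you are on MySQL 8.0 or newer, a window function avoids the triple self-join entirely. The sketch below pairs each payment with the immediately following payment for the same staff_id (ordered by payment_date, with payment_id as an assumed tie-breaker for identical timestamps) and keeps the pairs with matching amounts:

SELECT *
FROM (
    SELECT p.*,
           LEAD(payment_id)   OVER w AS next_payment_id,
           LEAD(amount)       OVER w AS next_amount,
           LEAD(payment_date) OVER w AS next_payment_date
    FROM payment p
    WINDOW w AS (PARTITION BY staff_id ORDER BY payment_date, payment_id)
) t
WHERE t.amount = t.next_amount;

On older MySQL versions, adding an index on (staff_id, payment_date, amount) should at least speed up your original self-join considerably.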
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,598
Q: nexus 5x stuck on Google screen after replacing one of the system shared libraries My nexus 5x works fine until I change one of the shared libraries in the system partition:

adb push out/target/product/bullhead/system/lib/somename.so /system/lib/somename.so
adb reboot

then my nexus 5x is stuck on the Google screen on boot. But when I clear data, my nexus 5x works fine again:

fastboot -w && fastboot reboot

I also find that system.img can't be flashed alone; I must wipe data using fastboot -w when I flash the system.img, or it will get stuck on the Google screen on boot. Does anyone know how to solve this?

A: I have done a lot, and finally it worked. I'm not sure which one of these steps solved my problem, but I believe it has something to do with:

fastboot flash userdata userdata.img

Hope it will work for you.
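A plausible root cause, for anyone hitting this with AOSP builds: devices of this generation verify the system partition at boot with dm-verity, so any change to /system can halt boot at the Google logo. On userdebug/eng builds a commonly used sequence is to disable verification once before pushing libraries (illustrative; adb root access is required):

adb root
adb disable-verity
adb reboot
(after the reboot)
adb root
adb remount
adb push out/target/product/bullhead/system/lib/somename.so /system/lib/somename.so
adb reboot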
{ "redpajama_set_name": "RedPajamaStackExchange" }
416
Gruzskoye (Грузское) is a village in Krasnodar Krai, Russia. It lies on the bank of the Gruzskaya River, 19 km northeast of Krylovskaya and 176 km northeast of Krasnodar, the krai's capital. It belongs to the municipality of Novopashkovskaya. References Krylovskaya Raion Villages of Krasnodar Krai
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,443
module Resque module Plugins module SerialQueues class Config attr_accessor :serial_queues attr_accessor :lock_timeout def initialize self.serial_queues = [] self.lock_timeout = 60*60 end def serial_queues=(queues) raise "queues should be an Array" unless queues.is_a?(Array) @serial_queues = queues.map(&:to_s) end end end end end
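# Illustrative usage sketch (hypothetical; assumes the plugin reads this Config
# object, e.g. from an application initializer):
#
#   config = Resque::Plugins::SerialQueues::Config.new
#   config.serial_queues = [:mailers, "billing"] # symbols are normalized to strings
#   config.lock_timeout  = 30 * 60               # lock TTL in seconds (default: one hour)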
{ "redpajama_set_name": "RedPajamaGithub" }
1,688
Return of Medical Devices Tax Could Hit Massachusetts Businesses By State House News Service | December 17, 2017, 21:26 EST Printed from: https://newbostonpost.com/2017/12/17/return-of-medical-devices-tax-could-hit-massachusetts-businesses/ A magnetic resonance imaging, or MRI, scanner is one of the more expensive medical devices made by American manufacturers. (Photo courtesy of Wikipedia.org) By Matt Murphy State House News Service BOSTON — The failure of Republicans in Congress this year to repeal the Affordable Care Act means that a controversial tax on medical device sales will return in 2018 unless legislators intervene in the next couple weeks, putting a major Bay State industry on edge. The tax, which was included in the 2010 health care reform law as a way to help pay for an expansion of Medicaid, puts a levy of 2.3 percent on devices like X-ray and MRI machines, surgical instruments, and pacemakers. After a brief suspension, the tax is set to be reinstated in January. Many members of the Massachusetts delegation have been vocal in their opposition to the medical device tax since its inception, but this past week Republican Beth Lindstrom sought to make it an issue in her U.S. Senate campaign by calling out U.S. Sen. Elizabeth Warren for not pushing repeal during the Senate tax reform debate. "The medical device industry produces a constant stream of life-saving innovations. As Senator, one of the first things I'll do will be to file a repeal bill and work with colleagues on both sides of the political aisle to pass it," Lindstrom said in a statement. The medical device industry in Massachusetts, according to the Massachusetts Medical Device Industry Council, accounts for about 480 firms and 21,000 jobs. Seventy percent of those companies, MassMEDIC President Tom Sommer said, have fewer than 10 employees and could struggle to comply with and absorb the tax. "The next 24 to 72 hours are going to be critical to efforts to repeal or suspend this tax further," said Sommer, who took part in a conference call Wednesday on the issue with the national Advanced Medical Technology Association. With House and Senate leaders in Washington arriving at a compromise Wednesday on tax reform, Sommer said the hope is that Congress will turn its attention in the final weeks of the year to other issues that must be dealt with, including a reauthorization of the Children's Health Insurance Program. Lindstrom wrote a letter to Warren and U.S. Sen. Edward Markey in October during the tax reform debate on Capitol Hill urging them to use that as an opportunity to eliminate the tax for good, but repeal did not make it into either the House or Senate tax bills. She blamed Warren for being "too busy grandstanding her opposition to lower taxes." A spokesman for Warren said the senior Democratic senator opposed the Republican tax bill, as did all other Democrats in the Senate, making it an inappropriate vehicle to push for other reforms. Warren has long supported repeal of the medical device tax, though she has favored, like many other Democrats, ensuring that the lost revenue would be replaced.
"When Congress taxes the sale of a specific product through an excise tax, as the Affordable Care Act does with medical devices, it too often disproportionately impacts the small companies with the narrowest financial margins and the broadest innovative potential. It also pushes companies of all sizes to cut back on research and development for life-saving products. With an appropriate offset, we can repeal the medical device tax without cutting health care coverage for millions of people or forcing Americans to fight the whole health care battle all over again," Warren wrote in a 2012 op-ed during her first campaign for Senate. The Congressional Budget Office estimates that elimination of the medical device tax would cost the Treasury $24.4 billion over a decade. After the tax was collected in 2013, 2014, and 2015, it was delayed by Congress and is set to resume on January 1. House Ways and Means Chairman Kevin Brady announced a package of bills introduced this past week to provide relief from Obamacare taxes, including the "Cadillac tax" on high-cost insurance plans. One bill sponsored by U.S. Representative Erik Paulsen of Minnesota would suspend the medical device tax for another five years. It's possible that one or more of those proposals could be attached to an end-of-year spending bill or Children's Health Insurance Program reauthorization, industry insiders believe. "We'd like the permanent repeal, but suspending the tax for another five years would be an important first step to dealing with the uncertainty in the industry right now," Sommers said. Because of the way the device tax is structured, Sommers said that without a repeal or suspension of the tax, medical device firms would have to make a first payment the second week of January based on estimated sales going forward. This requirement, he said, particularly hurts small device manufacturers and start-ups that don't have a lot of capital. "We'll see what we saw during the years the tax was in place. Companies will be looking very carefully at their expenditures in research and development and investments in innovation. They'll be looking at head count and other ways to slim down to account for the 2.3 percent that's taken off the revenue line," Sommers said. Lindstrom's campaign said that from 2005 to 2010 prior to passage of Obamacare, employment at Massachusetts medical device companies grew by 15 percent, and held steady during the recession. However, employment fell by 9 percent from 2011 through 2016. Sommers said MassMEDIC doesn't have precise numbers on the numbers of jobs lost following passage of the Affordable Care Act, but said he knows anecdotally that layoffs attributed to the tax did occur. U.S. Representatives Seth Moulton of Salem, Steve Lynch of Boston, and William Keating of Bourne have been active in pushing for repeal of the device tax, according to MassMEDIC, while others in the delegation have also been supportive. Warren voted for a repeal amendment in 2013, and she also backed legislation filed by Moulton in 2015 to eliminate the tax, according to an aide. "We're hopeful that the next week will bring some positive news from Washington," Sommers said. How High Is Too High? Diehl, Lindstrom Hit… Trump's DNA Test Jab at Elizabeth Warren… U.S. 
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,752
\section{Introduction} Topological invariants of moduli spaces of semi-stable sheaves on complex surfaces are a rich subject with links to many topics in physics and mathematics. Closely related topics in physics are gauge theory, instantons, electric-magnetic duality \cite{Vafa:1994tf} and also (multi-center) black holes \cite{Diaconescu:2007bf, Manschot:2009ia, Manschot:2010xp}. Instantons saturate the bound on their minimal action, the so-called Bogomolnyi-Prasad-Sommerfeld (BPS) bound. The prime interest of this article are topological invariants of moduli spaces of instantons, in particular their Poincar\'e polynomials, which are commonly referred to as ``BPS invariants''. These invariants correspond also to (refined) supersymmetric indices enumerating supersymmetric or BPS states. Instantons on complex surfaces are described algebraically as semi-stable vector bundles and coherent sheaves \cite{Huybrechts:1996, Friedman:1998}. Generating functions of BPS invariants of sheaves on surfaces are computed for rank 1 by G\"ottsche \cite{Gottsche:1990} and rank 2 by Yoshioka \cite{Yoshioka:1994, Yoshioka:1995}. These generating functions lead to intriguing connections with (mock) modular forms \cite{Vafa:1994tf,Gottsche:1996, Gottsche:1998,Bringmann:2010sd}, which are a manifestation of electric-magnetic duality of the gauge theory \cite{Vafa:1994tf}. Refs. \cite{Manschot:2010nc, Manschot:2011dj} compute BPS invariants for rank 3 sheaves with Chern classes such that stability coincides with semi-stability. The present article computes the BPS invariants of {\it semi-stable} sheaves with rank 3 on Hirzebruch surfaces $\Sigma_\ell$ and on the projective plane $\mathbb{P}^2$, and explains how to generalize the computations to higher rank. The developed techniques can be applied straightforwardly to compute BPS invariants of the other rational and ruled surfaces. Although the extension from stable to semi-stable might seem a minor one, it requires to deal with various subtle but fundamental aspects of the moduli spaces of semi-stable sheaves, which could be neglected in Ref. \cite{Manschot:2010nc}. Having resolved how to deal with these aspects for $r=3$, the computations can in principle be extended to any rank. This introduction continues with summarizing the contents of the paper, after recalling the computations in Ref. \cite{Manschot:2010nc} which were inspired by \cite{ Yoshioka:1994, Yoshioka:1995, Gottsche:1996, Gottsche:1998}. A crucial fact for the computations is that the blow-up $\phi:\tilde \mathbb{P}^2\to \mathbb{P}^2$ is isomorphic to the Hirzebruch surface $\Sigma_{\ell}\to C$ with $\ell=1$. The fibre $f$ and base $C$ of $\Sigma_{\ell}$ are both isomorphic to $\mathbb{P}^1$. As explained in more detail in Section \ref{subsec:suitable}, the BPS invariants of $\Sigma_\ell$ with polarization $J$ chosen sufficiently close to $f$ (a so-called ``suitable'' polarization, see Definition \ref{def:suitable}) vanish for sheaves with first Chern class $c_1$ and rank $r$ such that $c_1\cdot f\neq 0 \mod r$. Wall-crossing then allowed to compute the BPS invariants for other choices of $J$. The BPS invariants of $\mathbb{P}^2$ were obtained from those of $\tilde \mathbb{P}^2$ by application of the blow-up formula \cite{Yoshioka:1996, Gottsche:1998, Li:1999}, which is a simple relation between the generating functions of the invariants for $\mathbb{P}^2$ and $\tilde \mathbb{P}^2$. 
However, its original form is only valid for $\gcd(c_1\cdot \phi^*H,r)=1$ and $J=\phi^* H$, with $H$ the hyperplane class of $\mathbb{P}^2$. The present paper describes how to deal with the cases when $c_1$ and $r$ do not satisfy the constraints for vanishing of the BPS invariant or the blow-up formula. The formal theory of invariants of moduli spaces (or stacks) of semi-stable sheaves is developed by Kontsevich and Soibelman \cite{Kontsevich:2008} and Joyce \cite{Joyce:2004, Joyce:2005, Joyce:2008}. We will in particular use the notion of virtual Poincar\'e functions for moduli stacks, which are a generalization of Poincar\'e polynomials of manifolds. The virtual Poincar\'e function of a moduli stack is (conjecturally) related to the BPS invariant by (\ref{eq:stackinvariant}). The BPS invariant is most natural from physics and leads to generating functions with modular properties. Two novel ingredients of this paper are: \begin{enumerate} \item Eq. (\ref{eq:restrictfibre}) which provides for any rank $r\geq 1$ the generating function of virtual Poincar\'e functions of the moduli stack of sheaves on a Hirzebruch surface $\Sigma_\ell$ whose restriction to the fibre $f$ is semi-stable. Eq. (\ref{eq:totalset2}) gives the generalization to virtual Hodge functions for more general ruled surfaces $\Sigma_{g,\ell}$. \item {\it Extended} Harder-Narasimhan filtrations $0\subset F_1\subset F_2\subset\dots \subset F_\ell=F$, whose definition (Def. \ref{def:extHNfiltr}) differs from the usual definition (\ref{def:HNfiltr}) of HN filtrations by allowing quotients $E_i=F_i/F_{i-1}$ with equal (Gieseker) stability $p_J(E_i,n)\succeq p_J(E_{i+1},n)$. These filtrations in combination with the associated invariants (\ref{eq:setfiltration}) are particularly useful to compute generating functions of BPS invariants starting from Conjecture \ref{eq:restrictfibre} and their changes across walls of marginal stability. \end{enumerate} To obtain the BPS invariants for a suitable polarization, one subtracts from Eq. (\ref{eq:restrictfibre}) generating functions corresponding to extended HN-filtrations given by (\ref{eq:setfiltration}), analogous to the seminal papers about vector bundles on curves \cite{Harder:1975, Atiyah:1982fa}. Naturally, these techniques are also applicable to compute invariants of semi-stable invariants for other mathematical objects like vector bundles on curves and quivers. Also a solution to this recursive procedure is given analogous to Ref. \cite{Zagier:1996}. Then repeated application of the formula for filtrations (which is equivalent with the wall-crossing formulas \cite{Kontsevich:2008, Joyce:2008}) gives the BPS invariants for other choices of the polarization. Finally, the blow-up formula provides the invariants on $\mathbb{P}^2$. The earlier mentioned condition $\gcd(c_1\cdot \phi^*H,r)=1$ is a consequence of the fact that the blow-up formula is applicable for the Poincar\'e functions $\mathcal{I}^\mu(\Gamma,w;J)$ with respect to $\mu$-stability instead of the more refined Gieseker stability. However with the invariant for filtrations (\ref{eq:setfiltration}), it is straightforward to transform the BPS invariants $\Omega(\Gamma,w;J)$ to $\mathcal{I}^\mu(\Gamma,w;J)$ for $\mu$-stability. The rational factors in Eq. (\ref{eq:setfiltration}) appear naturally in the relation between the generating functions of these invariants. The paper illustrates in detail the above steps for sheaves with rank 2 and 3, and shows their agreement with various consistency conditions, e.g. 
the blow-up formula, integrality and $w\leftrightarrow w^{-1}$ symmetry of the Poincar\'e polynomial. \\ \newline The outline of the paper is as follows. Section \ref{sec:sheaves} reviews some necessary properties of sheaves on surfaces including stability conditions. Section \ref{sec:genfunctions} discusses the invariants and generating functions. Section \ref{sec:restrictfibre} presents the generating function (\ref{eq:restrictfibre}) of the virtual Poincar\'e functions of the stack of sheaves whose restriction to the fibre is semi-stable. Then we continue with the computation of the invariants of $\Sigma_\ell$ for any choice of polarization in Section \ref{sec:ruledsurface}. Finally Section \ref{sec:projplane} presents the blow-up formula (\ref{eq:blowup}) and computes the generating function for sheaves on $\mathbb{P}^2$ with $(r,c_1)=(3,0)$. \section*{Acknowledgements} \noindent I would like to thank L. G\"ottsche, H. Nakajima, T. Wotschke and K. Yoshioka for helpful and inspiring discussions. I am grateful to E. Diaconescu and especially S. Meinhardt for their explanations of the work of D. Joyce \cite{Joyce:2004, Joyce:2005}. Part of the presented research was done as a postdoc of the IPhT, CEA Saclay and supported by ANR grant BLAN06-3-137168. \section{Sheaves on surfaces} \label{sec:sheaves} We consider sheaves on a smooth projective surface $S$. The Chern character of the sheaf $F$ is given by ch$(F)=r(F)+c_1(F)+\frac{1}{2}c_1(F)^2-c_2(F)$ in terms of the rank $r(F)$ and its Chern classes $c_1(F)$ and $c_2(F)$. The vector $\Gamma(F)$ parametrizes in the following the topological classes of the sheaf $\Gamma(F):=(\,r(F),\mathrm{ch}_1(F),\mathrm{ch}_2(F)\,)$. Other frequently occuring quantities are the determinant $\Delta(F)=\frac{1}{r(F)}(c_2(F)-\frac{r(F)-1}{2r(F)}c_1(F)^2)$, and $\mu(F)=c_1(F)/r(F)\in H^2(S,\mathbb{Q})$. Given a filtration $0\subset F_1\subset \dots \subset F_{\ell}=F$, let $E_i=F_i/F_{i-1}$ and $\Gamma_i=\Gamma(E_i)$. The discriminant of $F$ is given in terms of the subobjects and quotients by: \begin{equation} \label{eq:discriminant} \Delta(\,\Gamma(F)\,)=\sum_{i=1}^\ell\frac{r(E_i)}{r(F)}\Delta(E_i)-\frac{1}{2r(F)}\sum_{i=2}^\ell \frac{r(F_i)\,r(F_{i-1})}{r(E_i)} \left(\mu(F_{i})-\mu(F_{i-1}) \right)^2. \end{equation} We are interested in the moduli space (or moduli stack) of semi-stable sheaves with respect to Gieseker stability, but also the coarser $\mu$-stability appears in order to apply the blow-up formula. To define these two stability conditions, let $C(S)\subset H^2(S,\mathbb{R})$ be the ample cone of $S$, and the (reduced) Hilbert polynomial $p_{J}(F,n)= \chi(F\otimes J^n)/r(F)$. For a surface $S$, we have \cite{Friedman:1998}: \begin{equation} p_{J}(F,n)=J^2n^2/2+\left( \frac{c_1(F)\cdot J}{r(F)}-\frac{K_S\cdot J}{2}\right)n +\frac{1}{r(F)}\left(\frac{c_1(F)^2-K_S\cdot c_1(F)}{2}-c_2(F) \right)+\chi(\mathcal{O}_S). \end{equation} Note that this function can be obtained from the physical central charge as in \cite{Diaconescu:2007bf, Manschot:2009ia}. In the large volume limit, the stability condition asymptotes to the lexicographic ordering of polynomials based on their coefficients. This ordering is denoted by $\prec$. Then, \begin{definition} A torsion free sheaf $F$ is Gieseker stable (respectively semi-stable) if for every subsheaf $F'\subsetneq F$, $p_J(F',n)\prec p_J(F,n)$ ( respectively $p_J(F',n)\preceq p_J(F,n)$ ). 
\end{definition} and \begin{definition} Given a choice $J\in C(S)$, a torsion free sheaf $F$ is called $\mu$-stable if for every subsheaf $F'\subset F$, $\mu(F')\cdot J <\mu(F)\cdot J$, and $\mu$-semi-stable if for every subsheaf $F'$, $\mu(F')\cdot J \leq\mu(F)\cdot J$. \end{definition} Thus $\mu$-stability is a coarser stability condition than Gieseker stability, although the walls of marginal stability for both stability conditions are the same. A wall of marginal stability $W(F',F)\subset H^2(S,\mathbb{R})$ is the codimension 1 subspace of $C(S)$, such that $(\mu(F')-\mu(F))\cdot J=0$, but $(\mu(F')-\mu(F))\cdot J\neq 0$ away from $W(F',F)$. The invariants based on Gieseker stability exhibit better integrality and polynomial properties than the ones based on $\mu$-stability. On the other hand, operations like restriction to a curve and blowing-up a point of $S$ are most natural for $\mu$-semi-stable sheaves. The moduli space $\mathcal{M}_J(\Gamma)$ of Gieseker stable sheaves on $S$ (with respect to the ample class $J$) whose rank and Chern classes are determined by $\Gamma$ has expected dimension: \begin{equation} \label{eq:dim} d_{\mathrm{exp}}(\Gamma)=\dim_{\mathbb{C}}(\mathrm{Ext}^1(F,F))-\dim_{\mathbb{C}}(\mathrm{Ext}^2(F,F))=2r^2\Delta-r^2\chi(\mathcal{O}_S)+1. \end{equation} When $\mathrm{Ext}^2(F,F)=0$ the moduli space is smooth and of the expected dimension. Vanishing of $\mathrm{Ext}^2(F,F)$ for semi-stable sheaves on surfaces can be proven if the polarization satisfies $J\cdot K_S<0$. More generally, we have \begin{proposition} Let $J\in C(S)$ such that $J\cdot K_S<0$ and let $F$ and $G$ be Gieseker semi-stable sheaves with respect to polarization $J$ such that $p_J(F,n) \preceq p_J(G,n)$. Then: $$\mathrm{Ext}^2(F,G)=0.$$ \end{proposition} \begin{proof} Due to Serre duality $\mathrm{Ext}^2(F,G)=\mathrm{Hom}(G,F\otimes K_{S})^\vee$. Assume contrary to the proposition that $\mathrm{Ext}^2(F,G)\neq 0$, such that a non-vanishing morphism $\psi:G\to F \otimes K_{S}$ exists. Then $F \otimes K_{S}$ is a quotient of $G$, and semi-stability of $G$ implies $p_J(F\otimes K_{S},n)\succeq p_J(G,n)$. Now we find a contradiction, since the assumption $J\cdot K_S<0$ implies $p_J(F\otimes K_{S},n) \prec p_J(F,n) \preceq p_J(G,n)$. Therefore a non-vanishing $\psi$ cannot exist and the proposition follows. \end{proof} \noindent Dimension estimates for (coarse) moduli spaces of semi-stable sheaves are more subtle due to endomorphisms. We will find that BPS invariants computed in Sections \ref{sec:ruledsurface} and \ref{sec:projplane} are in agreement with the expected dimension (if non-vanishing). Twisting a sheaf $E$ by a line bundle $\mathcal{L}$ gives an isomorphism of moduli spaces. The Chern classes of the twisted sheaf $E'=E\otimes \mathcal{L}$ are: \begin{eqnarray} \label{eq:ltwist} && r(E')=r(E), \quad c_1(E')=c_1(E)+r(E)c_1(\mathcal{L}),\non\\ && c_2(E')=c_2(E)+(r(E)-1)c_1(\mathcal{L})c_1(E)+c_1(\mathcal{L})^2\frac{r(E)(r(E)-1)}{2}.\non \end{eqnarray} The discriminant remains invariant: $\Delta(E')=\Delta(E)$. This shows that it suffices to compute the generating functions for $c_1(E)\in H^2(S,\mathbb{Z}/r\mathbb{Z})$. Determination of generating functions of BPS invariants for $r\geq 2$ is an open problem in general. To make progress, we specialize in the following to the set of smooth ruled surfaces.
A ruled surface is a surface $\Sigma_{g,\ell}$ together with a surjective morphism $\pi: \Sigma_{g,\ell} \to C_g$ to a curve $C_g$ with genus $g$, such that the fibre over each point of $C_g$ is a smooth irreducible rational curve and such that $\pi$ has a section. Let $f$ be the fibre of $\pi$; then $H_2(\Sigma_{g,\ell},\mathbb{Z})=\mathbb{Z}C_g\oplus\mathbb{Z}f$, with intersection numbers $C_g^2=-\ell$, $f^2=0$ and $C_g\cdot f=1$. The canonical class is $K_{\Sigma_{g,\ell}}=-2C_g+(2g-2-\ell)f$. The holomorphic Euler characteristic $\chi(\mathcal{O}_{\Sigma_{g,\ell}})$ is $1-g$. An ample divisor $J\in C(\Sigma_{g,\ell})$ is parametrized by $J_{m,n}=m(C_g+\ell f)+nf$ with $m,n> 0$. The condition $J\cdot K_S<0$ translates to $m(2g-2-\ell)<2n$. Most of this article will further specialize to the Hirzebruch surfaces $\Sigma_{0,\ell}=\Sigma_\ell$. For these surfaces $J\cdot K_S<0$ is satisfied for all $J\in C(\Sigma_{\ell})$. The surface $\Sigma_1$ plays a special role since, besides being a ruled surface, $\Sigma_{1}$ is also the blow-up $\phi: \mathbb{\tilde P}^2\to \mathbb{P}^2$ of the projective plane $\mathbb{P}^2$. The exceptional divisor of $\phi$ is $C_0=C$, and the pullback of the hyperplane class $H$ of $\mathbb{P}^2$ is given by $\phi^*H=C+f$. Due to the simplicity of $\mathbb{P}^2$, it is of intrinsic interest to determine the generating functions of its BPS invariants. \section{BPS invariants and generating functions} \label{sec:genfunctions} This section defines the generating functions of the BPS invariants and discusses some of their properties. Physically, the BPS invariant arises by considering topologically twisted $\mathcal{N}=4$ Yang-Mills on the surface $S$ \cite{Vafa:1994tf}. The path integral of this theory localizes on the BPS solutions, including the instantons, due to the topologically twisted supersymmetry \cite{Vafa:1994tf}. The BPS invariant is given by a weighted sum over the BPS Hilbert space $\mathcal{H}(\Gamma,J)$, and based on the path integral one can show that the (numerical) BPS invariant corresponds to the Euler number of the BPS moduli space. Alternatively one can consider the $\mathcal{N}=2$ supersymmetric theory in $\mathbb{R}^{3,1}$ obtained from the compactification of IIA theory on a non-compact Calabi-Yau $\mathcal{O}(-K_S)\to S$. The $\mathcal{N}=2$ theory with gauge group $SU(2)$ and without hypermultiplets can be engineered by any of the Hirzebruch surfaces $\Sigma_\ell$ \cite{Katz:1996fh}. Sheaves supported on $\Sigma_\ell$ correspond to magnetic monopoles and dyons in $\mathcal{N}=2$ gauge theory. In this theory, the BPS invariant can be refined with an additional parameter $w$ \cite{Gaiotto:2010be}: \begin{equation} \Omega(\Gamma,w;J)=\frac{\mathrm{Tr}_{\mathcal{H}(\Gamma,J)}\, 2\hat J_3(-1)^{2\hat J_3}(-w)^{2\hat I_3 +2\hat J_3}}{(w-w^{-1})^2}, \end{equation} with $\hat J_3$ a generator of the $SU(2)\cong \mathrm{Spin}(3)$ group arising from rotations in $\mathbb{R}^{3,1}$, and $\hat I_3$ a generator of the $SU(2)_R$ $R$-symmetry group. BPS representations have the form $\left[ (\textstyle{\frac{1}{2}},0)\oplus (0,\textstyle{\frac{1}{2}})\right]\otimes \omega$ with $\omega=(j,j')$ a vacuum representation of $\mathrm{Spin}(3)\oplus SU(2)_R$ with spins $j$ and $j'$. One factor of $w-w^{-1}$ in the denominator will cancel due to the factor $(\textstyle{\frac{1}{2}},0)\oplus (0,\textstyle{\frac{1}{2}})$ (the half-hypermultiplet) present for every BPS state \cite{Gaiotto:2010be}.
Since $\Omega(\Gamma,w;J)$ is thus essentially an $SU(2)$ character, it is a polynomial divided by $w-w^{-1}$; the polynomial has integer coefficients and is invariant under $w\leftrightarrow w^{-1}$. The positivity conjectures of Ref. \cite{Gaiotto:2010be} assert furthermore that the coefficients are positive. We choose to divide by the factor $w-w^{-1}$ in order to have nice modular properties of the generating functions. See for example Eq. (\ref{eq:rank1}). The $\mathcal{N}=2$ picture shows that the refined BPS invariant provides more information than the Euler number of the moduli space. The $w$-expansion is expected to give the $\chi_y$-genus of the BPS moduli space \cite{Chuang:2013}. To make this more precise, we let $\mathcal{M}_J(\Gamma)$ be the suitably compactified moduli space of semi-stable sheaves on $S$ with topological classes $\Gamma$ and polarization $J\in C(S)$, i.e. the Gieseker-Maruyama compactification. If we assume that $J\cdot K_S<0$ and that semi-stable is equivalent to stable, the moduli space is smooth and the BPS invariant corresponds mathematically to \cite{Chuang:2013}: \begin{equation} \label{eq:BPSinvariant} \Omega(\Gamma,w;J):=\frac{w^{-\dim_\mathbb{C}\mathcal{M}_J(\Gamma)}}{w-w^{-1}}\, \chi_{w^2}(\mathcal{M}_J(\Gamma)), \qquad w^2\neq 1, \end{equation} with on the right hand side the $\chi_y$-genus, which is defined in terms of the virtual Hodge numbers $h^{p,q}(X)=\dim H^{p,q}(X,\mathbb{Z})$ of the quasi-projective variety $X$ by $\chi_{y}(X)=\sum_{p,q= 0}^{\dim_\mathbb{C}(X)}(-1)^{p-q}\,y^p\,h^{p,q}(X)$. Eq. (\ref{eq:dim}) provides us with the degree of $\chi_{w^2}(\mathcal{M}_J(\Gamma))$, and since $\mathcal{M}_J(\Gamma)$ is compact, orientable and without boundary, $h^{p,q}(X)=h^{\dim_\mathbb{C}(X)-p,\dim_\mathbb{C}(X)-q}(X)$. For rational surfaces, which include the ruled surfaces with $g=0$, the non-vanishing cohomology of smooth moduli spaces of semi-stable sheaves has Hodge type $(p,p)$ \cite{Beauville:1992, Gottsche:1998}. Therefore, $\chi_{w^2}(X)=P(X,w)=\sum_{i=0}^{2\dim_\mathbb{C}(X)}b_i(X)\,w^i$ with $P(X,w)$ the Poincar\'e polynomial and $b_i(X)=\sum_{p+q=i}h^{p,q}(X)$ the Betti numbers of $X$. If semi-stable is not equivalent to stable, $\mathcal{M}_J(\Gamma)$ contains singularities due to non-trivial automorphisms of the sheaves. The formal mathematical framework for the integer BPS invariants or motivic Donaldson-Thomas invariants is developed by Kontsevich and Soibelman \cite{Kontsevich:2008}. For our purposes it is useful to also introduce two other invariants, $\bar\Omega(\Gamma,w;J)$ and $\mathcal{I}(\Gamma,w;J)$. These invariants are defined using the notion of the moduli stack $\mathfrak{M}_J(\Gamma)$, which properly deals with the singularities mentioned above by keeping track of the automorphism groups of the semi-stable sheaves. The invariant $\mathcal{I}(\Gamma,w;J)$ is an example of a motivic invariant. In general an invariant $\Upsilon(X)$ of a quasi-projective variety $X$ is called ``motivic'' if it satisfies: \begin{itemize} \item[-] If $Y\subseteq X$ is a closed subset then $\Upsilon(X)=\Upsilon (X\,\backslash\, Y) + \Upsilon (Y)$, \item[-] If $X$ and $Y$ are quasi-projective varieties $\Upsilon(X\times Y)=\Upsilon(X)\, \Upsilon(Y)$. \end{itemize} Ref. \cite{Joyce:2005} defines a motivic invariant, the virtual Poincar\'e function $\Upsilon'$, for Artin stacks, which are stacks whose stabilizer groups are algebraic groups.
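For example, the two properties give $\Upsilon(\mathbb{C}^*)=\Upsilon(\mathbb{C})-\Upsilon(\mathrm{pt})=w^2-1$, and for the simplest quotient stack with a non-trivial stabilizer, $[\mathrm{pt}/\mathbb{C}^*]$, one finds $\Upsilon'([\mathrm{pt}/\mathbb{C}^*])=1/(w^2-1)$, which up to a monomial in $w$ is the factor $(w-w^{-1})^{-1}$ appearing in Eq. (\ref{eq:BPSinvariant}).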
The virtual Poincar\'e function $\Upsilon'$ is a rational function in $w$ and a natural generalization of the Poincar\'e polynomial of smooth projective varieties to stacks. The definition of these invariants for stacks is such that for a quotient stack $[X/G]$ with $G$ an algebraic group, one has $\Upsilon'([X/G])=\Upsilon(X)/\Upsilon(G)$. Using the virtual Poincar\'e function $\Upsilon'$, Definition 6.20 of Ref. \cite{Joyce:2004} defines the virtual Poincar\'e function $\mathcal{I}(\Gamma,w;J)$ (in Ref. \cite{Joyce:2004} denoted by $I^\alpha_{\mathrm{ss}}(\tau)^\Lambda$) for the moduli stacks of semi-stable sheaves on surfaces with Ext$^2(X,Y)=0$ for $p_J(X,n)\prec p_J(Y,n)$. Definition 6.22 of Ref. \cite{Joyce:2004} also defines a second invariant $\bar \Omega(\Gamma,w;J)$ (denoted by $\bar J^\alpha(\gamma)^\Lambda$ in Ref. \cite{Joyce:2004}). These appear in fact rather naturally from the physical perspective \cite{Manschot:2010qz, Kim:2011sc}. See also \cite{Nakajima:2010} for related discussions of invariants. The invariants $\bar \Omega(\Gamma,w;J)$ are the rational multi-cover invariants of $\Omega(\Gamma,w;J)$: \begin{eqnarray} \label{eq:refw} \bar \Omega(\Gamma,w;J)&:=&\sum_{m|\Gamma} \frac{\Omega(\Gamma/m,-(-w)^m;J)}{m}. \end{eqnarray} They can be expressed in terms of $\mathcal{I}(\Gamma,w;J)$ and vice versa (Theorem 6.8 in \cite{Joyce:2004}): \begin{equation} \label{eq:inversestackinv} \bar \Omega(\Gamma,w;J)=\sum_{\Gamma_1+\dots +\Gamma_\ell=\Gamma\atop p_J(\Gamma_i,n)=p_J(\Gamma,n)\,\mathrm{for}\,\, i=1,\dots,\ell} \frac{(-1)^{\ell+1}}{\ell} \,\prod_{i=1}^\ell \mathcal{I}(\Gamma_i,w;J), \end{equation} with inverse relation: \begin{equation} \label{eq:stackinvariant} \mathcal{I}(\Gamma,w;J)=\sum_{\Gamma_1+\dots +\Gamma_\ell=\Gamma\atop p_J(\Gamma_i,n)=p_J(\Gamma,n)\,\mathrm{for}\,\, i=1,\dots,\ell} \frac{1}{\ell !} \,\prod_{i=1}^\ell \bar \Omega(\Gamma_i,w;J). \end{equation} Note that $\mathcal{I}(\Gamma,w^{-1};J)\neq -\mathcal{I}(\Gamma,w;J)$ and that $\mathcal{I}(\Gamma,w;J)$ in general has higher order poles in $w$ compared to $\Omega(\Gamma,w;J)$. It is an interesting question what geometric information the integer invariants $\Omega(\Gamma,w;J)$ carry if $m|\Gamma$ with $m>1$. For $r=m=2$, Remark 4.6 of Ref. \cite{Yoshioka:1995} argues that $\Omega(\Gamma,w;J)$ computes the Betti numbers of rational intersection cohomology of the singular moduli space $\mathcal{M}_J(\Gamma)$. The generating function in Remark 4.6 of Ref. \cite{Yoshioka:1995} is very closely related to the one obtained for moduli spaces of semi-stable vector bundles over Riemann surfaces in (the Corrigendum to) Ref. \cite{Kirwan:1986}. Intersection cohomology is a cohomology theory for spaces with singularities which satisfies Poincar\'e duality if the spaces are complex and compact. It is therefore natural to expect that the BPS invariant (\ref{eq:BPSinvariant}) for $r\geq 3$ also provides Betti numbers of intersection cohomology groups. This issue is left for further research. The seminal papers \cite{Gottsche:1990, Yoshioka:1994, Yoshioka:1995} compute moduli space and stack invariants by explicitly counting sheaves on the surface $S$ defined over a finite field $\mathbb{F}_s$ with $s$ elements.
The Poincar\'e function $\mathcal{I}(\Gamma,s^\frac{1}{2};J)$ is, up to an overall monomial, computed by: \begin{equation} \label{eq:countoverF} \sum_{E\in M_J(\Gamma,\mathbb{F}_s)} \frac{1}{\#\mathrm{Aut}(E)}, \end{equation} where $M_J(\Gamma,\mathbb{F}_s)$ is the set of semi-stable sheaves with characteristic classes $\Gamma$. The Weil conjectures imply that the expansion coefficients in $s$ are the Betti numbers of the moduli spaces. The parameter $s$ is related to the $w$ in this article by $s=w^{2}$. Eq. (\ref{eq:countoverF}) shows that poles of $\mathcal{I}(\Gamma,w;J)$ in $w$ appear when the sheaves have non-trivial automorphism groups. If semi-stable is equivalent to stable, $\mathcal{I}(\Gamma,w;J)=\Omega(\Gamma,w;J)$; the factor $(w-w^{-1})^{-1}$ in Eq. (\ref{eq:BPSinvariant}) is due to the automorphisms which are multiplication by $\mathbb{C}^*$. The automorphism group of semi-stable and unstable bundles or sheaves is in general $GL(n)$, whose number of elements over $\mathbb{F}_s$ is $(s^n-1)(s^n-s)\dots (s^n-s^{n-1})$, which leads to higher order poles. We continue now by defining the generating function $h_{r,c_1}(z,\tau;S,J)$ of $\bar \Omega(\Gamma,w;J)$: \begin{equation} h_{r,c_1}(z,\tau;S,J)=\sum_{c_2} \bar \Omega(\Gamma,w;J)\,q^{r\Delta(\Gamma)-\frac{r\chi(S)}{24}}, \end{equation} where $q:=e^{2\pi i \tau}$, with $\tau \in \mathcal{H}$ and $w:= e^{2\pi i z}$ with $z\in \mathbb{C}$. Since twisting by a line bundle (\ref{eq:ltwist}) is an isomorphism of moduli spaces, it suffices to compute $h_{r,c_1}(z,\tau;S,J)$ for $c_1\in H_2(S,\mathbb{Z}/r\mathbb{Z})$. The expansion parameter $t$ for $c_2$ in Refs. \cite{Gottsche:1990, Yoshioka:1994, Yoshioka:1995} is related to $q$ by $q=s^rt$. The generating function $h_{1,c_1}(z,\tau;S)$ depends only on $b_2(S)$ for $S$ a smooth projective surface with $b_{1}(S)=b_{3}(S)=0$ \cite{Gottsche:1990}: \begin{equation} \label{eq:rank1} h_{1,c_1}(z,\tau;S)=\frac{i}{\theta_1(2z,\tau)\,\eta(\tau)^{b_2(S)-1}}, \end{equation} where the Dedekind eta function $\eta(\tau)$ and Jacobi theta function $\theta_1(z,\tau)$ are defined by: \begin{eqnarray} \label{eq:etatheta} \eta(\tau)\quad \,\,&:=&q^{\frac{1}{24}}\prod_{n=1}^\infty (1-q^n),\non\\ \theta_1(z,\tau)&:=&i q^{\frac{1}{8}}(w^\frac{1}{2}-w^{-\frac{1}{2}})\prod_{n\geq 1}(1-q^n)(1-wq^n)(1-w^{-1}q^n).\non \end{eqnarray} The dependence on $J$ is omitted in Eq. (\ref{eq:rank1}), since all rank 1 torsion free sheaves are stable throughout $C(S)$. Similarly, $J$ is omitted in the following from $h_{r,c_1}(z,\tau;\mathbb{P}^2,J)$, since $b_2(\mathbb{P}^2)=1$ and therefore the BPS invariants do not vary as a function of $J$. For clarity of exposition, $\Sigma_\ell$ is omitted from the arguments of $h_{r,c_1}(z,\tau;\Sigma_\ell,J)$. We will be mainly concerned with the invariants $\bar \Omega(\Gamma,w;J)$ since the generating functions are defined in terms of these invariants. However, some formulas are most naturally phrased in terms of $\mathcal{I}(\Gamma,w;J)$. For example, the product formula of Conjecture \ref{conj:restrictfibre} is a generating function for $\mathcal{I}(\Gamma,w;f)$ and the blow-up formula in Section \ref{sec:projplane} is phrased in terms of $\mathcal{I}^\mu(\Gamma,w;J)$, which are invariants with respect to $\mu$-stability instead of Gieseker stability. \section{Restriction to the fibre of Hirzebruch surfaces} \label{sec:restrictfibre} This subsection deals with the set $M_f(\Gamma)$ of sheaves whose restriction to the (generic) fibre $f$ of $\pi:\Sigma_\ell \to C$ is semi-stable.
Inspired by the existing results for $r=1$ and 2 \cite{Gottsche:1990, Yoshioka:1995} and moduli stack invariants for vector bundles over Riemann surfaces \cite{Harder:1975, Atiyah:1982fa}, a generating function for $r\geq 1$ is proposed, enumerating virtual Poincar\'e functions $\mathcal{I}(\Gamma,w;f)$ of moduli stacks $\mathfrak{M}_f(\Gamma)$ of sheaves whose restriction to the fibre is semi-stable. We do not present a derivation of this generating function based on $\mathfrak{M}_f(\Gamma)$ for $r\geq 3$, nor an analysis of the properties of $\mathfrak{M}_f(\Gamma)$. Section \ref{sec:ruledsurface} computes the BPS invariants starting from these generating functions, and shows that they pass various non-trivial consistency checks implied by the blow-up and wall-crossing formulas. We define the generating function $H_{r,c_1}(z,\tau;f)$ of $\mathcal{I}(\Gamma,w;f)$ by: \begin{equation} H_{r,c_1}(z,\tau;f):=\sum_{c_2} \mathcal{I}(\Gamma,w;f)\, q^{r\Delta(\Gamma)-\frac{\chi(S)}{24}}. \end{equation} The following conjecture gives $H_{r,c_1}(z,\tau;f)$ for any $r\geq 1$ and $c_1\in H_2(\Sigma_\ell,\mathbb{Z})$: \begin{conjecture} \label{conj:restrictfibre} The function $H_{r,c_1}(z,\tau;f)$ is given by: \begin{equation} \label{eq:restrictfibre} H_{r,c_1}(z,\tau;f)=\left\{ \begin{array}{cl} \frac{i\,(-1)^{r-1}\,\eta(\tau)^{2r-3}}{\theta_1(2z,\tau)^2\,\theta_1(4z,\tau)^2\dots\theta_1((2r-2)z,\tau)^2\,\theta_1(2rz,\tau)}, & \mathrm{if}\,\,c_1\cdot f=0\mod r,\quad r\geq 1, \\ 0, & \mathrm{if}\,\,c_1\cdot f\neq 0\mod r, \quad r>1. \end{array}\right. \end{equation} \end{conjecture} The above expressions for $H_{r,c_1}(z,\tau;f)$ are not conjectural for all $(r,c_1)$. Vanishing of $H_{r,c_1}(z,\tau;f)$ for $c_1\cdot f\neq 0\mod r$ is well known. See for example Section 5.3 of \cite{Huybrechts:1996}. The vanishing is a consequence of the fact that all bundles $F$ on $\mathbb{P}^1$ are isomorphic to a sum of line bundles $F\cong \mathcal{O}(d_1)\oplus \mathcal{O}(d_2)\oplus\dots\oplus \mathcal{O}(d_r)$. Therefore, a bundle $F$ on $\mathbb{P}^1$ can only be semi-stable\footnote{Recall that a vector bundle $F$ of rank $r$ and degree $d$ on a curve $C$ is stable (respectively semi-stable) if for every subbundle $F'\subsetneq F$ (with rank $r'$ and degree $d'$) $d'/r'<d/r$ (respectively $d'/r'\leq d/r$).} if its degree $d$ is equal to $0 \mod r$, in which case the degrees of the line bundles are $d_i=d/r$. The degree $d(E_{|f})$ of the restriction of a sheaf $E$ on $\Sigma_\ell$ to $f$ is equal to $c_1(E)\cdot f$. Therefore, the only cases for which $H_{r,c_1}(z,\tau;f)$ does not vanish are those with $c_1\cdot f=0\mod r$. For $r=1$, Eq. (\ref{eq:restrictfibre}) reduces to Eq. (\ref{eq:rank1}). Ref. \cite{Yoshioka:1995} proved the conjecture for $(r,c_1)=(2,f)$, which is now briefly recalled. Ref. \cite{Yoshioka:1995} considers the ruled surface $\tilde{\mathbb{P}}^2$ over a finite field $\mathbb{F}_s$, and utilizes the fact that any vector bundle $F$ can be obtained from $\pi^*\pi_* F$, which is the pullback of a vector bundle on $C$, by successive elementary transformations. An elementary transformation is defined by \cite{Huybrechts:1996}: \begin{definition} Let $D$ be an effective divisor on the surface $S$. If $F$ and $G$ are vector bundles on $S$ and $D$ respectively, then a vector bundle $F'$ on $S$ is obtained by an elementary transformation of $F$ along $G$ if there exists an exact sequence: \begin{equation} 0\to F' \to F\to i_* G\to 0, \end{equation} where $i$ denotes the embedding $D\subset S$.
\end{definition} This shows that the contribution to $h_{2,c_1}(z,\tau;J)$ from $M_f(\Gamma)$ is the generating function of the total set of vector bundles on $C$, multiplied by a factor enumerating the elementary transformations. The total set of vector bundles with $r=2$ on $C$ is enumerated by \cite{Harder:1975}: \begin{equation} \label{eq:zeta2} \frac{s^{-3}}{1-s}\,\zeta_C(2) \end{equation} where $\zeta_{C}(n)$ is the zeta function of the Riemann surface $C=C_0$. One has for general genus $g$: \begin{equation} \zeta_{C_g}(n)=\frac{\prod_{j=1}^{2g}(1-\omega_j s^{-n})}{(1-s^{-n})(1-s^{1-n})}. \end{equation} Multiplication of (\ref{eq:zeta2}) by the factor due to elementary transformations gives \cite{Yoshioka:1995}: \begin{equation} \label{eq:setrestrict} \sum_{c_2} \sum_{E\in M_f(2,mf,c_2)}\frac{t^{c_2}}{\# \mathrm{Aut}(E)}=\frac{s^{-3}}{1-s}\zeta_C(2)\,\prod_{a\geq 1}Z_s(S,s^{2a-2}t^a) Z_s(S,s^{2a}t^a), \end{equation} with $Z_s(S,t)$ the zeta function of the surface $S$: \begin{equation} Z_s(S,t)=\frac{1}{(1-t)(1-st)^{b_2(S)}(1-s^2t)}. \end{equation} The parameter substitutions $q=s^rt$ and $w^2=s$ then give Eq. (\ref{eq:restrictfibre}) (up to an overall monomial in $w$ and $q$). This derivation for $r=2$ indicates that $H_{r,c_1}(z,\tau;f)$ is closely related to the virtual Poincar\'e function of the stack of vector bundles on a Riemann surface $C_g$ with genus $g$ \cite{Harder:1975, Atiyah:1982fa}: \begin{equation} \label{eq:totalset} H_r(z;C_g):=-w^{r^2(1-g)}\frac{(1+w^{2r-1})^{2g}}{1-w^{2r}}\prod_{j=1}^{r-1}\frac{(1+w^{2j-1})^{2g}}{(1-w^{2j})^{2}}. \end{equation} The first term in the $q$-expansion of Eq. (\ref{eq:restrictfibre}) is Eq. (\ref{eq:totalset}) with $g=0$. One could thus understand $H_{r,c_1}(z,\tau;f)$ as an extension of $H_r(z;C_0)$ to a modular infinite product. It is conceivable that Conjecture \ref{conj:restrictfibre} for $r>2$ can be proven in a manner similar to the $r=2$ case. The following sections show that at least for $r=3,4$, it is consistent with various other results. Moreover, it continues to hold for the other Hirzebruch surfaces with $\ell\geq 0$. As an aside we mention the generalization of the conjecture to ruled surfaces $\Sigma_{g,\ell}$ over a Riemann surface $C_g$ with $g> 0$. These surfaces are not rational and the moduli spaces of semi-stable sheaves for these surfaces also have cohomology $H^{p,q}(\mathcal{M}_J(\Gamma),\mathbb{Z})$ for $p\neq q$. In order to capture this more refined information we recall the refinement of Eq. (\ref{eq:totalset}) to the virtual Hodge function \cite{Earl:2000}: \begin{equation} \label{eq:totalset2} H_r(u,v;C_g):=-\frac{(xy)^{r^2(1-g)/2}}{1-x^{r}y^r}\frac{\prod_{j=1}^{r} (1+x^jy^{j-1})^{g}(1+x^{j-1}y^j)^{g}}{\prod_{k=1}^{r-1} (1-x^{k}y^k)^{2}}, \end{equation} with $x:=e^{2\pi i u}$ and $y:=e^{2\pi i v}$.
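Note as a consistency check: setting $u=v=z$, so that $x=y=w$, gives $xy=w^2$ and $(1+x^jy^{j-1})=(1+x^{j-1}y^j)=(1+w^{2j-1})$, and Eq. (\ref{eq:totalset2}) indeed collapses to Eq. (\ref{eq:totalset}).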
The structure of this function directly suggests the following generalization of Conjecture \ref{conj:restrictfibre} for the generating function $H_{r,c_1}(u,v,\tau;f,\Sigma_{g,\ell})$ of virtual Hodge functions $\mathcal{I}(\Gamma,x,y;f)$ of the moduli stack $\mathfrak{M}_f(\Gamma;\Sigma_{g,\ell})$: \begin{conjecture} \label{conj:restrictfibre2} The function $H_{r,c_1}(u,v,\tau;f,\Sigma_{g,\ell})$ is given by: \begin{equation} \label{eq:restrictfibre2} \left\{ \begin{array}{cl} \frac{i\,(-1)^{r-1}\,\eta(\tau)^{2r(1-g)-3}}{\theta_1(r(u+v),\tau)} \frac{\prod_{j=1}^{r} \theta_1(ju +(j-1)v+\frac{1}{2},\tau)^{g}\, \theta_1((j-1)u +jv+\frac{1}{2},\tau)^{g}}{\prod_{k=1}^{r-1} \theta_1(k(u+v),\tau)^2}, & \mathrm{if}\,\,c_1\cdot f=0\mod r,\quad r\geq 1, \\ 0, & \mathrm{if}\,\,c_1\cdot f\neq 0\mod r, \quad r>1. \end{array}\right. \end{equation} \end{conjecture} \section{BPS invariants of Hirzebruch surfaces} \label{sec:ruledsurface} \subsection{BPS invariants for a suitable polarization} \label{subsec:suitable} This subsection computes for $c_1\cdot f= 0 \mod r$ the BPS invariants of $\Sigma_\ell$ for a polarization $J\in C(\Sigma_\ell)$ sufficiently close to $J_{0,1}=f$. The BPS invariants are for this choice of $J$ independent of $\ell$. ``Sufficiently close'' depends on the topological classes of the sheaf. Generalizing Def. 5.3.1 of \cite{Huybrechts:1996} to general $r\geq 1$, we define a $\Gamma$-suitable polarization by: \begin{definition} \label{def:suitable} A polarization $J$ is called $\Gamma$-suitable if and only if: \begin{itemize} \item[-] $J$ does not lie on a wall for $\Gamma=(r,\mathrm{ch}_1,\mathrm{ch}_2)$ and, \item[-] for any $J$-semi-stable subsheaf $F'\subset F$ with $\Gamma(F)=\Gamma$, $(\mu(F')-\mu(F))\cdot f=0$ or $(\mu(F')-\mu(F))\cdot f$ and $(\mu(F')-\mu(F))\cdot J$ have the same sign. \end{itemize} \end{definition} We will keep the dependence on the Chern classes implicit in the following and denote a suitable polarization by $J_{\varepsilon,1}$ with $\varepsilon$ positive but sufficiently small. It follows from the definition that if $J_{\varepsilon,1}$ is a $\Gamma(F)$-suitable polarization, and $F_{|f}$ is unstable, then $F$ is $\mu$-unstable. Thus we need to subtract from $M_f(\Gamma)$, i.e. the set of sheaves with topological classes $\Gamma$ whose restriction to the fibre $f$ is semi-stable, the subset of $M_f(\Gamma)$ which is Gieseker unstable for $J_{\varepsilon,1}$. We continue by explaining this for $r=2$. Then the general formula is proposed for the invariant enumerating extended HN-filtrations, which is subsequently applied to $r=3$. A crucial tool to obtain the invariants enumerating semi-stable sheaves is the Harder-Narasimhan filtration \cite{Harder:1975}, which can be defined for either Gieseker or $\mu$-stability. To define these filtrations, let $\varphi$ denote either Gieseker, $\varphi(F)=p_J(F,n)$, or $\mu$-stability, $\varphi(F)=\mu(F)\cdot J$. Then: \begin{definition}\label{def:HNfiltr} A Harder-Narasimhan filtration (HN-filtration) with respect to the stability condition $\varphi$ is a filtration $0\subset F_1 \subset F_2\subset \dots \subset F_\ell=F$ of the sheaf $F$ such that the quotients $E_i=F_i/F_{i-1}$ are semi-stable with respect to $\varphi$ and satisfy $\varphi(E_i)>\varphi(E_{i+1})$ for all $i$.
\end{definition} Since $\mu$-stability is coarser than Gieseker stability, the length $\ell_{\mathrm{G}}(F)$ of the HN-filtration with respect to Gieseker stability is in general larger than the length $\ell_\mu(F)$ of its HN-filtration with respect to $\mu$-stability. Using the additive and multiplicative properties of motivic invariants discussed below Eq. (\ref{eq:BPSinvariant}), one can determine the BPS invariants for a suitable polarization. The Poincar\'e function of the stack of HN-filtrations with respect to Gieseker stability and prescribed $\Gamma_i=\Gamma(E_i)$ is \cite{Yoshioka:1996}: \begin{equation} \label{eq:setHN} w^{-\sum_{i<j} r_ir_j (\mu_j-\mu_i)\cdot K_S} \prod_{i=1}^\ell \mathcal{I}(\Gamma_i,w;J), \end{equation} where $r_ir_j (\mu_j-\mu_i)\cdot K_S$ is the Euler form for semi-stable sheaves on the projective surface $S$. One could define a similar function for the stack of filtrations with respect to $\mu$-stability. For the generalization to Hodge numbers, one replaces $w^2$ by $xy$ in $w^{-\sum_{i<j} r_ir_j (\mu_j-\mu_i)\cdot K_S}$ and $\mathcal{I}(\Gamma_i,w;J)$ by $\mathcal{I}(\Gamma_i,x,y;J)$. For $(r,c_1)=(2,f)$, the only HN-filtrations with respect to $J_{\varepsilon,1}$ have length $\ell_\mathrm{G}=2$. Denoting $c_1(E_2)=bC-af$, and thus $c_1(E_1)=-bC+(a+1)f$, one easily verifies that the HN-filtrations correspond to $a\geq 0$ and $b=0$. Since $b=0$, the $K_S$-dependence in Eq. (\ref{eq:setHN}) does not lead to a dependence on $\ell$. Using that Eq. (\ref{eq:rank1}) is also the generating function of $\mathcal{I}(\Gamma,w;J)$ for $r=1$, Eq. (\ref{eq:setHN}) becomes: \begin{equation} \label{eq:contHN1} \sum_{a \geq 0} w^{-2(2a+1)}\, h_{1,0}(z,\tau)^2 =- \frac{w^2}{1-w^4}\,h_{1,0}(z,\tau)^2, \end{equation} where we assumed $|w|>1$. Subtracting this from Eq. (\ref{eq:restrictfibre}) for $r=2$ gives: \begin{equation} h_{2,f}(z,\tau; J_{\varepsilon,1})=\frac{-1}{\theta_1(2z,\tau)^2\,\eta(\tau)^2}\left(\frac{i\,\eta(\tau)^3}{\theta_1(4z,\tau)}+\frac{w^2}{1-w^4}\right), \end{equation} which is easily verified to enumerate invariants $\bar \Omega(\Gamma,w;J_{\varepsilon,1})$ satisfying the expected properties mentioned below Eq. (\ref{eq:BPSinvariant}). For $(r,c_1)=(2,0)$, the HN-filtrations with respect to $J_{\varepsilon,1}$ and $\ell_\mathrm{G}=2$ split naturally into two subsets: the first set has length $\ell_\mu=2$ with respect to $\mu$-stability, and the second set has $\ell_\mu=1$. Similarly to (\ref{eq:contHN1}), the first set gives rise to: \begin{equation} \label{eq:r2unset} -\frac{1}{1-w^4}\,h_{1,0}(z,\tau)^2 , \end{equation} and the second set to: \begin{equation} \label{eq:r2secset} \frac{1}{2}h_{1,0}(z,\tau)^2-\frac{1}{2} \sum_{n\geq 0} \Omega((1,0,n),w)^2\,q^{2n}, \end{equation} where the second term subtracts from the first the Gieseker semi-stable sheaves which should not be subtracted from $H_{2}(z,\tau;f)$. Subtraction of Eqs. (\ref{eq:r2unset}) and (\ref{eq:r2secset}) from $H_{2}(z,\tau;f)$ gives the generating function of $\mathcal{I}(\,(2,0,c_2),w;J)$, which corresponds by Eq. (\ref{eq:stackinvariant}) to: \begin{equation} \label{eq:h20} h_{2,0}(z,\tau; J_{\varepsilon,1})=\frac{-1}{\theta_1(2z,\tau)^2\,\eta(\tau)^2} \left(\frac{i\,\eta(\tau)^3}{\theta_1(4z,\tau)}+\frac{1}{1-w^4}-\frac{1}{2}\right). \end{equation} Again one can verify that the invariants satisfy the expected integrality properties. Remark 4.6 of Ref.
\cite{Yoshioka:1995} determines the Betti numbers of the intersection cohomology of the singular moduli spaces and arrives at the same generating function (\ref{eq:h20}). The Betti numbers for the intersection cohomology of the moduli space of semi-stable vector bundles on Riemann surfaces were earlier computed in Ref. \cite{Kirwan:1986}. The above procedure gives these Betti numbers with much less effort. For example, one can easily verify that \begin{equation} H_2(z,C_g)+\left(\frac{1}{1-w^4}-\frac{1}{2} \right)H_1(z,C_g)^2, \end{equation} with $H_r(z,C_g)$ as in Eq. (\ref{eq:totalset}), is equivalent to Proposition 5.9 in the Corrigendum to \cite{Kirwan:1986}. Since the invariants $\mathcal{I}(\Gamma,w;J)$ are not so compatible with modular generating functions for $r\geq 2$, it is useful to work as much as possible with the invariants $\bar \Omega(\Gamma,w;J)$. To this end an extension of the HN-filtration is necessary: \begin{definition} \label{def:extHNfiltr} An {\rm extended} Harder-Narasimhan filtration (with respect to Gieseker stability) is a filtration $0\subset F_1\subset F_2\subset \dots \subset F_\ell=F$ whose quotients $E_i=F_i/F_{i-1}$ are semi-stable and satisfy $p_J(E_i,n)\succeq p_J(E_{i+1},n)$. \end{definition} An example of an extended Harder-Narasimhan filtration can be obtained by considering a Jordan-H\"older filtration of the semi-stable quotients of a standard HN-filtration. Recall that a Jordan-H\"older filtration is a filtration $0\subset F_1\subset F_2\subset \dots \subset F_\ell=F$ of a semi-stable bundle $F$ such that the quotients $E_i=F_i/F_{i-1}$ are stable and satisfy $p_J(E_i,n)=p_J(F,n)$. However, not all extended HN-filtrations are obtained this way since Definition \ref{def:extHNfiltr} allows for semi-stable quotients. It follows from Eq. (\ref{eq:stackinvariant}) that the natural invariant $\bar \Omega(\{ \Gamma_i \};w, J)$ associated to the stack $\mathfrak{M}_J(\{\Gamma_i\})$ of extended HN-filtrations with prescribed Chern classes $\Gamma_i=\Gamma(E_i)$ is: \begin{equation} \label{eq:setfiltration} \bar \Omega(\{ \Gamma_i \};w, J):=\frac{1}{|\mathrm{Aut} (\{\Gamma_i \};J) | }w^{-\sum_{i<j} r_ir_j (\mu_j-\mu_i)\cdot K_S} \prod_{i=1}^\ell \bar \Omega(\Gamma_i,w; J). \end{equation} The number $|\mathrm{Aut} (\{\Gamma_i \};J)|$ is equal to $\prod_am_a!\,$, where $m_a$ is the total number of quotients $E_i$ with equal reduced Hilbert polynomial $p_J(E_{a},n)$. Thus $|\mathrm{Aut} (\{\Gamma_i \};J)|=1$ only for HN-filtrations. If the sum over all extended HN-filtrations contains a group $\{ E_i \}$ with equal $p_J(E_{i},n)$ but unequal $\Gamma_i$, the factor $\frac{1}{|\mathrm{Aut} (\{\Gamma_i \};J)|}$ divides out a number of permutations. To avoid this overcounting, one could introduce a further ordering on the vectors $\Gamma_i$, which should be obeyed by the set of filtrations to be summed over. Then one would divide by $|\mathrm{Aut}(\{\Gamma_i\})|=\prod_{p} n_p!$, where $n_p$ is the number of equal vectors $\Gamma_p$ appearing among the $\Gamma_i$, $i=1,\dots,\ell$. This is the origin of the ``Boltzmann statistics'' in wall-crossing formulas \cite{Manschot:2010qz} in the work of Joyce \cite{Joyce:2004}.
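As a simple illustration: an extended HN-filtration of length $\ell=3$ whose three rank 1 quotients all have the same reduced Hilbert polynomial has $|\mathrm{Aut}(\{\Gamma_i\};J)|=3!=6$; this is the origin of the factor $\frac{1}{6}$ multiplying $h_{1,0}(z,\tau)^3$ in the proof of the proposition below.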
The functions $h_{r,c_1}(z,\tau;J_{\varepsilon,1})$ with $c_1\cdot f=0\mod r$ are given by the recursive formula \begin{equation} \label{eq:recursion} h_{r,c_1}(z,\tau;J_{\varepsilon,1})=H_{r,c_1}(z,\tau;f)-\sum_{\mathrm{ch}_2}\sum_{ \Gamma_1+\dots+\Gamma_\ell=(r,c_1,\mathrm{ch}_2) \atop p_J(\Gamma_i,n)\succeq p_J(\Gamma_{i+1},n),\, \ell>1} \bar \Omega(\{ \Gamma_i \};w, J_{\varepsilon,1})\,q^{r\Delta(\Gamma)-\frac{r\chi(S)}{24}}, \end{equation} with $\Delta(\Gamma)$ given in terms of $\Gamma_i$ by Eq. (\ref{eq:discriminant}) and $H_{r,c_1}(z,\tau;f)$ defined by Eq. (\ref{eq:restrictfibre}). We continue by applying Eq. (\ref{eq:setfiltration}) to compute $h_{3,c_1}(z,\tau;J_{\varepsilon,1})$, with $c_1=f$ and $0$. One obtains: \begin{proposition} \begin{eqnarray} \label{eq:h31eps} h_{3,f}(z,\tau;J_{\varepsilon,1})&=&\frac{i\,\eta(\tau)^3}{\theta_1(2z,\tau)^2\,\theta_1(4z,\tau)^2\,\theta_1(6z,\tau)} +\frac{w^2+w^4}{1-w^6}\frac{1}{\theta_1(2z,\tau)^3\,\theta_1(4z,\tau)} \\ &&-\frac{w^4}{(1-w^4)^2}\frac{i}{\theta_1(2z,\tau)^3\eta(\tau)^3}, \non \end{eqnarray} \begin{eqnarray} \label{eq:h30eps} h_{3,0}(z,\tau;J_{\varepsilon,1})&=&\frac{i\,\eta(\tau)^3}{\theta_1(2z,\tau)^2\,\theta_1(4z,\tau)^2\,\theta_1(6z,\tau)} +\frac{1+w^6}{1-w^6}\,\frac{1}{\theta_1(2z,\tau)^3\,\theta_1(4z,\tau)}\\ && - \left( \frac{w^4}{(1-w^4)^2}+\frac{1}{3}\right)\,\frac{i}{\theta_1(2z,\tau)^3\,\eta(\tau)^3}. \non \end{eqnarray} \end{proposition} \begin{proof} We start by proving Eq. (\ref{eq:h31eps}). Denote the length of an extended HN-filtration by $\ell$, its length with respect to $\mu$-stability by $\ell_\mu$, and its length with respect to Gieseker stability by $\ell_\mathrm{G}$. We first consider the unstable filtrations with $\ell=\ell_\mu=2$, and parametrize $c_1(E_2)=bC-af$. The relevant filtrations have $a\geq 0$ and $b=0$. There are four possibilities to be distinguished: whether $r(E_1)=1$ or 2, and whether the quotient with rank 2 has $c_1=0$ or $f\mod 2$. Adding up these contributions, one obtains: \begin{equation} \label{eq:h31cont1} -\frac{w^4+w^{8}}{1-w^{12}}\,h_{1,0}(z,\tau)\,h_{2,0}(z,\tau;J_{\varepsilon,1})-\frac{w^2+w^{10}}{1-w^{12}}\,h_{1,0}(z,\tau)\,h_{2,f}(z,\tau;J_{\varepsilon,1}). \end{equation} The filtrations with $\ell=3$ consist of three subsets: one set with $\ell_\mu=3$, one with $\ell_\mu=2$ but $\ell_\mathrm{G}=3$, and one with $\ell_\mathrm{G}=2$. Parametrizing $c_1(E_i)=b_iC-a_if$, the first set is given by $a_i-a_{i+1}>0$, $\sum_{i=1}^3 a_i=1$ and $b_i=0$. These are counted by: \begin{equation} \label{eq:h31cont2} \sum_{k_1,k_2>0\atop k_2=k_1-1\mod 3} w^{-4(k_1+k_2)}\,h_{1,0}(z,\tau)^3=\frac{w^4}{(1-w^4) (1-w^{12})}\,h_{1,0}(z,\tau)^3. \end{equation} For the second and third sets, one needs to distinguish between equality of the stability condition of $E_2$ with $E_1$ or $E_3$. These two sets are enumerated by: \begin{equation} \label{eq:muns} -\frac{1}{2}\frac{w^4+w^8}{1-w^{12}}\,h_{1,0}(z,\tau)^3. \end{equation} Note that the factor $\frac{1}{|\mathrm{Aut}(\{\Gamma_i\};J_{\varepsilon,1})|}$ naturally combines the contributions of filtrations with $\ell_\mu<\ell$. Another observation is that the term $-\frac{1}{2}$ in the second factor of $h_{2,0}(z,\tau;J_{\varepsilon,1})$ (\ref{eq:h20}) cancels against (\ref{eq:muns}) in the total sum. After subtraction of the terms (\ref{eq:h31cont1})-(\ref{eq:muns}) from Eq. (\ref{eq:restrictfibre}) for $r=3$, and writing the whole series in terms of modular functions, one obtains (\ref{eq:h31eps}).
For $(r,c_1)=(3,0)$, one needs to subtract the following terms: \begin{itemize} \item[-] due to unstable filtrations with $\ell=\ell_\mu=2$: \begin{equation} -\frac{2}{1-w^{12}}\, h_{1,0}(z,\tau)\,h_{2,0}(z,\tau;J_{\varepsilon,1})-\frac{2w^6}{1-w^{12}}\, h_{1,0}(z,\tau)\,h_{2,f}(z,\tau;J_{\varepsilon,1}),\non \end{equation} \item[-] due to unstable filtrations with $\ell=2$, $\ell_\mu=1$ and $\ell_\mathrm{G}=1$ or 2: \begin{equation} \frac{2}{2}\,h_{1,0}(z,\tau)\,h_{2,0}(z,\tau;J_{\varepsilon,1}), \non \end{equation} \item[-] due to unstable filtrations with $\ell=\ell_\mu=3$: \begin{equation} \frac{1+w^{12}}{(1-w^8)(1-w^{12})}\,h_{1,0}(z,\tau)^3, \non \end{equation} \item[-] due to unstable filtrations with $\ell=3$, $\ell_\mu=2$ and $\ell_\mathrm{G}=2$ or 3: \begin{equation} -\frac{2}{2}\frac{1}{1-w^{12}}\, h_{1,0}(z,\tau)^3, \non \end{equation} \item[-] due to unstable filtrations with $\ell=3$, $\ell_\mu=1$ and $1\leq \ell_\mathrm{G}\leq 3$: \begin{equation} \frac{1}{6}\, h_{1,0}(z,\tau)^3.\non \end{equation} \end{itemize} Subtracting the terms above from (\ref{eq:restrictfibre}) gives (\ref{eq:h30eps}). Subtracting further $\frac{1}{3}h_{1,0}(3z,3\tau)=\frac{i}{3\,\theta_1(6z,3\tau)\,\eta(3\tau)}$ from (\ref{eq:h30eps}) provides integer invariants in agreement with the definition (\ref{eq:refw}). \end{proof} The recursive procedure explained above can be solved, such that $h_{r,c_1}(z,\tau;J_{\varepsilon,1})$ can be directly expressed in terms of the $H_{r'}(z,\tau;f)$ with $r'\leq r$, without first computing the $h_{r',c_1}(z,\tau;J_{\varepsilon,1})$, which moreover gives more compact expressions. The solution follows from Ref. \cite{Zagier:1996} (which solves the analogous recursion for vector bundles over Riemann surfaces); combined with Eq. (\ref{eq:inversestackinv}), one obtains: \begin{eqnarray} h_{r,c_1}(z,\tau;J_{\varepsilon,1})&=&\sum_{(r_1,c_{1,1})+\dots+(r_m ,c_{1,m }) =(r,c_1), \atop \mu_i\cdot J_{\varepsilon,1} \geq \mu_{i+1}\cdot J_{\varepsilon,1}} \frac{(-1)^{m-1}}{m} w^{-\sum_{i<j} r_ir_j(\mu_j-\mu_i) \cdot K_S} \prod_{i=1}^m H_{r_i,0}(z,\tau;f) \non \\ &=& \sum_{(r_1,a_1)+\dots+(r_m, a_m ) =(r,c_1\cdot C), \atop a_i\geq a_{i+1}} \frac{(-1)^{m-1}}{m} w^{-2\sum_{i<j} r_ir_j(a_j-a_i)} \prod_{i=1}^m H_{r_i,0}(z,\tau;f) \end{eqnarray} This becomes after carrying out the sums over $a_i$ \cite{Zagier:1996}: \begin{eqnarray} \label{eq:solvrecursion} h_{r,-af}(z,\tau;J_{\varepsilon,1}) &=&\sum_{(r_1,a_1)+\dots+(r_m,a_m)=(r,a)\atop a_i/r_i=a/r} \frac{(-1)^{m-1}}{m} \non \\ &&\prod_{i=1}^m \left( \sum_{ r_1+\dots +r_\ell=r_i} \frac{w^{2M(r_1,\dots ,r_\ell;a_i/r_i)}}{\left(1-w^{2(r_1+r_2)}\right)\dots \left(1-w^{2(r_{\ell-1}+r_\ell)}\right)} H_{r_1,0}(z,\tau;f) \dots H_{r_\ell,0}(z,\tau;f)\right), \end{eqnarray} where \begin{equation} M(r_1,\dots ,r_\ell;\lambda)=\sum_{j=1}^{\ell-1}(r_j+r_{j+1})\,\{ (r_1+\dots + r_j) \lambda \} , \end{equation} with $\{ \lambda \}:=\lambda-\lfloor \lambda \rfloor$. One can verify that Eq. (\ref{eq:solvrecursion}) for $r=3$ is in agreement with Eqs. (\ref{eq:h31eps}) and (\ref{eq:h30eps}).
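The combinatorial structure of Eq. (\ref{eq:solvrecursion}) is straightforward to implement symbolically. The following minimal Python sketch is an illustration under stated assumptions, not part of the original computation: it uses the sympy library, and the symbols H1, H2, \dots{} stand for the functions $H_{r,0}(z,\tau;f)$, which are not expanded. It evaluates the inner sum over compositions of $r_i$ for a given slope $\lambda=a_i/r_i$: \begin{verbatim}
import sympy as sp

w = sp.symbols('w')

def compositions(n):
    # ordered tuples of positive integers summing to n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def M(rs, lam):
    # M(r_1,...,r_l; lambda) = sum_j (r_j + r_{j+1}) {(r_1+...+r_j) lambda}
    total, partial = 0, 0
    for j in range(len(rs) - 1):
        partial += rs[j]
        total += (rs[j] + rs[j + 1]) * sp.frac(partial * lam)
    return total

def inner_sum(ri, lam):
    # the bracketed sum over compositions of r_i in Eq. (solvrecursion);
    # H_{r,0}(z,tau;f) is kept as the formal symbol H<r>
    expr = 0
    for rs in compositions(ri):
        denom = sp.prod([1 - w**(2 * (rs[k] + rs[k + 1]))
                         for k in range(len(rs) - 1)])
        Hs = sp.prod([sp.Symbol('H%d' % r0) for r0 in rs])
        expr += w**(2 * M(rs, lam)) * Hs / denom
    return sp.together(expr)

print(inner_sum(2, sp.Rational(0)))   # H2 + H1**2/(1 - w**4)
\end{verbatim} For $r_i=2$ and $\lambda=0$ this returns $H_2+H_1^2/(1-w^4)$; combined with the $m=2$ term $-\frac{1}{2}H_1^2$ of the outer sum, it reproduces Eq. (\ref{eq:h20}).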
As an example we give here $h_{4,0}(z,\tau;J_{\varepsilon,1})$: \begin{eqnarray} h_{4,0}(z,\tau;J_{\varepsilon,1})&=&H_{4,0}(z,\tau;f)+\frac{1}{2}\frac{1+w^8}{1-w^8}\,H_{2,0}(z,\tau;f)^2+\frac{1+w^8}{1-w^8}\,H_{1,0}(z,\tau;f)\, H_{3,0}(z,\tau;f)\non \\ &&+\frac{1-w^{16}}{(1-w^4)(1-w^6)^2}\,H_{1,0}(z,\tau;f)^2\,H_{2,0}(z,\tau;f)+\frac{1}{4}\frac{1-w^{16}}{(1-w^4)^4}H_{1,0}(z,\tau;f)^4, \end{eqnarray} which is to be compared with: \begin{eqnarray} h_{4,0}(z,\tau;J_{\varepsilon,1})&=&H_{4,0}(z,\tau;f)-\left( -\frac{w^{12}}{(1-w^8)\,(1-w^{12})^2} +\frac{1}{2}\frac{1+w^{24}}{(1-w^{12})\, (1-w^{24})}\right. \non \\ &&\left.\qquad -\frac{1}{3}\frac{1}{1-w^{24}}-\frac{1}{4}\frac{1}{1-w^{16}}+\frac{1}{24}\right)h_{1,0}(z,\tau)^4\non\\ &&-\left(\frac{2(1+w^{20})}{(1-w^{16})\,(1-w^{24})}+\frac{1+w^{24}}{(1-w^{12})\,(1-w^{24})}-\frac{2}{1-w^{24}} \right. \non \\ && \left.\qquad -\frac{1}{1-w^{16}}+\frac{1}{2} \right) h_{1,0}(z,\tau)^2 \,h_{2,0}(z,\tau;J_{\varepsilon,1})\non \\ &&-\left( \frac{2\,(w^{10}+w^{30})}{(1-w^{16})\,(1-w^{24})}+\frac{2\,w^{18}}{(1-w^{12})\,(1-w^{24})} \right)\,h_{1,0}(z,\tau)^2 \,h_{2,f}(z,\tau;J_{\varepsilon,1})\non \\ &&-\left( -\frac{1}{1-w^{16}}+\frac{1}{2}\right)\, h_{2,0}(z,\tau;J_{\varepsilon,1})^2 - \left( -\frac{w^8}{1-w^{16}}\right)\,h_{2,f}(z,\tau;J_{\varepsilon,1})^2\non \\ &&-\left(-\frac{2}{1-w^{24}} +1\right)\, h_{1,0}(z,\tau)\,h_{3,0}(z,\tau;J_{\varepsilon,1})\non \\ && -\left(-\frac{2\,(w^8+w^{16})}{1-w^{24}} \right) \, h_{1,0}(z,\tau)\,h_{3,f}(z,\tau;J_{\varepsilon,1}).\non \end{eqnarray} \subsection{Wall-crossing} This subsection explains how to compute $h_{r,c_1}(z,\tau;J)$ for a generic choice of polarization $J$ from the generating functions for $J=J_{\varepsilon,1}$. The BPS invariants $\Omega(\Gamma,w;J)$ for $J$ differ in general from those for $J=J_{\varepsilon,1}$, since sheaves might become semi-stable or unstable under a change of polarization. The change of the BPS invariants depends on the Hirzebruch surface $\Sigma_\ell$ through the canonical class $K_{\Sigma_\ell}$. Knowing how $h_{r,c_1}(z,\tau;J)$ varies in the ample cone $C(\Sigma_1)$ is particularly important for the computation of $h_{r,c_1}(z,\tau;\mathbb{P}^2)$ since the blow-up formula is to be applied for the polarization $J_{1,0}=\phi^*H$, where $H$ is the hyperplane class of $\mathbb{P}^2$ (see the next section). The change of the invariants can be obtained recursively from Eq. (\ref{eq:setfiltration}) after determining which filtrations change from semi-stable to unstable or vice versa. More quantitatively, one has for $J$ and $J'$ sufficiently close: \begin{eqnarray} \label{eq:wallcrossing} \Delta \bar \Omega(\Gamma,w;J\to J')&=&\sum_{{\Gamma=\Gamma_1+\dots+\Gamma_\ell ,\atop p_{J'}(\Gamma_{i})\preceq p_{J'}(\Gamma_{i+1}),} \atop p_J(\Gamma_{i})\succeq p_J(\Gamma_{i+1})}\frac{1}{|\mathrm{Aut}(\{\Gamma_i\};J)|} \, w^{-\sum_{i<j} r_ir_j (\mu_j-\mu_i)\cdot K_S}\prod_{i=1}^\ell \bar \Omega(\Gamma_i;w,J)\non\\ &&-\sum_{{\Gamma=\Gamma_1+\dots+\Gamma_\ell ,\atop p_{J'}(\Gamma_{i})\succeq p_{J'}(\Gamma_{i+1}),} \atop p_J(\Gamma_{i})\preceq p_J(\Gamma_{i+1})}\frac{1}{|\mathrm{Aut}(\{\Gamma_i\};J')|} \, w^{-\sum_{i<j} r_ir_j (\mu_j-\mu_i)\cdot K_S}\prod_{i=1}^\ell\bar \Omega(\Gamma_i;w,J'), \end{eqnarray} with $|\mathrm{Aut}(\{\Gamma_i\};J)|$ defined below Eq. (\ref{eq:setfiltration}). Note that the invariants are evaluated on both sides of the wall. This makes the formula recursive, as it requires knowledge of $\Omega(\Gamma_i,w;J')$; but since we are only interested in small rank, this is not a serious obstacle.
A solution to the recursion is given by Theorem 6.24 of \cite{Joyce:2004}. Other ways to determine $\Omega(\Gamma_i,w;J')$ in terms of $\Omega(\Gamma_i,w;J)$ are the graded Lie algebra of Ref. \cite{Kontsevich:2008} or the Higgs branch analysis of Ref. \cite{Manschot:2010qz} based on Ref. \cite{Reineke:2002}. Since generating functions capturing wall-crossing are already described in the literature, the explicit expressions of $h_{r,c_1}(z,\tau;J_{m,n})$ for $r=2$ and $3$ are presented here without further details. We have for $r=2$ \cite{Yoshioka:1994, Gottsche:1996}: \begin{eqnarray} h_{2,\beta C -\alpha f}(z,\tau; J_{m,n})&=&h_{2,\beta C -\alpha f}(z,\tau;J_{\varepsilon,1})+\non \\ &&\textstyle{\frac{1}{2}} \sum_{a,b\in \mathbb{Z}}\textstyle{\frac{1}{2}} \left(\, \mathrm{sgn}((2b-\beta)n-(2a-\alpha )m )- \mathrm{sgn}((2b-\beta)-(2a-\alpha ) \varepsilon)\, \right)\non\\ &&\times \left(w^{-(\ell-2)(2b-\beta)-2(2a-\alpha)}-w^{(\ell-2)(2b-\beta)+2(2a-\alpha)} \right)\,q^{\frac{\ell}{4}(2b-\beta)^2+\frac{1}{2}(2b-\beta)(2a-\alpha)}\,h_{1,0}(z,\tau)^2,\non \end{eqnarray} and for $r=3$ \cite{Manschot:2010nc, Manschot:2010xp}: \begin{eqnarray} h_{3,\beta C -\alpha f}(z,\tau; J_{m,n})&=&h_{3,\beta C -\alpha f}(z,\tau;J_{\varepsilon,1})+\non \\ && \sum_{a,b\in \mathbb{Z}}\textstyle{\frac{1}{2}} \left(\, \mathrm{sgn}((3b-2\beta)n-(3a-2\alpha )m )- \mathrm{sgn}((3b-2\beta)-(3a-2\alpha ) \varepsilon)\, \right) \non \\ &&\times \left(w^{-(\ell-2)(3b-2\beta)-2(3a-2\alpha)}-w^{(\ell-2) (3b-2\beta)+2(3a-2\alpha)} \right)\,q^{\frac{\ell}{12}(3b-2\beta)^2+\frac{1}{6}(3b-2\beta)(3a-2\alpha)}\non\\ &&\times h_{2,bC-af}(z,\tau;\Sigma_\ell, J_{|3b-2\beta|,|3a-2\alpha|})\,h_{1,0}(z,\tau).\non \end{eqnarray} \section{BPS invariants of $\mathbb{P}^2$} \label{sec:projplane} The Hirzebruch surface $\Sigma_1$ can be obtained as a blow-up $\phi:\Sigma_1\to \mathbb{P}^2$ of the projective plane $\mathbb{P}^2$. Interestingly, we can compute the BPS invariants of $\mathbb{P}^2$ from those of $\Sigma_1$ using the blow-up formula. This formula is a remarkable result which states that the ratio of generating functions of BPS invariants of a surface $S$ and its blow-up $\tilde S$ is a (theta) function independent of $S$ or $J$ \cite{Yoshioka:1996, Gottsche:1998, Li:1999}. The underlying reason for this relation is that every semi-stable sheaf on $\tilde S$ can be obtained from one on $S$ by an elementary transformation along the exceptional divisor of the blow-up. Two subtle issues of the blow-up formula are (Proposition 3.4 of \cite{Yoshioka:1996}): \begin{itemize} \item[-] the stability condition is $\mu$-stability rather than Gieseker stability, \item[-] it involves the virtual Poincar\'e functions $\mathcal{I}(\Gamma,w;J)$ of the moduli stack. \end{itemize} To take these two issues into account let $\bar \Omega^\mu(\Gamma,w;J)$ be the invariant enumerating $\mu$-semi-stable sheaves, which is obtained from $\bar \Omega(\Gamma,w;J)$ by addition of the Gieseker unstable sheaves which are $\mu$-semi-stable using Eq. (\ref{eq:setfiltration}). Moreover, let $\mathcal{I}^\mu(\Gamma,w;J)$ be the corresponding virtual Poincar\'e function with generating function $H^\mu_{r,c_1}(z,\tau;\tilde S,J)$. The blow-up formula now reads \cite{Yoshioka:1996, Gottsche:1998, Li:1999}: \begin{proposition} \label{prop:blowup} Let $S$ be a smooth projective surface and $\phi: \tilde S \to S$ the blow-up at a non-singular point, with $C_\mathrm{e}$ the exceptional divisor of $\phi$.
The generating functions $H^\mu_{r,c_1}(z,\tau;S,J)$ and $H^\mu_{r,c_1}(z,\tau;\tilde S,J)$ are related by the ``blow-up formula'': \begin{equation} \label{eq:blowup} H^\mu_{r,\phi^* c_1-kC_\mathrm{e}}(z,\tau;\tilde S,\phi^*J)=B_{r,k}(z,\tau)\, H^\mu_{r,c_1}(z,\tau;S,J), \end{equation} with \begin{equation} B_{r,k}(z,\tau)=\frac{1}{\eta(\tau)^r}\sum_{\sum_{i=1}^ra_i=0 \atop a_i \in \mathbb{Z}+\frac{k}{r}} q^{-\sum_{i<j}a_ia_j}w^{\sum_{i<j}(a_i-a_j)}.\non \end{equation} The blow-up formula for generating functions of Hodge numbers is identical except with the replacement of $z$ by $\textstyle{\frac{1}{2}}(u+v)$ in $B_{r,k}(z,\tau)$. \end{proposition} \noindent The two relevant cases for this article are $r=2,3$: \begin{equation} B_{2,k}(z,\tau)=\frac{\sum_{n\in \mathbb{Z}+k/2} q^{n^2}w^n}{\eta(\tau)^2},\qquad B_{3,k}(z,\tau)=\frac{\sum_{m,n \in \mathbb{Z}+k/3} q^{m^2+n^2+mn}w^{4m+2n}}{\eta(\tau)^3}. \end{equation} Note that $B_{r,k}(z,\tau)$ does not depend on $S$ or $J$. The computation of $h_{r,c_1}(z,\tau;\mathbb{P}^2)$ from $h_{r,\phi^* c_1-kC}(z,\tau;\Sigma_1)$ in general involves the following three steps: \begin{enumerate} \item Compute $h^\mu_{r,\phi^* c_1-kC}(z,\tau; J_{1,0})$ by adding to $h_{r,\phi^* c_1-kC}(z,\tau; J_{1,\varepsilon})$ terms due to sheaves on $\Sigma_1$ which are not Gieseker stable for $J_{1,\varepsilon}$, but $\mu$-semistable for $\phi^* H=J_{1,0}$, and subsequently compute $H^\mu_{r,\phi^* c_1-kC}(z,\tau; J_{1,0})$ by adding the terms prescribed by Eq. (\ref{eq:stackinvariant}). The generating functions and the factorial factors in Eq. (\ref{eq:stackinvariant}) combine these two steps very naturally into one. \item Divide by $B_{r,k}(z,\tau)$ to obtain $H^\mu_{r,c_1}(z,\tau; \mathbb{P}^2)$. \item Determine $h_{r,c_1}(z,\tau; \mathbb{P}^2)$ from $H^\mu_{r,c_1}(z,\tau; \mathbb{P}^2)$ by reversing step (1). \end{enumerate} For $c_1=\beta C+f$, $\beta=0$ or $1$, and $J=J_{1,0}$, $\mu$-stability is equivalent to Gieseker stability, and therefore steps (1) and (3) become trivial. For example, one can compute $h_{3,H}(z,\tau;\mathbb{P}^2)$ starting from $h_{3,C+f}(z,\tau;J_{1,0})$ as was done in Ref. \cite{Manschot:2010nc}, or from $h_{3,f}(z,\tau;J_{1,0})$ which requires Conjecture \ref{conj:restrictfibre} and Eq. (\ref{eq:setfiltration}). One can verify that the first terms of both $q$-expansions of $h_{3,H}(z,\tau;\mathbb{P}^2)$ are equal, which is in agreement with Proposition \ref{prop:blowup}. A proof of the equality of these expressions for $h_{3,H}(z,\tau;\mathbb{P}^2)$ would imply a proof of Conjecture \ref{conj:restrictfibre} for $(r,c_1)=(3,f)$ since $h_{3,f}(z,\tau;J_{\varepsilon,1})$ is related to $h_{3,C+f}(z,\tau;J_{1,\varepsilon})$ by the blow-up formula and wall-crossing. When $\mu$- and Gieseker stability are not equivalent, steps (1) and (3) are not trivial. We will first explain them for $r=2$ following \cite{Yoshioka:1995}. One obtains: \begin{proposition} \label{eq:h20P2} \begin{eqnarray} h_{2,0}(z,\tau;\mathbb{P}^2)=\frac{1}{B_{2,1}(z,\tau)} \left[ h_{2,C}(z,\tau;J_{1,\varepsilon}) +\sum_{b<0 \atop b=-1 \mod 2} w^b q^{\frac{1}{4}b^2}h_{1,0}(z,\tau)^2\,\right]-\frac{1}{2}h_{1,0}(z,\tau;\mathbb{P}^2)^2.\non \end{eqnarray} \end{proposition} \begin{proof} The only extended HN-filtrations which are Gieseker unstable for $J=J_{1,\varepsilon}$ and $\mu$-semi-stable for $J=J_{1,0}$ have $\ell=\ell_\mu=2$.
For the parametrization $c_1(E_2)=bC-af$, the set of sheaves which is unstable for $J_{1,\varepsilon}$ but $\mu$-semistable for $J_{1,0}$ corresponds to $b<0$ and $a=0$. This gives the second term inside the brackets. Subsequently, step (2) divides by $B_{2,1}(z,\tau)$, and step (3) subtracts the $\mu$-semi-stable sheaves which are not Gieseker semi-stable with $\ell=2$ and $\ell_\mu=1$. \end{proof} Alternatively, one can compute $h_{2,0}(z,\tau;\mathbb{P}^2)$ starting from $h_{2,0}(z,\tau;J_{1,\varepsilon})$. In that case the term due to step (1) in the brackets is $\left( \sum_{b<0 \atop b=0 \mod 2}w^bq^{\frac{1}{4}b^2}+\frac{1}{2} \right)\,h_{1,0}(z,\tau)^2$, and one divides by $B_{2,0}(z,\tau)$. Addition of $\frac{1}{2}h_{1,0}(2z,2\tau;\mathbb{P}^2)$ provides the expected integer invariants, in agreement with \cite{Yoshioka:1995}. Incidentally, the terms due to step (1) and step (3) can simply be incorporated by replacing $J_{1,\varepsilon}$ by $J_{1,0}$ in $h_{2,\beta C}(z,\tau;J_{1,\varepsilon})$, and can be written in terms of the Lerch sum \cite{Bringmann:2010sd}. The remainder of this section discusses $r=3$. In terms of $h_{3,C}(z,\tau;J_{1,\varepsilon})$, $h_{3,0}(z,\tau;\mathbb{P}^2)$ is given by: \begin{proposition} \begin{eqnarray} h_{3,0}(z,\tau;\mathbb{P}^2)&=&\frac{1}{B_{3,1}(z,\tau)} \left[ h_{3,C}(z,\tau;J_{1,\varepsilon}) +\left( \sum_{b< 0 \atop b=-2,-4 \mod 6 } w^{b}q^{\frac{1}{12}b^2}\right)\, h_{1,0}(z,\tau) \, h_{2,0}(z,\tau;J_{1,\varepsilon}) \right. \non \\ &&+ \left( \sum_{b<0 \atop b=-1,-5 \mod 6 } w^{b}q^{\frac{1}{12}b^2}\right)\, h_{1,0}(z,\tau) \, h_{2,C}(z,\tau;J_{1,\varepsilon}) \\ &&+\left.\left(\sum_{k_1,k_2<0\atop k_2=k_1+1 \mod 3}w^{2(k_1+k_2)}q^{\frac{1}{3}(k_1^2+k_2^2+k_1k_2)}+\frac{1}{2} \sum_{k<0, \atop k=-1,-2 \mod 3} w^{2k}q^{\frac{1}{3}k^2} \right)\,h_{1,0}(z,\tau)^3\right] \non \\ &&-\frac{1}{6} h_{1,0}(z,\tau;\mathbb{P}^2)^3-\frac{2}{2}\,h_{1,0}(z,\tau;\mathbb{P}^2)\,h_{2,0}(z,\tau;\mathbb{P}^2).\non \end{eqnarray} \end{proposition} The desired integer invariants are obtained from $h_{3,0}(z,\tau;\mathbb{P}^2)$ after subtraction of $\frac{1}{3}h_{1,0}(3z,3\tau;\mathbb{P}^2)=\frac{1}{3}\frac{i}{\theta_1(6z,3\tau)}$. The first non-vanishing coefficients are presented in Table \ref{tab:betti30}. They are in agreement with the expected dimension of $\mathcal{M}(\Gamma)$, Eq. (\ref{eq:dim}). \begin{table}[h!] \begin{tabular}{lrrrrrrrrrrrrrrrrr} $c_2$ & $b_0$ & $b_2$ & $b_4$ & $b_6$ & $b_8$ & $b_{10}$ & $b_{12}$ & $b_{14}$ & $b_{16}$ & $b_{18}$ & $b_{20}$ & $b_{22}$ & $b_{24}$ & $b_{26}$ & $b_{28}$ & $\chi$ \\ \hline 3 & 1 & 1 & 2 & 2 & 2 & 2 & & & & & & & & & & 18 \\ 4 & 1 & 2 & 5 & 9 & 15 & 19 & 22 & 23 & 24 & & & & & & & 216 \\ 5 & 1 & 2 & 6 & 12 & 25 & 43 & 70 & 98 & 125 & 142 & 154 & 156 & & & &1512 \\ 6 & 1 & 2 & 6 & 13 & 28 & 53 & 99 & 165 & 264 & 383 & 515 & 631 & 723 & 774 & 795 & 8109 \end{tabular} \caption{The Betti numbers $b_n$ (with $n\leq \dim_\mathbb{C} \mathcal{M}$) and the Euler number $\chi$ of the moduli spaces of semi-stable sheaves on $\mathbb{P}^2$ with $r=3$, $c_1=0$, and $3\leq c_2\leq 6$.} \label{tab:betti30} \end{table} \begin{proof} The terms added to $h_{3,C}(z,\tau;J_{1,\varepsilon})$ in the brackets are due to step (1). The last term on the first line and the term on the second line are due to filtrations with $\ell=\ell_\mu=2$. If one chooses $c_1(E_2)=bC-af$ as for $r=2$, the set of sheaves which are unstable for $J_{1,\varepsilon}$ but $\mu$-semistable for $J_{1,0}$ corresponds to $b<0$ and $a=0$.
Similarly, the first term in parentheses on the third line is due to $\ell=\ell_\mu=3$, and the second term is due to $\ell=3$ and $\ell_\mu=2$. The sum of the terms in the bracket is $H^\mu_{3,C}(z,\tau;J_{1,0})$, and is divided by $B_{3,1}(z,\tau)$ following step (2). Finally, step (3) corresponds to the last line. \end{proof} As a consistency check, $h_{3,0}(z,\tau;\mathbb{P}^2)$ can also be computed from $h_{3,0}(z,\tau; J_{1,\varepsilon})$. Then the terms due to step (1) are for $\ell=2$: \begin{eqnarray} \label{eq:contbd4} &&\left(\frac{2}{2}+ 2\sum_{b<0 \atop b=0 \mod 6 } w^{b}q^{\frac{1}{12}b^2}\right)\, h_{1,0}(z,\tau) \, h_{2,0}(z,\tau;J_{1,\varepsilon})\\ &&\qquad + \left(2\sum_{b<0 \atop b=-3 \mod 6 } w^{b}q^{\frac{1}{12}b^2}\right)\, h_{1,0}(z,\tau) \, h_{2,C}(z,\tau;J_{1,\varepsilon}),\non \end{eqnarray} and for $\ell=3$: \begin{equation} \left(\sum_{k_1,k_2<0\atop k_1=k_2 \mod 3}w^{2(k_1+k_2)}q^{\frac{1}{3}(k_1^2+k_2^2+k_1k_2)}+ \frac{2}{2} \sum_{k<0 \atop k=0 \mod 3} w^{2k}q^{\frac{1}{3}k^2}+\frac{1}{6}\right)\,h_{1,0}(z,\tau)^3. \end{equation} We conclude by briefly comparing the BPS invariants computed above to the results obtained by Refs. \cite{Kool:2009, weist:2009} for Euler numbers of moduli spaces using toric localization. Ref. \cite{weist:2009} computed such Euler numbers for $\mu$-stable vector bundles with rank $r\leq 3$ on $\mathbb{P}^2$, whereas Ref. \cite{Kool:2009} computed such Euler numbers for $\mu$-stable torsion free sheaves with rank $r\leq 3$ on various smooth toric surfaces. If $\gcd(r,c_1)=1$ and for a generic choice of polarization, the moduli space of $\mu$-stable sheaves is isomorphic to the moduli space of Gieseker semi-stable sheaves. Otherwise, the moduli space of $\mu$-stable sheaves is a smooth open subset of the moduli space of Gieseker semi-stable sheaves. The difference between generating functions of Euler numbers for vector bundles and torsion free sheaves is an overall factor $\eta(\tau)^{r\chi(S)}$. For Chern classes such that $\mu$-stability is equivalent to Gieseker semi-stability, agreement of Refs. \cite{Kool:2009, weist:2009} with the techniques described in this paper is expected. This is indeed established in Refs. \cite{Manschot:2010nc, Manschot:2011dj}. In particular, Eq. (4.5) and Table 1 of Ref. \cite{Manschot:2010nc} agree with Corollary 4.10 in Ref. \cite{weist:2009} and Corollary 4.9 in Ref. \cite{Kool:2009}. If $\gcd(r,c_1)>1$, strictly Gieseker semi-stable sheaves can occur and therefore agreement of $h_{r,0}(z,\tau;\mathbb{P}^2)$ with Refs. \cite{Kool:2009, weist:2009} is not expected. Indeed, the numbers in Table \ref{tab:betti30} above appear to be different from the Euler numbers computed by Theorem 4.14 of Ref. \cite{weist:2009} and Corollary 4.9 of Ref. \cite{Kool:2009}. It would be interesting to precisely understand the difference between the Euler numbers of the $\mu$-stable loci and the BPS invariants computed above.
{ "redpajama_set_name": "RedPajamaArXiv" }
1,479
\section{\label{sec:intr}Introduction} NMR spectroscopy is a powerful technique to explore the electronic state in materials from a microscopic viewpoint. By using the nuclear spins as the local magnetic probes, we can observe the microscopic magnetism of electrons around the nuclear sites through the hyperfine interactions between the nuclear and electronic spins. In spite of the microscopic nature of the probing method, the experimental setup around the sample is as simple as mounting a sample in a radio-frequency (RF) coil. This simple geometry allows us to perform NMR experiments under extreme conditions such as low temperatures of a few mK \cite{yamashita-PRB102} and high pressures up to 9 GPa. \cite{kitagawa-JPSJ79} High magnetic field is another extreme condition of interest to us. In high magnetic fields, the increase in the nuclear magnetization improves the NMR signal intensity, and the increase in the electronic magnetization contributes to a large NMR frequency shift by creating larger internal fields at the nuclear site, which results in a better frequency resolution of the NMR spectrum. To take advantage of these features, NMR spectrometers for high fields have been intensively developed, and the magnetic fields available for high-resolution NMR spectroscopy are now stronger than 20 T.\cite{nagai-CRY41, hashi-JMR256} \begin{figure} \includegraphics[width=8.5cm]{Fig1.eps} \caption{ Field profile of the dynamically controlled field pulse. NMR measurements are performed during the flat-top time, where the field strength becomes constant by the PID control. To subtract the background voltage on the pickup coil precisely, we set a background measurement time just before the onset of the field pulse. } \label{fig1} \end{figure} NMR experiments in higher magnetic fields are crucial for the study of material science and fundamental solid state physics, because extremely high fields often bring the electronic spins into a nontrivial quantum state. Magnetic fields greater than 20 T are generated in a steady state by the hybrid magnet technology (up to 45.5 T) \cite{miller-IEEE13, pugnat-IEEE28, hahn-Nature570} or the high-$T_c$ wire technology (up to 30.5 T) \cite{awaji-IEEE24,michael-IEEE29}. To access much higher magnetic fields, we use the pulse-field technology, with which fields of roughly 100 T are generated for a short duration of less than 1 second. \cite{jaime-PNAS109} In spite of the enormous field strength, NMR measurement in pulsed magnetic fields has been a challenge because the resonant frequency changes with time, following the continuously changing magnetic field. Nevertheless, several trials have been performed\cite{haase-SSNMR23,haase-AMR27,kozlov-SSNMR28,zheng-JPSJ78,meier-RSI83,stork-JMR234} and high-quality results have recently become available.\cite{orlova-PRL118, tokunaga-PRB99} So far, it has only been possible to perform field-sweep NMR spectrum measurements, since the field pulses generated by the passive LCR circuits were not under control. To extract other physical quantities, such as the nuclear spin-lattice relaxation rate $1/T_1$, from the pulse-field NMR experiment, the field pulse should be dynamically controlled (Fig.~\ref{fig1})\cite{kohama-RSI86}. Here we report the successful measurements of the NMR spectrum and $1/T_1$ in a dynamically controlled field pulse (flat-top pulse) through the development of a versatile NMR spectrometer. These apparatuses enable us to further improve the data quality and to pioneer novel electronic states that appear in extremely high magnetic fields.
\section{Measurement Setup} \subsection{Dynamic field control} In this study, we utilized for the first time the actively controlled flat-top pulse \cite{kohama-RSI86} to perform the pulse-field NMR experiment. The field profile and time schedule for a flat-top pulse are shown in Fig.~\ref{fig1}. To dynamically control the field strength during the field pulse, we insert a small feedback coil in the main magnet. The main magnet is driven by a capacitor bank (portable, 2 kV, 15 mF at Hokkaido Univ., or built-in, 10 kV, 18 mF at ISSP) and the small feedback coil is driven by up to four 12 V batteries connected in series. The current in the feedback coil is controlled by the feedback voltage applied to the gate input of an insulated-gate bipolar transistor (IGBT) module. The magnetic field at the sample space is measured via the induction voltage of the pickup coil wound at the end of the NMR probe. The pickup voltage is read at a sample rate of 1 MS/s by the analog-to-digital converter of the multifunction reconfigurable I/O device USB-7856R (NI, National Instruments). Then, the USB-7856R calculates the feedback voltage on the on-board field-programmable gate array (FPGA) following the standard proportional-integral-derivative (PID) protocol and outputs the voltage through the digital-to-analog converter. To irradiate RF pulses at an appropriate time during the PID control (flat-top time), the NMR spectrometer should react to the field trigger at $t=0$. The general-purpose input/output (GPIO) of the USB-7856R receives an external trigger and generates a trigger signal for the NMR spectrometer. By receiving the external trigger once at the USB-7856R, the time counter of the USB-7856R can be precisely synchronized to the counter in the NMR spectrometer. The GPIO can also provide a field trigger signal to the capacitor bank together with the one for the NMR spectrometer if the capacitor bank needs an external trigger to generate a field pulse in the main magnet. \begin{figure} \includegraphics[width=8cm]{Fig2.eps} \caption{ Block diagrams of the SDR-based NMR spectrometers. The main SDR board constructs an RF network with Tx/Rx and generates/receives digital signals at the DIO. The frequency band is (a) 100 MHz to 1 GHz and (b) 15 MHz to 250 MHz. For the high-frequency measurement, the Tx output and Rx input are directly connected to the buffer boards. To decrease the measurement frequency below 100 MHz, frequency mixers are installed. With this option the Tx output is downconverted and the Rx input is upconverted using the LO signal of 850 MHz. } \label{fig2} \end{figure} \subsection{Versatile NMR spectrometer with SDR technology} We have developed an NMR spectrometer which can be flexibly optimized for the NMR measurement regardless of the type of magnet, i.e., steady or pulsed fields. Since the typical time window of the flat-top time is shorter than 100 ms, NMR measurements should be conducted at a precisely controlled timing and at a high repetition rate. We also need to implement sophisticated measurement sequences such as a rapid frequency skip at each scan. To accomplish this, we took advantage of the versatility of the software-defined radio (SDR) technology. The main SDR board is the USRP-2901 (NI), which covers the frequency range from 70 MHz up to 6 GHz. The broad frequency band achieved by the SDR technology is difficult to cover with an ordinary analog-type heterodyne spectrometer.
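As an aside, we illustrate the flexibility of such a software-defined transmitter with a minimal Python sketch of the complex baseband I/Q samples for a single shaped RF pulse; the parameter values are hypothetical, and in the actual system the waveform is handled by the driver software described below. \begin{verbatim}
import numpy as np

# Hypothetical parameters of one shaped RF pulse (illustration only)
SAMPLE_RATE = 10e6    # baseband sample rate (S/s)
PULSE_LEN = 20e-6     # RF pulse length (s)
FREQ_OFFSET = 250e3   # NMR frequency minus SDR center frequency (Hz)

n = int(PULSE_LEN * SAMPLE_RATE)
t = np.arange(n) / SAMPLE_RATE
# complex exponential places the pulse at FREQ_OFFSET after upconversion
iq = np.exp(2j * np.pi * FREQ_OFFSET * t)
# Gaussian amplitude envelope gives a shaped pulse; a rectangular
# pulse corresponds to envelope = 1
envelope = np.exp(-0.5 * ((t - PULSE_LEN / 2) / (PULSE_LEN / 6)) ** 2)
iq = (iq * envelope).astype(np.complex64)
\end{verbatim} Hopping the NMR frequency between scans then amounts to changing the frequency offset in software, without retuning any analog component.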
Our ultimate goal is to apply this method to pulsed magnetic fields greater than 20 T, which are not easily accessible with the steady fields generated by superconducting magnets, and thus the frequency range required for the NMR experiment is normally higher than 100 MHz. Therefore, we use the direct RF input/output of the USRP-2901. We limit the lower frequency band to 100 MHz, as the linearity of the RF signal deteriorates at lower frequencies. To protect the SDR board from over-voltage input, we inserted transmission (Tx) and reception (Rx) buffers, on which an RF switch, a fixed-gain amplifier (ADL5536; Analog Devices), and back-to-back diodes for voltage clamping are mounted [Fig.~\ref{fig2}(a)]. At present, the specifications of the amplifiers on the buffer boards limit the upper frequency band to approximately 1 GHz. In order to enable measurements at frequencies lower than 100 MHz, we constructed a frequency conversion interface using a local oscillator (LO) signal of 850 MHz, as shown in Fig.~\ref{fig2}(b). In an analog system the quadrature detection is executed at a fixed frequency to maintain the orthogonality of the in-phase (I) and quadrature (Q) signals. In contrast to an analog system, our system uses a variable frequency for the quadrature detection but a fixed one for the LO. This is possible only with the broadband SDR technology, which reasonably maintains the I/Q orthogonality at any frequency. With this low-frequency option, we can perform NMR measurements at 15--250 MHz, which covers most NMR experiments in steady fields.

The USRP-2901 is controlled through the USRP Hardware Driver (UHD) software (Ettus Research), which is the free and open-source software driver for the USRP platform. The required Tx waveform is set in the personal computer (PC) and sent to the USRP-2901 through USB 3.0. We can easily modulate the baseband signal by programming the Tx waveform at the PC, which enables the rapid frequency skip and the irradiation of shaped pulses, as we will discuss in \S IV-A. The timing of the Tx generation is scheduled using the internal counter of the USRP-2901. Since the field trigger coming from the USB-7856R sets the counter to zero, the timing of the Tx generation is precisely synchronized to the pulse-field profile. The Rx signal is sampled at a rate of 10 MS/s with a vertical resolution of 12 bits. As the obtained data are continuously transferred to the PC, recording for longer than 1 second is possible. Synchronized with the Tx and Rx times, the digital output sends gate signals to the power amplifier (PA) and the low-noise amplifier (LNA) to reduce the background noise and to protect small-power devices from a large signal, respectively. Here again, we placed a digital I/O (DIO) buffer to protect the SDR board and to provide sufficient drive current for the output signals.

\section{NMR measurement with flat-top pulse fields}

\begin{figure}
\includegraphics[width=7cm]{Fig3.eps}
\caption{
Reproducibility test of 10 independent field pulses. (a) Field profiles of all field pulses around the flat-top time. The duration of the flat-top time changes with the variation of the peak fields generated by the main magnet. Even with this variation, the magnetic field is locked to the target value during the flat-top time. The downward arrow represents the time when the $^{65}$Cu-NMR measurement was performed. (b) $^{65}$Cu NMR spectra obtained at each field pulse at a fixed carrier frequency. The magnetic fields are measured precisely from the peak positions of these FT spectra.
(c) The peak positions of the $^{65}$Cu-NMR spectra for each field pulse. The distribution of the magnetic fields with respect to the first field pulse (square) is less than 40 ppm.
}
\label{fig3}
\end{figure}

\subsection{Reproducibility of magnetic fields and NMR signals}

In a conventional field pulse generated by a passive LCR circuit, the magnetic field strength evolves continuously, and thus a thermodynamic equilibrium state is never achieved. This feature is not suitable for the measurement of thermodynamic properties, such as $1/T_1$, which require following the relaxation process under a fixed condition. In contrast, with our flat-top pulse, the magnetic field strength is dynamically controlled and tuned to a target field for a duration of up to 15 \% of the total width of the field pulse. The NMR relaxation processes can be obtained within this flat-top time. Another important advantage is the high reproducibility of the field strength during the flat-top time. Without the dynamic control, the maximum field changes with the magnet temperature even if the charge voltage is precisely controlled.

To demonstrate the reproducibility of the flat-top pulse, we generated 10 independent field pulses; their field profiles measured by the pickup coil are shown in Fig.~\ref{fig3}(a). To measure the absolute values of the external fields, we observed the $^{65}$Cu-NMR spectrum using the apparatus shown in Fig.~\ref{fig2}(a). As the NMR frequency is proportional to the external field, we can determine the magnetic field precisely from the peak frequency. The RF pulses for the $^{65}$Cu-NMR measurement are irradiated at $t=60.5$ ms, as indicated by the downward arrow in Fig.~\ref{fig3}(a). The carrier frequency is fixed to 157.95 MHz. The free induction decay (FID) signals after a single RF pulse were collected, and their Fourier transform (FT) spectra are shown in Fig.~\ref{fig3}(b). The peak frequency of each spectrum was determined by Gaussian fitting and is displayed in Fig.~\ref{fig3}(c), where the vertical axis is the deviation of the peak frequency from the value for the first pulse, plotted on a ppm scale. This result demonstrates that the field reproducibility is better than 40 ppm. We note that the standard deviation of the integrated NMR intensity was calculated to be 6.7 \%, which is sufficiently small for a single-scan measurement. This result is important for measuring the relaxation time, as we discuss in the next section.

\begin{figure}
\includegraphics[width=7cm]{Fig4.eps}
\caption{
Relaxation profile of the nuclear magnetization measured for the antiferromagnet CrB$_{2}$ at $T=100$ K. The $^{11}$B-NMR intensity at each delay after the saturation pulse is recorded in pulsed fields (filled symbols) and steady fields (open symbols). The inset shows the RF pulse sequence for the relaxation rate measurement. The blue dashed line shows the result of a least-squares fit by the stretched exponential function.
}
\label{fig4}
\end{figure}

\subsection{Nuclear spin-lattice relaxation rate measurement}

To measure the relaxation profile of the nuclear magnetization after the saturation pulse, the NMR signal intensity is recorded at each delay between the saturation and spin-echo pulses. The inset of Fig.~\ref{fig4} shows the RF pulse sequence for one scan. Since the spin-echo pulses disturb the free relaxation of the nuclear magnetization, only one point in the relaxation profile can be measured in a single scan.
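The acquisition just described can be summarized in pseudocode. In the following Python sketch, fire_flat_top_pulse(), saturate(), and spin_echo_intensity() are hypothetical stand-ins for the spectrometer control routines (replaced here by trivial stubs so that the sketch runs), and the delay values are placeholders; the point is that each field pulse yields exactly one (delay, intensity) pair:

\begin{verbatim}
import random, time

def fire_flat_top_pulse(): pass        # stub: trigger the capacitor bank
def saturate(): pass                   # stub: saturation pulse sequence
def spin_echo_intensity():             # stub: integrated spin-echo intensity
    return random.random()

delays = [1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 4e-3]   # delays up to 4 ms
profile = []
for delay in delays:
    for _ in range(2):                 # repeated scans per delay for accuracy
        fire_flat_top_pulse()          # field locked to the target value
        saturate()                     # destroy the nuclear magnetization
        time.sleep(delay)              # free relaxation for time `delay`
        profile.append((delay, spin_echo_intensity()))
\end{verbatim}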
For the pulse-field NMR measurement, we need to measure the NMR intensity for each delay in independent field pulses. Therefore, the reproducibility of both the magnetic field and the NMR intensity is crucially important. As we have confirmed the reproducibility of the NMR signal intensity for independent field pulses, we can now measure the relaxation profile by repeating the NMR measurements with different delays.

To demonstrate the $1/T_1$ measurement, we measured the relaxation profile of a typical antiferromagnet, CrB$_{2}$, at 100 K. This compound shows an antiferromagnetic phase transition at $T_{\rm N}= 88$ K, \cite{barnes-PLA29, castaing-SSC7} and thus $T_1$ is reasonably short near the transition temperature. \cite{kitaoka-JPSJ49} We measured the $^{11}$B-NMR signal at a target field of 13.0 T, which corresponds to an NMR frequency of approximately 178 MHz. In Fig.~\ref{fig4}, the results collected in the pulsed fields are shown by the filled symbols. We repeated the NMR scans twice for each delay to improve the data accuracy. The longest delay was 4.0 ms, which is limited by the width of the field pulse in the present experiment. As a reference, we plot the relaxation profile measured in a steady field of 13.0 T with the open symbols. We confirm that the results of the pulse-field NMR measurement perfectly follow those in the steady field up to the maximal delay possible for the present flat-top time. The relaxation profile $M(t)$ is fitted with a stretched exponential function
\begin{equation}
M(t) =M_0\left[1-A\exp \left( -\left(\frac{t}{T_1}\right)^{\beta}\right) \right].
\end{equation}
Here, $M_0$ and $A$ are the nuclear magnetization in thermal equilibrium and the saturation coefficient, respectively. We introduce a stretching exponent $\beta$ to better fit the experimental results. From the fit we obtained $T_1 = 8.9$ ms and $\beta = 0.7$, and the resulting relaxation curve is plotted as the blue dashed line. Although this experiment demonstrates the validity of the $1/T_1$ measurement in pulsed magnetic fields and opens the possibility of measuring $1/T_1$ at extremely high fields, the maximum delay is still too short to fit the overall relaxation profile. As a next step, we should perform the NMR measurement in a flat-top pulse generated by a long-duration pulse magnet with a pulse width exceeding 1 second. \cite{herlach-RPR62, matsui-RSI92} We note that the ability to measure $1/T_{1}$ extends easily to the nuclear spin-spin relaxation rate $1/T_2$, whose measurement requires a much shorter duration, typically a few milliseconds.

\section{NMR spectrum measurement for broad spectra}

\begin{figure}
\includegraphics[width=8cm]{Fig5.eps}
\caption{
(a) Schematic diagram of the baseband modulation. The continuous carrier frequency is mixed with the baseband modulation waveform to construct a shaped pulse. The baseband waveform of the ham-flat RF pulse (b) and its FT power spectrum (c). The bandwidth $f_{\rm bw}$ is set to 100 kHz.
}
\label{fig5}
\end{figure}

When the NMR spectrum is narrower than the frequency window of a single RF pulse, which is typically a few hundred kHz, the entire NMR spectrum can be measured at a fixed field and frequency, as in the case of the $^{65}$Cu-NMR spectra in Fig.~\ref{fig3}(b). For broader NMR spectra, however, the NMR signal intensity should be recorded during either a frequency sweep at a fixed field or a field sweep at a fixed frequency.
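As an aside to the $1/T_1$ analysis above, the least-squares fit to Eq.~(1) can be sketched as follows. The data here are synthetic stand-ins for the measured points of Fig.~\ref{fig4}, generated from the fitted values quoted in the text:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, m0, a, t1, beta):
    """Stretched-exponential recovery of Eq. (1)."""
    return m0 * (1.0 - a * np.exp(-(t / t1) ** beta))

rng = np.random.default_rng(0)
delays = np.linspace(2.0e-4, 4.0e-3, 8)          # delays up to 4 ms
data = recovery(delays, 1.0, 0.95, 8.9e-3, 0.7)  # T1 = 8.9 ms, beta = 0.7
data *= 1.0 + 0.02 * rng.standard_normal(delays.size)  # mock scatter

popt, _ = curve_fit(recovery, delays, data, p0=[1.0, 1.0, 5.0e-3, 1.0])
m0, a, t1, beta = popt
\end{verbatim}

As in the experiment, restricting the fit to delays shorter than 4 ms constrains $T_1$ and $\beta$ only weakly, which is one motivation for the longer flat-top times discussed above.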
In a steady field, we choose one of these measurement modes depending on the overall spectral width and the physical properties at the measurement fields. In previous pulse-field NMR studies, only the field-sweep experiment was performed. \cite{zheng-JPSJ78, stork-JMR234, orlova-PRL118, tokunaga-PRB99} Here we implement both modes using the dynamically controlled field pulse.

\begin{figure}
\includegraphics[width=8cm]{Fig6.eps}
\caption{
(a) The RF pulse sequence for the frequency-sweep mode during the flat-top time period and the FT power spectra of each RF pulse. A wide-range frequency sweep is realized within a short time, while the overlap of the Tx power spectra is minimized to avoid saturation of the nuclear magnetization. Here a single RF pulse for each frequency is displayed for simplicity. In reality, a spin-echo pulse sequence, such as the $\pi/2-\tau-\pi$ sequence, is generated at each Tx frequency.
}
\label{fig6}
\end{figure}

\subsection{Frequency-sweep mode with flat-top pulse}

We perform the frequency-sweep experiment during the flat-top time period in almost the same way as in a steady field. The one essential difference is that we need to sweep the measurement frequency in a short time, namely less than 1 ms. If we irradiated several RF pulses at the same frequency in one field pulse, the signal intensity would gradually diminish because of the saturation of the nuclear magnetization. To avoid this, we should irradiate only one RF pulse sequence for each frequency by shifting the Tx frequency rapidly during the flat-top time. Although a high-speed frequency skip is rather difficult to execute with the ordinary analog heterodyne structure, the versatility of the SDR board enables rapid and precise frequency shifting through its digital baseband modulation feature, with which the modulation of the carrier frequency by the baseband signal [Fig.~\ref{fig5}(a)] is performed in a digital signal processing circuit. The Tx frequency shift of $\Delta f$ is then achieved by multiplying the carrier frequency by a phase factor $\phi(t)=\exp (2 \pi i \Delta f t)$. Moreover, as rectangular RF pulses irradiate RF power over a broad frequency range, characterized by the sinc function with several side lobes, we employed a shaped RF pulse to further avoid irradiation at unwanted frequencies. The baseband-modulation waveform of the shaped RF pulse is the product of a sinc function and a Hamming window (ham-flat),
\begin{equation}
w(t)= \left[ 0.54-0.46\cos \left( \frac{\pi f_{\rm bw}}{2}t\right) \right] \frac{ \sin \left(1.5 \pi \left( f_{\rm bw}t-2\right) \right)}{1.5 \pi \left( f_{\rm bw}t-2\right)}.
\end{equation}
Here, $f_{\rm bw}$ is the Tx frequency bandwidth. The waveform of the ham-flat pulse and its FT power spectrum are displayed in Figs.~\ref{fig5}(b) and (c). The FT spectrum shows that the RF power within the frequency window of $f_{\rm bw}$ is almost constant, the RF power at $\pm 0.5f_{\rm bw}$ being $-0.7$ dB. The RF power decays rapidly at lower and higher frequencies and becomes smaller than $-22$ dB at $\pm f_{\rm bw}$. By irradiating these shaped RF pulses in steps of $2f_{\rm bw}$, as shown in Fig.~\ref{fig6}(a), only one RF pulse sequence is irradiated for each frequency without any overlap. The vacancies between two frequencies are filled by repeating the field-pulse generation with the initial frequency shifted by $f_{\rm bw}$.
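For concreteness, Eq.~(2) and the digital frequency skip can be evaluated numerically. The following sketch (the pulse length of $4/f_{\rm bw}$ and the sample rate are our own choices for the illustration) reproduces the qualitative features of Figs.~\ref{fig5}(b) and (c):

\begin{verbatim}
import numpy as np

def ham_flat(t, f_bw):
    """Baseband envelope of Eq. (2): Hamming window times a sinc."""
    window = 0.54 - 0.46 * np.cos(0.5 * np.pi * f_bw * t)
    arg = 1.5 * np.pi * (f_bw * t - 2.0)
    return window * np.sinc(arg / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

f_bw = 1.0e5                              # 100 kHz, as in Fig. 5
t = np.arange(0.0, 4.0 / f_bw, 1.0e-7)    # assumed pulse length of 4/f_bw
w = ham_flat(t, f_bw)

# Frequency skip: shift the next pulse by Delta f = 2*f_bw by multiplying
# the baseband waveform by the phase factor phi(t) = exp(2 pi i Delta f t).
w_next = w * np.exp(2j * np.pi * (2.0 * f_bw) * t)
\end{verbatim}

A Fourier transform of w confirms the nearly flat response over $\pm 0.5 f_{\rm bw}$ and the rapid decay beyond $\pm f_{\rm bw}$ described above.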
The flat-top pulse field is crucially important for the frequency-sweep mode, firstly because the field should be fixed during the frequency sweep, and also because the field strength has to be reproducible to fill the frequency vacancies. By repeating the field-pulse generation, we can improve the signal-to-noise ratio (SNR). This frequency-sweep mode with rapid frequency skips can also be used in steady-field experiments to accelerate the repetition rate. When we use a single Tx frequency with a conventional NMR spectrometer, the repetition rate of the Tx irradiation is limited by the time scale of the material-specific $1/T_1$, which is much longer than millisecond order. By using multiple Tx frequencies, for example five frequencies as shown in Fig.~\ref{fig6}(b), we can perform the NMR experiment at five different frequencies in parallel, because these Tx frequencies do not interfere with each other. As a result, the repetition rate of the Tx irradiation becomes five times faster.

\begin{figure}
\includegraphics[width=8cm]{Fig7.eps}
\caption{
The frequency-sweep NMR spectra for CrB$_{2}$ above and below $T_{\rm N}$. The NMR spectra were obtained in a steady field (a) and a pulsed field (b). The external magnetic field is 13.0 T in both cases. The sharp peak in the paramagnetic state and the broadening in the ordered state were consistently observed in the pulsed field.
}
\label{fig7}
\end{figure}

As an example of the frequency-sweep spectrum measurement, we show the $^{11}$B-NMR spectra for CrB$_{2}$ obtained in steady and pulsed fields in Figs.~\ref{fig7}(a) and (b). The target magnetic field is 13.0 T. A spectral broadening was observed below $T_{\rm N}= 88$ K because of the appearance of internal fields generated by the ordered moments. The sharp spectrum above $T_{\rm N}$ and the broadened spectrum below $T_{\rm N}$ were both consistently observed in the pulse-field NMR measurement. We generated 40 field pulses with 3 RF pulses in each flat-top time to obtain the broadened NMR spectrum at 80 K. We repeated the field-pulse generation many times to sweep over a broad frequency range and to increase the SNR, as the signal intensity is significantly reduced below $T_{\rm N}$. Nevertheless, the SNR of the NMR spectra obtained in the pulsed field does not compare with that in steady fields, especially for the broad spectrum. Since the frequency window covered by one field pulse is at present limited to approximately 1 MHz by the quality factor of the RF tank circuit, we need to repeat the field-pulse generation many times to observe the full spectral shape. In the case of a broad NMR spectrum, a field-sweep mode with long-duration pulse fields is more appropriate, as we explain in the next section.

\begin{figure}
\includegraphics[width=8cm]{Fig8.eps}
\caption{
The field profiles of the slope-top pulses (blue lines). The constant sweep rate is (a) 2 mT/ms and (b) 10 mT/ms. The red peaks are the FT NMR spectra of the $^{65}$Cu FID signals measured at the corresponding times. Since NMR is one of the most precise magnetometers, the constant- and arbitrary-rate field-sweep capability is clearly demonstrated.
}
\label{fig8}
\end{figure}

\subsection{Field-sweep mode with slope-top pulse}

As an alternative measurement mode for broad NMR spectra, we sweep the magnetic field during the irradiation of RF pulses at a fixed frequency. This field-sweep mode is frequently used for very broad spectra, as the sweep range is not limited by the mechanical parameters of the RF tank circuit.
The field-sweep experiment has already been realized with field pulses without the PID feedback control. \cite{zheng-JPSJ78, stork-JMR234, orlova-PRL118, tokunaga-PRB99} However, the relative NMR intensity is then modified by the continuously changing sweep rate of the half-sinusoidal field-pulse profile. Here, we sweep the magnetic field by changing the target field at a constant rate during the PID control (slope-top pulse). Fig.~\ref{fig8} shows the resulting magnetic field profiles at two sweep rates, 2 mT/ms and 10 mT/ms. The field profiles shown by the blue solid lines were measured by the pickup coil. To confirm the field strength at each moment, we performed the $^{65}$Cu-NMR measurement every 0.3 ms starting from 57.5 ms. The peak positions of the $^{65}$Cu-NMR spectra follow the blue lines for both sweep rates, evidencing that the field strength is properly controlled to decrease at a constant rate. We used a small RF power to avoid saturation of the nuclear magnetization even at this very fast repetition rate. With this method, suppression of the NMR intensity was observed only for the last few NMR spectra around 61 ms, although $T_1$ of $^{65}$Cu at 77 K is longer than 10 ms. Since the external field changes at a constant rate, the NMR spectral intensity is correctly measured by irradiating the RF pulses at a fixed repetition rate.

The main magnet we used for this experiment generates a field pulse with a total width of $30$ ms. With this magnet, a magnetic field of 13.0 T is PID-controlled for 3.5 ms (57.5 ms to 61 ms). Therefore, when we use the sweep rate of 10 mT/ms, we can obtain the NMR spectrum over a field range of 35 mT with one field-pulse generation, which corresponds to 400 kHz for $^{65}$Cu nuclear spins and is narrower than the frequency window of the frequency-sweep mode. To perform the field-sweep experiment over a broader field range, we concatenate the data obtained in another slope-top field pulse starting from the last field of the previous field pulse. However, as the optimization of the field-generation parameters is required each time the field window is shifted, the present field-sweep range is not sufficient for the measurement of broad NMR spectra. The use of a long-duration field pulse with a pulse width longer than 1 s permits the generation of a slope-top field with a duration of $\sim 100$ ms. This will allow us to sweep the magnetic field over a range of 1 T with a single field-pulse generation.

\subsection{Considerations for the choice of modes}

As a result of the development of the flat-top and slope-top pulses, we can choose the measurement mode for NMR spectra depending on the target material. Here we discuss the advantages and disadvantages of these modes, which should be considered when choosing the best measurement mode. When the overall spectral width is broader than a few MHz, the field-sweep mode with the long-duration field pulse is the first choice. A disadvantage of this setup is the long cooling time of the long-duration pulse magnet, which typically takes a few hours. In this respect, the frequency-sweep mode in a smaller pulse magnet with a pulse width of approximately 30 ms is a better choice when the NMR spectral width is narrower than 2 MHz. Within this spectral width, we can cover the full frequency range with a few field pulses and increase the SNR by quickly repeating the field-pulse generation.
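The trade-off between the two modes can be made quantitative with the numbers given above. In the following back-of-the-envelope sketch, the $^{65}$Cu gyromagnetic ratio $\gamma/2\pi \approx 12.1$ MHz/T is our own approximation (consistent with the 157.95 MHz carrier at 13.0 T in \S III-A); everything else is taken from the text:

\begin{verbatim}
GAMMA = 12.1e6                 # assumed gamma/2pi for 65Cu (Hz/T)
rate = 10.0                    # 10 mT/ms expressed in T/s

span_short = rate * 3.5e-3     # 0.035 T = 35 mT per 30 ms pulse
print(GAMMA * span_short)      # ~4.2e5 Hz: the ~400 kHz quoted above

span_long = rate * 100.0e-3    # ~1 T with a ~100 ms slope-top window
print(GAMMA * span_long)       # ~1.2e7 Hz: >10 MHz of spectrum per pulse

# frequency-sweep mode: ~1 MHz per pulse, set by the tank-circuit Q
\end{verbatim}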
The frequency-sweep mode with a long-duration pulse field is not recommended, as the frequency range measurable during one field pulse is limited by the quality factor of the RF tank circuit, and repeating the same frequency during one field pulse would reduce the NMR intensity through saturation of the nuclear magnetization. For a frequency sweep of 1 MHz, 10 RF pulses sufficiently cover the full frequency range, and these can be generated within 3 ms at a repetition time of 0.3 ms. The long-duration pulse field should instead be used for samples whose nuclear magnetization cannot be polarized during the field-pulse duration of 30 ms. In this case, the frequency-sweep measurement should be performed at the very end of the flat-top time to give ample time for the polarization of the nuclear spins. Another case that calls for the frequency-sweep mode is a sample that shows a field-induced phase transition. A field sweep across the critical magnetic field results in a drastic change in the spectral shape in the middle of the NMR spectrum, and thus the entire spectral shape cannot be measured. The frequency-sweep mode at fields slightly above and below the critical magnetic field will clearly reveal the microscopic magnetism in the field-induced electronic state.

\section{Summary}

We demonstrated various operating modes of our SDR-based NMR spectrometer using the flat-top field pulse. We first confirmed the reproducibility of the field strength for independent field pulses, which is crucial for improving the SNR, for measuring the relaxation profile of the nuclear magnetization, and for measuring the frequency-sweep NMR spectrum. The $1/T_1$ and frequency-sweep NMR spectrum measurements, which were difficult with a field pulse without the dynamic control, were successfully performed on the metallic antiferromagnet CrB$_{2}$. We also developed the slope-top field pulse to enable the NMR spectrum measurement in the field-sweep mode. These results open the possibility of measuring microscopic magnetism in extremely high magnetic fields. However, since the present study was performed with a field pulse of approximately 30 ms duration, the flat-top time was not sufficiently long for some materials. To further expand the applicability of the NMR experiment in high fields, this NMR technology should be transferred and adapted to pulse magnets with longer pulse durations.

\begin{acknowledgements}
One of the authors (K.M.) is a research fellow of the Japan Society for the Promotion of Science (JSPS). This work was partially supported by the JSPS Grant-in-Aid for Scientific Research (Grant Nos. 18H01163, 19H01832, and 20K20892), the Futaba Foundation, and the ISSP Institutional Collaborative Research Program.
\end{acknowledgements}

\section*{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.

\nocite{*}
# Toolbarbutton - content with text-overflow: ellipsis

(from https://forum.zkoss.org/question/113281/toolbarbutton-content-with-text-overflow-ellipsis/)

andij62 (313 1 7):

Hi all,

I would like to set the content to "text-overflow: ellipsis" if the link is too long. This only works if I set the width to a fixed value, like "width: 200px". But I want to use "width: 100%" so that it works responsively. What do I have to do to make it work? Here is my CSS:

    .z-toolbarbutton-content {
        overflow: hidden;
        text-overflow: ellipsis;
        -o-text-overflow: ellipsis;
        white-space: nowrap;
        width: 100%;
    }

Regards,
Andi

Answer by cor3000 (5833 2 7):

Here is my interpretation of what you described (and maybe didn't): https://zkfiddle.org/sample/263nviu/2-toolbar-buttons-with-ellipsis

If you want to use 100%, you'll have to specify a width around the element using 100%. In the example above the width is determined by flex-box. The content already has 100%, so it's not needed to specify it again.

What it also does is: it renders toolbar buttons normally (full length) as long as they fit into a row. If they don't fit any more, it will start shortening the longest buttons first, so that shorter buttons remain unaffected by the shrinking until necessary. This is achieved by max-width: min-content. Basically the flex box would like to assign the same width to every button, but due to the max-width some are already shorter, so they don't need to flex.

Comment (andij62, 2021-03-23): Thank you! Exactly what I was looking for.

Comment (cor3000, 2021-03-24): Great to hear... have a good day.
UTS dumps Sun email for Exchange

The University of Technology Sydney today revealed plans to dump its current Sun ONE-based email system for staff use and adopt Microsoft's Exchange.

By Suzanne Tindal, News Editor | April 15, 2009 | Topic: Oracle

Chris Cahill (Credit: UTS)

UTS director of IT client services Chris Cahill said staff had been longing for a system with more integration and functionality, such as the ability to link calendar entries with email. Previously, staff had been accessing the Sun system via third-party email clients like Mozilla Thunderbird and Eudora.

"The whole world is using Outlook and Exchange," Cahill said. "Any product out of the box works on that platform."

Despite the change, however, he noted that he still considered the Sun solution to be a "powerful, stable email system".

The changeover is now under way and scheduled to be completed in July. When staff have been migrated, the IT department will run an evaluation on what to do with student mail accounts, with the current idea being to put them on either Gmail or Microsoft's Live@edu, Cahill said, adding that there was very little difference between the two systems. He believed a student roll-out could start next year, depending on the results of the evaluation.

There had been some resistance within the IT ranks as staff shifted away from open-source mail applications, according to Cahill, but it had made less work for the IT support team, who had previously needed to create a workaround every time someone bought a new PDA. Now, most things just worked, he said.

Being a university, the team supported almost any device, Cahill said. His preference was iPhone over BlackBerry, however, since there was no need to build another infrastructure layer to support it. On security, he said he hadn't had any problems with the device.

"We're not the CIA or a bank. We're a university," he said.

Cahill's team had also rolled out Office 2007 last year across the approximately 5000 desktops it manages, but there had been less excitement about that than about the move to Exchange, Cahill said. He hadn't yet looked at Windows 7, but he believed the university would move to it, since he didn't think it should get too many releases behind. The university would skip over Vista, he said.

The university has also started a suite of desktop architecture projects to be completed over a period of 18 months, which involves implementing Symantec's Altiris and planning the move from Novell desktop services to Active Directory and Windows Server 2008. The university currently has licences with both Novell and Microsoft. Migrating to one will mean lower fees and a lower support cost in terms of maintaining two skill sets.
Marc Nerlove is an American economist and econometrician, born in October 1933 in Chicago. Among his best-known works are the study of lags in expectations (adaptive expectations), the development of modern methods for time series and panel data, and the analysis of the agricultural sector. His estimates of production functions were the first to use duality theory. Together with Balestra, Nerlove also developed an econometric technique that is often used for combined time-series and cross-sectional data. The generalized least squares estimator proposed for these error-components models is known as the Balestra-Nerlove estimator.

Biography

Nerlove was born in 1933 in Chicago, Illinois (United States). From 1949 to 1952 he studied at the University of Chicago. He then moved to Johns Hopkins University, where he obtained his Ph.D. with a study on the supply of agricultural products. After a year at the University of Minnesota, he taught at Stanford University (1960-1965), Yale University (1965-1969), the University of Chicago (1969-1974), Northwestern University (1974-1982), the University of Pennsylvania (1982-1993), and the University of Maryland.

Nerlove was president of the Econometric Society in 1981. He is a member of the American Academy of Arts and Sciences and of the National Academy of Sciences. He received the John Bates Clark Medal and is a Distinguished Fellow of the American Economic Association.

Honors

John Bates Clark Medal, 1969
Honorary doctorate from the University of Mannheim
Honorary doctorate from the University of Geneva

Main publications

"Adaptive Expectations and Cobweb Phenomena", Quarterly Journal of Economics, 1958, p. 227-240
The Dynamics of Supply: Estimation of Farmers' Response to Price, Baltimore, 1958
"Spectral Analysis of Seasonal Adjustment Procedures", Econometrica, 1964, p. 241-286
"Spectral Comparisons of Two Seasonal Adjustment Procedures," Journal of the American Statistical Association, 1965, p. 442-491
Estimation and Identification of Cobb-Douglas Production Functions, Chicago, 1965
"Pooling Cross-Section and Time-Series Data in the Estimation of a Dynamic Model: The Demand for Natural Gas", Econometrica, 1966, p. 585-612 (with P. Balestra)
"Experimental Evidence on the Estimation of Dynamic Economic Relations from a Time-Series of Cross Sections", Economic Studies Quarterly, 1967, p. 42-74
"Further Evidence on the Estimation of Dynamic Economic Relations from a Time Series of Cross-Sections", Econometrica, 1971, p. 359-382
"Lags in Economic Behavior", Econometrica, 1972, p. 221-251
Analysis of Economic Time Series: A Synthesis, New York, 1979 (with D.M. Grether and J.L. Carvalho)
"On the Formation of Price Expectations: An Analysis of Business Test Data by Log-Linear Probability Models", European Economic Review, 1981, p. 103-138 (with H. Koenig and G. Oudiz)
"Expectations, Plans and Realizations in Theory and Practice," Econometrica, 1983, p. 1251-1279
Household and Economy: Welfare Economics of Endogenous Fertility, New York, 1987 (with A. Razin and E. Sadka)
Essays on Panel Data Econometrics, New York, 2002

Notes

External links

20th-century American economists 21st-century American economists University of Chicago alumni Johns Hopkins University doctoral graduates Yale University faculty Stanford University faculty University of Chicago faculty Honorary doctorate recipients Guggenheim Fellows John Bates Clark Medal laureates Fellows of the Econometric Society Members of the American Academy of Arts and Sciences Members of the National Academy of Sciences Members of the American Statistical Association Born October 1933 Born in Chicago
Q: Accessory Support in SplitViewController App

I am creating an app that utilizes a serial cable adapter for iOS. The basic design of the app is a Split View Controller with a detail view controller, from which I can launch a session (in a separate view) that sends information through the accessory. I got the application "working" in the sense that I set up the delegate that controls the accessory interface (supplied in the SDK) from the session view controller, and it works the first time it is run. The only problem is that if I try to run the session a second time (by launching from the detail view controller again or switching between projects in the split view) it fails because of a pre-existing connection, i.e. the previously established connection. This is the console log below:

    ERROR - opening session failed
    ERROR - /SourceCache/ExternalAccessory/ExternalAccessory-242/EASession.m:-[EASession dealloc] - 139 unable to close session for _accessory=0x14d59250 and sessionID=65536
    Cable Not Connected

If it helps as well, here is some stripped-down code from my .h and .m files of the session view controller:

.h

    @interface SessionViewController : UIViewController <RscMgrDelegate> {
        RscMgr *rscMgr;
        CableConnectState cableState;
    }

    @property (strong, nonatomic) IBOutlet UIBarButtonItem *cableStatus;

    - (void) sendStringToSerial: (NSTimer *) timer;

    @end

.m

    @implementation SessionViewController

    @synthesize cableStatus;

    - (void)viewDidLoad {
        [super viewDidLoad];
        // Do any additional setup after loading the view.

        // Serial Setup
        rscMgr = [[RscMgr alloc] init];
        [rscMgr setDelegate:self];

        // checks cableConnectState and adjusts the outlet cableStatus accordingly
    }

    - (void) sendStringToSerial: (NSTimer *) timer {
        [rscMgr writeString:someString];
    }

    @end

So the questions: Is the structure I have the best way to go, and if so, how do I fix the session issue? If not, any suggestions on where to go from here? Thanks!
Drumopama girisa is a species of fungus that was described by Subram. in 1957. Drumopama girisa belongs to the genus Drumopama, in the division Ascomycota (the sac fungi) of the kingdom Fungi. No subspecies are listed in the Catalogue of Life.

Sources

Sac fungi
\section{Introduction} The Virasoro algebra $\mathcal{V} ir$ is the unique non-trivial one-dimensional central extension of the Lie algebra of polynomial vector fields on the circle. It is foundational in algebraic approaches to two-dimensional conformal field theory, and it is the source of one of the first-constructed families of vertex operator algebras \cite{FZ1}. As with all Lie algebras, the full category of $\mathcal{V} ir$-modules is a symmetric tensor category, but for applications in physics, one restricts to categories of $\mathcal{V} ir$-modules with a fixed central charge: this is the scalar by which the canonical central element of $\mathcal{V} ir$ acts. The correct tensor product operation on such categories then becomes the fusion product of conformal field theory, which can be defined mathematically in terms of vertex algebraic intertwining operators (see for example \cite{HLZ3}). At central charge $c = c_{p,q}= 13-6(\frac{p}{q} +\frac{q}{p})$ for $p, q \geq 2$ and $\gcd(p, q) = 1$, the $\mathcal{V} ir$-module category of primary interest, corresponding to ``minimal models'' in rational conformal field theory \cite{BPZ}, is the representation category of the simple Virasoro vertex operator algebra $V_c$. The algebra $V_c$ is rational \cite{Wa} and $C_2$-cofinite \cite{Zh, DLM}, and thus its representations form a modular tensor category \cite{Hu_Vir_tens, Hu_rigid}. For all other central charges, however, the Virasoro vertex operator algebras are neither rational nor $C_2$-cofinite, and only recently has there been much progress in understanding the tensor structure of their representations. In \cite{CJORY}, it was shown that for any $c\in\mathbb{C}$, the category $\mathcal{O}_c$ of $C_1$-cofinite grading-restricted generalized modules for the universal Virasoro vertex operator algebra of central charge $c$ is the same as the category of finite-length $\mathcal{V} ir$-modules whose composition factors are irreducible quotients of reducible Verma modules of central charge $c$. As a consequence, it was shown that $\mathcal{O}_c$ satisfies the conditions of Huang-Lepowsky-Zhang's vertex tensor category theory \cite{HLZ1}-\cite{HLZ8}, and thus $\mathcal{O}_c$ is a braided tensor category as described in \cite{HLZ8}. Some details of the tensor structure on $\mathcal{O}_c$ are known for the following $c$: \begin{enumerate} \item For $c=13-6t-6t^{-1}$ with $t\notin\mathbb{Q}$, it was shown in \cite{CJORY} that $\mathcal{O}_c$ is a rigid semisimple tensor category, with tensor products of irreducible modules given by the fusion rules calculated previously in \cite{FZ2} using a Zhu algebra approach. \item For $c=1$, tensor products of simple modules in $\mathcal{O}_1$ were determined in \cite{McR} using the fusion rule calculations of \cite{Mi}, and it was shown in \cite[Remark 4.4.6]{CMY2} using results from \cite{McR} that $\mathcal{O}_1$ is rigid. The full category $\mathcal{O}_1$ is not semisimple, but its simple objects generate a semisimple tensor subcategory, namely, the category of $C_1$-cofinite unitary modules for the unitary vertex operator algebra $V_1$. \item For $c=13-6p-6p^{-1}$ with $p > 1$ an integer and for $c=25$, fusion rules for irreducible modules in $\mathcal{O}_c$ were calculated in \cite{Lin} and \cite{OH}, respectively. However, since these categories are not semisimple, fusion rules are not enough to identify tensor products of irreducible modules in $\mathcal{O}_c$. Rigidity for these categories has also remained open. 
\end{enumerate} In this work, we present a comprehensive analysis of the tensor category $\mathcal{O}_c$ at central charge $c=c_{p,1}=13-6p-6p^{-1}$ for integers $p > 1$; especially, we prove rigidity and compute all tensor products of irreducible modules. The simple Virasoro vertex operator algebras $V_c$ at these central charges occur as subalgebras of many of the best-known vertex operator algebras in logarithmic conformal field theory, including the singlet algebras \cite{Ka, A, AM_log_intw, CF, CMR, CMY2}, triplet algebras \cite{FHST, FGST1, FGST2, GR, NT, TW, CGR}, and logarithmic $\mathcal{B}_p$ algebras \cite{CRW, AuCKR, ACGY}. Reflecting the non-semisimplicity of the Virasoro zero-mode $L_0$ in logarithmic conformal field theory (which leads to logarithmic singularities in correlation functions), the Virasoro categories $\mathcal{O}_{c_{p,1}}$ are neither semisimple nor finite. Although the singlet and triplet algebra extensions of $V_c$ have been studied fairly extensively by mathematicians, most work on the Virasoro algebra itself at central charge $13-6p-6p^{-1}$ has appeared in the physics literature, in the study of ``logarithmic minimal models'' denoted $\mathcal{LM}(1,p)$. Starting with work of Gaberdiel and Kausch \cite{GaK}, indecomposable modules at these central charges have been constructed and fusion products have been predicted using a variety of methods \cite{PRZ, RP, RS, BFGT, BGT, Ra, MRR}. Comparison of these works with our results summarized in Theorem \ref{thm:main_thm} below shows that the vertex algebraic tensor category $\mathcal{O}_c$ can be viewed as a rigorous mathematical setting for logarithmic minimal models. For example, the formula in Theorem \ref{thm:main_thm}(3) for the tensor product of irreducible $V_c$-modules agrees with the fusion product conjecture in \cite[Equation 4.1]{GaK}. More precisely, the mathematics of $\mathcal{LM}(1,p)$ is captured by the tensor structure on the subcategory $\mathcal{O}_c^0$ of $\mathcal{O}_c$ mentioned in Theorem \ref{thm:main_thm}(2), which we introduced in order to obtain projective covers of irreducible modules. This turns out to be the smallest tensor subcategory of $\mathcal{O}_c$ that contains all irreducible modules. At central charge $c=c_{p,1}$, the Virasoro category $\mathcal{O}_{c}$ has simple modules labeled $\mathcal{L}_{r,s}$ for $r,s\in\mathbb{Z}$ such that $r\geq 1$ and $1\leq s\leq p$. Tensor products of these $V_c$-modules are described in the following theorem, which summarizes our main results: \begin{thm}\label{thm:main_thm} Let $V_c$ denote the simple Virasoro vertex operator algebra of central charge $c=13-6p-6p^{-1}$ for an integer $p > 1$. Then: \begin{enumerate} \item The tensor category $\mathcal{O}_c$ of $C_1$-cofinite grading-restricted generalized $V_c$-modules is rigid and ribbon, with duals given by the contragredient modules of \cite{FHL} and natural twist isomorphism $\theta=e^{2\pi iL_0}$. \item Every irreducible module $\mathcal{L}_{r,s}$ in $\mathcal{O}_c$ has a projective cover $\mathcal{P}_{r,s}$ in a natural tensor subcategory $\mathcal{O}_c^0$ of $\mathcal{O}_c$. 
\item Tensor products of the irreducible modules in $\mathcal{O}_c$ are as follows: \begin{equation*} \mathcal{L}_{r,s}\boxtimes \mathcal{L}_{r',s'} \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{\min(s+s'-1, 2p-1-s-s')} \mathcal{L}_{k, \ell} \oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{p} \mathcal{P}_{k, \ell}\bigg) \end{equation*} for $r, r'\geq 1$ and $1\leq s,s'\leq p$, with sums taken to be empty if the lower bound exceeds the upper bound. \end{enumerate} \end{thm} The proof of Theorem \ref{thm:main_thm} begins in Section \ref{sec:first_fus}, where we largely determine which composition factors of the tensor products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ show up in the lowest conformal weight spaces of the tensor product modules. To do so, we use the Zhu algebra approach developed in \cite{FZ1, Li, FZ2, HY}, among other references, but our calculations also resemble those done by physicists to compute fusion products using the Nahm-Gaberdiel-Kausch algorithm \cite{Na, GaK}. See \cite{KR} for a comparison of mathematicians' and physicists' approaches to fusion products; note that our work in Section \ref{sec:first_fus} as well as later in Proposition \ref{prop:P1_structure} recovers (in greater generality and more systematically) the results of the sample calculations in \cite[Sections 7 and 8]{KR}. To fully determine tensor products in $\mathcal{O}_c$, we use rigidity. To prove that $\mathcal{O}_{c}$ is rigid, we first prove that $\mathcal{L}_{1,2}$ is rigid (and self-dual) using explicit formulas for compositions of intertwining operators, obtained from solutions to Belavin-Polyakov-Zamolodchikov equations (Theorem \ref{rigidityofl12}); the method is the same as in \cite{TW} for the triplet algebras and in \cite{CMY2} for the singlet algebras. Next, the modules $\mathcal{L}_{r,1}$, $r\geq 1$, are the irreducible $V_c$-modules appearing in the decomposition of the doublet abelian intertwining algebra \cite{AM_doub} as a $V_c$-module. As $V_c$ is an $SU(2)$-fixed point subalgebra of the doublet, results in \cite{McR} show that the modules $\mathcal{L}_{r,1}$ generate a tensor subcategory of $\mathcal{O}_c$ that is braided tensor equivalent to an abelian $3$-cocycle twist of $\rep SU(2)$ (Theorem \ref{thm:Lr1_fus_rules}). Consequently, these $V_c$-modules are rigid. Once we know that the modules $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r,1}$ are rigid, we can compute tensor products involving these modules using the preliminary results of Section \ref{sec:first_fus}. We show that all remaining irreducible modules in $\mathcal{O}_c$ occur as direct summands in repeated tensor products of the rigid modules $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r,1}$, and thus are rigid. Finally, we use \cite[Theorem~4.4.1]{CMY2} to extend rigidity from irreducible modules to all finite-length modules in $\mathcal{O}_c$. The modules $\mathcal{L}_{r,s}$ do not have projective covers in the full category $\mathcal{O}_{c}$ since their associated Verma modules have infinite length. Thus to obtain projective covers, it is indeed necessary to introduce the tensor subcategory $\mathcal{O}_c^0$, which contains all irreducible modules in $\mathcal{O}_c$. 
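For illustration, consider the smallest case $p=2$ (so $c=-2$) and apply the formula of Theorem \ref{thm:main_thm}(3) to $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}$. Since $r=r'=1$, only $k=1$ contributes; the first inner sum is empty because $\min(s+s'-1,2p-1-s-s')=\min(3,-1)$ is smaller than $\vert s-s'\vert+1=1$; and the second inner sum contains only $\ell=2p+1-s-s'=1$. Thus
\begin{equation*}
\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}\cong\mathcal{P}_{1,1},
\end{equation*}
in agreement with the fusion conjecture of \cite{GaK}: even this simplest tensor square is a non-semisimple projective module, and so already requires the subcategory $\mathcal{O}_c^0$ in which the projective covers live.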
We can define $\mathcal{O}_c^0$ in several ways: it turns out to be the tensor subcategory of $\mathcal{O}_c$ (closed under tensor products and subquotients) generated by $\mathcal{L}_{1,2}$, but it is more useful to define $\mathcal{O}_c^0$ as the M\"{u}ger centralizer of the semisimple subcategory of $\mathcal{O}_c$ that has simple objects $\mathcal{L}_{2n+1,1}$, $n\in\mathbb{N}$. Equivalently, this is the subcategory of modules in $\mathcal{O}_c$ that induce to ordinary modules for the triplet vertex operator algebra $\mathcal{W}(p)$, an infinite-order extension of $V_c$. In $\mathcal{O}_c^0$, the irreducible modules $\mathcal{L}_{r,p}$ are already projective (Theorem \ref{projoflrp}), and then we construct length-$3$ projective covers $\mathcal{P}_{1,s}$ from $\mathcal{L}_{1,p}$ recursively (Theorem \ref{thm:P1s_structure}), using the methods of \cite[Section 5.1]{CMY2}. Finally, we show that $\mathcal{P}_{r,s}=\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,s}$ is a length-$4$ projective cover of $\mathcal{L}_{r,s}$ for $r\geq 2$ (Theorem \ref{thm:Prs_structure}). After constructing all projective covers, we complete the proof of the tensor product formula in Theorem \ref{thm:main_thm}(3), and we also determine all tensor products of the projective modules with irreducible modules and with each other (see the details in Theorem \ref{generalfusionrules}). In Section \ref{subsec:ss}, we investigate relations between $\mathcal{O}_c$ and representations of the affine Lie algebra $\widehat{\mathfrak{sl}}_2$ at levels $-2+p^{\pm1}$ (note that $V_c$ is the $W$-algebra obtained via quantum Drinfeld-Sokolov reduction from the universal affine vertex operator algebras for $\mathfrak{sl}_2$ at both levels \cite{FFr}; see also \cite[Chapter 15]{FB}). First, the tensor product formulas of Theorem \ref{generalfusionrules} show that $\mathcal{O}_c$ has a semisimplification which is a ribbon category with simple objects $\mathcal{L}_{r,s}$ for $r \geq 1$ and $1 \leq s \leq p-1$. As an abelian category, the semisimplification is the Deligne product of two subcategories: $\mathcal{O}_{c}^L$ containing the modules $\mathcal{L}_{r,1}$ for $r\geq 1$, and $\mathcal{O}_c^R$ containing the modules $\mathcal{L}_{1,s}$ for $1\leq s\leq p-1$. We then use \cite{ACGY} to show that $\mathcal{O}_{c}^L$ is braided tensor equivalent to the Kazhdan-Lusztig category $KL_{-2+1/p}(\mathfrak{sl}_2)$ of $\widehat{\mathfrak{sl}}_2$-modules at level $-2+p^{-1}$, while we use the main theorem of \cite{KW} to show that $\mathcal{O}_c^R$ is tensor equivalent to the $\widehat{\mathfrak{sl}}_2$-module category $KL_{-2+p}(\mathfrak{sl}_2)$. Note that $KL_{-2+p}(\mathfrak{sl}_2)$ is a modular tensor category since the simple affine vertex operator algebra of $\mathfrak{sl}_2$ at level $-2+p$ is rational and $C_2$-cofinite. The corresponding universal affine vertex operator algebra, however, has a non-semisimple $C_1$-cofinite module category; it would be interesting to see if this category bears any relation to the non-semisimple Virasoro category $\mathcal{O}_c$. There is in fact a conjectured Kazhdan-Lusztig-type tensor equivalence between $\mathcal{O}_c$, or rather $\mathcal{O}_c^0$, and a module category for the Lusztig limit of quantum $\mathfrak{sl}_2$ at the root of unity $e^{\pi i/p}$ \cite{BFGT, BGT}; see also \cite[Conjecture 11.4]{Ne} for a reformulation of this conjecture. 
As explained in \cite[Proposition 11.8]{Ne}, this conjecture is tied to the conjectured Kazhdan-Lusztig correspondence between the triplet algebra extension of $V_c$ and the restricted quantum group of $\mathfrak{sl}_2$ \cite{FGST1}; see also \cite{CGR} for a more precise conjecture. We conclude this paper by applying our results, together with the vertex operator algebra extension theory of \cite{HKL, CKM, CMY1}, to the triplet vertex operator algebra extension $\mathcal{W}(p)\supseteq V_c$. Using the rigid tensor category structure on $\mathcal{O}_c$, we can rather quickly derive rigidity of the tensor category $\mathcal{C}_{\mathcal{W}(p)}$ of $\mathcal{W}(p)$-modules, tensor product formulas in $\mathcal{C}_{\mathcal{W}(p)}$, and a construction of the projective covers of irreducible $\mathcal{W}(p)$-modules. The only properties of $\mathcal{W}(p)$ that we need come from \cite{AM_trip}: the classification of irreducible $\mathcal{W}(p)$-modules and their decompositions as direct sums of $V_c$-modules, as well as some of the structure of the Zhu algebra of $\mathcal{W}(p)$. Our results on $\mathcal{W}(p)$ recover those obtained in \cite{AM_log_mods, NT, TW}. Our tensor-categorical approach especially provides an alternative to the technical construction of projective covers for irreducible $\mathcal{W}(p)$-modules outlined in \cite{NT}. Note that since every vertex operator algebra has a built-in Virasoro subalgebra, vertex operator algebra extension techniques could be used to study the modules for many other vertex operator algebras. For example, the results on singlet algebras recently obtained in \cite{CMY2} could also be recovered from the structure of $\mathcal{O}_c$. Finally, we use our results together with ideas from \cite{McR2} to prove a precise relationship conjectured in \cite[Conjecture 11.6]{Ne} between the tensor categories $\mathcal{C}_{\mathcal{W}(p)}$ and $\mathcal{O}_c^0$. It was shown in \cite{ALM} that the full automorphism group of $\mathcal{C}_{\mathcal{W}(p)}$ is $PSL(2,\mathbb{C})$, with fixed-point subalgebra $V_c$. Consequently, there is a braided tensor category $(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$, called the equivariantization of $\mathcal{C}_{\mathcal{W}(p)}$, whose objects are $\mathcal{W}(p)$-modules equipped with a suitably compatible $PSL(2,\mathbb{C})$-action. Then an easy extension of \cite[Theorem 4.17]{McR2} (which was proved in a finite group setting) shows that there is a braided tensor equivalence from $\mathcal{O}_c^0$ to $(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$ given by induction. We remark that essentially the same proof shows that if $T^\vee\subseteq PSL(2,\mathbb{C})$ is the one-dimensional torus, then the $T^\vee$-equivariantization of $\mathcal{C}_{\mathcal{W}(p)}$ is braided tensor equivalent to the category $\mathcal{C}_{\mathcal{M}(p)}^0$ of modules for the singlet vertex operator algebra $\mathcal{M}(p)$ that was studied in \cite{CMY2}. Such a relationship had also been conjectured in \cite[Conjecture 11.6]{Ne}. We plan to explore the tensor structure of $\mathcal{O}_c$ for other central charges in future work. The remaining unsolved cases are the universal Virasoro vertex operator algebra at central charge $c_{p,q}$ and the simple Virasoro vertex operator algebra at central charge $c_t=13-6t-6t^{-1}$ for $t=-\frac{p}{q}$ a negative rational number. 
For $c_{p,q}$, the universal Virasoro vertex operator algebra is neither simple nor self-contragredient and thus the braided tensor category $\mathcal{O}_{c_{p,q}}$ will be poorly behaved. For example, it will not be rigid because tensor products of non-zero modules in $\mathcal{O}_{c_{p,q}}$ can be zero. However, we expect $\mathcal{O}_{c_t}$ for $t=-\frac{p}{q}$ to be rigid and quite interesting, and we expect $V_{c_t}$ to admit large conformal vertex algebra extensions analogous to the triplet $W$-algebras. These categories $\mathcal{O}_{c_t}$ will be subjects of forthcoming papers. \vspace{5mm} \noindent {\bf Acknowledgements.} We would like to thank Thomas Creutzig for many useful discussions. JY also thanks Florencia Orosz Hunziker for discussions on the Virasoro algebra. \section{Preliminaries} In this section we collect some results on the representation theory of the Virasoro Lie algebra, and on intertwining operators among modules for a vertex operator algebra. \subsection{The Virasoro algebra} Let $\mathcal{V} ir$ denote the Virasoro Lie algebra with basis $\lbrace L_n\,\vert\,n\in\mathbb{Z}\rbrace\cup\lbrace\mathbf{c}\rbrace$ with $\mathbf{c}$ central and commutation relations \begin{equation*} [L_m,L_n]=(m-n)L_{m+n}+\frac{m^3-m}{12}\delta_{m+n,0}\mathbf{c}. \end{equation*} We will sometimes use the decomposition $\mathcal{V} ir=\mathcal{V} ir_-\oplus\mathcal{V} ir_{\geq 0}$, where \begin{equation*} \mathcal{V} ir_-=\mathrm{span}\lbrace L_n\,\vert\, n<0\rbrace,\qquad\mathcal{V} ir_{\geq 0} =\mathrm{span}\lbrace L_n,\mathbf{c}\,\vert\,n\geq 0\rbrace. \end{equation*} For any vector space $\mathcal{U}$ on which $L_0$ and $\mathbf{c}$ act by commuting operators, $\mathcal{U}$ extends to a $\mathcal{V} ir_{\geq 0}$-module on which $L_n$ acts by zero for $n>0$, and then we can form the induced module $\mathrm{Ind}_{\mathcal{V} ir_{\geq 0}}^{\mathcal{V} ir} \mathcal{U}$. In particular, for any central charge $c\in\mathbb{C}$ and conformal dimension $h\in\mathbb{C}$, the one-dimensional $\mathcal{V} ir_{\geq 0}$-module $\mathbb{C}_{c,h}$ on which $\mathbf{c}$ acts by $c$ and $L_0$ acts by $h$ induces to the Verma module $V(c,h)=\mathrm{Ind}_{\mathcal{V} ir_{\geq 0}}^{\mathcal{V} ir} \mathbb{C}_{c,h}$. Every Verma module $V(c,h)$ has a unique irreducible quotient $L(c,h)$. For a central charge $c\in\mathbb{C}$, we define $V_c$ to be the quotient of the Verma module $V(c,0)$ (induced from $\mathbb{C}_{c,0}=\mathbb{C}\mathbf{1}$) by the submodule generated by the singular vector $L_{-1}\mathbf{1}$. By \cite{FZ1}, $V_c$ is a vertex operator algebra in the sense of \cite{LL}. Moreover, every $\mathcal{V} ir$-module $\mathcal{W}$ that is suitably graded by generalized $L_0$-eigenvalues is a grading-restricted generalized $V_c$-module. Specifically, we require a grading $\mathcal{W}=\bigoplus_{h\in\mathbb{C}} \mathcal{W}_{[h]}$ such that: \begin{enumerate} \item $\mathcal{W}_{[h]}$ is the generalized $L_0$-eigenspace with generalized eigenvalue $h$, \item $\dim\mathcal{W}_{[h]}<\infty$ for all $h\in\mathbb{C}$, and \item For any $h\in\mathbb{C}$, $\mathcal{W}_{[h+n]}=0$ for $n\in\mathbb{Z}$ sufficiently negative. \end{enumerate} The irreducible modules $L(c,h)$ for $h\in\mathbb{C}$ comprise all irreducible $V_c$-modules. 
We are interested, however, in the category $\mathcal{O}_c$ of $C_1$-cofinite grading-restricted generalized $V_c$-modules: by \cite{CJORY} this is the category of finite-length $\mathcal{V} ir$-modules at central charge $c$ whose composition factors are irreducible quotients of reducible Verma modules. (In particular, irreducible Verma modules are not $C_1$-cofinite.) Writing the central charge as $c=13-6t-6t^{-1}$ for some $t\in\mathbb{C}\setminus\lbrace 0\rbrace$, the Feigin-Fuchs criterion for the existence of singular vectors in Verma modules \cite{FF} implies that $\mathcal{O}_c$ contains all irreducible modules $\mathcal{L}_{r,s}=L(c,h_{r,s})$ for $r,s\in\mathbb{Z}_+$, where \begin{equation*} h_{r,s}:=\frac{r^2-1}{4} t-\frac{rs-1}{2}+\frac{s^2-1}{4} t^{-1} = \frac{(tr-s)^2}{4t}-\frac{(t-1)^2}{4t}. \end{equation*} Moreover, every irreducible module in $\mathcal{O}_c$ is isomorphic to $L(c,h_{r,s})$ for some $r,s\in\mathbb{Z}$ (see \cite[Section 5.3]{IK} for a full description of the irreducible modules in $\mathcal{O}_c$ for general central charges). For any $r,s\in\mathbb{Z}$, we use $\mathcal{V}_{r,s}$ to denote the Verma module $V(c,h_{r,s})$. It was established in \cite{CJORY} that for any central charge $c$, the category $\mathcal{O}_c$ of $V_c$-modules admits the vertex algebraic braided tensor category structure of \cite{HLZ1}-\cite{HLZ8}. In this work, we are mainly concerned with central charges $c_{p,1}=13-6p-6p^{-1}$ for integers $p > 1$. At these central charges, we can use the conformal weight symmetries $h_{r,s+p}=h_{r-1,s}$ and $h_{r,s}=h_{-r,-s}$ for $r,s\in\mathbb{Z}$ to show that any irreducible module in $\mathcal{O}_{c_{p,1}}$ is isomorphic to a unique $\mathcal{L}_{r,s}$ with $r\geq 1$ and $1\leq s\leq p$. Then we have the following embedding diagrams involving the Verma modules $\mathcal{V}_{r,s}$ (see for example \cite[Section 5.3]{IK}): \begin{enumerate} \item When $1\leq s\leq p-1$, we have the diagram \begin{equation*} \mathcal{V}_{1,s} \longleftarrow \mathcal{V}_{2,p-s} \longleftarrow \mathcal{V}_{3,s} \longleftarrow \mathcal{V}_{4,p-s} \longleftarrow \cdots \end{equation*} In particular, the maximal proper submodule of $\mathcal{V}_{r,s}$ is $\mathcal{V}_{r+1,p-s}$ when $r\geq 1$ and $1\leq s\leq p-1$. \item When $s=p$, we have the diagram \begin{equation*} \mathcal{V}_{i,p} \longleftarrow \mathcal{V}_{i+2,p} \longleftarrow \mathcal{V}_{i+4,p} \longleftarrow \mathcal{V}_{i+6,p} \longleftarrow \cdots \end{equation*} for $i=1,2$. In particular, the maximal proper submodule of $\mathcal{V}_{r,p}$ is $\mathcal{V}_{r+2,p}$ when $r\geq 1$. \end{enumerate} Note that the maximal proper submodule of $\mathcal{V}_{1,1}$ is a Verma module generated by a singular vector of degree $1$, so $V_c\cong\mathcal{L}_{1,1}$ as a $V_c$-module at the central charges we are considering. In particular, $V_c$ is a simple (and self-contragredient) vertex operator algebra. In addition to Verma modules, we will sometimes need to work with their contragredients $\mathcal{V}_{r,s}'$. Since irreducible Virasoro modules are self-contragredient, the surjections $\mathcal{V}_{r,s}\rightarrow\mathcal{L}_{r,s}$ dualize to injections $\mathcal{L}_{r,s}\rightarrow\mathcal{V}_{r,s}'$. In particular, $\mathcal{L}_{r,s}$ is the $V_c$-submodule of $\mathcal{V}_{r,s}'$ generated by the lowest conformal weight space. 
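To make these conventions concrete, take $p=2$, that is, $t=2$ and $c=-2$. Then
\begin{equation*}
h_{r,s}=\frac{(2r-s)^2-1}{8},
\end{equation*}
so for instance $h_{1,1}=0$, $h_{1,2}=-\frac{1}{8}$, $h_{2,1}=1$, and $h_{2,2}=\frac{3}{8}$. The symmetry $h_{r,s+p}=h_{r-1,s}$ is visible in, for example, $h_{1,3}=0=h_{0,1}$, and the first embedding diagram above specializes to
\begin{equation*}
\mathcal{V}_{1,1} \longleftarrow \mathcal{V}_{2,1} \longleftarrow \mathcal{V}_{3,1} \longleftarrow \cdots,
\end{equation*}
consistent with the singular vector $L_{-1}\mathbf{1}$ of conformal weight $h_{2,1}=1$ in $\mathcal{V}_{1,1}$.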
\subsection{Intertwining operators among modules for a vertex operator algebra} We recall the definition of (logarithmic) intertwining operator among a triple of modules for a vertex operator algebra $V$ from \cite{HLZ2}: \begin{defi} Suppose $W_1$, $W_2$, and $W_3$ are grading-restricted generalized $V$-modules. An \textit{intertwining operator} of type $\binom{W_3}{W_1\,W_2}$ is a linear map \begin{align*} \mathcal{Y}: W_1\otimes W_2 & \rightarrow W_3[\log x]\lbrace x\rbrace\nonumber\\ w_1\otimes w_2 & \mapsto \mathcal{Y}(w_1,x)w_2=\sum_{h\in\mathbb{C}}\sum_{k\in\mathbb{N}} (w_1)_{h,k} w_2\,x^{-h-1}(\log x)^k \end{align*} which satisfies the following properties: \begin{enumerate} \item \textit{Lower truncation}: For any $w_1\in W_1$, $w_2\in W_2$, and $h\in\mathbb{C}$, $(w_1)_{h+n,k} w_2 =0$ for $n\in\mathbb{Z}$ sufficiently large, independently of $k$. \item The \textit{Jacobi identity}: For $v\in V$ and $w_1\in W_1$, \begin{align*} x_0^{-1}\delta\left(\frac{x_1-x_2}{x_0}\right) Y_{W_3}(v,x_1)\mathcal{Y}(w_1,x_2) & - x_0^{-1}\delta\left(\frac{-x_2+x_1}{x_0}\right)\mathcal{Y}(w_1,x_2)Y_{W_2}(v,x_1)\nonumber\\ & = x_1^{-1}\delta\left(\frac{x_2+x_0}{x_1}\right)\mathcal{Y}(Y_{W_1}(v,x_0)w_1,x_2). \end{align*} \item The \textit{$L_{-1}$-derivative property}: For $w_1\in W_1$, \begin{equation*} \mathcal{Y}(L_{-1} w_1,x)=\dfrac{d}{dx}\mathcal{Y}(w_1,x). \end{equation*} \end{enumerate} \end{defi} We will need two consequences of the Jacobi identity. Extracting the coefficient of $x_0^{-1} x_1^{-n-1}$ in the Jacobi identity yields the \textit{commutator formula} \begin{equation}\label{eqn:gen_comm_form} v_n\mathcal{Y}(w_1,x) = \mathcal{Y}(w_1,x)v_n+\sum_{i\geq 0} \binom{n}{i} x^{n-i}\mathcal{Y}(v_i w_1,x); \end{equation} in the special case that $v$ is the conformal vector $\omega$, this means \begin{equation}\label{eqn:Vir_comm_form} L_n\mathcal{Y}(w_1,x) =\mathcal{Y}(w_1,x)L_n+\sum_{i\geq 0}\binom{n+1}{i} x^{n+1-i}\mathcal{Y}(L_{i-1} w_1,x). \end{equation} Similarly, extracting the coefficient of $x_0^{-n-1} x_1^{-1}$ yields the \textit{iterate formula} \begin{align}\label{eqn:gen_it_form} \mathcal{Y}(v_n w_1,x) =\sum_{i\geq 0} (-1)^i\binom{n}{i}\left(v_{n-i}\, x^i\mathcal{Y}(w_1,x) -(-1)^n x^{n-i}\mathcal{Y}(w_1,x)v_i\right); \end{align} in the special case $v=\omega$ we have \begin{align}\label{eqn:Vir_it_form} \mathcal{Y}(L_n w_1,x) =\sum_{i\geq 0} (-1)^i\binom{n+1}{i}\left( L_{n-i}\,x^i\mathcal{Y}(w_1,x)+(-1)^{n} x^{n+1-i}\mathcal{Y}(w_1,x)L_{i-1}\right). \end{align} For grading-restricted generalized $V$-modules $W_1$, $W_2$, $W_3$, we say that an intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$ is \textit{surjective} if \begin{equation*} W_3=\mathrm{span}\lbrace (w_1)_{h,k} w_2\,\vert\,w_1\in W_1, w_2\in W_2, h\in\mathbb{C}, k\in\mathbb{N}\rbrace. \end{equation*} Actually, we can reduce the spanning set for the image of an intertwining operator somewhat: \begin{lem}\label{lem:intw_op_surjectivity} Let $W_1$, $W_2$, and $W_3$ be grading-restricted generalized $V$-modules. An intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$ is surjective if and only if \begin{equation*} W_3 =\mathrm{span}\lbrace (w_1)_{h,0} w_2\,\vert\,w_1\in W_1, w_2\in W_2, h\in\mathbb{C}\rbrace. \end{equation*} \end{lem} \begin{proof} We just need to show that all $(w_1)_{h,k} w_2$ for $k\in\mathbb{N}$ are contained in the span of the vectors $(w_1)_{h,0} w_2$ for $w_1\in W_1$, $w_2\in W_2$, and $h\in\mathbb{C}$.
Using the $L_{-1}$-derivative property, \begin{align*} \mathcal{Y}(L_{-1}w_1,x)w_2 & = \frac{d}{dx}\sum_{h\in\mathbb{C}}\sum_{k\in\mathbb{N}} (w_1)_{h,k} w_2\,x^{-h-1}(\log x)^k \nonumber\\ & =\sum_{h\in\mathbb{C}}\sum_{k\in\mathbb{N}} (w_1)_{h,k} w_2\,x^{-h-2}\left(k(\log x)^{k-1}-(h+1)(\log x)^k\right). \end{align*} From this we see that \begin{equation*} (w_1)_{h,k+1} w_2 =\frac{1}{k+1}\left((h+1)(w_1)_{h,k} w_2+(L_{-1}w_1)_{h+1,k} w_2\right), \end{equation*} so that \begin{equation*} (w_1)_{h,k} w_2\in\mathrm{span}\lbrace (w_1)_{h,0} w_2\,\vert\,w_1\in W_1,\,w_2\in W_2, h\in\mathbb{C}\rbrace \end{equation*} for all $k\in\mathbb{N}$ follows by induction on $k$. \end{proof} Associated to any intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$, we have an \textit{intertwining map} \begin{equation*} I: W_1\otimes W_2\rightarrow\overline{W}_3 =\prod_{h\in\mathbb{C}} (W_3)_{[h]} \end{equation*} defined by \begin{equation*} I(w_1\otimes w_2) =\mathcal{Y}(w_1,1)w_2 \end{equation*} for $w_1\in W_1$, $w_2\in W_2$, where we realize the substitution $x\mapsto 1$ using the real-valued branch of logarithm $\ln 1=0$. In particular, for generalized $L_0$-eigenvectors $w_1\in W_1$ and $w_2\in W_2$, the coefficients $(w_1)_{h,0} w_2$ are simply the projections of $I(w_1\otimes w_2)$ to the conformal weight spaces of $W_3$. Thus we get the following corollary of Lemma \ref{lem:intw_op_surjectivity}: \begin{cor}\label{cor:intw_op_surjectivity} Let $W_1$, $W_2$, and $W_3$ be grading-restricted generalized $V$-modules. An intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$ is surjective if and only if $W_3$ is spanned by projections of vectors $\mathcal{Y}(w_1,1)w_2$ for $w_1\in W_1$, $w_2\in W_2$ to the conformal weight spaces of $W_3$. \end{cor} In \cite{HLZ3}, tensor products of $V$-modules are defined in terms of intertwining maps; they can be defined equivalently in terms of intertwining operators: \begin{defi} Let $\mathcal{C}$ be a category of grading-restricted generalized $V$-modules containing $W_1$ and $W_2$. A \textit{tensor product} of $W_1$ and $W_2$ in $\mathcal{C}$ is a pair $(W_1\boxtimes W_2,\mathcal{Y}_\boxtimes)$, with $W_1\boxtimes W_2$ a module in $\mathcal{C}$ and $\mathcal{Y}_\boxtimes$ an intertwining operator of type $\binom{W_1\boxtimes W_2}{W_1\,W_2}$, which satisfies the following universal property: For any module $W_3$ in $\mathcal{C}$ and intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$, there is a unique $V$-module homomorphism $f: W_1\boxtimes W_2\rightarrow W_3$ such that $\mathcal{Y}=f\circ\mathcal{Y}_\boxtimes$. \end{defi} If the tensor product $(W_1\boxtimes W_2, \mathcal{Y}_\boxtimes)$ exists, then the tensor product intertwining operator $\mathcal{Y}_\boxtimes$ is surjective \cite[Proposition 4.23]{HLZ3}. In \cite{HLZ1}-\cite{HLZ8}, it was shown under suitable conditions, such as closure under tensor products, that $V$-module categories $\mathcal{C}$ have braided tensor category structure. In \cite{CJORY}, it was shown that these conditions are satisfied by the category $\mathcal{O}_c$ of $C_1$-cofinite grading-restricted generalized modules for the Virasoro vertex operator algebra $V_c$ at any central charge $c$.
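\begin{rem} As with any universal property, a tensor product of $W_1$ and $W_2$ is unique up to unique isomorphism when it exists: if $(W,\mathcal{Y}_W)$ and $(\widetilde{W},\mathcal{Y}_{\widetilde{W}})$ both satisfy the definition, then the induced homomorphisms $f: W\rightarrow\widetilde{W}$ and $g: \widetilde{W}\rightarrow W$ satisfy $\mathcal{Y}_W=(g\circ f)\circ\mathcal{Y}_W$ and $\mathcal{Y}_{\widetilde{W}}=(f\circ g)\circ\mathcal{Y}_{\widetilde{W}}$, so the uniqueness assertion in the universal property forces $g\circ f=\mathrm{Id}_{W}$ and $f\circ g=\mathrm{Id}_{\widetilde{W}}$. \end{rem}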
For a detailed description of the braided tensor category structure on categories such as $\mathcal{O}_c$, in particular a description of the left and right unit isomorphisms $l$ and $r$, the associativity isomorphisms $\mathcal{A}$, and the braiding isomorphisms $\mathcal{R}$, see \cite{HLZ8} or the exposition in \cite[Section 3.3]{CKM}. \subsection{Zhu algebra construction of intertwining operators} Let $V$ be a vertex operator algebra with grading-restricted generalized modules $W_1$, $W_2$, and $W_3$. The \textit{fusion rule} $\mathcal{N}^{W_3}_{W_1, W_2}$ is the dimension of the space of intertwining operators of type $\binom{W_3}{W_1\,W_2}$. Here, we recall some general results on constructing intertwining operators and determining fusion rules using the Zhu algebra approach developed in \cite{FZ1, Li, FZ2, HY}, among other references. To start, consider a grading-restricted generalized $V$-module $W=\bigoplus_{h\in\mathbb{C}} W_{[h]}$. If we take $I$ to be the set of cosets in $\mathbb{C}/\mathbb{Z}$ such that for $i\in I$, $W_{[h]}\neq 0$ for some $h\in i$, then \begin{equation}\label{eqn:W_decomp} W=\bigoplus_{i\in I} \bigoplus_{n=0}^\infty W_{[h_i+n]} \end{equation} with $h_i$ the minimal conformal weight occurring in the coset $i$. Each $W_i=\bigoplus_{n=0}^\infty W_{[h_i+n]}$ is a $V$-submodule of $W$, so that $\vert I\vert =1$ if $W$ is non-zero and indecomposable, and $\vert I\vert$ is finite if $W$ is finitely generated. The decomposition \eqref{eqn:W_decomp} implies that $W$ has an $\mathbb{N}$-grading $W=\bigoplus_{n=0}^\infty W(n)$, given by \begin{equation*} W(n)=\bigoplus_{i\in I} W_{[h_i+n]}, \end{equation*} such that \begin{equation}\label{eqn:N-grading_cond} v_m\cdot W(n)\subseteq W(\deg v+n-m-1) \end{equation} for $v\in V$, $m\in\mathbb{Z}$, and $n\in\mathbb{N}$. Although this need not be the unique $\mathbb{N}$-grading such that \eqref{eqn:N-grading_cond} holds, we shall always use this particular $\mathbb{N}$-grading for grading-restricted generalized $V$-modules unless specified otherwise. If $W$ is finitely generated, so that $\vert I\vert<\infty$, then each $W(n)$ is the direct sum of finitely many generalized $L_0$-eigenspaces. In this case, we have well-defined projection maps \begin{equation*} \pi_n: \overline{W}=\prod_{h\in\mathbb{C}} W_{[h]}\rightarrow W(n) \end{equation*} for each $n\in\mathbb{N}$. Now suppose $W_1$, $W_2$, and $W_3$ are three grading-restricted generalized $V$-modules such that $W_3$ is finitely generated (to guarantee that the projection map $\pi_0: \overline{W}_3\rightarrow W_3(0)$ exists). Let $A(V)$ denote the Zhu algebra of $V$ defined in \cite{Zh} and let $A(W_1)$ denote the $A(V)$-bimodule defined in \cite{FZ1}. The degree-$0$ subspaces $W_2(0)$ and $W_3(0)$ are left $A(V)$-modules \cite{Zh}. Now for any intertwining operator $\mathcal{Y}$ of type $\binom{W_3}{W_1\,W_2}$, the following $A(V)$-module map was first constructed in \cite{FZ1}: \begin{align*} \pi(\mathcal{Y}): A(W_1) \otimes_{A(V)} W_2(0) &\longrightarrow W_3(0)\nonumber\\ [w_1]\otimes u_2 & \longmapsto \pi_0\left(\mathcal{Y}(w_1,1)u_2\right), \end{align*} where $[w_1]$ is the image of $w_1\in W_1$ in $A(W_1)$ and $\mathcal{Y}(\cdot,1)\cdot$ is the intertwining map associated to $\mathcal{Y}$. 
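\begin{rem} We recall from \cite{Zh} how the left $A(V)$-module structure on $W(0)$ is realized by zero-modes: for homogeneous $v\in V$, the mode $o(v):=v_{\deg v-1}$ preserves each subspace $W(n)$ by \eqref{eqn:N-grading_cond}, and $[v]\in A(V)$ acts on $W(0)$ as $o(v)$. For the Virasoro vertex operator algebras considered below, this means in particular that $[\omega]$ acts on $W(0)$ by $o(\omega)=\omega_1=L_0$. \end{rem}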
The next proposition is essentially a version of \cite[Proposition 24]{TW}, where the result is attributed to Nahm \cite{Na}: \begin{prop}\label{prop:piY_surjective} Assume that $W_1$, $W_2$, and $W_3$ are grading-restricted generalized $V$-modules such that $W_2$ is generated by $W_2(0)$ as a $V$-module and $W_3$ is finitely generated. If $\mathcal{Y}$ is a surjective intertwining operator, then $\pi(\mathcal{Y})$ is surjective. \end{prop} \begin{proof} Since $\mathcal{Y}$ is surjective, Corollary \ref{cor:intw_op_surjectivity} says that $W_3(0)$ is spanned by $\pi_0\left(\mathcal{Y}(w_1,1)w_2\right)$ for $w_1\in W_1$ and $w_2\in W_2$. Thus we need to show that \begin{equation*} \pi_0\left(\mathcal{Y}(w_1,1)w_2\right)\in \im\pi(\mathcal{Y}) \end{equation*} for any $w_1\in W_1$, $w_2\in W_2$. This holds by definition for $w_2\in W_2(0)$. For $w_2\in\bigoplus_{n\geq 1} W_2(n)$, we note that because $W_2(0)$ generates $W_2$ as a $V$-module, $w_2$ is a linear combination of vectors $v_n u_2$ for $u_2\in W_2(0)$, homogeneous $v\in V$, and $n\in\mathbb{Z}$ such that $\deg v-n-1> 0$ (see \cite[Proposition 4.5.6]{LL}). The commutator formula \eqref{eqn:gen_comm_form} then implies that for any $w_1\in W_1$, \begin{align*} \pi_0\left(\mathcal{Y}(w_1,1)v_n u_2\right) &=\pi_0\bigg(v_n\mathcal{Y}(w_1,1)u_2-\sum_{i\geq 0}\binom{n}{i} \mathcal{Y}(v_i w_1,1)u_2\bigg)\\ & = -\sum_{i\geq 0}\binom{n}{i}\pi_0(\mathcal{Y}(v_i w_1,1)u_2)\in \im\pi(\mathcal{Y}) \end{align*} since $\deg v_n>0$. This proves the proposition. \end{proof} Note that $\mathcal{Y}\mapsto\pi(\mathcal{Y})$ defines a linear map from intertwining operators of type $\binom{W_3}{W_1\,W_2}$ to $\hom_{A(V)}(A(W_1)\otimes_{A(V)} W_2(0), W_3(0))$. The main theorem of \cite{Li} (generalized to logarithmic intertwining operators in \cite{HY}) is that this linear map is an isomorphism under suitable conditions on $W_1$, $W_2$, and $W_3$. For simplicity, we will describe these conditions only when $V$ is a Virasoro vertex operator algebra $V_c$, in which case we have an isomorphism $A(V_c)\cong\mathbb{C}[x]$ given by $[\omega]\mapsto x$ \cite{FZ1}. Any $\mathbb{C}[x]$-module $\mathcal{U}$ is equivalently an $A(V_c)$-module, which is equivalently a $\mathcal{V} ir_{\geq 0}$-module on which $L_0$ acts by $x$ and $L_n$ acts by $0$ for $n>0$. We then have the induced generalized Verma module $\mathcal{V}=\mathrm{Ind}^{\mathcal{V} ir}_{\mathcal{V} ir_{\geq 0}} \mathcal{U}$. If $\mathcal{U}$ is finite dimensional, then we have an $A(V_c)\cong\mathbb{C}[x]$-module isomorphism $\mathcal{U}\cong\mathcal{U}^*$, so that the lowest conformal weight space $\mathcal{V}'(0)$ of the generalized Verma module contragredient is isomorphic to $\mathcal{U}$. Now the following theorem is the main result of \cite{Li, HY} for Virasoro vertex operator algebras (see also \cite[Lemma 2.19]{FZ2}): \begin{thm}\label{thm:Zhu_fus_rules} Suppose $\mathcal{W}_1$ is a grading-restricted generalized $V_c$-module generated by $\mathcal{W}_1(0)$ and $\mathcal{U}_2$, $\mathcal{U}_3$ are finite-dimensional $A(V_c)$-modules. Then $\mathcal{Y}\mapsto\pi(\mathcal{Y})$ defines a linear isomorphism from intertwining operators of type $\binom{\mathcal{V}_3'}{\mathcal{W}_1\,\mathcal{V}_2}$ to $\hom_{A(V_c)}(A(\mathcal{W}_1)\otimes_{A(V_c)} \mathcal{U}_2,\mathcal{U}_3)$, where $\mathcal{V}_i=\mathrm{Ind}^{\mathcal{V} ir}_{\mathcal{V} ir_{\geq 0}}\mathcal{U}_i$ for $i=2,3$. 
In particular, fusion rules satisfy \begin{equation*} \mathcal{N}_{\mathcal{W}_1,\mathcal{V}_2}^{\mathcal{V}_3'} =\dim \hom_{A(V_c)}(A(\mathcal{W}_1)\otimes_{A(V_c)} \mathcal{U}_2,\mathcal{U}_3). \end{equation*} \end{thm} \begin{rem}\label{rem:non_std_N_grad} In the preceding theorem, we need to define $\pi(\mathcal{Y})$ using the $\mathbb{N}$-grading on $\mathcal{V}_3'$ such that $\mathcal{V}_3'(0)=\mathcal{U}_3^*\cong\mathcal{U}_3$. This $\mathbb{N}$-grading will differ slightly from our usual $\mathbb{N}$-grading convention if $L_0$ has two eigenvalues on $\mathcal{U}_3$ that differ by a non-zero integer. \end{rem} \section{First results on Virasoro fusion}\label{sec:first_fus} In this section, our goal is to use Proposition \ref{prop:piY_surjective} to obtain upper bounds on tensor products of certain $V_c$-modules in $\mathcal{O}_c$, and to use Theorem \ref{thm:Zhu_fus_rules} to obtain lower bounds. At first, we consider arbitrary central charges, and then we specialize to the central charge $c_{p,1}$. \subsection{Results at general central charge} In this subsection, we assume $c=13-6t-6t^{-1}$ for any $t\in\mathbb{C}\setminus\lbrace 0\rbrace$. We want to see what Proposition \ref{prop:piY_surjective}, applied to the surjective tensor product intertwining operator $\mathcal{Y}_\boxtimes$, says about the tensor products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ and $\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{r,s}$ for $r,s\in\mathbb{Z}_+$. Thus we must first determine the $A(V_c)$-bimodules $A(\mathcal{L}_{1,2})$ and $A(\mathcal{L}_{2,1})$. This was done in \cite[Lemmas 2.10 and 2.11]{FZ2} under the assumption $t\notin\mathbb{Q}$; here, we review the calculations to confirm that the same results hold for general $t$. In this and the following sections, we use $v_{r,s}$ to denote a lowest-conformal-weight vector generating either $\mathcal{V}_{r,s}$ or one of its quotients, such as $\mathcal{L}_{r,s}$. We now compute $A(\mathcal{L}_{1,2})$, noting that $A(\mathcal{L}_{2,1})$ can be determined almost identically with the substitutions $v_{1,2}\mapsto v_{2,1}$, $h_{1,2}\mapsto h_{2,1}$, and $t^{-1}\mapsto t$. To begin, the isomorphism $A(V_c)\cong\mathbb{C}[x]$ corresponds to an isomorphism \begin{align*} \mathbb{C}[x,y] & \rightarrow A(\mathcal{V}_{1,2})\nonumber\\ x^m y^n & \mapsto [\omega]^m\cdot[v_{1,2}]\cdot[\omega]^n, \end{align*} where the left and right actions of $\mathbb{C}[x]$ on the bimodule $\mathbb{C}[x,y]$ are multiplication by $x$ and $y$, respectively, while the left and right actions of $A(V_c)$ on $A(\mathcal{V}_{1,2})$ are given by \begin{equation*} [\omega]\cdot[v] =[(L_0+2L_{-1}+L_{-2})v],\qquad[v]\cdot[\omega]=[(L_{-2}+L_{-1})v] \end{equation*} for $v\in\mathcal{V}_{1,2}$. Under this isomorphism, we can then identify $$A(\mathcal{L}_{1,2})\cong\mathbb{C}[x,y]/(f_{1,2}(x,y)),$$ where $f_{1,2}(x,y)$ is the polynomial corresponding to the singular vector $(L_{-1}^2-\frac{1}{t}L_{-2})v_{1,2}\in\mathcal{V}_{1,2}$ generating the maximal proper submodule of $\mathcal{V}_{1,2}$. To determine $f_{1,2}(x,y)$, we first note that for $v\in\mathcal{V}_{1,2}$, \begin{equation}\label{eqn:bimod_reln_2} [L_{-2} v]=[v]\cdot[\omega]-[L_{-1}v]. \end{equation} This together with \begin{align*} [\omega]\cdot[v] =(\mathrm{wt}\,v)[v]+2[L_{-1} v]+[L_{-2} v] \end{align*} implies \begin{equation}\label{eqn:bimod_reln} [L_{-1}v]=[\omega]\cdot[v]-[v]\cdot[\omega]-(\mathrm{wt}\,v)[v]. 
\end{equation} Consequently, \begin{align*} \bigg[\bigg(L_{-1}^2- & \frac{1}{t}L_{-2}\bigg)v_{1,2}\bigg] =[\omega]\cdot[L_{-1}v_{1,2}]-[L_{-1}v_{1,2}]\cdot[\omega]-(h_{1,2}+1)[L_{-1} v_{1,2}]\nonumber\\ &\qquad\qquad\qquad\qquad-\frac{1}{t}([v_{1,2}]\cdot[\omega]-[L_{-1}v_{1,2}])\nonumber\\ & =[\omega]\cdot\big([\omega]\cdot[v_{1,2}]-[v_{1,2}]\cdot[\omega]-h_{1,2}[v_{1,2}]\big) -\big([\omega]\cdot[v_{1,2}]-[v_{1,2}]\cdot[\omega]-h_{1,2}[v_{1,2}]\big)\cdot[\omega]\nonumber\\ &\qquad\qquad-(h_{1,2}+1)\big([\omega]\cdot[v_{1,2}]-[v_{1,2}]\cdot[\omega]-h_{1,2}[v_{1,2}]\big)\nonumber\\ & \qquad\qquad-\frac{1}{t}[v_{1,2}]\cdot[\omega]+\frac{1}{t}\big([\omega]\cdot[v_{1,2}]-[v_{1,2}]\cdot[\omega]-h_{1,2}[v_{1,2}]\big)\nonumber\\ & =[\omega]^2\cdot[v_{1,2}]-2[\omega]\cdot[v_{1,2}]\cdot[\omega]+[v_{1,2}]\cdot[\omega]^2-\left(2h_{1,2}+1-\frac{1}{t}\right)[\omega]\cdot[v_{1,2}]\nonumber\\ & \qquad\qquad + \left(2h_{1,2}+1-\frac{2}{t}\right)[v_{1,2}]\cdot[\omega]+h_{1,2}\left(h_{1,2}+1-\frac{1}{t}\right)[v_{1,2}]. \end{align*} This corresponds to the polynomial \begin{align*} f_{1,2}(x,y) & =x^2-2xy+y^2-\left(2h_{1,2}+1-\frac{1}{t}\right)x+\left(2h_{1,2}+1-\frac{2}{t}\right)y+h_{1,2}\left(h_{1,2}+1-\frac{1}{t}\right)\nonumber\\ & =\left(x-y-\left(h_{1,2}+1-\frac{1}{t}\right)\right)\left(x-y-h_{1,2}\right)-\frac{1}{t}y. \end{align*} We have now determined $A(\mathcal{L}_{1,2})$; similarly, we can use the singular vector $(L_{-1}^2-t\,L_{-2})v_{2,1}\in\mathcal{V}_{2,1}$ to show that \begin{equation*} A(\mathcal{L}_{2,1})\cong\mathbb{C}[x,y]/(f_{2,1}(x,y)) \end{equation*} where \begin{equation*} f_{2,1}(x,y)=\left(x-y-\left(h_{2,1}+1-t\right)\right)(x-y-h_{2,1})-t\,y. \end{equation*} Now it is easy to determine the $A(V_c)$-modules $\mathcal{M}_{r,s}=A(\mathcal{L}_{1,2})\otimes_{A(V_c)}\mathbb{C} v_{r,s}$ and $\mathcal{N}_{r,s}=A(\mathcal{L}_{2,1})\otimes_{A(V_c)}\mathbb{C} v_{r,s}$ for $r,s\in\mathbb{Z}_+$, where $\mathbb{C} v_{r,s}$ is both $\mathcal{V}_{r,s}(0)$ and $\mathcal{L}_{r,s}(0)$. We have \begin{align*} \mathcal{M}_{r,s} & \cong \mathbb{C}[x]/(f_{1,2}(x,h_{r,s})),\\ \mathcal{N}_{r,s} & \cong \mathbb{C}[x]/(f_{2,1}(x,h_{r,s})), \end{align*} where \begin{align*} f_{1,2}(x,h_{r,s}) &=\left(x-\left(h_{1,2}+h_{r,s}+1-\frac{1}{t}\right)\right)\left(x-(h_{1,2}+h_{r,s})\right)-\frac{h_{r,s}}{t}\\ &= (x-h_{r,s-1})(x-h_{r,s+1}),\nonumber\\ f_{2,1}(x,h_{r,s}) &=\left(x-\left(h_{2,1}+h_{r,s}+1-t\right)\right)\left(x-(h_{2,1}+h_{r,s})\right)-t\,h_{r,s}\\ &= (x-h_{r-1,s})(x-h_{r+1,s}). \end{align*} In other words, $L_0$ has eigenvalue(s) $h_{r,s\pm 1}$ on $\mathcal{M}_{r,s}$ and eigenvalue(s) $h_{r\pm 1,s}$ on $\mathcal{N}_{r,s}$. We can now apply Proposition \ref{prop:piY_surjective}: \begin{prop}\label{prop:conf_wts} Let $r,s\in\mathbb{Z}_+$ and let $\mathcal{W}$ be a grading-restricted generalized $V_c$-module in $\mathcal{O}_c$. \begin{enumerate} \item If there is a surjective intertwining operator of type $\binom{\mathcal{W}}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,s}}$, then the conformal weights of $\mathcal{W}$ are contained in $\lbrace h_{r,s-1}+\mathbb{N}\rbrace\cup\lbrace h_{r,s+1}+\mathbb{N}\rbrace$. In particular, this conclusion holds for $\mathcal{W}=\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$. \item If there is a surjective intertwining operator of type $\binom{\mathcal{W}}{\mathcal{L}_{2,1}\,\mathcal{L}_{r,s}}$, then the conformal weights of $\mathcal{W}$ are contained in $\lbrace h_{r-1,s}+\mathbb{N}\rbrace\cup\lbrace h_{r+1,s}+\mathbb{N}\rbrace$. 
In particular, this conclusion holds for $\mathcal{W}=\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{r,s}$. \end{enumerate} \end{prop} \begin{proof} First note that $\mathcal{L}_{r,s}$ is generated by $\mathcal{L}_{r,s}(0)$ and that $\mathcal{W}$, as a $C_1$-cofinite module in $\mathcal{O}_c$, is finitely generated. So in the first case, Proposition \ref{prop:piY_surjective} says that $\mathcal{W}(0)$ is a homomorphic image of $\mathcal{M}_{r,s}$ as an $A(V_c)$-module. Thus the generalized $L_0$-eigenvalue(s) on $\mathcal{W}(0)$ are $h_{r,s\pm1}$, and then the first conclusion of the proposition follows from our $\mathbb{N}$-grading convention. The proof of the second part of the proposition is the same. \end{proof} From now on, we will mainly focus on the tensor products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$. We have shown that there is a surjective $A(V_c)$-homomorphism $\pi(\mathcal{Y}_\boxtimes): \mathcal{M}_{r,s}\rightarrow(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0)$. We may also regard $\pi(\mathcal{Y}_\boxtimes)$ as a $\mathcal{V} ir_{\geq 0}$-homomorphism, so if we set $\mathcal{W}_{r,s}=\mathrm{Ind}_{\mathcal{V} ir_{\geq 0}}^{\mathcal{V} ir}\mathcal{M}_{r,s}$, then the universal property of induced modules leads to a $V_c$-module homomorphism \begin{equation*} \Pi_{r,s}: \mathcal{W}_{r,s}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s} \end{equation*} such that $(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0)\subseteq\im\Pi_{r,s}$. We show that $\Pi_{r,s}$ is usually surjective: \begin{prop}\label{prop:Pi_rs_surjective} If $h_{r,s-1}-h_{r,s+1}\notin\mathbb{Z}\setminus\lbrace 0\rbrace$, then the homomorphism $\Pi_{r,s}$ is surjective. \end{prop} \begin{proof} Set $\mathcal{W}=(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})/\im\Pi_{r,s}$ and let $\pi:\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\rightarrow\mathcal{W}$ denote the canonical quotient map. The grading-restricted generalized module $\mathcal{W}$ is in $\mathcal{O}_c$, and $\pi\circ\mathcal{Y}_\boxtimes$ is a surjective intertwining operator of type $\binom{\mathcal{W}}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,s}}$. Thus from Propositions \ref{prop:piY_surjective} and \ref{prop:conf_wts}(1), \begin{equation*} \mathcal{W}(0)\subseteq\mathcal{W}_{[h_{r,s-1}]}+\mathcal{W}_{[h_{r,s+1}]}. \end{equation*} The two sets $\lbrace h_{r,s-1}+\mathbb{N}\rbrace$ and $\lbrace h_{r,s+1}+\mathbb{N}\rbrace$ of potential conformal weights of $\mathcal{W}$ are either disjoint (if $h_{r,s-1}-h_{r,s+1}\notin\mathbb{Z}$) or identical (if $h_{r,s-1}=h_{r,s+1}$). Thus \begin{equation*} (\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})_{[h_{r,s-1}]}+(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})_{[h_{r,s+1}]} = (\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0) \subseteq\im\Pi_{r,s}, \end{equation*} which means $\mathcal{W}(0)=0$. By our $\mathbb{N}$-grading convention, $\mathcal{W}=0$ as well, that is, $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s} =\im\Pi_{r,s}$ and $\Pi_{r,s}$ is surjective. \end{proof} \begin{rem} The proof of the above proposition fails when, say, $h_{r,s-1}-h_{r,s+1}\in\mathbb{Z}_+$, because then it is possible that $(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0)=(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})_{[h_{r,s+1}]}$ and that $(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})/\im\Pi_{r,s}$ has a non-zero space of conformal weight $h_{r,s-1}$. \end{rem} Note that if $h_{r,s-1}\neq h_{r,s+1}$, then $\mathcal{W}_{r,s}\cong\mathcal{V}_{r,s-1}\oplus\mathcal{V}_{r,s+1}$. 
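\begin{rem} As a consistency check on the factorization $f_{1,2}(x,h_{r,s})=(x-h_{r,s-1})(x-h_{r,s+1})$, which underlies the preceding propositions, one can verify directly that $f_{1,2}(h_{r,s\pm 1},h_{r,s})=0$: setting $a=tr-s$, we have $h_{r,s\pm 1}-h_{r,s}=\frac{1\mp 2a}{4t}$, so that \begin{equation*} h_{r,s\pm 1}-h_{r,s}-\Big(h_{1,2}+1-\frac{1}{t}\Big)=\frac{1-t\mp a}{2t},\qquad h_{r,s\pm 1}-h_{r,s}-h_{1,2}=\frac{t-1\mp a}{2t}, \end{equation*} and therefore \begin{equation*} f_{1,2}(h_{r,s\pm 1},h_{r,s})=\frac{(1-t\mp a)(t-1\mp a)}{4t^2}-\frac{h_{r,s}}{t} =\frac{a^2-(t-1)^2}{4t^2}-\frac{h_{r,s}}{t}=0, \end{equation*} since $h_{r,s}=\frac{a^2-(t-1)^2}{4t}$ and $h_{1,2}=\frac{3-2t}{4t}$. \end{rem}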
In the cases with $h_{r,s-1}\neq h_{r,s+1}$, we can determine the images of $v_{r,s\pm1}\in\mathcal{V}_{r,s\pm1}$ in $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ under the homomorphism $\Pi_{r,s}$. In fact, we get the following by determining the $x$-eigenvectors in $\mathbb{C}[x]/(f_{1,2}(x,h_{r,s}))$ and using definitions: \begin{equation*} \begin{array}{rll} \mathbb{C} v_{r,s-1}\oplus\mathbb{C} v_{r,s+1} & \rightarrow \mathbb{C}[x]/(f_{1,2}(x,h_{r,s})) & \rightarrow A(\mathcal{L}_{1,2})\otimes_{A(V_c)}\mathbb{C} v_{r,s} \\ v_{r,s\pm1} & \mapsto x-h_{r,s\mp 1}+(f_{1,2}(x,h_{r,s})) & \mapsto ([\omega]-h_{r,s\mp1})\cdot[v_{1,2}]\otimes_{A(V_c)} v_{r,s} \end{array} \end{equation*} Then \eqref{eqn:bimod_reln} implies \begin{align*} ([\omega]-h_{r,s\mp1})\cdot[v_{1,2}]\otimes_{A(V_c)} v_{r,s} & =[v_{1,2}] \cdot([\omega]+h_{1,2}-h_{r,s\mp1})\otimes_{A(V_c)} v_{r,s}\nonumber\\ &\hspace{5em}+[L_{-1}v_{1,2}]\otimes_{A(V_c)} v_{r,s} \nonumber\\ & =(h_{1,2}+h_{r,s}-h_{r,s\mp1})[v_{1,2}]\otimes_{A(V_c)} v_{r,s}+[L_{-1} v_{1,2}]\otimes_{A(V_c)} v_{r,s}, \end{align*} which $\pi(\mathcal{Y}_\boxtimes)$ maps to \begin{equation*} \left(-\frac{1\pm r}{2}+\frac{1\pm s}{2} t^{-1}\right)\pi_0(v_{1,2}\boxtimes v_{r,s})+\pi_0(L_{-1}v_{1,2}\boxtimes v_{r,s}); \end{equation*} here $\boxtimes$ denotes the tensor product intertwining map $\mathcal{Y}_\boxtimes(\cdot,1)\cdot$. Rescaling these vectors a little, we may conclude: \begin{prop}\label{prop:top_level_eigenvectors} For $r,s\in\mathbb{Z}_+$, the vectors \begin{equation*} \Pi_{r,s}(v_{r,s\pm1}) =\left(1\pm s-(1\pm r)t\right)\pi_0(v_{1,2}\boxtimes v_{r,s})+2t\,\pi_0(L_{-1}v_{1,2}\boxtimes v_{r,s})\in(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0) \end{equation*} are, if non-zero, $L_0$-eigenvectors with eigenvalues $h_{r,s\pm1}$. \end{prop} \subsection{Results at specialized central charge} In this subsection, we assume that $c=13-6p-6p^{-1}$ where $p > 1$ is an integer. In this case, irreducible modules in $\mathcal{O}_c$ are given by $\mathcal{L}_{r,s}$ for $r\geq 1$ and $1\leq s\leq p$, and conformal weights satisfy \begin{equation*} h_{r,s-1}-h_{r,s+1} =r-\frac{s}{p}. \end{equation*} We see that $h_{r,s-1}=h_{r,s+1}$ only when $(r,s)=(1,p)$, so that the generalized Verma module $\mathcal{W}_{r,s}=\mathrm{Ind}^{\mathcal{V} ir}_{\mathcal{V} ir_{\geq 0}} \mathcal{M}_{r,s}$ is given by \begin{equation*} \mathcal{W}_{r,s} \cong\left\lbrace\begin{array}{ccc} \mathcal{V}_{r,s-1}\oplus\mathcal{V}_{r,s+1} & \text{if} & (r,s)\neq(1,p)\\ \mathcal{V}^{(2)}_{1,p-1} & \text{if} & (r,s)=(1,p) \end{array} \right. , \end{equation*} where $\mathcal{V}^{(2)}_{1,p-1}$ is the generalized Verma module induced from the two-dimensional $\mathcal{V} ir_{\geq 0}$-module on which $L_0$ acts by the matrix $\left[\begin{array}{cc} h_{1,p-1} & 1 \\ 0 & h_{1,p-1}\\ \end{array}\right]$. Moreover, Proposition \ref{prop:Pi_rs_surjective} yields: \begin{cor}\label{cor:Pi_rs_surjective} The homomorphism $\Pi_{r,s}: \mathcal{W}_{r,s}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ is surjective when $1\leq s\leq p-1$ and when $(r,s)=(1,p)$. \end{cor} Corollary \ref{cor:Pi_rs_surjective} gives an upper bound for the tensor product $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ when $1\leq s\leq p-1$ or when $(r,s)=(1,p)$: in the first case, $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ is a quotient of $\mathcal{V}_{r,s-1}\oplus\mathcal{V}_{r,s+1}$, and in the second, $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p}$ is a quotient of $\mathcal{V}_{1,p-1}^{(2)}$.
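\begin{rem} In the running example $p=2$, the case $(r,s)=(1,p)$ is $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}$: Corollary \ref{cor:Pi_rs_surjective} exhibits this tensor product as a quotient of the generalized Verma module $\mathcal{V}^{(2)}_{1,1}$, on whose degree-$0$ space $L_0$ acts by the non-semisimple matrix $\left[\begin{array}{cc} 0 & 1 \\ 0 & 0\\ \end{array}\right]$. This is consistent with the well-known appearance of non-diagonalizable $L_0$-action in the fusion of the $c=-2$ module of lowest conformal weight $-\frac{1}{8}$ with itself in the logarithmic conformal field theory literature. \end{rem}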
We next use Theorem \ref{thm:Zhu_fus_rules} to get lower bounds for these tensor products. We start by obtaining some non-zero intertwining operators: \begin{prop}\label{prop:intwo_op_exist} \hspace{2em} \begin{enumerate} \item When $r\geq 1$ and $s=1$, there is a non-zero intertwining operator of type $\binom{\mathcal{V}_{r,2}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,1}}$. \item When $r\geq 1$ and $2\leq s\leq p-1$, or when $(r,s)=(1,p)$, there is an intertwining operator of type $\binom{\mathcal{W}_{r,s}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,s}}$ that contains $\mathcal{W}_{r,s}'(0)\cong\mathcal{M}_{r,s}$ in its image. \end{enumerate} \end{prop} \begin{proof} Note that $\mathcal{L}_{1,2}$ is generated by $\mathcal{L}_{1,2}(0)$ and that $\mathcal{M}_{r,s}$ is finite dimensional. Thus by Theorem \ref{thm:Zhu_fus_rules}, the identity on $\mathcal{M}_{r,s}$ induces an intertwining operator $\mathcal{Y}$ of type $\binom{\mathcal{W}_{r,s}'}{\mathcal{L}_{1,2}\,\mathcal{V}_{r,s}}$ such that $\pi(\mathcal{Y})=\mathrm{Id}_{\mathcal{M}_{r,s}}$. This intertwining operator will induce a non-zero quotient intertwining operator $\overline{\mathcal{Y}}$ of type $\binom{\mathcal{W}_{r,s}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,s}}$ if $\mathcal{Y}\vert_{\mathcal{L}_{1,2}\otimes\mathcal{J}_{r,s}}=0$, where $\mathcal{J}_{r,s}$ is the maximal proper submodule of $\mathcal{V}_{r,s}$. To show this, it is enough to show that there are no non-zero intertwining operators of type $\binom{\mathcal{W}_{r,s}'}{\mathcal{L}_{1,2}\,\mathcal{J}_{r,s}}$. Since $\mathcal{J}_{r,s}$ is a Verma module, this is equivalent to \begin{equation}\label{eqn:desc_cond} \dim\hom_{A(V_c)}(A(\mathcal{L}_{1,2})\otimes_{A(V_c)}\mathcal{J}_{r,s}(0), \mathcal{M}_{r,s}) =0, \end{equation} by Theorem \ref{thm:Zhu_fus_rules}. For $1\leq s\leq p-1$, $\mathcal{J}_{r,s}=\mathcal{V}_{r+1,p-s}$, so the $L_0$-eigenvalues on $A(\mathcal{L}_{1,2})\otimes_{A(V_c)}\mathcal{J}_{r,s}(0)=\mathcal{M}_{r+1,p-s}$ are $h_{r+1,p-s\pm1}$. For $2\leq s\leq p-1$, these never equal the $L_0$-eigenvalues $h_{r,s\pm1}$ on $\mathcal{M}_{r,s}$, proving the second assertion in the proposition for $s<p$. But when $s=1$, we have \begin{equation*} h_{r+1,p-1+1}=h_{r+1,p}=h_{r,0}=h_{r,1-1}, \end{equation*} so \eqref{eqn:desc_cond} fails. However, since \begin{equation*} \dim \hom_{A(V_c)}(\mathcal{M}_{r+1,p-1},\mathbb{C} v_{r,2}) =0, \end{equation*} we do get a non-zero intertwining operator of type $\binom{\mathcal{V}_{r,2}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,1}}$ induced by a non-zero homomorphism $A(\mathcal{L}_{1,2})\otimes_{A(V_c)}\mathbb{C} v_{r,1}\rightarrow\mathbb{C} v_{r,2}$. This proves the first assertion of the proposition. For $(r,s)=(1,p)$, $\mathcal{J}_{1,p}=\mathcal{V}_{3,p}$, and the eigenvalues of $L_0$ on $\mathcal{M}_{3,p}$ are \begin{equation*} h_{3,p\pm1}=h_{3,p-1},h_{2,1}. \end{equation*} Neither equals the generalized eigenvalue $h_{1,p-1}$ of $L_0$ on $\mathcal{M}_{1,p}$, so \eqref{eqn:desc_cond} holds, proving the second assertion of the proposition for $(r,s)=(1,p)$. \end{proof} \begin{rem} For $r\geq 2$ and $s=p$, there is also a non-zero intertwining operator $\mathcal{Y}$ of type $\binom{\mathcal{W}_{r,p}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,p}}$ induced by the identity on $\mathcal{M}_{r,p}$, but we cannot conclude that its image includes $\mathcal{M}_{r,p}\cong\mathbb{C} v_{r,p-1}\oplus\mathbb{C} v_{r-1,1}$, even though $\im\pi(\mathcal{Y})=\mathcal{M}_{r,p}$. 
The reason is that $\pi(\mathcal{Y})$ is defined using the non-standard $\mathbb{N}$-grading of Remark \ref{rem:non_std_N_grad} for $\mathcal{W}_{r,p}'$. In particular, the projection $\pi_0$ does not quite correspond to projection onto conformal weight spaces, which means that we cannot conclude that $\im\pi(\mathcal{Y})$ is contained in $\im\mathcal{Y}$. \end{rem} Using the intertwining operators we have obtained, we can prove: \begin{prop}\label{prop:Pi_rs_nontrivial} \hspace{2em} \begin{enumerate} \item For $r\geq 1$ and $s=1$, there is a surjective $V_c$-module map $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1} \rightarrow \mathcal{L}_{r,2}$. \item For $r\geq 1$ and $2\leq s\leq p-1$, there is a surjective $V_c$-module map $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\rightarrow\mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1}$. \item For $(r,s)=(1,p)$, $(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p})(0)\cong\mathcal{M}_{1,p}$ as $A(V_c)$-modules. \end{enumerate} \end{prop} \begin{proof} For the cases of $(r,s)$ that we are considering, we have shown that \begin{equation*} (\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0)=(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})_{[h_{r,s-1}]}+(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})_{[h_{r,s+1}]} \end{equation*} and that $\Pi_{r,s}: \mathcal{W}_{r,s}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ is surjective. When $s=1$, the image of any non-zero intertwining operator of type $\binom{\mathcal{V}_{r,2}'}{\mathcal{L}_{1,2}\,\mathcal{L}_{r,1}}$ is a $C_1$-cofinite module in $\mathcal{O}_c$ by \cite[Key Theorem]{Miy}. Thus the universal property of the tensor product induces a non-zero map $f:\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}\rightarrow\mathcal{V}_{r,2}'$, whose image must contain the unique minimal non-zero submodule $\mathcal{L}_{r,2}$. Moreover, $\im f$ is a quotient of $\mathcal{W}_{r,1}\cong\mathcal{V}_{r,0}\oplus\mathcal{V}_{r,2}$ because $\Pi_{r,1}$ is surjective. As $\mathcal{L}_{r,2}$ is the only non-zero quotient of $\mathcal{V}_{r,0}\oplus\mathcal{V}_{r,2}$ that is also a submodule of $\mathcal{V}_{r,2}'$, it follows that $\im f=\mathcal{L}_{r,2}$, that is, we have a surjective map $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}\rightarrow\mathcal{L}_{r,2}$. Similarly, for $2\leq s\leq p-1$ or $(r,s)=(1,p)$, Proposition \ref{prop:intwo_op_exist} and the universal property of tensor products yield a homomorphism $f:\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\rightarrow\mathcal{W}_{r,s}'$ whose image contains $$\mathcal{W}_{r,s}'(0)=(\mathcal{W}_{r,s}')_{[h_{r,s-1}]}+(\mathcal{W}_{r,s}')_{[h_{r,s+1}]}\cong\mathcal{M}_{r,s}.$$ This forces $\dim\,(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s})(0)\geq 2$, so $\Pi_{r,s}\vert_{\mathcal{M}_{r,s}}$ must be injective as well as surjective, proving the proposition in the $(r,s)=(1,p)$ case. When $2\leq s\leq p-1$, surjectivity of $\Pi_{r,s}$ implies that $\im f$ is generated by \begin{equation*} (f\circ\Pi_{r,s})(v_{r,s\pm1})\in\mathcal{M}_{r,s}\subseteq\mathcal{W}_{r,s}'\cong\mathcal{V}_{r,s-1}'\oplus\mathcal{V}_{r,s+1}'. \end{equation*} These vectors generate a submodule isomorphic to $\mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1}$, so we have a surjection $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\rightarrow\mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1}$. \end{proof} The upper bound of Corollary \ref{cor:Pi_rs_surjective} and the lower bound of Proposition \ref{prop:Pi_rs_nontrivial} already provide strong constraints on the tensor product $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$. 
To fully identify this tensor product, we will need $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$ to be a self-contragredient $V_c$-module. This will follow from the rigidity of $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r,s}$ in the tensor category $\mathcal{O}_c$, which we prove for $\mathcal{L}_{1,2}$ next. \section{Rigidity, categorical dimensions, and some fusion rules} In this section, we show that $\mathcal{O}_c$ is a rigid (and also ribbon) tensor category, and we calculate the categorical dimensions of all simple modules $\mathcal{L}_{r,s}$. In addition, we determine some tensor products in $\mathcal{O}_c$ involving $\mathcal{L}_{1,2}$, and some involving the modules $\mathcal{L}_{r,1}$ for $r\geq 1$. \subsection{Rigidity and categorical dimension for \texorpdfstring{$\mathcal{L}_{1,2}$}{L{1,2}}} We begin by showing that $\mathcal{L}_{1,2}$ is rigid and self-dual in $\mathcal{O}_c$. Since $V_c = \mathcal{L}_{1,1}$ is the unit object of $\mathcal{O}_c$, we first of all need an evaluation map $e: \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}\rightarrow\mathcal{L}_{1,1}$ and a coevaluation $i:\mathcal{L}_{1,1}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}$. The evaluation is easy: Since $\mathcal{L}_{1,2}$ is self-contragredient with lowest conformal weight $h_{1,2}$, symmetries of intertwining operators from \cite{FHL, HLZ2} applied to (possibly a rescaling of) the vertex operator $Y_{\mathcal{L}_{1,2}}$ yield an intertwining operator $\mathcal{E}$ of type $\binom{\mathcal{L}_{1,1}}{\mathcal{L}_{1,2}\,\mathcal{L}_{1,2}}$ such that \begin{equation*} \mathcal{E}(v_{1,2},x)v_{1,2}\in x^{-2 h_{1,2}}\big(\mathbf{1} + x \mathcal{L}_{1,1}[[x]]\big). \end{equation*} We then define the evaluation $e:\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}\rightarrow\mathcal{L}_{1,1}$ to be the unique map such that $e\circ\mathcal{Y}_\boxtimes =\mathcal{E}$. For the coevaluation, Proposition \ref{prop:top_level_eigenvectors} describes a homomorphism $\mathcal{V}_{1,1}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}$. It will descend to a map $i: \mathcal{L}_{1,1}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}$ such that \begin{equation*} i(\mathbf{1})= -\pi_0(v_{1,2}\boxtimes v_{1,2})+2p\,\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2}) \end{equation*} provided that $L_{-1} i(\mathbf{1})=0$ (since $L_{-1} v_{1,1}$ generates the maximal proper submodule of $\mathcal{V}_{1,1}$). To prove this, we use the commutator formula \eqref{eqn:Vir_comm_form}, the iterate formula \eqref{eqn:Vir_it_form}, and the relation $(L_{-1}^2-\frac{1}{p}L_{-2})v_{1,2}=0$ in $\mathcal{L}_{1,2}$ to compute \begin{align*} L_{-1}\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2}) & =\pi_1(L_{-1}^2 v_{1,2}\boxtimes v_{1,2})+\pi_1(L_{-1}v_{1,2}\boxtimes L_{-1}v_{1,2})\nonumber\\ & =\frac{1}{p}\pi_1(L_{-2}v_{1,2}\boxtimes v_{1,2})+L_{-1}\pi_0(v_{1,2}\boxtimes L_{-1}v_{1,2})-\pi_1(v_{1,2}\boxtimes L_{-1}^2 v_{1,2})\nonumber\\ & = \frac{1}{p}\pi_1(v_{1,2}\boxtimes(L_{-1}+L_0)v_{1,2})-L_{-1}\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2})\nonumber\\ &\qquad\qquad-\frac{1}{p}\pi_1(v_{1,2}\boxtimes L_{-2}v_{1,2}). 
\end{align*} We solve for $L_{-1}\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2})$ and apply the commutator formula \eqref{eqn:Vir_comm_form} to get \begin{align*} L_{-1}\pi_0(L_{-1} v_{1,2}\boxtimes v_{1,2}) & = \frac{h_{1,2}}{2p}\pi_1(v_{1,2}\boxtimes v_{1,2})+\frac{1}{2p}\pi_1(v_{1,2}\boxtimes L_{-1}v_{1,2})\nonumber\\ &\qquad\qquad+\frac{1}{2p}\pi_1((L_{-1}-L_0)v_{1,2}\boxtimes v_{1,2})\nonumber\\ & =\frac{1}{2p}\big(\pi_1(L_{-1}v_{1,2}\boxtimes v_{1,2})+\pi_1(v_{1,2}\boxtimes L_{-1}v_{1,2})\big)\nonumber\\ & = \frac{1}{2p} L_{-1}\pi_0(v_{1,2}\boxtimes v_{1,2}). \end{align*} Thus indeed \begin{equation*} L_{-1}\big(-\pi_0(v_{1,2}\boxtimes v_{1,2})+2p\,\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2})\big) =0, \end{equation*} showing that the coevaluation $i$ exists. We now prove the rigidity of $\mathcal{L}_{1,2}$: \begin{thm}\label{rigidityofl12} The module $\mathcal{L}_{1,2}$ is rigid and self-dual in the tensor category $\mathcal{O}_c$. \end{thm} \begin{proof} We need to show that the compositions \begin{equation*} \mathcal{L}_{1,2}\xrightarrow{l^{-1}}\mathcal{L}_{1,1}\boxtimes\mathcal{L}_{1,2}\xrightarrow{i\boxtimes\mathrm{Id}}(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2})\boxtimes\mathcal{L}_{1,2}\xrightarrow{\mathcal{A}^{-1}}\mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2})\xrightarrow{\mathrm{Id}\boxtimes e}\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,1}\xrightarrow{r}\mathcal{L}_{1,2} \end{equation*} and \begin{equation*} \mathcal{L}_{1,2}\xrightarrow{r^{-1}}\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,1}\xrightarrow{\mathrm{Id}\boxtimes i}\mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2})\xrightarrow{\mathcal{A}}(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2})\boxtimes\mathcal{L}_{1,2}\xrightarrow{e\boxtimes\mathrm{Id}}\mathcal{L}_{1,1}\boxtimes\mathcal{L}_{1,2}\xrightarrow{l}\mathcal{L}_{1,2} \end{equation*} are identical non-zero multiples of the identity (we can then rescale either $e$ or $i$ to get the identity). By Lemma 4.2.1 and Corollary 4.2.2 of \cite{CMY3}, it is enough to show that one of these two compositions is non-zero. We shall show that the second, which we label $\mathfrak{R}$ for convenience, is non-zero. In particular, we just need to show that $\langle v_{1,2},\mathfrak{R}(v_{1,2})\rangle\neq 0$, where $\langle\cdot,\cdot\rangle$ is the nondegenerate invariant bilinear form on $\mathcal{L}_{1,2}$ such that $\langle v_{1,2},v_{1,2}\rangle=1$. To compute $\langle v_{1,2},\mathfrak{R}(v_{1,2})\rangle$, we first use the definition of $r$ to get \begin{equation*} r^{-1}(v_{1,2})=r^{-1}\left(\pi_0\left(e^{L_{-1}}Y_{\mathcal{L}_{1,2}}(\mathbf{1},-1)v_{1,2}\right)\right)= \pi_0\left(\mathcal{Y}_\boxtimes(v_{1,2},1)\mathbf{1}\right). \end{equation*} Then we observe that $i(\mathbf{1})$ is the coefficient of the monomial $x^{-2 h_{1,2}} (\log x)^0$ in \begin{align*} x^{L_0}\big( 2p \,(L_{-1} x^{-L_0}v_{1,2}\boxtimes x^{-L_0}v_{1,2}) & -x^{-L_0}v_{1,2}\boxtimes x^{-L_0} v_{1,2}\big)\nonumber\\ & =2p\,x\mathcal{Y}_\boxtimes(L_{-1} v_{1,2},x)v_{1,2}-\mathcal{Y}_\boxtimes(v_{1,2},x)v_{1,2}\nonumber\\ & =\left(2p\,x\dfrac{d}{dx}-1\right)\mathcal{Y}_\boxtimes(v_{1,2},x)v_{1,2}. \end{align*} Thus $\langle v_{1,2},\mathfrak{R}(v_{1,2})\rangle$ is the coefficient of $x^{-2h_{1,2}}(\log x)^0$ in \begin{align*} \left(2p\,x\frac{d}{dx}-1\right)\left \langle v_{1,2}, \left[l\circ(e\boxtimes\mathrm{Id})\circ\mathcal{A}\circ\mathcal{Y}_\boxtimes\right](v_{1,2},1)\mathcal{Y}_\boxtimes(v_{1,2},x)v_{1,2}\right\rangle.
\end{align*} This series is the expansion of a multivalued analytic function on the punctured unit disk. Alternatively, it is a single-valued analytic function on the simply-connected region \begin{equation*} U_1=\lbrace z\in\mathbb{C}\,\vert\,\vert z\vert <1\rbrace\setminus(-1,0], \end{equation*} where we choose the single-valued branch corresponding to the branch of logarithm \begin{equation*} \log z = \ln \vert z\vert +i\,\arg z \end{equation*} with $-\pi<\arg z<\pi$. From the definitions of $\mathcal{A}$, $e$, and $l$, the analytic continuation of this function to the simply-connected region \begin{equation*} U_2 =\lbrace z\in\mathbb{C}\,\vert\,\vert z\vert>\vert 1-z\vert>0\rbrace\setminus[1,\infty)=\lbrace z\in\mathbb{C}\,\vert\,\mathrm{Re}\,z>1/2\rbrace\setminus[1,\infty) \end{equation*} is \begin{align}\label{eqn:rigidity_iterate} \left(2p\,x\frac{d}{dx}-1\right) &\,\left \langle v_{1,2}, \left[l\circ(e\boxtimes\mathrm{Id})\circ\mathcal{Y}_\boxtimes\right]\left(\mathcal{Y}_\boxtimes(v_{1,2},1-x)v_{1,2},x\right)v_{1,2}\right\rangle\nonumber\\ & =\left(2p\,x\frac{d}{dx}-1\right)\left\langle v_{1,2}, Y_{\mathcal{L}_{1,2}}(\mathcal{E}(v_{1,2},1-x)v_{1,2},x)v_{1,2}\right\rangle. \end{align} This expression should be interpreted as a double series in $1-x$ and $x$, with the branch of logarithm $\log z$ used for both $1-x$ and $x$. Thus to show $\langle v_{1,2},\mathfrak{R}(v_{1,2})\rangle\neq 0$, we need to find the explicit expansion of \eqref{eqn:rigidity_iterate} as a series in $x$ and $\log x$ on $U_1\cap U_2$, and then extract the coefficient of $x^{-2 h_{1,2}}(\log x)^0$. Compositions of intertwining operators involving $C_1$-cofinite modules for the Virasoro algebra are solutions to Belavin-Polyakov-Zamolodchikov equations \cite{BPZ, Hu_Vir_tens}. When all insertions in the intertwining operators are lowest-conformal-weight vectors in $\mathcal{L}_{1,2}$ at central charge $c_{p,1}$, the specific differential equation appears in \cite{TW}; see also \cite{CMY2} for a more detailed derivation. On $U_1$, the series \begin{equation*} \left \langle v_{1,2}, \left[l\circ(e\boxtimes\mathrm{Id})\circ\mathcal{A}\circ\mathcal{Y}_\boxtimes\right](v_{1,2},1)\mathcal{Y}_\boxtimes(v_{1,2},x)v_{1,2}\right\rangle \end{equation*} is a solution to the second-order regular-singular-point differential equation \begin{equation*} x(1-x)\phi''(x)+\frac{1}{p}(1-2x)\phi'(x)-\frac{h_{1,2}}{p} x^{-1}(1-x)^{-1}\phi(x)=0. \end{equation*} Thus the analytic continuation \begin{equation*} \psi(x)=\left\langle v_{1,2}, Y_{\mathcal{L}_{1,2}}(\mathcal{E}(v_{1,2},1-x)v_{1,2},x)v_{1,2}\right\rangle \end{equation*} solves the same differential equation on $U_2$. If we write \begin{equation}\label{eqn:var_change} \psi(x)=x^{1/2p}(1-x)^{1/2p} f(x) \end{equation} for some analytic function $f(x)$, then $f(x)$ solves the hypergeometric differential equation \begin{equation}\label{eqn:hypgeo_diff_eq} x(1-x) f''(x) +\frac{2}{p}(1-2x)f'(x)+\frac{1}{p}\left(1-\frac{3}{p}\right)f(x)=0, \end{equation} whose solutions are well known (see for example \cite[Section 15.10]{DLMF}). For $p\geq 3$, \eqref{eqn:hypgeo_diff_eq} has the following basis of solutions on $U_2$ (see \cite[Equations 15.10.13 and 15.10.14]{DLMF}): \begin{align}\label{eqn:hypgeo_solns} f_1(x) & = x^{-1/p}{}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};\frac{2}{p};-\frac{1-x}{x}\right)\nonumber\\ f_2(x) & =x^{-1/p}(1-x)^{1-2/p}{}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p};-\frac{1-x}{x}\right). 
\end{align} On the other hand, the $L_0$-conjugation formula and the definition of $\mathcal{E}$ show that \begin{align}\label{eqn:psi_series} (1-x)^{2h_{1,2}}\psi(x) & = \left(\frac{1-x}{x}\right)^{2h_{1,2}}\left\langle v_{1,2}, Y_{\mathcal{L}_{1,2}}\left(\mathcal{E}\left(v_{1,2},\frac{1-x}{x}\right)v_{1,2},1\right)v_{1,2}\right\rangle\nonumber\\ & =\left(\frac{1-x}{x}\right)^{2 h_{1,2}}\left(\langle v_{1,2},Y_{\mathcal{L}_{1,2}}(\mathbf{1},1)v_{1,2}\rangle\left(\frac{1-x}{x}\right)^{-2h_{1,2}} +\ldots\right)\nonumber\\ & \in 1+\left(\frac{1-x}{x}\right)\mathbb{C}\left[\left[\frac{1-x}{x}\right]\right]. \end{align} By examining the powers of $\frac{1-x}{x}$ in \eqref{eqn:var_change} and \eqref{eqn:hypgeo_solns}, we see that \begin{equation*} \psi(x)= x^{1/2p}(1-x)^{1/2p} f_2(x) = (1-x)^{-2h_{1,2}}\left(1+\frac{1-x}{x}\right)^{1/2p} {}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p};-\frac{1-x}{x}\right). \end{equation*} Now we need to expand $\psi(x)$ in $U_1$ as a series in $x$. By the connection formulas for hypergeometric functions (see for example \cite[Equation 15.10.18]{DLMF}), we have \begin{align*} f_2(x) = \frac{\Gamma\big(1-\frac{2}{p}\big)\Gamma\big(2-\frac{2}{p}\big)}{\Gamma\big(1-\frac{1}{p}\big)\Gamma\big(2-\frac{3}{p}\big)} & {}_2 F_1\left(\frac{1}{p},\frac{3}{p}-1;\frac{2}{p}; x\right)\nonumber\\ &+\frac{\Gamma\big(\frac{2}{p}-1\big)\Gamma\big(2-\frac{2}{p}\big)}{\Gamma\big(\frac{1}{p}\big)\Gamma\big(1-\frac{1}{p}\big)} x^{1-2/p} {}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p}; x\right) \end{align*} on $U_1\cap U_2$. Only the second term contributes to the coefficient of $x^{-2 h_{1,2}}$ in $(2p\,x\frac{d}{dx}-1)\psi(x)$: \begin{align*} \left(2p\,x\dfrac{d}{dx}-1\right) & x^{-2h_{1,2}}(1-x)^{1/2p} {}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p}; x\right)\nonumber\\ & =x^{-2h_{1,2}}(1-x)^{1/2p}\left[\left(-4p\,h_{1,2} -\frac{x}{1-x}-1\right){}_2 F_1\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p}; x\right)\right.\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.+2p\,x\,{}_2 F_1 '\left(\frac{1}{p},1-\frac{1}{p};2-\frac{2}{p}; x\right) \right]\nonumber\\ & \in x^{-2 h_{1,2}}\big(2(p-2)+x\mathbb{C}[[x]]\big). \end{align*} We conclude that when $p\geq 3$, \begin{equation*} \langle v_{1,2}, \mathfrak{R}(v_{1,2})\rangle =2(p-2)\frac{\Gamma\big(\frac{2}{p}-1\big)\Gamma\big(2-\frac{2}{p}\big)}{\Gamma\big(\frac{1}{p}\big)\Gamma\big(1-\frac{1}{p}\big)} = -2(p-2)\frac{\sin(\pi/p)}{\sin(2\pi/p)}=-\frac{p-2}{\cos(\pi/p)}\neq 0, \end{equation*} using \cite[Equation 5.5.3]{DLMF} in the second equality. This proves $\mathcal{L}_{1,2}$ is rigid when $p\geq 3$. For $p = 2$, the equation \eqref{eqn:hypgeo_diff_eq} has logarithmic solutions. On the simply-connected region \begin{equation*} 1-U_1 =\lbrace z\in\mathbb{C}\,\vert\, \vert 1-z\vert<1\rbrace\setminus [1,2), \end{equation*} which has non-empty intersection with $U_2$, \eqref{eqn:hypgeo_diff_eq} has the following basis of solutions: \begin{align*} f_1(x) & = {}_2 F_1\left(\frac{1}{2},\frac{1}{2};1;1-x\right)\nonumber\\ f_2(x) & = f_1(x)\log(1-x)+G(1-x), \end{align*} where $G(x)$ is a power series (which we may assume has no constant term). Since \eqref{eqn:psi_series} shows that $(1-x)^{2h_{1,2}}\psi(x)$ is analytic at $x=1$ with value $1$, we must have \begin{equation*} \psi(x)=x^{1/4}(1-x)^{1/4}f_1(x) = x^{1/4}(1-x)^{1/4}{}_2 F_1\left(\frac{1}{2},\frac{1}{2};1;1-x\right) \end{equation*} on $(1-U_1)\cap U_2$.
We need to expand $\psi(x)$ on $U_1$ as a series in $x$; to do so, we use \cite[Equation 15.8.10]{DLMF}, which states that \begin{align*} {}_2 F_1\left(\frac{1}{2},\frac{1}{2};1;1-x\right) = -\frac{1}{\Gamma\big(\frac{1}{2}\big)\Gamma\big(\frac{1}{2}\big)}\sum_{n= 0}^\infty \frac{\big(\frac{1}{2}\big)_n \big(\frac{1}{2}\big)_n}{(n!)^2} x^n\cdot\left(\log x +C_n\right), \end{align*} for $x\in U_1\cap(1-U_1)$, where the constants $C_n$ can be expressed in terms of the digamma function. Thus on the non-empty open region $U_1\cap(1-U_1)\cap U_2$, \begin{align*} \left(4x\dfrac{d}{dx}-1\right)\psi(x) & =x^{1/4}(1-x)^{1/4}\left[4\cdot\frac{1}{4}-\frac{x}{1-x}-1\right] {}_2 F_1\left(\frac{1}{2},\frac{1}{2};1;1-x\right)\nonumber\\ &\qquad\qquad -\frac{4\,x^{1/4}(1-x)^{1/4}}{\Gamma\big(\frac{1}{2}\big)\Gamma\big(\frac{1}{2}\big)}\sum_{n= 0}^\infty \frac{\big(\frac{1}{2}\big)_n \big(\frac{1}{2}\big)_n}{(n!)^2}\cdot\left[n x^n(\log x+C_n)+x^n \right], \end{align*} and the coefficient of $x^{1/4}$ is \begin{equation*} -\frac{4}{\Gamma\big(\frac{1}{2}\big)\Gamma\big(\frac{1}{2}\big)} = -\frac{4}{\pi/\sin(\pi/2)} =-\frac{4}{\pi}\neq 0. \end{equation*} We conclude that $\langle v_{1,2},\mathfrak{R}(v_{1,2})\rangle\neq 0$ and thus $\mathcal{L}_{1,2}$ is rigid when $p=2$. \end{proof} Our calculations allow us to describe the evaluation and coevaluation for $\mathcal{L}_{1,2}$ explicitly. If we fix a non-zero lowest-conformal-weight vector $v_{1,2}\in\mathcal{L}_{1,2}$, we take the evaluation to be \begin{align*} e: \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2} & \rightarrow \mathcal{L}_{1,1}\nonumber\\ \pi_0(v_{1,2}\boxtimes v_{1,2}) & \mapsto \mathbf{1}. \end{align*} The $L_0$-conjugation formula determines $e$ on the other possibly linearly independent lowest-conformal-weight vector: \begin{align*} e\left(\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2})\right) & =e\left(\pi_0(L_0(v_{1,2}\boxtimes v_{1,2})-L_0v_{1,2}\boxtimes v_{1,2}-v_{1,2}\boxtimes L_0v_{1,2})\right)\nonumber\\ & = (L_0-2h_{1,2}) e(\pi_0(v_{1,2}\boxtimes v_{1,2}))= -2h_{1,2}\mathbf{1}. \end{align*} With this choice of evaluation, we must take the coevaluation as follows: \begin{equation*} i(\mathbf{1}) = \left\lbrace\begin{array}{lll} \frac{\cos(\pi/p)}{p-2}\left(\pi_0(v_{1,2}\boxtimes v_{1,2})-2p\,\pi_0(L_{-1} v_{1,2}\boxtimes v_{1,2})\right) & \text{if} & p\geq 3\\ \frac{\pi}{4}\left(\pi_0(v_{1,2}\boxtimes v_{1,2})-4\,\pi_0(L_{-1} v_{1,2}\boxtimes v_{1,2})\right) & \text{if} & p=2 \end{array} \right. . \end{equation*} Using these explicit evaluation and coevaluation maps, we determine the categorical dimension \begin{equation*} \dim_{\mathcal{O}_c} \mathcal{L}_{1,2} = e\circ\mathcal{R}\circ(\theta\boxtimes\mathrm{Id})\circ i: \mathcal{L}_{1,1}\rightarrow\mathcal{L}_{1,1} \end{equation*} of $\mathcal{L}_{1,2}$ in $\mathcal{O}_c$, where $\theta=e^{2\pi i L_0}$ is the ribbon twist on $\mathcal{O}_c$: \begin{prop}\label{prop:L12_dim} In the tensor category $\mathcal{O}_c$, $\dim_{\mathcal{O}_c} \mathcal{L}_{1,2}=-2\cos(\pi/p)\,\mathrm{Id}_{\mathcal{L}_{1,1}}$. \end{prop} \begin{proof} Since $\mathcal{L}_{1,1}$ is simple, the dimension is just a scalar multiple of the identity.
Using $a_p$ to denote $\frac{\cos(\pi/p)}{p-2}$ or $\frac{\pi}{4}$ according as $p\geq 3$ or $p=2$ (note that $a_2=\lim_{p\to 2} a_p$), we calculate \begin{align*} \dim_{\mathcal{O}_c} \mathcal{L}_{1,2} & \, : \mathbf{1} \mapsto a_p\,e^{2\pi i h_{1,2}} (e\circ\mathcal{R})\left(\pi_0(v_{1,2}\boxtimes v_{1,2})-2p\,\pi_0(L_{-1} v_{1,2}\boxtimes v_{1,2})\right)\nonumber\\ & =a_p\,e^{2\pi i h_{1,2}} (e\circ\pi_0)\left(e^{L_{-1}}\mathcal{Y}_\boxtimes(v_{1,2},e^{\pi i}) v_{1,2}-2p\,e^{L_{-1}}\mathcal{Y}_\boxtimes(v_{1,2},e^{\pi i})L_{-1}v_{1,2}\right)\nonumber\\ & =a_p\,e^{2\pi i h_{1,2}} e^{\pi i L_0}(e\circ\pi_0)\left( e^{-\pi i L_0} v_{1,2}\boxtimes e^{-\pi i L_0} v_{1,2}-2p\,(e^{-\pi i L_0} v_{1,2}\boxtimes e^{-\pi i L_0} L_{-1} v_{1,2})\right)\nonumber\\ & = a_p\,(e\circ\pi_0)\left(v_{1,2}\boxtimes v_{1,2}+2p\,(v_{1,2}\boxtimes L_{-1}v_{1,2})\right)\nonumber\\ & =a_p\,e\left(\pi_0(v_{1,2}\boxtimes v_{1,2})-2p\,\pi_0(L_{-1}v_{1,2}\boxtimes v_{1,2})\right)\nonumber\\ & =a_p\,(1+4ph_{1,2})\mathbf{1}\nonumber\\ & =2a_p(2-p)\mathbf{1} = -2\cos(\pi/p)\mathbf{1} \end{align*} as required. \end{proof} Note that the dimension formula is valid for all $p\geq 2$; in particular, $\dim_{\mathcal{O}_c} \mathcal{L}_{1,2} =0$ when $p=2$. Note also that if we ignore the braiding and twist isomorphisms, we still get \begin{equation}\label{eqn:L12_left_trace} e\circ i = -2\cos(\pi/p)\,\mathrm{Id}_{\mathcal{L}_{1,1}}. \end{equation} This quantity is an invariant of the tensor category structure on $\mathcal{O}_c$ (it depends on the associativity isomorphisms, but not on the braiding or ribbon twist). \subsection{Rigidity of \texorpdfstring{$\mathcal{O}_c$}{Oc} and some fusion rules}\label{sec:fus_and_rig} In this section, we determine the tensor products of $\mathcal{L}_{1,2}$ with the irreducible modules in $\mathcal{O}_c$, and we prove that $\mathcal{O}_c$ is rigid. But first, we establish rigidity and fusion products of the modules $\mathcal{L}_{r,1}$: \begin{thm}\label{thm:Lr1_fus_rules} The irreducible $V_c$-modules $\mathcal{L}_{r,1}$ are rigid for $r\geq 1$, and \begin{equation}\label{eqn:Lr1_fus_rules} \mathcal{L}_{r,1}\boxtimes\mathcal{L}_{r',1} \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \mathcal{L}_{k,1} \end{equation} for $r,r'\geq 1$. \end{thm} \begin{proof} We use a realization of $V_c$ as the fixed-point subalgebra of a compact automorphism group of an abelian intertwining algebra. The triplet vertex operator algebra $\mathcal{W}(p)$ is a $C_2$-cofinite vertex operator algebra extension of $V_c$; its automorphism group is $PSL(2,\mathbb{C})$ \cite{ALM} and $V_c$ is the fixed-point subalgebra. In particular, $V_c$ is the fixed-point subalgebra of the compact automorphism group $SO(3,\mathbb{R})$ acting on $\mathcal{W}(p)$. The triplet $\mathcal{W}(p)$ admits a simple current extension $\mathcal{A}(p)$ called the doublet \cite{AM_doub}; it is an abelian intertwining algebra. The Lie algebra $\mathfrak{sl}_2$ acts by derivations on $\mathcal{A}(p)$ \cite[Remark 2]{ACGY}, and this action exponentiates to an action of $SL(2,\mathbb{C})$ by automorphisms. In particular, $V_c$ is the fixed-point subalgebra of the compact automorphism group $SU(2)$ acting continuously on $\mathcal{A}(p)$. As an $SU(2)\times V_c$-module, \begin{equation*} \mathcal{A}(p)\cong\bigoplus_{r\geq 1} M_r\otimes \mathcal{L}_{r,1} \end{equation*} where $M_r$ is the $r$-dimensional irreducible $SU(2)$-module (again see \cite[Remark 2]{ACGY}). 
Now by the main theorems of \cite{McR}, the modules $\mathcal{L}_{r,1}$ are the simple objects of a semisimple tensor subcategory of $\mathcal{O}_c$ that is braided tensor equivalent to $\rep SU(2)$ (twisted by an abelian $3$-cocycle of $\mathbb{Z}/2\mathbb{Z}$). In particular, the modules $\mathcal{L}_{r,1}$ are rigid (since finite-dimensional $SU(2)$-modules are rigid) and the fusion rules \eqref{eqn:Lr1_fus_rules} hold. \end{proof} Now we can determine the tensor products of $\mathcal{L}_{1,2}$ with most irreducible modules in $\mathcal{O}_c$: \begin{thm}\label{thm:L12_fus_rules} For $r\geq 1$ and $1\leq s\leq p$, the irreducible $V_c$-module $\mathcal{L}_{r,s}$ is rigid. Moreover, \begin{equation}\label{eqn:most_L12_fusion} \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\cong\left\lbrace\begin{array}{lll} \mathcal{L}_{r,2} & \text{if} & s=1 \\ \mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1} & \text{if} & 2\leq s\leq p-1 \end{array} \right. \end{equation} for all $r\geq 1$. \end{thm} \begin{proof} We prove the theorem by induction on $s$. For $s=1$, Theorem \ref{thm:Lr1_fus_rules} shows that $\mathcal{L}_{r,1}$ is rigid, but we still need to determine $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}$. We will prove that this tensor product is $\mathcal{L}_{r,2}$ by induction on $r$, with the base case $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,1}\cong\mathcal{L}_{1,2}$ clear because $\mathcal{L}_{1,1}$ is the unit object of $\mathcal{O}_c$. Now assume that we know $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}\cong\mathcal{L}_{r,2}$ for some $r\geq 1$, and consider $\mathcal{L}_{r+1,1}$. Because $\mathcal{L}_{1,2}$ is rigid, the tensoring functor $\mathcal{L}_{1,2}\boxtimes\bullet$ is exact, so by \eqref{eqn:Lr1_fus_rules} and the inductive hypothesis, we have an injection \begin{equation*} \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1}\rightarrow\mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{r,1})\cong\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{r,2}. \end{equation*} Now on the one hand, Proposition \ref{prop:conf_wts}(1) says that the conformal weights of $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1}$ are contained in $\lbrace h_{r+1,0}+\mathbb{N}\rbrace\cup\lbrace h_{r+1,2}+\mathbb{N}\rbrace$, while on the other hand, Proposition \ref{prop:conf_wts}(2) says that the weights are contained in $\lbrace h_{r-1,2}+\mathbb{N}\rbrace\cup\lbrace h_{r+1,2}+\mathbb{N}\rbrace$. Since \begin{equation*} h_{r+1,0}-h_{r\pm1,2} =(r\mp r)\frac{p}{2}+r\pm1-p^{-1}\notin\mathbb{Z}, \end{equation*} we have $h_{r+1,0}\notin\lbrace h_{r-1,2}+\mathbb{N}\rbrace\cup\lbrace h_{r+1,2}+\mathbb{N}\rbrace$. Thus $v_{r+1,0}$ is in the kernel of the surjection \begin{equation*} \Pi_{r+1,1}: \mathcal{V}_{r+1,0}\oplus\mathcal{V}_{r+1,2}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1} \end{equation*} from Section \ref{sec:first_fus}, and so there is a surjective map $\mathcal{V}_{r+1,2}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1}$. But now because $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r+1,1}$ are rigid and self-dual, their tensor product is also rigid and we have isomorphisms \begin{equation*} \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1}\cong\mathcal{L}_{r+1,1}\boxtimes\mathcal{L}_{1,2}\cong\mathcal{L}_{r+1,1}'\boxtimes\mathcal{L}_{1,2}'\cong(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1})'. \end{equation*} As $\mathcal{L}_{r+1,2}$ is the only quotient of $\mathcal{V}_{r+1,2}$ that is self-contragredient, we conclude that $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r+1,1}\cong\mathcal{L}_{r+1,2}$. 
This proves the $s=1$ case of the theorem. Now assume by induction that for all $r\geq 1$ and some $s\in\lbrace 1,\ldots, p-1\rbrace$, $\mathcal{L}_{r,s}$ is rigid and \eqref{eqn:most_L12_fusion} holds. Then for all $r\geq 1$, $\mathcal{L}_{r,s+1}$ is also rigid, since it is a direct summand of a tensor product of rigid objects. If $s\leq p-2$, we still need to compute the fusion products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s+1}$. By Corollary \ref{cor:Pi_rs_surjective} and Proposition \ref{prop:Pi_rs_nontrivial}, this tensor product is a homomorphic image of $\mathcal{V}_{r,s}\oplus\mathcal{V}_{r,s+2}$ that has $\mathcal{L}_{r,s}\oplus\mathcal{L}_{r,s+2}$ as a quotient. Also, since $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r,s+1}$ are both rigid and self-dual, their tensor product is also rigid and self-dual. Thus $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s+1}$ also contains $\mathcal{L}_{r,s}\oplus\mathcal{L}_{r,s+2}$ as a submodule. As the only such homomorphic image of $\mathcal{V}_{r,s}\oplus\mathcal{V}_{r,s+2}$ is $\mathcal{L}_{r,s}\oplus\mathcal{L}_{r,s+2}$ itself, this proves the fusion rules of the theorem in the $s+1$ case.
\end{proof}

We shall describe the fusion products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,p}$ soon, but first note that we have now proved that all simple modules in $\mathcal{O}_c$ are rigid. This means we can use \cite[Theorem 4.4.1]{CMY2} to extend rigidity to general finite-length modules in $\mathcal{O}_c$:

\begin{thm}\label{thm:rigidity}
For $c=13-6p-6p^{-1}$ with $p > 1$ an integer, the tensor category $\mathcal{O}_c$ of $C_1$-cofinite grading-restricted generalized $V_c$-modules is rigid. Moreover, it is a braided ribbon tensor category with natural twist isomorphism $\theta=e^{2\pi i L_0}$.
\end{thm}

As another consequence of Theorem \ref{thm:L12_fus_rules}, we can derive some more fusion rules in $\mathcal{O}_c$:

\begin{thm}\label{thm:Lr_Ls_fusion}
For $r\geq 1$ and $1\leq s\leq p$,
\begin{equation*}
\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s}\cong \mathcal{L}_{r,s}.
\end{equation*}
\end{thm}

\begin{proof}
The $s=1$ case is clear and the $s=2$ case was proved in Theorem \ref{thm:L12_fus_rules}. We can prove the general case by induction on $s$. In particular, for $2\leq s\leq p-1$, Theorem \ref{thm:L12_fus_rules} shows that we have an exact sequence
\begin{equation*}
0\longrightarrow\mathcal{L}_{1,s-1}\longrightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,s}\longrightarrow\mathcal{L}_{1,s+1}\longrightarrow 0.
\end{equation*}
Since $\mathcal{L}_{r,1}$ is rigid, the tensoring functor $\mathcal{L}_{r,1}\boxtimes\bullet$ is exact. Applying it to the preceding sequence and using the inductive hypothesis, which gives $\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s-1}\cong\mathcal{L}_{r,s-1}$ and also (via the commutativity and associativity isomorphisms) $\mathcal{L}_{r,1}\boxtimes(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,s})\cong\mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s})\cong\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}$, we obtain an exact sequence
\begin{equation*}
0\longrightarrow\mathcal{L}_{r,s-1}\longrightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\longrightarrow\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s+1}\longrightarrow 0.
\end{equation*}
Since $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,s}\cong \mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1}$ by Theorem \ref{thm:L12_fus_rules}, it follows that $\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s+1}\cong \mathcal{L}_{r,s+1}$.
\end{proof}

We now turn to the fusion products $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,p}$. In the next section, we will show that these modules are projective covers of $\mathcal{L}_{r,p-1}$ in a certain tensor subcategory of $\mathcal{O}_c$, so we will use the notation $\mathcal{P}_{r,p-1}=\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,p}$.
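Before analyzing the modules $\mathcal{P}_{r,p-1}$, we remark that the fusion rules proved so far, combined with the associativity and commutativity of $\boxtimes$, already determine many tensor products of simple modules. For instance, if $p\geq 4$, then
\begin{align*}
\mathcal{L}_{2,2}\boxtimes\mathcal{L}_{2,3} & \cong(\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{1,2})\boxtimes(\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{1,3})\cong(\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{2,1})\boxtimes(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,3})\\
& \cong(\mathcal{L}_{1,1}\oplus\mathcal{L}_{3,1})\boxtimes(\mathcal{L}_{1,2}\oplus\mathcal{L}_{1,4})\cong\mathcal{L}_{1,2}\oplus\mathcal{L}_{1,4}\oplus\mathcal{L}_{3,2}\oplus\mathcal{L}_{3,4}
\end{align*}
by Theorems \ref{thm:Lr1_fus_rules}, \ref{thm:L12_fus_rules}, and \ref{thm:Lr_Ls_fusion}. The general pattern, including the cases where projective covers appear, will be recorded in Theorem \ref{generalfusionrules} below.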
First we handle $r=1$: \begin{prop}\label{prop:P1_structure} The tensor product $\mathcal{P}_{1,p-1}$ is a self-dual indecomposable length-$3$ module with subquotients as indicated in the diagram \begin{equation*} \xymatrix{ \mathcal{L}_{1,p-1} \ar[r] \ar[rd] & (\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1})' \ar[r] \ar[d] & \mathcal{L}_{2,1} \ar[d] \\ & \mathcal{P}_{1,p-1} \ar[r] \ar[rd] & \mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1} \ar[d] \\ & & \mathcal{L}_{1,p-1} \\ } \end{equation*} and Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{1,p-1} \\ \mathcal{P}_{1,p-1}: & \mathcal{L}_{2,1} \ar[ru] & \\ & & \mathcal{L}_{1,p-1} \ar[lu]\\ } \end{equation*} \end{prop} \begin{proof} First, $\mathcal{P}_{1,p-1}$ is self-dual because $\mathcal{L}_{1,2}$ and $\mathcal{L}_{r,p}$ are self-dual and because the tensor product is commutative. Also, since $\Pi_{1,p-1}$ is surjective by Corollary \ref{cor:Pi_rs_surjective}, $\mathcal{P}_{1,p-1}$ is a quotient of the generalized Verma module $\mathcal{V}_{1,p-1}^{(2)}$. As this generalized Verma module has a unique maximal proper submodule (the sum of all proper submodules is proper because any proper submodule is graded and must intersect $\mathcal{V}_{1,p-1}^{(2)}(0)$ in its $L_0$-eigenspace), $\mathcal{P}_{1,p-1}$ has unique irreducible quotient $\mathcal{L}_{1,p-1}$. Then because $\mathcal{P}_{1,p-1}$ is self-dual, it also contains $\mathcal{L}_{1,p-1}$ as unique irreducible submodule. Since $\Pi_{1,p-1}$ is an isomorphism on degree-$0$ spaces by Proposition \ref{prop:Pi_rs_nontrivial}(3), the submodule $\mathcal{L}_{1,p-1}$ is generated by the image under $\Pi_{1,p-1}$ of an $L_0$-eigenvector in $\mathcal{V}_{1,p-1}^{(2)}(0)$. This means that $\ker\Pi_{1,p-1}$ contains the maximal proper submodule of the Verma submodule $\mathcal{V}_{1,p-1}\subseteq\mathcal{V}_{1,p-1}^{(2)}$. So far, we have shown that there is an exact sequence \begin{equation*} 0\longrightarrow\mathcal{L}_{1,p-1}\longrightarrow\mathcal{P}_{1,p-1}\longrightarrow \mathcal{V}_{1,p-1}/\mathcal{J}\longrightarrow 0, \end{equation*} where the submodule $\mathcal{J}\subseteq\mathcal{V}_{1,p-1}$ is a Verma module occurring in the embedding diagram \begin{equation*} \mathcal{V}_{1,p-1}\longleftarrow \mathcal{V}_{2,1}\longleftarrow\mathcal{V}_{3,p-1}\longleftarrow\mathcal{V}_{4,1}\longleftarrow\cdots \end{equation*} Let $\mathcal{L}_{r,s}$ denote the unique irreducible submodule of $\mathcal{V}_{1,p-1}/\mathcal{J}$ (that is, $\mathcal{J}=\mathcal{V}_{r+1,p-s}$). We have $r\geq 2$ because $\mathcal{L}_{1,p-1}$ does not admit non-split self-extensions at central charge $c_{p,1}$ \cite[Section 5.4]{GK}. Now let $\mathcal{Z}_{1,p-1}\subseteq\mathcal{P}_{1,p-1}$ denote the inverse image of $\mathcal{L}_{r,s}$ under the surjection $\mathcal{P}_{1,p-1}\rightarrow\mathcal{V}_{1,p-1}/\mathcal{J}$; thus we have an exact sequence \begin{equation*} 0\longrightarrow \mathcal{L}_{1,p-1}\longrightarrow\mathcal{Z}_{1,p-1}\longrightarrow\mathcal{L}_{r,s}\longrightarrow 0. \end{equation*} This sequence does not split because $\mathcal{L}_{1,p-1}$ is the unique irreducible submodule of $\mathcal{P}_{1,p-1}$, and $r\geq 2$. Applying the exact contragredient functor, we get the non-split sequence \begin{equation*} 0\longrightarrow\mathcal{L}_{r,s}\longrightarrow\mathcal{Z}_{1,p-1}'\longrightarrow\mathcal{L}_{1,p-1}\longrightarrow 0. 
\end{equation*} Since $h_{r,s}-h_{1,p-1}\in\mathbb{Z}_+$, $\mathcal{Z}_{1,p-1}'$ contains a singular vector of weight $h_{1,p-1}$, and therefore there is a non-zero homomorphism $\mathcal{V}_{1,p-1}\rightarrow\mathcal{Z}_{1,p-1}'$. The image has length at least $2$ (since $\mathcal{Z}_{1,p-1}'$ does not contain $\mathcal{L}_{1,p-1}$ as a submodule), and thus $\mathcal{Z}_{1,p-1}'$ is a homomorphic image of $\mathcal{V}_{1,p-1}$. The only length-$2$ quotient of $\mathcal{V}_{1,p-1}$ is $\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1}$, so $\mathcal{Z}_{1,p-1}\cong(\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1})'$ and therefore $(r,s)=(2,1)$. This verifies the top row in the subquotient diagram for $\mathcal{P}_{1,p-1}$, and also $\mathcal{P}_{1,p-1}/\mathcal{L}_{1,p-1}\cong\mathcal{V}_{1,p-1}/\mathcal{J}$ with $\mathcal{J}=\mathcal{V}_{3,p-1}$. This finishes the proof that $\mathcal{P}_{1,p-1}$ has the subquotients indicated in the diagram. Now the Loewy diagram is easy: the socle of $\mathcal{P}_{1,p-1}$ is $\mathcal{L}_{1,p-1}$ since this is the unique irreducible submodule, and then the socle of $\mathcal{P}_{1,p-1}/\mathcal{L}_{1,p-1}\cong\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1}$ is $\mathcal{L}_{2,1}$. Moreover, the two extensions $(\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1})'$ and $\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1}$ of irreducible subquotients of $\mathcal{P}_{1,p-1}$ are both indecomposable. Finally, $\mathcal{P}_{1,p-1}$ itself is indecomposable since the intersection of any two non-zero submodules must contain the unique irreducible submodule $\mathcal{L}_{1,p-1}$. \end{proof} \begin{rem} Note that $\mathcal{P}_{1,p-1}$ is a logarithmic $V_c$-module, with maximum Jordan block size $2$ for $L_0$ beginning in degree $0$. \end{rem} Now we handle $r\geq 2$: \begin{prop}\label{prop:Pr_structure} For $r\geq 2$, the tensor product $\mathcal{P}_{r,p-1}$ is a self-dual indecomposable length-$4$ module with subquotients as indicated in the diagram \begin{equation*} \xymatrix{ \mathcal{L}_{r,p-1} \ar[r] \ar[d] \ar[rd] & (\mathcal{V}_{r,p-1}/\mathcal{V}_{r+2,p-1})' \ar[d] \ar[r] & \mathcal{L}_{r+1,1} \ar[d] \\ \mathcal{V}_{r-1,1}/\mathcal{V}_{r+1,1} \ar[r] \ar[d] & \mathcal{P}_{r,p-1} \ar[r] \ar[d] \ar[rd] & \mathcal{V}_{r,p-1}/\mathcal{V}_{r+2,p-1} \ar[d] \\ \mathcal{L}_{r-1,1} \ar[r] & (\mathcal{V}_{r-1,1}/\mathcal{V}_{r+1,1})' \ar[r] & \mathcal{L}_{r,p-1} \\ } \end{equation*} and Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{r,p-1} & \\ \mathcal{P}_{r,p-1}: & \mathcal{L}_{r-1,1} \ar[ru] & & \mathcal{L}_{r+1,1} \ar[lu] \\ & & \mathcal{L}_{r,p-1} \ar[lu] \ar[ru] & \\ } \end{equation*} \end{prop} \begin{proof} First, $\mathcal{P}_{r,p-1}$ is self-dual exactly as in the $r=1$ case. Then from Theorem \ref{thm:Lr_Ls_fusion}, \begin{equation}\label{eqn:Prp-1} \mathcal{P}_{r,p-1} =\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,p}\cong\mathcal{L}_{r,1}\boxtimes(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p})=\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,p-1}. 
\end{equation} Thus because $\mathcal{L}_{r,1}\boxtimes\bullet$ is exact (since $\mathcal{L}_{r,1}$ is rigid), $\mathcal{P}_{r,p-1}$ contains submodules $\mathcal{L}_{r,p-1}\cong\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,p-1}$ and $\mathcal{Z}_{r,p-1}\cong\mathcal{L}_{r,1}\boxtimes(\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1})'$, and using \eqref{eqn:Lr1_fus_rules}, we have an exact sequence \begin{equation*} 0\longrightarrow\mathcal{L}_{r,p-1}\longrightarrow\mathcal{Z}_{r,p-1}\longrightarrow\mathcal{L}_{r-1,1}\oplus\mathcal{L}_{r+1,1}\longrightarrow 0. \end{equation*} Moreover, $\mathcal{Z}_{r,p-1}$ is a maximal proper submodule of $\mathcal{P}_{r,p-1}$ because we have an exact sequence \begin{equation*} 0\longrightarrow\mathcal{Z}_{r,p-1}\longrightarrow\mathcal{P}_{r,p-1}\longrightarrow\mathcal{L}_{r,p-1}\longrightarrow 0. \end{equation*} So $\mathcal{L}_{r,p-1}$ is both a submodule and quotient of $\mathcal{P}_{r,p-1}$. We claim that $\mathcal{L}_{r\pm1,1}$ are neither submodules nor quotients of $\mathcal{P}_{r,p-1}$. Indeed, using rigidity, \begin{align*} \hom_{V_c}(\mathcal{L}_{r\pm1,1},\mathcal{P}_{r,p-1}) & \cong\hom_{V_c}(\mathcal{L}_{r\pm1,1},\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,p-1})\nonumber\\ &\cong\hom_{V_c}(\mathcal{L}_{r\pm1,1}\boxtimes\mathcal{L}_{r,1},\mathcal{P}_{1,p-1})=0, \end{align*} since $\mathcal{L}_{r\pm1,1}\boxtimes\mathcal{L}_{r,1}$ is a direct sum of submodules $\mathcal{L}_{r',1}$ that does not include $\mathcal{L}_{1,1}$ (by \eqref{eqn:Lr1_fus_rules}) and since $\mathcal{L}_{1,p-1}$ is the only irreducible submodule of $\mathcal{P}_{1,p-1}$. Then since $\mathcal{P}_{r,p-1}$ is self-dual, \begin{equation*} \hom_{V_c}(\mathcal{P}_{r,p-1},\mathcal{L}_{r\pm1,1})=0 \end{equation*} as well. So if we use $\mathcal{X}_{r\pm1,1}\subseteq\mathcal{Z}_{r,p-1}$ to denote the inverse images of $\mathcal{L}_{r\pm1,1}$ under the surjection $\mathcal{Z}_{r,p-1}\rightarrow\mathcal{L}_{r-1,1}\oplus\mathcal{L}_{r+1,1}$, the exact sequences \begin{equation*} 0\longrightarrow\mathcal{L}_{r,p-1}\longrightarrow\mathcal{X}_{r\pm1,1}\longrightarrow\mathcal{L}_{r\pm1,1}\longrightarrow 0 \end{equation*} do not split. Then using conformal weight considerations as in the $r=1$ case, $\mathcal{X}_{r+1,1}'$ is a quotient of $\mathcal{V}_{r,p-1}$ while $\mathcal{X}_{r-1,1}$ is a quotient of $\mathcal{V}_{r-1,1}$. Specifically, \begin{equation*} \mathcal{X}_{r+1,1}\cong(\mathcal{V}_{r,p-1}/\mathcal{V}_{r+2,p-1})' \end{equation*} and \begin{equation*} \mathcal{X}_{r-1,1}\cong\mathcal{V}_{r-1,1}/\mathcal{V}_{r+1,1}, \end{equation*} verifying the upper left half of the subquotient diagram for $\mathcal{P}_{r,p-1}$. We still need to determine $\mathcal{P}_{r,p-1}/\mathcal{X}_{r\pm1,1}$. These quotients appear in the exact sequences \begin{equation*} 0\longrightarrow\mathcal{Z}_{r,p-1}/\mathcal{X}_{r\pm1,1}\longrightarrow\mathcal{P}_{r,p-1}/\mathcal{X}_{r\pm1,1}\longrightarrow\mathcal{L}_{r,p-1}\longrightarrow 0, \end{equation*} with $\mathcal{Z}_{r,p-1}/\mathcal{X}_{r\pm1,1}\cong\mathcal{L}_{r\mp1,1}$. These sequences do not split because $\mathcal{L}_{r\pm1,1}$ are not quotients of $\mathcal{P}_{r,p-1}$, so conformal weight considerations as before show that \begin{equation*} \mathcal{P}_{r,p-1}/\mathcal{X}_{r+1,1}\cong(\mathcal{V}_{r-1,1}/\mathcal{V}_{r+1,1})' \end{equation*} and \begin{equation*} \mathcal{P}_{r,p-1}/\mathcal{X}_{r-1,1}\cong\mathcal{V}_{r,p-1}/\mathcal{V}_{r+2,p-1}. 
\end{equation*} This verifies all subquotients in the diagram for $\mathcal{P}_{r,p-1}$, and the Loewy diagram also follows easily. In particular, $\mathrm{Soc}(\mathcal{P}_{r,p-1})\cong\mathcal{L}_{r,p-1}$ because $\mathcal{L}_{r\pm1,1}$ are not submodules and $\mathcal{L}_{r,p-1}$ occurs as a submodule only once (otherwise $\mathcal{L}_{r\pm1,1}$ would be quotients), and then $\mathrm{Soc}(\mathcal{P}_{r,p-1}/\mathcal{L}_{r,p-1})\cong\mathcal{L}_{r-1,1}\oplus\mathcal{L}_{r+1,1}$ because again $\mathcal{L}_{r\pm1,1}$ are not quotients of $\mathcal{P}_{r,p-1}$. Finally, as in the $r=1$ case, $\mathcal{P}_{r,p-1}$ is indecomposable because the intersection of any two non-zero submodules must contain the irreducible socle $\mathcal{L}_{r,p-1}$. \end{proof} \begin{rem} Proposition \ref{prop:Pr_structure} shows that for $r\geq 2$, the homomorphism $\Pi_{r,p}: \mathcal{V}_{r,p-1}\oplus\mathcal{V}_{r,p+1}\rightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,p}$ is not surjective: its image is the Verma module quotient $\mathcal{V}_{r-1,1}/\mathcal{V}_{r+1,1}$ (note that $\mathcal{V}_{r-1,1}=\mathcal{V}_{r,p+1}$). \end{rem} We summarize the fusion rules of this section in the following theorem: \begin{thm}\label{thm:basic_fusion_rules} The following fusion rules hold in $\mathcal{O}_c$: \begin{enumerate} \item For $r,r'\geq 1$ and $1\leq s\leq p$, \begin{equation}\label{fr1} \mathcal{L}_{r',1}\boxtimes\mathcal{L}_{r,s}\cong\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \mathcal{L}_{k,s}. \end{equation} \item For $r\geq1$ and $1\leq s\leq p$, \begin{equation}\label{fr2} \mathcal{L}_{1, 2}\boxtimes\mathcal{L}_{r, s}\cong\left\lbrace\begin{array}{lll} \mathcal{L}_{r,2} & \text{if} & s=1\\ \mathcal{L}_{r,s-1}\oplus\mathcal{L}_{r,s+1} & \text{if} & 2\leq s\leq p-1\\ \mathcal{P}_{r,p-1} & \text{if} & s=p \end{array} \right. , \end{equation} where $\mathcal{P}_{r,p-1}$ is the indecomposable module described in Propositions \ref{prop:P1_structure} and \ref{prop:Pr_structure}. \end{enumerate} \end{thm} We will use these formulas to compute all fusion products of irreducible modules later, but we will first need to construct additional indecomposable modules $\mathcal{P}_{r,s}$ that will appear in the fusion products. \subsection{Categorical dimensions in \texorpdfstring{$\mathcal{O}_c$}{Oc}} Now we can use Proposition \ref{prop:L12_dim} and Theorem \ref{thm:basic_fusion_rules} to compute the categorical dimensions of all irreducible modules in $\mathcal{O}_c$: \begin{thm}\label{thm:cat_dim} In the ribbon tensor category $\mathcal{O}_c$, \begin{equation}\label{eqn:rs_cat_dim} \dim_{\mathcal{O}_c} \mathcal{L}_{r,s} =(-1)^{(p+1)(r+1)+s+1}\, r\cdot \frac{\sin(\pi s/p)}{\sin(\pi/p)} \end{equation} for all $r\geq 1$ and $1\leq s\leq p$. \end{thm} \begin{proof} We have $\dim_{\mathcal{O}_c}\mathcal{L}_{1,1}=1$, and Proposition \ref{prop:L12_dim} shows that \begin{equation*} \dim_{\mathcal{O}_c}\mathcal{L}_{1,2}=-2\cos(\pi/p)=-\frac{\sin(2\pi/p)}{\sin(\pi/p)} =-\frac{q^2-q^{-2}}{q-q^{-1}} \end{equation*} where $q=e^{\pi i/p}$. We can now prove by induction on $s$ that $\dim_{\mathcal{O}_c}\mathcal{L}_{1,s}=(-1)^{s+1}\frac{\sin(s\pi/p)}{\sin(\pi/p)}$ for $1\leq s\leq p$. 
Indeed, if this formula holds for $s$, then using the fusion rules \eqref{fr2} and the fact that categorical dimension respects tensor products, we get
\begin{align*}
\dim_{\mathcal{O}_c}\mathcal{L}_{1,s+1} & =(\dim_{\mathcal{O}_c}\mathcal{L}_{1,2})(\dim_{\mathcal{O}_c}\mathcal{L}_{1,s})-\dim_{\mathcal{O}_c}\mathcal{L}_{1,s-1}\nonumber\\
& =(-1)^{s+2}\frac{(q^2-q^{-2})(q^s-q^{-s})}{(q-q^{-1})^2}-(-1)^{s}\frac{q^{s-1}-q^{-s+1}}{q-q^{-1}}\nonumber\\
& =\frac{(-1)^{s+2}}{q-q^{-1}}\left((q+q^{-1})(q^s-q^{-s})-q^{s-1}+q^{-s+1}\right) = (-1)^{s+2}\frac{q^{s+1}-q^{-s-1}}{q-q^{-1}},
\end{align*}
as required. From this dimension formula, we can see that $\dim_{\mathcal{O}_c}\mathcal{L}_{1,p}=0$ (since $\sin(p\pi/p)=0$). Next we consider $\mathcal{L}_{2,1}$. Since this is a composition factor of $\mathcal{P}_{1,p-1}=\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p}$ and since categorical dimension respects extensions,
\begin{align*}
\dim_{\mathcal{O}_c}\mathcal{L}_{2,1} & =\dim_{\mathcal{O}_{c}}\mathcal{P}_{1,p-1}-2\dim_{\mathcal{O}_c}\mathcal{L}_{1,p-1}\nonumber\\
& =(\dim_{\mathcal{O}_c}\mathcal{L}_{1,2})(\dim_{\mathcal{O}_c}\mathcal{L}_{1,p})-2(-1)^p\frac{\sin((p-1)\pi/p)}{\sin(\pi/p)}\nonumber\\
& =0+(-1)^{p+1}\, 2\cdot\frac{\sin(\pi-\pi/p)}{\sin(\pi/p)} =(-1)^{p+1}\, 2.
\end{align*}
From this, the $\mathfrak{sl}_2$-type fusion rules \eqref{fr1} and induction on $r$ show that
\begin{equation*}
\dim_{\mathcal{O}_c}\mathcal{L}_{r,1} =(-1)^{(p+1)(r+1)}\, r
\end{equation*}
for all $r\geq 1$. Then \eqref{eqn:rs_cat_dim} for general $r$ and $s$ follows from $\mathcal{L}_{r,s}\cong\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,s}$.
\end{proof}

\section{Projective modules}\label{sec:proj}

The category $\mathcal{O}_c$ is quite wild: for example, since all Verma modules $\mathcal{V}_{r,s}$ have infinite length, each irreducible module $\mathcal{L}_{r,s}$ has non-split extensions in $\mathcal{O}_c$ of arbitrarily large length. This means that no irreducible module $\mathcal{L}_{r,s}$ has a projective cover in $\mathcal{O}_c$, and consequently, there is probably no hope of any reasonable classification or description of the indecomposable objects in $\mathcal{O}_c$. We can remedy this situation somewhat by restricting attention to a tamer tensor subcategory, which we introduce next.

\subsection{The tensor subcategory \texorpdfstring{$\mathcal{O}_c^0$}{Oc0}}

Recall from Theorem \ref{thm:Lr1_fus_rules} that the modules $\mathcal{L}_{r,1}$ for $r\geq 1$ are the simple objects of a semisimple tensor subcategory of $\mathcal{O}_c$ that is braided tensor equivalent to an abelian $3$-cocycle twist of $\rep SU(2)$. Moreover, the modules $\mathcal{L}_{2n+1,1}$ for $n\in\mathbb{N}$ are the simple objects of a semisimple symmetric tensor subcategory that is equivalent to $\rep SO(3,\mathbb{R})$. These are the irreducible $V_c$-modules that appear in the decomposition of the triplet vertex operator algebra $\mathcal{W}(p)$ as a $V_c$-module: specifically,
\begin{equation*}
\mathcal{W}(p)\cong\bigoplus_{n=0}^\infty (2n+1)\cdot\mathcal{L}_{2n+1,1}.
\end{equation*}
Because the subcategory $\rep SO(3,\mathbb{R})$ of $\mathcal{O}_c$ is symmetric, monodromies satisfy
\begin{equation*}
\mathcal{R}_{\mathcal{L}_{2n'+1,1},\mathcal{L}_{2n+1,1}}\circ\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{L}_{2n'+1,1}} =\mathrm{Id}_{\mathcal{L}_{2n+1,1}\boxtimes\mathcal{L}_{2n'+1,1}}
\end{equation*}
for all $n,n'\in\mathbb{N}$, that is, the modules $\mathcal{L}_{2n+1,1}$ and $\mathcal{L}_{2n'+1,1}$ centralize each other.
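Concretely, this symmetry can also be verified directly from the balancing equation for monodromies: by \eqref{eqn:Lr1_fus_rules}, the monodromy $\mathcal{R}_{\mathcal{L}_{2n'+1,1},\mathcal{L}_{2n+1,1}}\circ\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{L}_{2n'+1,1}}$ acts on each simple summand $\mathcal{L}_{k,1}$ (with $k$ odd) of $\mathcal{L}_{2n+1,1}\boxtimes\mathcal{L}_{2n'+1,1}$ by the scalar $e^{2\pi i(h_{k,1}-h_{2n+1,1}-h_{2n'+1,1})}$, and this scalar is $1$ because $h_{2m+1,1}=m(pm+p-1)\in\mathbb{Z}$ for all $m\in\mathbb{N}$.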
We define the subcategory $\mathcal{O}_c^0\subseteq\mathcal{O}_c$ to consist of all modules that centralize the $\mathcal{L}_{2n+1,1}$: \begin{defi}\label{def:oc0} The category $\mathcal{O}_c^0$ is the M\"{u}ger centralizer of $\rep SO(3,\mathbb{R})$ in $\mathcal{O}_c$, that is, $\mathcal{O}_c^0\subseteq\mathcal{O}_c$ is the full subcategory whose objects $\mathcal{W}$ satisfy \begin{equation*} \mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{W}}\circ\mathcal{R}_{\mathcal{W},\mathcal{L}_{2n+1,1}} =\mathrm{Id}_{\mathcal{W}\boxtimes\mathcal{L}_{2n+1,1}} \end{equation*} for all $n\in\mathbb{N}$. \end{defi} The next result establishes the fundamental properties of $\mathcal{O}_c^0$: \begin{prop} The category $\mathcal{O}_c^0$ is a ribbon tensor subcategory of $\mathcal{O}_c$ that contains all irreducible $V_c$-modules $\mathcal{L}_{r,s}$ for $r\geq1$, $1\leq s\leq p$. \end{prop} \begin{proof} To show that $\mathcal{O}_c^0$ is a monoidal subcategory of $\mathcal{O}_c$, we just need to show that if $\mathcal{W}_1$ and $\mathcal{W}_2$ are modules in $\mathcal{O}_c^0$, then so is $\mathcal{W}_1\boxtimes\mathcal{W}_2$, that is, $$\mathcal{R}^2_{\mathcal{W}_1\boxtimes\mathcal{W}_2,\mathcal{L}_{2n+1,1}}=\mathrm{Id}_{(\mathcal{W}_1\boxtimes\mathcal{W}_2)\boxtimes\mathcal{L}_{2n+1,1}}$$ for all $n\in\mathbb{N}$. But this is straightforward from the hexagon axiom for the braiding $\mathcal{R}$. Then to show that $\mathcal{O}_c^0$ is abelian and thus a tensor subcategory of $\mathcal{O}_c$, it is enough to show that $\mathcal{O}_c^0$ is closed under submodules and quotient modules. This follows from the rigidity of $\mathcal{L}_{2n+1,1}$ and corresponding exactness of $\mathcal{L}_{2n+1,1}\boxtimes\bullet$, as well as the naturality of the braiding in $\mathcal{O}_c$. To show that $\mathcal{O}_c^0$ is rigid and thus a ribbon subcategory of $\mathcal{O}_c$, we just need to show closure under contragredients, that is, if $\mathcal{R}^2_{\mathcal{W},\mathcal{L}_{2n+1,1}}$ is the identity for each $n\in\mathbb{N}$, then so is $\mathcal{R}^2_{\mathcal{W}',\mathcal{L}_{2n+1,1}}$. Since any such $\mathcal{W}$ is rigid (in $\mathcal{O}_c$) by Theorem \ref{thm:rigidity}, we can use \cite[Lemma 8.9.1]{EGNO}, which states that $\mathcal{R}_{\mathcal{W}',\mathcal{L}_{2n+1,1}}$ agrees with the composition \begin{align*} \mathcal{W}'\boxtimes & \mathcal{L}_{2n+1,1} \xrightarrow{r^{-1}} (\mathcal{W}'\boxtimes\mathcal{L}_{2n+1,1})\boxtimes V_c\xrightarrow{\mathrm{Id}\boxtimes i_\mathcal{W}} (\mathcal{W}'\boxtimes\mathcal{L}_{2n+1,1})\boxtimes(\mathcal{W}\boxtimes\mathcal{W}')\nonumber\\ & \xrightarrow{assoc.} \mathcal{W}'\boxtimes((\mathcal{L}_{2n+1,1}\boxtimes\mathcal{W})\boxtimes\mathcal{W}') \xrightarrow{\mathrm{Id}\boxtimes(\mathcal{R}_{\mathcal{W},\mathcal{L}_{2n+1,1}}^{-1}\boxtimes\mathrm{Id})} \mathcal{W}'\boxtimes((\mathcal{W}\boxtimes\mathcal{L}_{2n+1,1})\boxtimes\mathcal{W}')\nonumber\\ & \xrightarrow{assoc.} (\mathcal{W}'\boxtimes\mathcal{W})\boxtimes(\mathcal{L}_{2n+1,1}\boxtimes\mathcal{W}')\xrightarrow{e_\mathcal{W}\boxtimes\mathrm{Id}} V_c\boxtimes(\mathcal{L}_{2n+1,1}\boxtimes\mathcal{W}')\xrightarrow{l} \mathcal{L}_{2n+1,1}\boxtimes\mathcal{W}', \end{align*} where the arrows marked $assoc.$ represent compositions of associativity isomorphisms in $\mathcal{O}_c$. Using the opposite braiding, $\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{W}'}^{-1}$ is the identical composition, except that $\mathcal{R}_{\mathcal{W},\mathcal{L}_{2n+1,1}}^{-1}$ is replaced with $\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{W}}$. 
But $\mathcal{R}_{\mathcal{W},\mathcal{L}_{2n+1,1}}^{-1}=\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{W}}$ since $\mathcal{W}$ is an object of $\mathcal{O}_c^0$, so the compositions giving $\mathcal{R}_{\mathcal{W}',\mathcal{L}_{2n+1,1}}$ and $\mathcal{R}_{\mathcal{L}_{2n+1,1},\mathcal{W}'}^{-1}$ are the same. Therefore the monodromy of $\mathcal{W}'$ with each $\mathcal{L}_{2n+1,1}$ is the identity. Finally, to show that each $\mathcal{L}_{r,s}$ is an object of $\mathcal{O}_c^0$, we use the balancing equation for monodromies: \begin{align*} \mathcal{R}^2_{\mathcal{L}_{r,s},\mathcal{L}_{2n+1,1}} =\theta_{\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{2n+1,1}}\circ(\theta_{\mathcal{L}_{r,s}}^{-1}\boxtimes\theta_{\mathcal{L}_{2n+1,1}}^{-1}). \end{align*} Recall that $\theta=e^{2\pi i L_0}$ and that $$\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{2n+1,1}\cong\bigoplus_{\substack{k = |r-2n-1|+1\\ k+r+2n \equiv 0\; ({\rm mod}\; 2)}}^{r+2n} \mathcal{L}_{k,s} =\bigoplus_{k=1}^{\min(r,2n+1)} \mathcal{L}_{r+2(n-k+1),s}$$ (from Theorem \ref{thm:basic_fusion_rules}). Thus on the $\mathcal{L}_{r+2(n-k+1),s}$ summand of $\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{2n+1,1}$, the monodromy is given by the scalar \begin{equation*} e^{2\pi i(h_{r+2(n-k+1),s}-h_{r,s}-h_{2n+1,1})} =e^{2\pi i\left[(pr-s)(n-k+1)+(k-1)^2p-(2k-1)np+n\right]} =1. \end{equation*} Thus $\mathcal{R}^2_{\mathcal{L}_{r,s},\mathcal{L}_{2n+1,1}}=\mathrm{Id}_{\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{2n+1,1}}$ for all $n\in\mathbb{N}$ as required. \end{proof} \begin{rem} Although $\mathcal{O}_c^0$ is closed under submodules, quotients, and contragredients, it need not be closed under arbitrary (non-split) extensions. Thus it is possible for a module to be projective in the subcategory $\mathcal{O}_c^0$ even if it is not projective in $\mathcal{O}_c$. In fact, we will show that every irreducible module $\mathcal{L}_{r,s}$ has a projective cover in $\mathcal{O}_c^0$, although not in $\mathcal{O}_c$. \end{rem} We now begin to obtain projective objects in $\mathcal{O}_c^0$: \begin{thm}\label{projoflrp} For all $r\geq 1$, the module $\mathcal{L}_{r,p}$ is both projective and injective in $\mathcal{O}_c^0$. \end{thm} \begin{proof} Since $\mathcal{L}_{r,p}$ is self-dual, injectivity of $\mathcal{L}_{r,p}$ will follow from projectivity. Moreover, it is enough to show that $\mathcal{L}_{1,p}$ is projective because $\mathcal{L}_{r,p}\cong\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{1,p}$ from Theorem \ref{thm:Lr_Ls_fusion} (recall that projective objects form a tensor ideal in any rigid tensor category). Now because $\mathcal{L}_{1,p}$ is simple, it will be projective in $\mathcal{O}_c^0$ if all surjections $\mathcal{W}\twoheadrightarrow\mathcal{L}_{1,p}$ with $\mathcal{W}$ an object of $\mathcal{O}_c^0$ split. In fact, because all modules in $\mathcal{O}_c^0$ have finite length, we may assume that $\mathcal{W}$ has length $2$. (If all length-$2$ extensions of $\mathcal{L}_{1,p}$ split, then so do all finite-length extensions by induction on length.) Thus we are reduced to considering extensions \begin{equation}\label{eqn:proj_exact_seq} 0\longrightarrow\mathcal{L}_{r,s}\longrightarrow\mathcal{W}\longrightarrow\mathcal{L}_{1,p}\longrightarrow 0. \end{equation} It is easy to see that $h_{1,p}$ is the minimum of all conformal weights $h_{r,s}$ at central charge $c_{p,1}$, so because $\mathcal{L}_{1,p}$ does not admit non-split self-extensions (see \cite[Section 5.4]{GK}), we may assume $h_{r,s}>h_{1,p}$. 
This means that $\mathcal{W}$ contains a singular vector of conformal weight $h_{1,p}$, and thus $\mathcal{W}$ contains a homomorphic image of the Verma module $\mathcal{V}_{1,p}$. If the image of $\mathcal{V}_{1,p}$ in $\mathcal{W}$ has length $1$, the exact sequence \eqref{eqn:proj_exact_seq} splits, so we may assume the length is $2$. In this case, the structure of $\mathcal{V}_{1,p}$ as a $\mathcal{V} ir$-module shows that $\mathcal{W}\cong \mathcal{V}_{1,p}/\mathcal{V}_{5,p}$ and $(r,s)=(3,p)$. Thus we just need to show that $\mathcal{V}_{1,p}/\mathcal{V}_{5,p}$ is not an object of $\mathcal{O}_c^0$, and for this it is sufficient to show that the monodromy $\mathcal{R}_{\mathcal{L}_{3,1},\mathcal{V}_{1,p}/\mathcal{V}_{5,p}}^2$ is non-trivial. From the balancing equation \begin{equation*} \mathcal{R}_{\mathcal{L}_{3,1},\mathcal{V}_{1,p}/\mathcal{V}_{5,p}}^2=\theta_{\mathcal{L}_{3,1}\boxtimes(\mathcal{V}_{1,p}/\mathcal{V}_{5,p})}\circ(\theta_{\mathcal{L}_{3,1}}^{-1}\boxtimes\theta_{\mathcal{V}_{1,p}/\mathcal{V}_{5,p}}^{-1}) = e^{2\pi i(L_0-h_{3,1}-h_{1,p})}, \end{equation*} it is enough to show that $\mathcal{L}_{3,1}\boxtimes(\mathcal{V}_{1,p}/\mathcal{V}_{5,p})$ is a logarithmic $V_c$-module, that is, $L_0$ acts non-semisimply on the tensor product. To show this, we prove that $\mathcal{L}_{3,1}\boxtimes(\mathcal{V}_{1,p}/\mathcal{V}_{5,p})$ surjects onto a logarithmic self-extension of $\mathcal{L}_{3,p}$. First, the exactness of $\mathcal{L}_{3,1}\boxtimes\bullet$ and the fusion rules of Theorem \ref{thm:basic_fusion_rules} imply there is an exact sequence \begin{equation*} 0\longrightarrow\mathcal{L}_{1,p}\oplus\mathcal{L}_{3,p}\oplus\mathcal{L}_{5,p}\longrightarrow\mathcal{L}_{3,1}\boxtimes(\mathcal{V}_{1,p}/\mathcal{V}_{5,p})\longrightarrow\mathcal{L}_{3,p}\longrightarrow 0. \end{equation*} We quotient out the submodule $\mathcal{L}_{1,p}\oplus\mathcal{L}_{5,p}$ from the tensor product to get a surjection \begin{equation*} f:\mathcal{L}_{3,1}\boxtimes(\mathcal{V}_{1,p}/\mathcal{V}_{5,p})\longrightarrow\mathcal{L}_{3,p}^{(2)}, \end{equation*} where $\mathcal{L}_{3,p}^{(2)}$ is some self-extension of $\mathcal{L}_{3,p}$. We want to show that $L_0$ acts non-semisimply on $\mathcal{L}_{3,p}^{(2)}(0)$; this $A(V_c)$-module is $2$-dimensional and $h_{3,p}$ is its only $L_0$-eigenvalue. The intertwining operator $\mathcal{Y}=f\circ\mathcal{Y}_\boxtimes$ of type $\binom{\mathcal{L}_{3,p}^{(2)}}{\mathcal{L}_{3,1}\,\mathcal{V}_{1,p}/\mathcal{V}_{5,p}}$ is surjective because $f$ and $\mathcal{Y}_\boxtimes$ are surjective. Then Proposition \ref{prop:piY_surjective} implies that \begin{equation*} \pi(\mathcal{Y}): A(\mathcal{L}_{3,1})\otimes_{A(V_c)} (\mathcal{V}_{1,p}/\mathcal{V}_{5,p})(0)\longrightarrow\mathcal{L}_{3,p}^{(2)}(0) \end{equation*} is surjective, so $\mathcal{L}_{3,p}^{(2)}(0)$ is a homomorphic image of $A(\mathcal{L}_{3,1})\otimes_{A(V_c)}\mathbb{C} v_{1,p}$. This latter $A(V_c)$-module was determined in \cite{FZ2} (under the unnecessary assumption that $p\notin\mathbb{Q}$): we now review the computation. The computation of the $A(V_c)\cong\mathbb{C}[x]$-bimodule $A(\mathcal{L}_{3,1})$ is similar to the computation of $A(\mathcal{L}_{1,2})$ from Section \ref{sec:first_fus}. 
Recall there is an isomorphism \begin{align*} \varphi: A(\mathcal{V}_{3,1}) & \rightarrow \mathbb{C}[x,y]\\ [\omega]^m\cdot[v_{3,1}]\cdot[\omega]^n & \mapsto x^m y^n, \end{align*} and that \begin{equation*} A(\mathcal{L}_{3,1})\cong\mathbb{C}[x,y]/(f_{3,1}(x,y)) \end{equation*} where $f_{3,1}(x,y)=\varphi([\widetilde{v}])$ for a singular vector $\widetilde{v}\in\mathcal{V}_{3,1}$ generating the maximal proper submodule. We can take \begin{equation*} \widetilde{v}=\left( L_{-1}^3-4p L_{-1}L_{-2}+2p(2p+1)L_{-3}\right)v_{3,1}. \end{equation*} Then to compute $\varphi([\widetilde{v}])$, first note that \eqref{eqn:bimod_reln} implies \begin{equation*} \varphi([L_{-1} v])=(x-y-\mathrm{wt}\,v)\varphi([v]) \end{equation*} for $v\in\mathcal{V}_{3,1}$, while \eqref{eqn:bimod_reln_2} implies \begin{equation*} \varphi([L_{-2}v])=y\varphi([v])-\varphi([L_{-1}v]) =(2y-x+\mathrm{wt}\,v)\varphi([v]). \end{equation*} Then the relation \begin{equation*} [L_{-n} v] =(-1)^n[(n-1)L_{-2}v+(n-2)L_{-1}v] \end{equation*} in $A(\mathcal{V}_{3,1})$ specialized to $n=3$ (see the proof of \cite[Lemma 2.11]{FZ2}) implies \begin{align*} \varphi([L_{-3}v]) & =-2\varphi([L_{-2}v])-\varphi([L_{-1} v])=(x-3y-\mathrm{wt}\,v)\varphi([v]). \end{align*} Using these formulas, we get \begin{align*} f_{3,1}(x,y)&= (x-y-h_{3,1}-2)(x-y-h_{3,1}-1)(x-y-h_{3,1})\nonumber\\ &\hspace{3em}-4p(x-y-h_{3,1}-2)(2y-x+h_{3,1})+2p(2p+1)(x-3y-h_{3,1})\nonumber\\ & =(x-y)\left((x-y-2p+1)(x-y-1)-4p\,y\right) \end{align*} (see \cite[Example 2.12]{FZ2}). We now have \begin{align*} A(\mathcal{L}_{3,1})\otimes_{A(V_c)} \mathbb{C} v_{1,p} &\cong\mathbb{C}[x]/(f_{3,1}(x,h_{1,p})), \end{align*} and it turns out that \begin{align*} f_{3,1}(x,h_{1,p})&=(x-h_{1,p})(x-h_{3,p})^2. \end{align*} Thus $L_0$ acts non-semisimply on the only $2$-dimensional quotient of $A(\mathcal{L}_{3,1})\otimes_{A(V_c)}\mathbb{C} v_{1,p}$ whose only $L_0$-eigenvalue is $h_{3,p}$. So $\mathcal{L}_{3,p}^{(2)}$ is a logarithmic module in $\mathcal{O}_c$, proving that $\mathcal{V}_{1,p}/\mathcal{V}_{5,p}$ is not an object of $\mathcal{O}_c^0$. This completes the proof that $\mathcal{L}_{1,p}$ is projective in $\mathcal{O}_c^0$. \end{proof} As the modules $\mathcal{L}_{r,p}$ are irreducible, they are their own projective covers in $\mathcal{O}_c^0$. For this reason, we will sometimes use the alternate notation $\mathcal{L}_{r,p}=\mathcal{P}_{r,p}$ for $r\geq 1$. The irreducible modules $\mathcal{L}_{r,p-1}$ also have projective covers in $\mathcal{O}_c^0$: \begin{prop}\label{prop:Prp-1_proj_cover} For $r\geq 1$, the module $\mathcal{P}_{r,p-1}$ is a projective cover of $\mathcal{L}_{r,p-1}$ in $\mathcal{O}_c^0$. \end{prop} \begin{proof} The module $\mathcal{P}_{r,p-1}$ is projective in $\mathcal{O}_c^0$ because it is by definition the tensor product of a rigid with a projective module. From Propositions \ref{prop:P1_structure} and \ref{prop:Pr_structure}, there is also a surjective homomorphism $q: \mathcal{P}_{r,p-1}\rightarrow\mathcal{L}_{r,p-1}$. Now let $\mathcal{P}$ be any projective module in $\mathcal{O}_c^0$ with surjective homomorphism $\widetilde{q}: \mathcal{P}\rightarrow\mathcal{L}_{r,p-1}$. 
Because both $\mathcal{P}$ and $\mathcal{P}_{r,p-1}$ are projective, there are homomorphisms $f: \mathcal{P}\rightarrow\mathcal{P}_{r,p-1}$ and $g:\mathcal{P}_{r,p-1}\rightarrow\mathcal{P}$ such that the diagrams \begin{equation*} \xymatrix{ & \mathcal{P} \ar[ld]_{f} \ar[d]^{\widetilde{q}} \\ \mathcal{P}_{r,p-1} \ar[r]_q & \mathcal{L}_{r,p-1} \\ } \qquad \qquad \xymatrix{ & \mathcal{P}_{r,p-1} \ar[ld]_{g} \ar[d]^{q} \\ \mathcal{P} \ar[r]_(.4){\widetilde{q}} & \mathcal{L}_{r,p-1} \\ } \end{equation*} commute; we need to show that $f$ is surjective. Indeed, $f\circ g$, as an endomorphism of a finite-length indecomposable module, is either nilpotent or an isomorphism by Fitting's Lemma, and it cannot be nilpotent because for all $N\in\mathbb{N}$, \begin{equation*} q\circ(f\circ g)^N = q\neq 0. \end{equation*} Therefore $f\circ g$ is an isomorphism, which means $f$ is surjective (and $g$ is injective). \end{proof} \subsection{The remaining projective covers}\label{subsec:more_proj_covers} For $p=2$, we have shown that every irreducible module has a projective cover in $\mathcal{O}_c^0$. For $p\geq 3$, we now construct projective covers of the remaining irreducible modules $\mathcal{L}_{r,s}$, $s\leq p-2$, using the method of \cite[Section 5.1]{CMY2}. In fact, many of the arguments from \cite{CMY2} go through almost verbatim in this context. \subsubsection{The case \texorpdfstring{$r = 1$}{r=1}} From Proposition \ref{prop:P1_structure}, the maximal submodule $\mathcal{Z}_{1,p-1}$ of the projective module $\mathcal{P}_{1,p-1}$ is isomorphic to $(\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1})'$, and there is an exact sequence \begin{equation}\label{z1p-1} 0 \longrightarrow \mathcal{L}_{1,p-1} \longrightarrow \mathcal{Z}_{1,p-1} \longrightarrow \mathcal{L}_{2,1} \longrightarrow 0. \end{equation} Since $\mathcal{L}_{1,2}$ is rigid, the functor $\mathcal{L}_{1,2}\boxtimes \bullet$ is exact. Applying $\mathcal{L}_{1,2} \boxtimes \bullet$ to \eqref{z1p-1} and using the fusion rules \eqref{fr2}, we get an exact sequence \begin{equation*} 0 \longrightarrow \mathcal{L}_{1, p-2}\oplus \mathcal{L}_{1,p} \longrightarrow \mathcal{L}_{1,2} \boxtimes \mathcal{Z}_{1,p-1} \longrightarrow \mathcal{L}_{2,2} \longrightarrow 0. \end{equation*} Because $\mathcal{L}_{1,p}$ is injective in $\mathcal{O}_{c}^0$, it is a direct summand of $\mathcal{L}_{1,2} \boxtimes \mathcal{Z}_{1,p-1}$. Let $\mathcal{Z}_{1, p-2}$ be a submodule complement of $\mathcal{L}_{1,p}$ in $\mathcal{L}_{1,2} \boxtimes \mathcal{Z}_{1,p-1}$, that is, \begin{equation}\label{decz1p-1} \mathcal{L}_{1,2} \boxtimes \mathcal{Z}_{1,p-1} = \mathcal{L}_{1,p} \oplus \mathcal{Z}_{1, p-2}. \end{equation} It is easy to see that there is an exact sequence \begin{equation}\label{z1p-2} 0 \longrightarrow \mathcal{L}_{1,p-2} \longrightarrow \mathcal{Z}_{1, p-2} \longrightarrow \mathcal{L}_{2,2} \longrightarrow 0. \end{equation} We claim that this exact sequence does not split. Indeed, the rigidity of $\mathcal{L}_{1,2}$, the fusion rules \eqref{fr2}, and the Loewy diagram of $\mathcal{Z}_{1,p-1}$ imply \begin{align*} \hom (\mathcal{L}_{2,2}, \mathcal{L}_{1,2} \boxtimes \mathcal{Z}_{1,p-1}) &\cong \hom(\mathcal{L}_{1,2} \boxtimes \mathcal{L}_{2,2}, \mathcal{Z}_{1,p-1})\\ &\cong \hom(\mathcal{L}_{2,1}\oplus \mathcal{L}_{2,3}, \mathcal{Z}_{1,p-1}) = 0. \end{align*} So $\mathcal{L}_{2,2}$ cannot be a submodule of $\mathcal{Z}_{1,p-2}\subseteq\mathcal{L}_{1,2}\boxtimes\mathcal{Z}_{1,p-1}$. 
Note that the non-splitting of \eqref{z1p-2} together with conformal weight considerations shows that $\mathcal{Z}_{1,p-2}\cong(\mathcal{V}_{1,p-2}/\mathcal{V}_{3,p-2})'$, just as in the proof of Proposition \ref{prop:P1_structure}. Now we apply $\mathcal{L}_{1,2}\boxtimes \bullet$ to the exact sequence
\[ 0 \longrightarrow \mathcal{Z}_{1,p-1} \longrightarrow \mathcal{P}_{1,p-1} \longrightarrow \mathcal{L}_{1,p-1} \longrightarrow 0. \]
Using \eqref{fr2} and the decomposition \eqref{decz1p-1}, we get the exact sequence
\begin{equation*}
0 \longrightarrow \mathcal{L}_{1,p} \oplus \mathcal{Z}_{1,p-2} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,p-1} \longrightarrow \mathcal{L}_{1,p-2}\oplus \mathcal{L}_{1,p} \longrightarrow 0.
\end{equation*}
Because $\mathcal{L}_{1,p}$ is both projective and injective in $\mathcal{O}_c^0$, $2\cdot \mathcal{L}_{1,p}$ is a direct summand of $\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,p-1}$. Defining $\mathcal{P}_{1,p-2}$ to be a direct summand of $\mathcal{L}_{1,2} \boxtimes \mathcal{P}_{1,p-1}$ complementary to $2\cdot \mathcal{L}_{1,p}$, we get an exact sequence
\begin{equation}\label{p1p-2}
0 \longrightarrow \mathcal{Z}_{1,p-2} \longrightarrow \mathcal{P}_{1,p-2} \longrightarrow \mathcal{L}_{1,p-2} \longrightarrow 0.
\end{equation}
We claim that ${\rm Soc}(\mathcal{P}_{1,p-2}) = \mathcal{L}_{1, p-2}$. Indeed, \eqref{z1p-2} and \eqref{p1p-2} show that the composition factors of $\mathcal{P}_{1,p-2}$ are $\mathcal{L}_{1,p-2}$, $\mathcal{L}_{1,p-2}$, and $\mathcal{L}_{2,2}$. We have already seen that $\mathcal{L}_{2,2}$ is not a submodule of $\mathcal{P}_{1,p-2}$, while
\begin{align*}
\dim\hom(\mathcal{L}_{1,p-2},\mathcal{P}_{1,p-2}) & =\dim\hom(\mathcal{L}_{1,p-2},\mathcal{L}_{1,2}\boxtimes\mathcal{P}_{1,p-1})\nonumber\\
&=\dim\hom(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p-2},\mathcal{P}_{1,p-1})\nonumber\\
&=\dim\hom([\mathcal{L}_{1,p-3}\oplus]\,\mathcal{L}_{1,p-1},\mathcal{P}_{1,p-1}) =1,
\end{align*}
where the summand in brackets occurs only when $p\geq 4$ (for $p=3$, $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p-2}=\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,1}\cong\mathcal{L}_{1,p-1}$); this proves the claim. Next, the exact sequences \eqref{p1p-2} and \eqref{z1p-2} give
\begin{equation}\label{qp1p-2}
0 \longrightarrow \mathcal{L}_{2,2} \longrightarrow \mathcal{P}_{1,p-2}/\mathcal{L}_{1,p-2} \longrightarrow \mathcal{L}_{1,p-2} \longrightarrow 0.
\end{equation}
We claim this sequence does not split and thus ${\rm Soc}(\mathcal{P}_{1,p-2}/\mathcal{L}_{1,p-2}) = \mathcal{L}_{2,2}$. Otherwise, we would have $\mathcal{P}_{1,p-2}/\mathcal{L}_{1,p-2} \cong \mathcal{L}_{1,p-2} \oplus \mathcal{L}_{2,2}$; using the rigidity of $\mathcal{L}_{1,2}$, this would imply
\begin{align*}
\hom(\mathcal{P}_{1,p-1}/\mathcal{L}_{1,p-1}, \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,2}) &\cong \hom(\mathcal{L}_{1,2}\boxtimes(\mathcal{P}_{1,p-1}/\mathcal{L}_{1,p-1}), \mathcal{L}_{2,2})\\
& \cong \hom((\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,p-1})/(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,p-1}), \mathcal{L}_{2,2})\\
& \cong \hom((\mathcal{P}_{1,p-2}/\mathcal{L}_{1,p-2})\oplus \mathcal{L}_{1,p}, \mathcal{L}_{2,2}) \neq 0,
\end{align*}
whereas in fact the Loewy diagram of $\mathcal{P}_{1,p-1}$ shows
\begin{align*}
\hom(\mathcal{P}_{1,p-1}/\mathcal{L}_{1,p-1}, \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,2}) &\cong \hom(\mathcal{V}_{1,p-1}/\mathcal{V}_{3,p-1}, \mathcal{L}_{2,1}\oplus \mathcal{L}_{2,3}) = 0.
\end{align*}
This proves the claim; note that because \eqref{qp1p-2} does not split, $\mathcal{P}_{1,p-2}/\mathcal{L}_{1,p-2}\cong\mathcal{V}_{1,p-2}/\mathcal{V}_{3,p-2}$, just as in the proof of Proposition \ref{prop:P1_structure}.
We have now derived the Loewy diagram for $\mathcal{P}_{1,p-2}$ stated in the next proposition. Moreover, $\mathcal{P}_{1,p-2}$ is indecomposable for the same reasons as $\mathcal{P}_{1,p-1}$, and $\mathcal{P}_{1,p-2}$ is projective in $\mathcal{O}_c^0$ because it is a direct summand of the projective tensor product $\mathcal{L}_{1,2}\boxtimes\mathcal{P}_{1,p-1}$. Thus the argument of Proposition \ref{prop:Prp-1_proj_cover} shows that $\mathcal{P}_{1,p-2}$ is a projective cover of $\mathcal{L}_{1,p-2}$: \begin{prop} The module $\mathcal{P}_{1,p-2}$ is indecomposable and a projective cover of $\mathcal{L}_{1,p-2}$ in $\mathcal{O}_c^0$. It has Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{1,p-2} \\ \mathcal{P}_{1,p-2}: & \mathcal{L}_{2,2} \ar[ru] & \\ & & \mathcal{L}_{1,p-2} \ar[lu]\\ } \end{equation*} \end{prop} Now that we have projective covers $\mathcal{P}_{1, p-2}$, $\mathcal{P}_{1,p-1}$, and $\mathcal{P}_{1,p}$, we proceed to construct modules $\mathcal{P}_{1,s}$ for $1 \leq s \leq p-3$ recursively (assuming now that $p\geq 4$). Fix $s \in \{1,2, \dots, p-3\}$ and assume we have $\mathcal{P}_{1,\sigma}$ for all $s+1 \leq \sigma \leq p-1$ such that: \begin{itemize} \item The module $\mathcal{P}_{1, \sigma}$ is a projective cover of $\mathcal{L}_{1,\sigma}$ in $\mathcal{O}_c^0$. \item The Loewy diagram of $\mathcal{P}_{1, \sigma}$ is \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{1,\sigma} \\ \mathcal{P}_{1,\sigma}: & \mathcal{L}_{2,p-\sigma} \ar[ru] & \\ & & \mathcal{L}_{1,\sigma} \ar[lu]\\ } \end{equation*} \end{itemize} We now define $\mathcal{P}_{1,s}$ as follows. We have a surjection \[ \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{1,s+1} \cong \mathcal{L}_{1,s}\oplus \mathcal{L}_{1,s+2} \longrightarrow \mathcal{L}_{1,s+2}. \] Because $\mathcal{L}_{1,2}$ is rigid and $\mathcal{P}_{1,s+1}$ is projective, $\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1}$ is also projective. So because $\mathcal{P}_{1,s+2}$ is the projective cover of $\mathcal{L}_{1,s+2}$, we get a surjective map \[ \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1} \longrightarrow \mathcal{P}_{1,s+2}. \] Since $\mathcal{P}_{1,s+2}$ is projective, this surjection splits and $\mathcal{P}_{1,s+2}$ is a direct summand of $\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1}$. Define $\mathcal{P}_{1,s}$ to be a complement of $\mathcal{P}_{1,s+2}$: \[ \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1} = \mathcal{P}_{1,s}\oplus \mathcal{P}_{1,s+2}. \] The module $\mathcal{P}_{1,s}$ is in $\mathcal{O}_c^0$ because this category is a tensor subcategory of $\mathcal{O}_c$, and it is projective in $\mathcal{O}_c^0$ because it is a summand of a projective module. We can now prove: \begin{thm}\label{thm:P1s_structure} The module $\mathcal{P}_{1,s}$ is a projective cover of $\mathcal{L}_{1,s}$ in $\mathcal{O}_c^0$ with Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{1,s} \\ \mathcal{P}_{1,s}: & \mathcal{L}_{2,p-s} \ar[ru] & \\ & & \mathcal{L}_{1,s} \ar[lu]\\ } \end{equation*} \end{thm} \begin{proof} From its Loewy diagram, $\mathcal{P}_{1,s+1}$ has a maximal proper submodule $\mathcal{Z}_{1,s+1}$ with non-split exact sequence \begin{equation}\label{z1s} 0 \longrightarrow \mathcal{L}_{1,s+1} \longrightarrow \mathcal{Z}_{1,s+1} \longrightarrow \mathcal{L}_{2,p-s-1} \longrightarrow 0. 
\end{equation} We apply $\mathcal{L}_{1,2}\boxtimes \bullet$ to \eqref{z1s} and use the fusion rules \eqref{fr2} to get an exact sequence \begin{equation*} 0 \longrightarrow \mathcal{L}_{1,s}\oplus \mathcal{L}_{1,s+2} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{1,s+1} \longrightarrow \mathcal{L}_{2,p-s}\oplus \mathcal{L}_{2, p-s-2} \longrightarrow 0. \end{equation*} This shows that the conformal weights of $\mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{1,s+1}$ are contained in the two distinct cosets $h_{1,s}+\mathbb{Z}$ and $h_{1,s+2}+\mathbb{Z}$, and thus $\mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{1,s+1}$ decomposes as a direct sum of two modules, say $\mathcal{Z}_{1,s}$ and $\widetilde{\mathcal{Z}}_{1,s+2}$, with exact sequences \begin{equation}\label{z1s-1} 0 \longrightarrow \mathcal{L}_{1,s} \longrightarrow \mathcal{Z}_{1,s} \longrightarrow \mathcal{L}_{2, p-s} \longrightarrow 0 \end{equation} and \[ 0 \longrightarrow \mathcal{L}_{1,s+2} \longrightarrow \widetilde{\mathcal{Z}}_{1,s+2} \longrightarrow \mathcal{L}_{2, p-s-2} \longrightarrow 0. \] We claim that \eqref{z1s-1} does not split. Otherwise, $\mathcal{L}_{2,p-s}$ is a submodule of $\mathcal{Z}_{1,s}$, and thus also a submodule of $\mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{1,s+1}$. Rigidity of $\mathcal{L}_{1,2}$ would then imply \[ \hom(\mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,p-s}, \mathcal{Z}_{1,s+1}) \cong \hom(\mathcal{L}_{2,p-s}, \mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{1,s+1}) \neq 0. \] However by the fusion rules \eqref{fr2} and the non-split exact sequence \eqref{z1s} for $\mathcal{Z}_{1,s+1}$, there is no non-zero homomorphism \[ \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,p-s} \cong \mathcal{L}_{2,p-s-1} \oplus \mathcal{L}_{2,p-s+1} \longrightarrow \mathcal{Z}_{1,s+1}. \] As a result, ${\rm Soc}(\mathcal{Z}_{1,s}) = \mathcal{L}_{1,s}$. Now we apply $\mathcal{L}_{1,2}\boxtimes \bullet$ to the exact sequence \[ 0 \longrightarrow \mathcal{Z}_{1,s+1} \longrightarrow \mathcal{P}_{1,s+1} \longrightarrow \mathcal{L}_{1,s+1} \longrightarrow 0 \] and get an exact sequence \[ 0 \longrightarrow \mathcal{Z}_{1,s} \oplus \widetilde{\mathcal{Z}}_{1,s+2} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1} \longrightarrow \mathcal{L}_{1,s} \oplus \mathcal{L}_{1,s+2} \longrightarrow 0. \] Conformal weight considerations again show that $\mathcal{P}_{1,s}$ satisfies the exact sequence \begin{equation}\label{p1s-1} 0 \longrightarrow \mathcal{Z}_{1,s} \longrightarrow \mathcal{P}_{1,s} \longrightarrow \mathcal{L}_{1,s} \longrightarrow 0. \end{equation} We claim that ${\rm Soc}(\mathcal{P}_{1,s}) = \mathcal{L}_{1,s}$. Otherwise, since \eqref{z1s-1} is non-split, we would have ${\rm Soc}(\mathcal{P}_{1,s}) = 2\cdot\mathcal{L}_{1,s}$, and rigidity of $\mathcal{L}_{1,2}$ would imply \[ \dim \hom(\mathcal{L}_{1,2}\boxtimes \mathcal{L}_{1,s}, \mathcal{P}_{1,s+1}) = \dim \hom(\mathcal{L}_{1,s}, \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1}) = 2. \] However, this would conflict with \[ \dim \hom(\mathcal{L}_{1,2}\boxtimes \mathcal{L}_{1,s}, \mathcal{P}_{1,s+1}) = \dim \hom([\mathcal{L}_{1,s-1} \oplus] \mathcal{L}_{1,s+1}, \mathcal{P}_{1,s+1}) = 1, \] where the summand in the brackets occurs for $s \geq 2$. The exact sequences \eqref{z1s-1} and \eqref{p1s-1} give an exact sequence \begin{equation}\label{qp1s-1} 0 \longrightarrow \mathcal{L}_{2,p-s} \longrightarrow \mathcal{P}_{1,s}/\mathcal{L}_{1,s} \longrightarrow \mathcal{L}_{1,s} \longrightarrow 0. 
\end{equation} We claim that \eqref{qp1s-1} does not split and thus ${\rm Soc}(\mathcal{P}_{1,s}/\mathcal{L}_{1,s}) = \mathcal{L}_{2,p-s}$. Otherwise, we would have $\mathcal{P}_{1,s}/\mathcal{L}_{1,s} = \mathcal{L}_{2,p-s} \oplus \mathcal{L}_{1,s}$, and rigidity of $\mathcal{L}_{1,2}$ would imply \begin{align*} \hom(\mathcal{P}_{1,s+1}/\mathcal{L}_{1,s+1}, \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,p-s}) &\cong \hom((\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{1,s+1})/(\mathcal{L}_{1,2}\boxtimes \mathcal{L}_{1,s+1}), \mathcal{L}_{2,p-s})\\ &\cong \hom((\mathcal{P}_{1,s+2}/\mathcal{L}_{1,s+2}) \oplus (\mathcal{P}_{1,s}/\mathcal{L}_{1,s}), \mathcal{L}_{2,p-s}) \neq 0. \end{align*} However, in fact \begin{align*} \hom(\mathcal{P}_{1,s+1}/\mathcal{L}_{1,s+1}, \mathcal{L}_{1,2}\boxtimes \mathcal{L}_{2,p-s}) & \cong \hom(\mathcal{P}_{1,s+1}/\mathcal{L}_{1,s+1}, \mathcal{L}_{2,p-s-1}\oplus \mathcal{L}_{2,p-s+1}) = 0. \end{align*} Thus $\mathcal{P}_{1,s}/\mathcal{L}_{1,s}$ is indecomposable, and we have verified the Loewy diagram for $\mathcal{P}_{1,s}$. Finally, $\mathcal{P}_{1,s}$ is a projective cover of $\mathcal{L}_{1,s}$ in $\mathcal{O}_c^0$ by the argument of Proposition \ref{prop:Prp-1_proj_cover}. \end{proof} \begin{rem} As in the proof of Proposition \ref{prop:P1_structure}, we have $\mathcal{P}_{1,s}/\mathcal{L}_{1,s}\cong \mathcal{V}_{1,s}/\mathcal{V}_{3,s}$ and $\mathcal{Z}_{1,s}\cong(\mathcal{V}_{1,s}/\mathcal{V}_{3,s})'$. \end{rem} \subsubsection{The case \texorpdfstring{$r \geq 2$}{r>=2}} We can construct the projective cover $\mathcal{P}_{r,s}$ for $r\geq 2$, $1 \leq s \leq p-2$ exactly as in \cite[Section~5]{CMY2}. Alternatively, we can simply define $\mathcal{P}_{r,s}=\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,s}$. \begin{thm}\label{thm:Prs_structure} For $r\geq 2$ and $1\leq s\leq p-2$, the module $\mathcal{P}_{r,s}=\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,s}$ is a projective cover of $\mathcal{L}_{r,s}$ in $\mathcal{O}_c^0$. It has Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{r,s} & \\ \mathcal{P}_{r,s}: & \mathcal{L}_{r-1,p-s} \ar[ru] & & \mathcal{L}_{r+1,p-s} \ar[lu] \\ & & \mathcal{L}_{r,s} \ar[lu] \ar[ru] & \\ } \end{equation*} \end{thm} \begin{proof} The Loewy diagram for $\mathcal{P}_{r,s}$ follows from that for $\mathcal{P}_{1,s}$ exactly as in the $s=p-1$ case of Proposition \ref{prop:Pr_structure}. Also as in Proposition \ref{prop:Pr_structure}, $\mathcal{P}_{r,s}$ is indecomposable and there is a surjection $\mathcal{P}_{r,s}\rightarrow\mathcal{L}_{r,s}$. Moreover, $\mathcal{P}_{r,s}$ is projective in $\mathcal{O}_c^0$ since it is the tensor product of a rigid with a projective module. Thus the argument of Proposition \ref{prop:Prp-1_proj_cover} shows that $\mathcal{P}_{r,s}$ is a projective cover of $\mathcal{L}_{r,s}$. \end{proof} \section{Tensor product formulas and semisimplification} We now compute all tensor products involving irreducible modules $\mathcal{L}_{r,s}$ and their projective covers $\mathcal{P}_{r,s}$. As a consequence, we show that there is a semisimple subquotient category of $\mathcal{O}_c$ which is a product of two $\mathfrak{sl}_2$-type tensor subcategories. 
\subsection{General fusion rules} We first show how the irreducible modules $\mathcal{L}_{r',1}$ and $\mathcal{L}_{1,2}$ tensor with the projective covers; recall that $\mathcal{P}_{r,p}=\mathcal{L}_{r,p}$ for $r\geq 1$: \begin{thm}\label{thm:prs_fusion} \begin{itemize} \item[(1)]For $r,r'\geq 1$ and $1\leq s \leq p$, \begin{equation}\label{moreprs} \mathcal{L}_{r',1}\boxtimes \mathcal{P}_{r,s} \cong\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \mathcal{P}_{k,s} \end{equation} \item[(2)]For $p\geq 3$ and $r \geq 1$, $1\leq s \leq p-1$, \begin{equation}\label{more1prs} \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{r,s} \cong \begin{cases} \mathcal{P}_{1,2}\oplus \mathcal{P}_{2,p} \;\;\; &\mbox{if}\;\;\; r=s = 1\\ \mathcal{P}_{r,2}\oplus \mathcal{P}_{r-1,p}\oplus \mathcal{P}_{r+1, p} \;\;\; &\mbox{if}\;\;\; s = 1, \; r \geq 2\\ \mathcal{P}_{r,s-1}\oplus \mathcal{P}_{r,s+1}\;\;\; &\mbox{if}\;\;\; 2\leq s\leq p-2\\ \mathcal{P}_{r,p-2}\oplus 2\cdot \mathcal{P}_{r,p}, \;\;\; &\mbox{if}\;\;\; s = p-1 \end{cases} \end{equation} \item[(3)] For $p=2$ and $r\geq 1$, \begin{equation}\label{more2prs} \mathcal{L}_{1,2}\boxtimes\mathcal{P}_{r,1} \cong \begin{cases} 2\cdot\mathcal{P}_{1,2}\oplus \mathcal{P}_{2,2}\;\;\; &\mbox{if}\;\;\; r = 1 \\ \mathcal{P}_{r-1,2}\oplus 2\cdot\mathcal{P}_{r,2}\oplus\mathcal{P}_{r+1,2}\;\;\; &\mbox{if}\;\;\; r \geq 2 \end{cases} \end{equation} \end{itemize} \end{thm} \begin{proof} The $s=p$ case of \eqref{moreprs} is just the $s=p$ case of \eqref{fr1}. For $1\leq s\leq p-1$ and $r=1$, \eqref{moreprs} is just \eqref{eqn:Prp-1} and the definition of $\mathcal{P}_{r',s}$ in Theorem \ref{thm:Prs_structure}. For $r\geq 2$, we simply calculate \begin{align*} \mathcal{L}_{r',1}\boxtimes\mathcal{P}_{r,s} & =\mathcal{L}_{r',1}\boxtimes(\mathcal{L}_{r,1}\boxtimes\mathcal{P}_{1,s})\cong(\mathcal{L}_{r',1}\boxtimes\mathcal{L}_{r,1})\boxtimes\mathcal{P}_{1,s}\nonumber\\ & \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \mathcal{L}_{k,1}\boxtimes\mathcal{P}_{1,s}\cong\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \mathcal{P}_{k,s}, \end{align*} using the $r=1$ case and \eqref{fr1}. Next, note that the $r=1$, $2\leq s\leq p-1$ cases of \eqref{more1prs} are immediate from our construction of the modules $\mathcal{P}_{1,s}$ in Section \ref{subsec:more_proj_covers}. For $r\geq 2$, we can then use \eqref{moreprs} and the $r=1$ case. It remains to prove the $s=1$ cases of \eqref{more1prs}. Taking $s=1$ now, the maximal proper submodule $\mathcal{Z}_{r,1}$ of $\mathcal{P}_{r,1}$ satisfies the exact sequence \begin{equation*} 0 \longrightarrow \mathcal{L}_{r,1} \longrightarrow \mathcal{Z}_{r,1} \longrightarrow [\mathcal{L}_{r-1, p-1}\oplus] \mathcal{L}_{r+1, p-1} \longrightarrow 0, \end{equation*} where from now on, terms in brackets vanish if $r=1$. Applying $\mathcal{L}_{1,2}\boxtimes \bullet$ and using the fusion rules \eqref{fr2}, we have \begin{equation}\label{seq:more1prs_proof} 0 \longrightarrow \mathcal{L}_{r,2} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{r,1} \longrightarrow [\mathcal{L}_{r-1, p-2}\oplus \mathcal{L}_{r-1,p}] \oplus \mathcal{L}_{r+1, p-2}\oplus \mathcal{L}_{r+1,p} \longrightarrow 0. \end{equation} Since both of $\mathcal{L}_{r\pm1,p}$ are projective, $[\mathcal{L}_{r-1,p}\oplus] \mathcal{L}_{r+1,p}$ is a direct summand of $\mathcal{L}_{1,2}\boxtimes \mathcal{Z}_{r,1}$. 
Then the complement $\widetilde{\mathcal{Z}}_{r, 2}$ of $[\mathcal{L}_{r-1,p}\oplus] \mathcal{L}_{r+1,p}$ satisfies the exact sequence
\begin{equation}\label{seq:z_til}
0 \longrightarrow \mathcal{L}_{r,2} \longrightarrow \widetilde{\mathcal{Z}}_{r, 2} \longrightarrow [\mathcal{L}_{r-1, p-2}\oplus] \mathcal{L}_{r+1, p-2} \longrightarrow 0.
\end{equation}
Now consider the exact sequence
\begin{equation*}
0 \longrightarrow \mathcal{Z}_{r,1} \longrightarrow \mathcal{P}_{r,1} \longrightarrow \mathcal{L}_{r,1} \longrightarrow 0.
\end{equation*}
Applying $\mathcal{L}_{1,2}\boxtimes \bullet$ and using the fusion rules \eqref{fr2}, we have
\begin{equation*}
0 \longrightarrow \widetilde{\mathcal{Z}}_{r,2}\oplus [\mathcal{L}_{r-1,p}\oplus] \mathcal{L}_{r+1,p} \longrightarrow \mathcal{L}_{1,2}\boxtimes \mathcal{P}_{r,1} \longrightarrow \mathcal{L}_{r,2} \longrightarrow 0.
\end{equation*}
Since both modules $\mathcal{L}_{r\pm1,p}$ are injective, there exists a direct summand $\widetilde{\mathcal{P}}_{r,2}$ of $\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{r,1}$ complementary to $[\mathcal{L}_{r-1,p}\oplus] \mathcal{L}_{r+1,p}$ satisfying the exact sequence
\begin{equation}\label{seq:p_til}
0 \longrightarrow \widetilde{\mathcal{Z}}_{r,2} \longrightarrow \widetilde{\mathcal{P}}_{r,2} \longrightarrow \mathcal{L}_{r,2} \longrightarrow 0.
\end{equation}
The module $\widetilde{\mathcal{P}}_{r,2}$ is projective in $\mathcal{O}_c^0$ since it is a summand of a projective module. Since $\mathcal{P}_{r,2}$ is a projective cover of $\mathcal{L}_{r,2}$, there is thus a surjection $\widetilde{\mathcal{P}}_{r,2}\longrightarrow\mathcal{P}_{r,2}$; but since \eqref{seq:z_til} and \eqref{seq:p_til} show that these two modules have the same length, we get $\widetilde{\mathcal{P}}_{r,2}\cong\mathcal{P}_{r,2}$. Therefore
\begin{equation*}
\mathcal{L}_{1,2}\boxtimes \mathcal{P}_{r,1} \cong \mathcal{P}_{r,2} \oplus \mathcal{L}_{r+1,p} [\oplus \mathcal{L}_{r-1,p}],
\end{equation*}
proving \eqref{more1prs} for $s = 1$. Now when $p=2$, we need to replace the exact sequence \eqref{seq:more1prs_proof} with
\begin{equation*}
0\longrightarrow\mathcal{L}_{r,2}\longrightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{Z}_{r,1}\longrightarrow [\mathcal{L}_{r-1,2}\oplus] \mathcal{L}_{r+1,2}\longrightarrow 0.
\end{equation*}
Since both $\mathcal{L}_{r\pm1,2}=\mathcal{P}_{r\pm1,2}$ are projective, this exact sequence splits. The exact sequence
\begin{equation*}
0\longrightarrow \mathcal{L}_{1,2}\boxtimes\mathcal{Z}_{r,1}\longrightarrow\mathcal{L}_{1,2}\boxtimes\mathcal{P}_{r,1}\longrightarrow \mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}\longrightarrow 0
\end{equation*}
also splits because $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{r,1}\cong\mathcal{L}_{r,2}$ is projective. Then these two split exact sequences together imply \eqref{more2prs}.
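Note that, consistent with the fact that the projective modules form a tensor ideal in the rigid tensor category $\mathcal{O}_c^0$, every summand appearing on the right-hand sides of \eqref{moreprs}, \eqref{more1prs}, and \eqref{more2prs} is again projective.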
\end{proof} Finally, here are all fusion rules involving the simple modules $\mathcal{L}_{r,s}$ and their projective covers in $\mathcal{O}_c^0$: \begin{thm}\label{generalfusionrules} All tensor products in $\mathcal{O}_c$ of the $V_c$-modules $\mathcal{L}_{r,s}$ and $\mathcal{P}_{r,s}$ are as follows, with sums taken to be empty if the lower bound exceeds the upper bound: \begin{itemize} \item[(1)] For $r, r' \geq 1$ and $1 \leq s, s' \leq p$, \begin{align}\label{caser} & \mathcal{L}_{r,s}\boxtimes \mathcal{L}_{r',s'} \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{\min(s+s'-1, 2p-1-s-s')} \mathcal{L}_{k, \ell} \oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{p} \mathcal{P}_{k, \ell}\bigg). \end{align} \item[(2)] For $r, r' \geq 1$, $1 \leq s \leq p$, and $1 \leq s' \leq p-1$, \begin{align}\label{casePM} \mathcal{L}_{r,s}\boxtimes \mathcal{P}_{r', s'} & \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{k, \ell}\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\bigg)\nonumber\\ &\qquad\oplus\bigoplus_{\substack{\ell = p-s+s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\bigg(\bigoplus_{\substack{k = \max(|r-r'|,1)\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'-2} \mathcal{P}_{k,\ell}\oplus\bigoplus_{\substack{k = |r-r'|+2\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'}\mathcal{P}_{k,\ell}\bigg). \end{align} \item[(3)]For $r,r'\geq 1$ and $1\leq s,s' \leq p-1$, \begin{align}\label{casePP} \mathcal{P}_{r,s}\boxtimes\mathcal{P}_{r',s'} & \cong2\cdot\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{k, \ell}\oplus\bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k,\ell}\bigg)\nonumber\\ &\qquad \oplus\bigoplus_{\substack{\ell = s+s'+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\bigg( \bigoplus_{\substack{k = \max(|r-r'|-1,1)\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-3}\mathcal{P}_{k,\ell}\oplus \bigoplus_{\substack{k = \max(|r-r'|+1,2)\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\mathcal{P}_{k,\ell}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\oplus\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\mathcal{P}_{k,\ell}\oplus\bigoplus_{\substack{k = |r-r'|+3\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'+1}\mathcal{P}_{k,\ell} \bigg)\nonumber\\ &\qquad\oplus\bigoplus_{\substack{k = \max(|r-r'|,1)\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'-2}\bigg(\bigoplus_{\substack{\ell = |p-s-s'|+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\oplus\bigoplus_{\substack{\ell = p-\vert s-s'\vert+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\bigg)\nonumber\\ &\qquad\oplus\bigoplus_{\substack{k = |r-r'|+2\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'}\bigg(\bigoplus_{\substack{\ell = |p-s-s'|+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\oplus\bigoplus_{\substack{\ell = p-\vert s-s'\vert+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\bigg) \end{align} \end{itemize} \end{thm} \begin{proof} The proof of the $r=r'=1$ case of \eqref{caser} 
is exactly the same as the corresponding proof in \cite[Theorem 5.2.1]{CMY2}, so we omit the details: \begin{equation*}\label{caser_1} \mathcal{L}_{1,s}\boxtimes \mathcal{L}_{1,s'} \cong \bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{{\rm min}\{s+s'-1, 2p-1-s-s'\}}\mathcal{L}_{1, \ell} \oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{p}\mathcal{P}_{1, \ell}. \end{equation*} The general case then follows from the commutativity and associativity of tensor products in $\mathcal{O}_c$ and the fusion rules \eqref{fr1} and \eqref{moreprs}: \begin{align*} \mathcal{L}_{r,s}\boxtimes\mathcal{L}_{r',s'} & \cong(\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{r',1})\boxtimes(\mathcal{L}_{1,s}\boxtimes\mathcal{L}_{1,s'})\nonumber\\ & \cong\bigg(\bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; (\mathrm{mod}\; 2)}}^{r+r'-1}\mathcal{L}_{k, 1}\bigg)\boxtimes\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{{\rm min}\{s+s'-1, 2p-1-s-s'\}}\mathcal{L}_{1, \ell} \oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{p}\mathcal{P}_{1, \ell}\bigg)\nonumber\\ &\cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; (\mathrm{mod}\; 2)}}^{r+r'-1}\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{{\rm min}\{s+s'-1, 2p-1-s-s'\}}\mathcal{L}_{k, \ell} \oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\bigg). \end{align*} Let us now consider the $r=r'=1$ case of \eqref{casePM}: \begin{align}\label{eqn:LPr=r'=1} \mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'} & \cong \bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{1, \ell}\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{1, \ell} \oplus \bigoplus_{\substack{\ell = p-s+s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{2, \ell}. \end{align} The case $s=1$ is easy since $\mathcal{L}_{1,1}$ is the unit object of $\mathcal{O}_c$ and since only the first sum in \eqref{eqn:LPr=r'=1} is non-empty (because $s'\leq p-1$). Then for $s=2$, \eqref{eqn:LPr=r'=1} in the cases $s'=1$, $2\leq s'\leq p-2$, and $s'=p-1$ yields the corresponding cases of \eqref{more1prs} and \eqref{more2prs}. This proves \eqref{eqn:LPr=r'=1} when $p=2$, and for $p\geq 3$, we can finish the proof using induction on $s$. Thus assume we have proved \eqref{eqn:LPr=r'=1} for some $s$ such that $2\leq s\leq p-1$, and consider the $s+1$ case. Since \begin{equation*} \mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'}) \cong(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,s})\boxtimes\mathcal{P}_{1,s'}\cong(\mathcal{L}_{1,s-1}\boxtimes\mathcal{P}_{1,s'})\oplus(\mathcal{L}_{1,s+1}\boxtimes\mathcal{P}_{1,s'}) \end{equation*} and since all these tensor products have finite length, the Krull-Schmidt Theorem guarantees that we can determine the indecomposable summands of $\mathcal{L}_{1,s+1}\boxtimes\mathcal{P}_{1,s'}$ by subtracting the indecomposable summands of $\mathcal{L}_{1,s-1}\boxtimes\mathcal{P}_{1,s'}$ from those of $\mathcal{L}_{1,2}\boxtimes(\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'})$. 
So we get, with $\ominus$ denoting removal of the indicated indecomposable summands, \begin{align*} \mathcal{L}_{1,s+1}\boxtimes\mathcal{P}_{1,s'}\cong \left\lbrace\begin{array}{lll} (\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,2})\oplus(\mathcal{L}_{1,s}\boxtimes\mathcal{L}_{2,p})\ominus(\mathcal{L}_{1,s-1}\boxtimes\mathcal{P}_{1,1}) & \text{if} & s'=1\\ (\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'-1})\oplus(\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'+1})\ominus(\mathcal{L}_{1,s-1}\boxtimes\mathcal{P}_{1,s'}) & \text{if} & 2\leq s'\leq p-2\\ (\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,p-2})\oplus2\cdot(\mathcal{L}_{1,s}\boxtimes\mathcal{L}_{1,p})\ominus(\mathcal{L}_{1,s-1}\boxtimes\mathcal{P}_{1,p-1}) & \text{if} & s'=p-1\\ \end{array} \right. \end{align*} using the fusion rules \eqref{more1prs}. Analysis of these three formulas using the $s$ and $s-1$ cases of \eqref{eqn:LPr=r'=1} (which hold by induction), as well as the $s'=p$ cases of \eqref{caser}, then yields the $s+1$ case of \eqref{eqn:LPr=r'=1}. For $s'=p-1$, it is helpful to divide the analysis into the cases $s<p-1$ and $s=p-1$. Now we prove \eqref{casePM} for general $r,r'$ using the $r=r'=1$ case along with \eqref{fr1} and \eqref{moreprs}: \begin{align*} \mathcal{L}_{r,s}\boxtimes\mathcal{P}_{r',s'} & \cong(\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{r',1})\boxtimes(\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'})\nonumber\\ & \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}(\mathcal{L}_{k,1}\boxtimes\mathcal{P}_{1, \ell})\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}(\mathcal{L}_{k,1}\boxtimes\mathcal{P}_{1, \ell})\bigg)\nonumber\\ & \hspace{2em}\oplus \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\bigoplus_{\substack{\ell = p-s+s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}(\mathcal{L}_{k,1}\boxtimes\mathcal{P}_{2, \ell})\nonumber\\ & \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1}\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{k, \ell}\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{k, \ell}\bigg)\nonumber\\ &\hspace{2em}\oplus\bigoplus_{\substack{\ell = p-s+s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\bigg(\bigoplus_{\substack{k = \max(|r-r'|,1)\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'-2} \mathcal{P}_{k,\ell}\oplus\bigoplus_{\substack{k = |r-r'|+2\\ k+r+r' \equiv 0\; ({\rm mod}\; 2)}}^{r+r'}\mathcal{P}_{k,\ell}\bigg) \end{align*} as required. To prove \eqref{casePP}, we again take $r=r'=1$ first. The exact sequences \begin{equation*} 0\longrightarrow\mathcal{Z}_{1,s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow\mathcal{P}_{1,s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow 0 \end{equation*} and \begin{equation*} 0\longrightarrow\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow\mathcal{Z}_{1,s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow\mathcal{L}_{2,p-s}\boxtimes\mathcal{P}_{1,s'}\longrightarrow 0, \end{equation*} both of which split since $\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'}$ and $\mathcal{L}_{2,p-s}\boxtimes\mathcal{P}_{1,s'}$ are projective in $\mathcal{O}_c^0$, imply that \begin{equation*} \mathcal{P}_{1,s}\boxtimes\mathcal{P}_{1,s'}\cong 2\cdot(\mathcal{L}_{1,s}\boxtimes\mathcal{P}_{1,s'}) \oplus(\mathcal{L}_{2,p-s}\boxtimes\mathcal{P}_{1,s'}).
\end{equation*} Thus by \eqref{casePM}, \begin{align*} \mathcal{P}_{1,s}\boxtimes\mathcal{P}_{1,s'} & \cong2\cdot\bigg(\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{1, \ell}\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{1, \ell} \oplus \bigoplus_{\substack{\ell = p-s+s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{2, \ell}\bigg)\nonumber\\ &\qquad \oplus \bigoplus_{\substack{\ell = |p-s-s'|+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(p-s+s'-1, p)}\mathcal{P}_{2, \ell}\oplus \bigoplus_{\substack{\ell = p+s-s'+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{2, \ell} \oplus \bigoplus_{\substack{\ell = s+s'+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}(\mathcal{P}_{1, \ell}\oplus\mathcal{P}_{3,\ell})\nonumber\\ &\cong \bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{ p}\mathcal{P}_{1, \ell}\oplus\bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{\min(s+s'-1, p)}\mathcal{P}_{1, \ell}\oplus\bigoplus_{\substack{\ell = s+s'+1\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{3,\ell}\nonumber\\ & \qquad\oplus \bigoplus_{\substack{\ell = 2p+1-s-s'\\ \ell+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}(2\cdot\mathcal{P}_{1, \ell}) \oplus \bigoplus_{\substack{\ell = |p-s-s'|+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{2, \ell}\oplus\bigoplus_{\substack{\ell = p-\vert s-s'\vert+1\\ \ell+p+s+s' \equiv 1\; ({\rm mod}\; 2)}}^{p}\mathcal{P}_{2, \ell}. \end{align*} We now get \eqref{casePP} by tensoring this expression with $\mathcal{L}_{r,1}\boxtimes\mathcal{L}_{r',1}$ as before. \end{proof} \begin{rem} The fusion rules for irreducible $V_c$-modules in $\mathcal{O}_c$ follow from the tensor product formula \eqref{caser}: For $r,r',r''\geq 1$ and $1\leq s,s',s''\leq p$, \begin{equation*} \dim \hom_{V_c}(\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{r',s'}, \mathcal{L}_{r'',s''})\leq 1, \end{equation*} with equality if and only if $$r''\in\lbrace r+r'-1,r+r'-3,\ldots,\vert r-r'\vert+1\rbrace$$ and $$s''\in\lbrace s+s'-1,s+s'-3,\ldots,\vert s-s'\vert+1\rbrace$$ (with $s''\leq p$ also). This agrees with \cite[Theorem 2.3]{Lin}, but note that the fusion rule result of \cite{Lin} does not distinguish between $\mathcal{L}_{r'',s''}$ and $\mathcal{P}_{r'',s''}$ appearing as a summand of $\mathcal{L}_{r,s}\boxtimes\mathcal{L}_{r',s'}$. \end{rem} \subsection{Semisimplification}\label{subsec:ss} Theorem \ref{generalfusionrules} shows that the full subcategory $\mathcal{O}'_c\subseteq\mathcal{O}_c$ whose objects are finite direct sums of modules $\mathcal{L}_{r,s}$ and $\mathcal{P}_{r,s}$ for $r\geq 1$ and $1\leq s\leq p$ is an additive monoidal subcategory of $\mathcal{O}_c$ (but it is not abelian since it is not closed under submodules and quotients). Since the modules $\mathcal{L}_{r,s}$ and $\mathcal{P}_{r,s}$ are all self-dual, $\mathcal{O}'_c$ is a ribbon category, and thus we can define its semisimplification $\overline{\mathcal{O}_c'}$ as usual to be the quotient of $\mathcal{O}_c'$ by the tensor ideal of negligible morphisms. Recall (see for example \cite[Definition 3.3.16]{BK}) that a morphism $f: \mathcal{W}_1\rightarrow\mathcal{W}_2$ in $\mathcal{O}'_c$ is negligible if the categorical trace $\tr_{\mathcal{O}'_c}(f\circ g)$ vanishes for all morphisms $g:\mathcal{W}_2\rightarrow\mathcal{W}_1$.
Moreover, an object $\mathcal{W}$ in $\mathcal{O}'_c$ is negligible if $\mathrm{Id}_\mathcal{W}$ is negligible; such objects are isomorphic to $0$ in the semisimplification $\overline{\mathcal{O}_c'}$. \begin{lem} An irreducible module $\mathcal{L}_{r,s}$ is negligible in $\mathcal{O}'_c$ if and only if $s=p$. Moreover, all projective modules $\mathcal{P}_{r,s}$ are negligible. \end{lem} \begin{proof} Since $\mathcal{L}_{r,s}$ is irreducible, $\mathrm{End}_{\mathcal{O}'_c}(\mathcal{L}_{r,s})=\mathbb{C}\cdot \mathrm{Id}_{\mathcal{L}_{r,s}}$ and thus $\mathcal{L}_{r,s}$ is negligible if and only if its categorical dimension $\tr_{\mathcal{O}'_c} \mathrm{Id}_{\mathcal{L}_{r,s}}$ vanishes. Then \eqref{eqn:rs_cat_dim} shows that $\dim_{\mathcal{O}_c}\mathcal{L}_{r,s}=0$ if and only if $s=p$. For the projective modules, the definitions and constructions in Sections \ref{sec:fus_and_rig} and \ref{subsec:more_proj_covers} show that every $\mathcal{P}_{r,s}$ is in the tensor ideal generated by the modules $\mathcal{L}_{r,p}$. Since negligible morphisms are a tensor ideal containing all $\mathrm{Id}_{\mathcal{L}_{r,p}}$, each $\mathcal{P}_{r,s}$ is negligible. \end{proof} \begin{cor} The category $\overline{\mathcal{O}_c'}$ is a semisimple abelian category with simple objects $\mathcal{L}_{r,s}$ for $r\geq 1$ and $1\leq s\leq p-1$. \end{cor} Since negligible morphisms form a tensor ideal, the semisimplification $\overline{\mathcal{O}_c'}$ is also a (ribbon) tensor category. Tensor products of simple objects in $\overline{\mathcal{O}_c'}$ follow from Theorem \ref{generalfusionrules}(1): \begin{prop}\label{prop:ss_fus_rules} Simple objects in $\overline{\mathcal{O}_c'}$ have the following tensor products: \begin{equation*} \mathcal{L}_{r,s}\boxtimes \mathcal{L}_{r',s'} \cong \bigoplus_{\substack{k = |r-r'|+1\\ k+r+r' \equiv 1\; ({\rm mod}\; 2)}}^{r+r'-1} \bigoplus_{\substack{\ell = |s-s'|+1\\ \ell+s+s' \equiv 1\; (\mathrm{mod}\; 2)}}^{\min(s+s'-1, 2p-1-s-s')} \mathcal{L}_{k, \ell} \end{equation*} for $r, r'\geq 1$ and $1\leq s,s'\leq p-1$. \end{prop} From this proposition, we see that as an abelian category, $\overline{\mathcal{O}_c'}$ decomposes as the Deligne product of two tensor subcategories. First, the modules $\mathcal{L}_{r,1}$ are the simple objects of a tensor subcategory which we denote by $\mathcal{O}_c^L$. As discussed in the proof of Theorem \ref{thm:Lr1_fus_rules} (see also \cite[Corollary~14]{ACGY}), $\mathcal{O}_c^L$ is braided tensor equivalent to an abelian $3$-cocycle twist of $\mathrm{Rep}\, SU(2)$ (or $\mathrm{Rep}\,\mathfrak{sl}_2$). This same cocycle twist of $\rep\mathfrak{sl}_2$ is also braided tensor equivalent to the Kazhdan-Lusztig category $KL_{-2+1/p}(\mathfrak{sl}_2)$ of modules for the simple affine vertex operator algebra $L_{-2+1/p}(\mathfrak{sl}_2)$ at level $-2+\frac{1}{p}$ \cite[Corollary~9]{ACGY}. Thus $\mathcal{O}_c^L$ is braided tensor equivalent to $KL_{-2+1/p}(\mathfrak{sl}_2)$, although they have different ribbon twists because the conformal weights of $\mathcal{L}_{2,1}$ differ from those of the corresponding simple $L_{-2+1/p}(\mathfrak{sl}_2)$-module. Secondly, although the modules $\mathcal{L}_{1,s}$ for $1\leq s\leq p-1$ do not form the simple objects of a tensor subcategory of $\mathcal{O}_c$, they do in the semisimple subquotient $\overline{\mathcal{O}_c'}$. We denote by $\mathcal{O}_c^R$ the subcategory generated by $\mathcal{L}_{1,s}$ for $1\leq s\leq p-1$. 
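(For example, when $p=3$, the simple objects of $\mathcal{O}_c^R$ are $\mathcal{L}_{1,1}$ and $\mathcal{L}_{1,2}$, and Proposition \ref{prop:ss_fus_rules} gives $\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,2}\cong\mathcal{L}_{1,1}$ in $\overline{\mathcal{O}_c'}$, exactly the fusion rule of the level-$1$ simple affine vertex operator algebra $L_1(\mathfrak{sl}_2)$.)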
Then the $r=r'=1$ case of Proposition \ref{prop:ss_fus_rules} yields precisely the Frenkel-Zhu fusion rules \cite{FZ1} for the simple affine vertex operator algebra $L_{-2+p}(\mathfrak{sl}_2)$, under the identification of $\mathcal{L}_{1,s}$ with the simple $L_{-2+p}(\mathfrak{sl}_2)$-module induced from the $s$-dimensional simple $\mathfrak{sl}_2$-module. We can actually prove a stronger relationship: \begin{prop} The subcategory $\mathcal{O}_c^R$ is tensor equivalent to the category $KL_{-2+p}(\mathfrak{sl}_2)$ of modules for the simple affine vertex operator algebra $L_{-2+p}(\mathfrak{sl}_2)$. \end{prop} \begin{proof} From \cite{F}, the category $KL_{-2+p}(\mathfrak{sl}_2)$ is equivalent (as modular tensor categories) to the semisimplification of the category of tilting modules for the Lusztig quantum group $U_q(\mathfrak{sl}_2)$ at $q = e^{\pi i/p}$ \cite{AnP}. We denote this category by $\mathcal{C}(q, \mathfrak{sl}_2)$. Proposition \ref{prop:ss_fus_rules} shows that the Grothendieck rings of the categories $\mathcal{O}_c^R$ and $\mathcal{C}(q,\mathfrak{sl}_2)$ are isomorphic under the map $[\mathcal{L}_{1,s}] \mapsto [V_{s-1}]$, with $V_{s-1}$ the $s$-dimensional irreducible representation of $U_{q}(\mathfrak{sl}_2)$. Then by \cite[Theorem $A_\ell$]{KW}, $\mathcal{O}_c^R$ is tensor equivalent to $\mathcal{C}(\tilde{q}, \mathfrak{sl}_2)^{\tau}$, where $\tilde{q}^2$ is a primitive root of unity of order $p$ (unique up to $\tilde{q}^2\rightarrow\tilde{q}^{-2}$) and $\tau$ denotes modification of the associativity isomorphisms in $\mathcal{C}(\tilde{q}, \mathfrak{sl}_2)$ by a $3$-cocycle on $\mathbb{Z}/2\mathbb{Z}$. Up to coboundaries, there is only one non-trivial $3$-cocycle $\tau$ on $\mathbb{Z}/2\mathbb{Z}$: it modifies the usual associativity isomorphism $V_1\otimes(V_1\otimes V_1)\rightarrow(V_1\otimes V_1)\otimes V_1$ in $\mathcal{C}(\tilde{q},\mathfrak{sl}_2)$ by a sign. The tensor categories $\mathcal{C}(\tilde{q},\mathfrak{sl}_2)$ for various $2p$th roots of unity can be distinguished using the evaluation $e_{V_1}: V_1^*\otimes V_1\rightarrow\mathbb{C}$ and coevaluation $i_{V_1}: \mathbb{C}\rightarrow V_1\otimes V_1^*$ (see for example \cite[Exercise 8.18.8]{EGNO}). Specifically, if we identify $V_1=V_1^*=V_1^{**}$, then $e_{V_1}\circ i_{V_1}\in\mathbb{C}$ is an invariant of the tensor category structure on $\mathcal{C}(\tilde{q},\mathfrak{sl}_2)$, and in fact \begin{equation}\label{eqn:quantum_left_tr} e_{V_1}\circ i_{V_1} = -\tilde{q}-\tilde{q}^{-1}. \end{equation} If $\tau$ is the non-trivial $3$-cocycle on $\mathbb{Z}/2\mathbb{Z}$, then $e_{V_1}\circ i_{V_1}= \tilde{q}+\tilde{q}^{-1}$ in $\mathcal{C}(\tilde{q},\mathfrak{sl}_2)^\tau$, since modification of $\mathcal{A}_{V_1,V_1,V_1}$ by a sign means that either $e_{V_1}$ or $i_{V_1}$ should be modified by a sign to get rigidity. For our tensor category $\mathcal{O}_c^R$, we showed in \eqref{eqn:L12_left_trace} that \begin{equation*} e_{\mathcal{L}_{1,2}}\circ i_{\mathcal{L}_{1,2}} =-2\cos(\pi/p) =-\frac{\sin(2\pi/p)}{\sin(\pi/p)}= -\frac{e^{2\pi i/p}-e^{-2\pi i/p}}{e^{\pi i/p}-e^{-\pi i/p}} =-e^{\pi i/p}-e^{-\pi i/p}. \end{equation*} Comparing with \eqref{eqn:quantum_left_tr}, we see that $\mathcal{O}_c^R$ must be tensor equivalent to either $\mathcal{C}(q,\mathfrak{sl}_2)$ or $\mathcal{C}(-q,\mathfrak{sl}_2)^\tau$.
But these two quantum group categories are equivalent to each other: Since $\pm q$ square to the same primitive $p$th root of unity, \cite{KW} implies that $\mathcal{C}(q,\mathfrak{sl}_2)$ is tensor equivalent to a $3$-cocycle twist of $\mathcal{C}(-q,\mathfrak{sl}_2)$, and this cocycle has to be the non-trivial one because $\mathcal{C}(q,\mathfrak{sl}_2)$ and $\mathcal{C}(-q,\mathfrak{sl}_2)$ are not tensor equivalent. We conclude that $\mathcal{O}_c^R$ is tensor equivalent to $\mathcal{C}(q,\mathfrak{sl}_2)$, and thus also to the tensor category of $L_{-2+p}(\mathfrak{sl}_2)$-modules. \end{proof} \begin{rem} The appearance of affine $\mathfrak{sl}_2$ tensor categories in the semisimplification of $\mathcal{O}_c$ is not surprising because the Virasoro algebra at central charge $13-6p-6p^{-1}$ is the quantum Drinfeld-Sokolov reduction \cite{FFr} of both universal affine vertex operator algebras $V_{-2+1/p}(\mathfrak{sl}_2)$ and $V_{-2+p}(\mathfrak{sl}_2)$ (see also \cite[Chapter 15]{FB}). \end{rem} \begin{rem} As a braided tensor category, $\overline{\mathcal{O}_c'}$ is not quite the Deligne product of $\mathcal{O}_c^L$ and $\mathcal{O}_c^R$, since these two subcategories do not quite centralize each other. Indeed, the balancing equation for monodromies implies \begin{equation*} \mathcal{R}_{\mathcal{L}_{r,1},\mathcal{L}_{1,s}}^2 =\theta_{\mathcal{L}_{r,s}}\circ(\theta_{\mathcal{L}_{r,1}}^{-1}\boxtimes\theta_{\mathcal{L}_{1,s}}^{-1}) = e^{2\pi i(h_{r,s}-h_{r,1}-h_{1,s})}=e^{\pi i (r+s-rs-1)}, \end{equation*} which is not trivial if $r,s\in2\mathbb{Z}$. \end{rem} \section{Connections between Virasoro and triplet vertex operator algebras} In this section, we show how to obtain basic results in the representation theory of triplet vertex operator algebras $\mathcal{W}(p)$ using extension theory \cite{HKL, CKM, CMY1} applied to the Virasoro category $\mathcal{O}_c^0$. Then, we show that the Virasoro category $\mathcal{O}_c^0$ is braided tensor equivalent to the $PSL(2,\mathbb{C})$-equivariantization of the category of grading-restricted generalized $\mathcal{W}(p)$-modules. \subsection{Representation theory of triplet vertex operator algebras} We have already used the vertex operator algebra embedding $V_c \subseteq \mathcal{W}(p)$ in Section \ref{sec:fus_and_rig}, where $c=13-6p-6p^{-1}$ for $p > 1$ an integer. The triplet algebra $\mathcal{W}(p)$ is $C_2$-cofinite \cite{AM_trip}, so by \cite{Hu_C2}, every grading-restricted generalized $\mathcal{W}(p)$-module has finite length, the category $\mathcal{C}_{\mathcal{W}(p)}$ of grading-restricted generalized $\mathcal{W}(p)$-modules has the vertex algebraic braided tensor category structure of \cite{HLZ8}, and every irreducible $\mathcal{W}(p)$-module has a projective cover in $\mathcal{C}_{\mathcal{W}(p)}$. Two of these projective covers were constructed explicitly in \cite{AM_log_mods}, and the remaining ones were obtained in \cite{NT}. Fusion rules and rigidity of $\mathcal{C}_{\mathcal{W}(p)}$ were established in \cite{TW}. We now rederive these results as a straightforward consequence of the braided tensor category structure on $\mathcal{O}_c^0$; we would especially like to emphasize that our tensor-categorical approach provides an alternative to the technical projective cover constructions in \cite{NT}. To begin, we recall from \cite{AM_trip} that $\mathcal{W}(p)$ has $2p$ distinct irreducible modules, which we label $\mathcal{W}_{r,s}$ for $r=1,2$ and $1\leq s\leq p$, with $\mathcal{W}_{1,1}$ isomorphic to $\mathcal{W}(p)$ itself.
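For example, when $p=2$, the triplet algebra $\mathcal{W}(2)$ is the well-known even subalgebra of the symplectic fermion vertex operator superalgebra at central charge $c=-2$, and its four irreducible modules are $\mathcal{W}_{1,1}$, $\mathcal{W}_{1,2}$, $\mathcal{W}_{2,1}$, and $\mathcal{W}_{2,2}$.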
As $V_c$-modules, \begin{equation}\label{dec:triplet} \mathcal{W}_{r,s}\cong\bigoplus_{n=0}^\infty (2n+r)\cdot\mathcal{L}_{2n+r,s}. \end{equation} This means that every irreducible $\mathcal{W}(p)$-module is an object in the direct limit completion $\mathrm{Ind}(\mathcal{O}_c)$, which consists of all generalized $V_c$-modules that are unions of their $C_1$-cofinite submodules. As shown in \cite{CMY1}, $\mathrm{Ind}(\mathcal{O}_c)$ has the vertex algebraic braided tensor category structure of \cite{HLZ8}, and thus $\mathcal{W}(p)$ is a commutative algebra object in $\mathrm{Ind}(\mathcal{O}_c)$ \cite{HKL}. We can then define $\rep^0\mathcal{W}(p)$ to be the category of generalized $\mathcal{W}(p)$-modules which, as $V_c$-modules, are objects of $\mathrm{Ind}(\mathcal{O}_c)$. This category also has the vertex algebraic braided tensor category structure of \cite{HLZ8} (see \cite[Theorem 3.65]{CKM} and \cite[Theorem 7.7]{CMY1}). From Proposition 3.1.3 and Remark 3.1.4 of \cite{CMY2}, $\mathcal{C}_{\mathcal{W}(p)}$ is a subcategory of $\rep^0\mathcal{W}(p)$; since $\mathcal{C}_{\mathcal{W}(p)}$ also has braided tensor category structure, it is a tensor subcategory of $\rep^0\mathcal{W}(p)$. We also have the category $\rep\mathcal{W}(p)$ of not-necessarily-local $\mathcal{W}(p)$-modules which, as $V_c$-modules, are objects of $\mathrm{Ind}(\mathcal{O}_c)$. There is an induction tensor functor \begin{align*} \mathcal{F}: \mathcal{O}_c & \rightarrow \rep\mathcal{W}(p)\\ \mathcal{W} & \mapsto \mathcal{W}(p)\boxtimes\mathcal{W}\\ f & \mapsto \mathrm{Id}_{\mathcal{W}(p)}\boxtimes f. \end{align*} Since the modules $\mathcal{L}_{2n+1,1}$ appearing in the decomposition of $\mathcal{W}(p)$ as a $V_c$-module are rigid, induction is an exact functor (see the similar \cite[Proposition 3.2.2]{CMY2} and its proof). Moreover, induction satisfies Frobenius reciprocity: if we use $\mathcal{G}:\rep\mathcal{W}(p)\rightarrow\mathrm{Ind}(\mathcal{O}_c)$ to denote the forgetful functor, then \begin{equation*} \hom_{\mathcal{W}(p)}(\mathcal{F}(\mathcal{W}),\mathcal{X})\cong\hom_{V_c}(\mathcal{W},\mathcal{G}(\mathcal{X})) \end{equation*} for objects $\mathcal{W}$ in $\mathcal{O}_c$ and $\mathcal{X}$ in $\rep\mathcal{W}(p)$. \begin{lem}\label{lem:ind_functor} Induction restricts to a functor $\mathcal{F}:\mathcal{O}_c^0 \rightarrow \mathcal{C}_{\mathcal{W}(p)}$. \end{lem} \begin{proof} From the $r=s=1$ case of \eqref{dec:triplet} and naturality of the braiding, \begin{align*} \mathcal{R}_{\mathcal{W}, \mathcal{W}(p)}\circ \mathcal{R}_{\mathcal{W}(p), \mathcal{W}} & = \bigoplus_{n =0}^{\infty} (2n+1)\cdot\mathcal{R}_{\mathcal{W}, \mathcal{L}_{2n+1,1}} \circ \mathcal{R}_{\mathcal{L}_{2n+1,1}, \mathcal{W}}\nonumber\\ &= \bigoplus_{n =0}^{\infty}(2n+1)\cdot \mathrm{Id}_{\mathcal{L}_{2n+1,1}\boxtimes \mathcal{W}} =\mathrm{Id}_{\mathcal{W}(p)\boxtimes \mathcal{W}} \end{align*} if $\mathcal{W}$ is in $\mathcal{O}_c^0$. Then \cite[Theorem~2.65]{CKM} implies $\mathcal{F}(\mathcal{W})$ is an object of $\rep^0\mathcal{W}(p)$. Also, finite-length modules in $\mathcal{O}_c$ induce to finite-length modules in $\rep\mathcal{W}(p)$ because induction is exact. In particular, modules in $\mathcal{O}_c^0$ induce to finite-length modules in $\rep^0\mathcal{W}(p)$, which are necessarily grading restricted and thus are in $\mathcal{C}_{\mathcal{W}(p)}$.
\end{proof} \begin{rem} Our definition of $\mathcal{O}_c^0$ was chosen so that $\mathcal{O}_c^0$ is precisely the subcategory of modules in $\mathcal{O}_c$ that induce to local $\mathcal{W}(p)$-modules (in $\rep^0\mathcal{W}(p)$). \end{rem} We now compute the inductions of simple $V_c$-modules. First we need the following lemma, which is just basic algebra: \begin{lem} Suppose $\mathcal{X}$ is an object of $\rep\mathcal{W}(p)$ and $\mathcal{W}$ is an irreducible $\mathcal{W}(p)$-module such that $\dim\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})<\infty$. Then there is a surjective $\mathcal{W}(p)$-homomorphism $\mathcal{X}\rightarrow\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*\otimes\mathcal{W}$. \end{lem} \begin{proof} Let $\lbrace f_i\rbrace_{i=1}^I$ be a basis of $\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})$ and let $\lbrace f_i^*\rbrace_{i=1}^I$ be the corresponding dual basis of $\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*$. Then we have the $\mathcal{W}(p)$-homomorphism \begin{align*} F: \mathcal{X} & \rightarrow \hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*\otimes\mathcal{W}\nonumber\\ b & \mapsto \sum_{i=1}^I f_i^*\otimes f_i(b). \end{align*} To show that $F$ is surjective, note that the cokernel $\coker F=\left(\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*\otimes\mathcal{W}\right)/\im F$ is isomorphic to a finite direct sum of copies of $\mathcal{W}$ (since $\mathcal{W}$ is irreducible), so $F$ is surjective if and only if $\hom_{\mathcal{W}(p)}(\coker F,\mathcal{W})=0$. Thus suppose $g\in\hom_{\mathcal{W}(p)}(\coker F,\mathcal{W})$; it is enough to show that $g\circ q=0$, where $q: \hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*\otimes\mathcal{W}\rightarrow\coker F$ is the natural quotient map. Now, because $\mathcal{W}$ is irreducible, there is a linear isomorphism \begin{align*} \hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W}) & \rightarrow\hom_{\mathcal{W}(p)}\big(\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})^*\otimes \mathcal{W},\mathcal{W}\big)\nonumber\\ f & \mapsto \left[f^*\otimes w\mapsto\langle f^*,f\rangle w\right]. \end{align*} Thus $g\circ q$ has this form for some $f\in\hom_{\mathcal{W}(p)}(\mathcal{X},\mathcal{W})$, and moreover, $g\circ q$ annihilates $\im F$. In other words, \begin{align*} 0 = (g\circ q)(F(b)) =\sum_{i=1}^I (g\circ q)\big(f_i^*\otimes f_i(b)\big) =\sum_{i=1}^I \langle f_i^*,f\rangle f_i(b)=f(b) \end{align*} for all $b\in\mathcal{X}$. Thus $f=0$ and therefore $g\circ q=0$ as well, proving $F$ is surjective. \end{proof} \begin{prop}\label{prop:inductions} For $r\geq 1$ and $1\leq s\leq p$, \begin{equation}\label{ind:irred} \mathcal{F}(\mathcal{L}_{r,s})\cong r\cdot\mathcal{W}_{\bar{r},s}, \end{equation} where $r\cdot$ denotes the direct sum of $r$ copies and $\bar{r}=1$ or $2$ according as $r$ is odd or even. \end{prop} \begin{proof} By Frobenius reciprocity and \eqref{dec:triplet}, \begin{equation*} \dim\hom_{\mathcal{W}(p)}(\mathcal{F}(\mathcal{L}_{r,s}),\mathcal{W}_{\bar{r},s}) =\dim\hom_{V_c}(\mathcal{L}_{r,s},\mathcal{G}(\mathcal{W}_{\bar{r},s})) = r, \end{equation*} so by the preceding lemma, there is a surjective homomorphism $F: \mathcal{F}(\mathcal{L}_{r,s})\rightarrow r\cdot\mathcal{W}_{\bar{r},s}$. To show that $F$ is also injective, it is enough to show that $\mathcal{F}(\mathcal{L}_{r,s})$ and $r\cdot\mathcal{W}_{\bar{r},s}$ are isomorphic as grading-restricted $V_c$-modules, since then they will have the same graded dimension.
Indeed, using the fusion rules of Theorems \ref{thm:Lr1_fus_rules} and \ref{thm:Lr_Ls_fusion}, \begin{align*} \mathcal{G}(\mathcal{F}(\mathcal{L}_{r,s})) & \cong \bigoplus_{n=0}^\infty (2n+1)\cdot(\mathcal{L}_{2n+1,1}\boxtimes\mathcal{L}_{r,s})\nonumber\\ &\cong\bigoplus_{n=0}^\infty\bigoplus_{k=0}^{\min(r-1,2n)} (2n+1)\cdot\mathcal{L}_{2n+r-2k,s}. \end{align*} For any $m\in\mathbb{N}$, we need to determine the multiplicity of $\mathcal{L}_{2m+\bar{r},s}$ in this direct sum: we get $2n+1$ copies of $\mathcal{L}_{2m+\bar{r},s}$ for each $k=n-m+\frac{r-\bar{r}}{2}$ such that \begin{equation*} 0\leq n-m+\frac{r-\bar{r}}{2}\leq\min(r-1,2n), \end{equation*} that is, \begin{equation*} \left\vert m-\frac{r-\bar{r}}{2}\right\vert\leq n\leq m-1+\frac{r+\bar{r}}{2}. \end{equation*} Thus for $m\leq\frac{r-\bar{r}}{2}$, the multiplicity of $\mathcal{L}_{2m+\bar{r},s}$ is \begin{align*} \sum_{i=0}^{2m+\bar{r}-1} & \left[2\left(-m+\frac{r-\bar{r}}{2}+i\right)+1\right]\nonumber\\ &=(2m+\bar{r})(-2m+r-\bar{r}+1)+2\cdot\frac{(2m+\bar{r}-1)(2m+\bar{r})}{2} =r\cdot(2m+\bar{r}), \end{align*} while for $m\geq\frac{r-\bar{r}}{2}$, the multiplicity of $\mathcal{L}_{2m+\bar{r},s}$ is \begin{equation*} \sum_{i=0}^{r-1} \left[2\left(m-\frac{r-\bar{r}}{2}+i\right)+1\right] =r\cdot\left(2m-r+\bar{r}+1\right)+2\cdot\frac{(r-1)r}{2} =r\cdot(2m+\bar{r}). \end{equation*} We conclude that \begin{equation*} \mathcal{G}(\mathcal{F}(\mathcal{L}_{r,s}))\cong r\cdot\bigoplus_{m=0}^\infty (2m+\bar{r})\cdot\mathcal{L}_{2m+\bar{r},s}\cong \mathcal{G}(r\cdot\mathcal{W}_{\bar{r},s}) \end{equation*} as required, where the last isomorphism comes from \eqref{dec:triplet}. \end{proof} Now we use Proposition \ref{prop:inductions} together with the fusion rules \eqref{fr1} and \eqref{fr2} for irreducible $V_c$-modules to determine fusion rules of irreducible $\mathcal{W}(p)$-modules, previously proved in \cite{TW}: \begin{thm}\label{TW} \begin{itemize} \item[(1)] The $\mathcal{W}(p)$-module $\mathcal{W}_{2,1}$ is a self-dual simple current with \begin{equation}\label{walgebra: fr2} \mathcal{W}_{2,1}\boxtimes \mathcal{W}_{r,s} \cong \mathcal{W}_{3-r, s} \end{equation} for $r=1,2$ and $1\leq s\leq p$. \item[(2)] The $\mathcal{W}(p)$-module $\mathcal{W}_{1,2}$ has fusion rules \begin{equation}\label{walgebra: fr3} \mathcal{W}_{1,2}\boxtimes \mathcal{W}_{r,s} \cong \begin{cases} \mathcal{W}_{r, 2} \;\;\; &\mbox{if}\;\;\; s = 1\\ \mathcal{W}_{r, s-1}\oplus \mathcal{W}_{r, s+1}\;\;\; &\mbox{if}\;\;\; 2 \leq s \leq p-1 \end{cases} \end{equation} for $r=1,2$ and $1\leq s\leq p-1$. \end{itemize} \end{thm} \begin{proof} We use the fact that induction is a monoidal functor. For \eqref{walgebra: fr2}, we have \begin{align*} 2r\cdot(\mathcal{W}_{2,1}\boxtimes\mathcal{W}_{r,s}) & \cong \mathcal{F}(\mathcal{L}_{2,1})\boxtimes\mathcal{F}(\mathcal{L}_{r,s}) \cong\mathcal{F}(\mathcal{L}_{2,1}\boxtimes\mathcal{L}_{r,s})\nonumber\\ & \cong\left\lbrace\begin{array}{lll} \mathcal{F}(\mathcal{L}_{2,s}) & \text{if} & r=1\\ \mathcal{F}(\mathcal{L}_{1,s})\oplus\mathcal{F}(\mathcal{L}_{3,s}) & \text{if} & r=2\\ \end{array}\right. \nonumber\\ & \cong\left\lbrace\begin{array}{lll} 2\cdot\mathcal{W}_{2,s} & \text{if} & r=1\\ (1+3)\cdot\mathcal{W}_{1,s} & \text{if} & r=2\\ \end{array}\right. \cong 2r\cdot\mathcal{W}_{3-r,s}. \end{align*} From this we see that $\mathcal{W}_{3-r,s}$ is the only composition factor of $\mathcal{W}_{2,1}\boxtimes\mathcal{W}_{r,s}$ and that it occurs with multiplicity $1$. The proof of \eqref{walgebra: fr3}, using \eqref{fr2}, is similar.
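For instance, for $r=1$ and $2\leq s\leq p-1$, combining Proposition \ref{prop:inductions} with \eqref{fr2} gives \begin{equation*} \mathcal{W}_{1,2}\boxtimes\mathcal{W}_{1,s}\cong\mathcal{F}(\mathcal{L}_{1,2})\boxtimes\mathcal{F}(\mathcal{L}_{1,s})\cong\mathcal{F}(\mathcal{L}_{1,2}\boxtimes\mathcal{L}_{1,s})\cong\mathcal{F}(\mathcal{L}_{1,s-1})\oplus\mathcal{F}(\mathcal{L}_{1,s+1})\cong\mathcal{W}_{1,s-1}\oplus\mathcal{W}_{1,s+1}. \end{equation*}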
\end{proof} The category $\mathcal{C}_{\mathcal{W}(p)}$ also inherits rigidity from $\mathcal{O}_c^0$: \begin{thm}\label{thm:CW(p)_rigid} The category $\mathcal{C}_{\mathcal{W}(p)}$ is rigid. \end{thm} \begin{proof} Since $\mathcal{C}_{\mathcal{W}(p)}$ is the category of finite-length $\mathcal{W}(p)$-modules, it is closed under contragredients and \cite[Theorem~4.4.1]{CMY2} implies that it is enough to prove that simple $\mathcal{W}(p)$-modules are rigid. But this holds because by Proposition \ref{prop:inductions}, every simple $\mathcal{W}(p)$-module is a summand of the induction of a rigid $V_c$-module (see for example \cite[Lemma 1.16]{KO}, \cite[Exercise 2.10.6]{EGNO}, or \cite[Proposition 2.77]{CKM}). \end{proof} Next, we use fusion rules and rigidity in $\mathcal{C}_{\mathcal{W}(p)}$ to obtain all projective covers of simple modules in $\mathcal{C}_{\mathcal{W}(p)}$; these modules have been constructed previously in \cite{AM_log_mods, NT}. The next proposition was obtained in \cite[Section 5.1]{NT} using results from \cite{AM_trip}, but we will repeat the proof for completeness: \begin{prop}\label{prop:Wrp_proj} The simple $\mathcal{W}(p)$-modules $\mathcal{W}_{r,p}$ for $r=1,2$ are projective in $\mathcal{C}_{\mathcal{W}(p)}$. In particular, each $\mathcal{W}_{r,p}$ is its own projective cover. \end{prop} \begin{proof} As in the proof of Theorem \ref{projoflrp}, it is enough to show that all length-$2$ exact sequences \begin{equation}\label{eqn:Wrp_proj_exact} 0\longrightarrow \mathcal{W}_{r',s'}\longrightarrow\mathcal{X}\longrightarrow\mathcal{W}_{r,p}\longrightarrow 0 \end{equation} in $\mathcal{C}_{\mathcal{W}(p)}$ split. We first claim that $L(0)$ acts semisimply on $\mathcal{X}$ if $(r',s')\neq(r,p)$. This is because the nilpotent part $L(0)_{nil}$ of $L(0)$ is a $\mathcal{W}(p)$-module endomorphism of $\mathcal{X}$ such that $\mathrm{Im}\, L(0)_{nil}\subseteq\mathcal{W}_{r',s'} \subseteq\ker L(0)_{nil}$. Thus $L(0)_{nil}\neq 0$ would imply \begin{equation*} \mathcal{W}_{r',s'}\cong\mathrm{Im}\,L(0)_{nil}\cong\mathcal{X}/\ker L(0)_{nil}\cong\mathcal{W}_{r,p}, \end{equation*} which contradicts the assumption that $\mathcal{W}_{r',s'}$ and $\mathcal{W}_{r,p}$ are non-isomorphic. Now because $\mathcal{X}$ is a non-logarithmic module and because the irreducible modules $\mathcal{W}_{r',s'}$, $\mathcal{W}_{r,p}$ lie in different Virasoro blocks by \eqref{dec:triplet}, the block decomposition of the category of non-logarithmic $\mathcal{W}(p)$-modules proved in \cite[Theorem 4.4]{AM_trip} implies that \eqref{eqn:Wrp_proj_exact} splits. It remains to consider the $(r',s')=(r,p)$ case of \eqref{eqn:Wrp_proj_exact}. Let $A(\mathcal{W}(p))$ be the Zhu algebra of $\mathcal{W}(p)$; then the lowest conformal weight space $\mathcal{X}_{[h_{r,p}]}$ is an extension of the irreducible $A(\mathcal{W}(p))$-module $(\mathcal{W}_{r,p})_{[h_{r,p}]}$ by itself. By \cite[Theorem 5.9]{AM_trip}, $A(\mathcal{W}(p)) \cong I\times M_r(\mathbb{C})$ where $I$ is an ideal that acts trivially on $(\mathcal{W}_{r,p})_{[h_{r,p}]}$ (and any of its self-extensions) and $M_r(\mathbb{C})$ is the simple $r\times r$ matrix algebra. Thus $\mathcal{X}_{[h_{r,p}]}$ is a semisimple $A(\mathcal{W}(p))$-module that generates $\mathcal{X}$ as a $\mathcal{W}(p)$-module.
This means that $\mathcal{X}$ is a homomorphic image of $\overline{F}((\mathcal{W}_{r,p})_{[h_{r,p}]})\oplus \overline{F}((\mathcal{W}_{r,p})_{[h_{r,p}]})$, where for a finite-dimensional $A(\mathcal{W}(p))$-module $M$, $\overline{F}(M)$ denotes the generalized Verma $\mathcal{W}(p)$-module defined in \cite[Definition 2.7]{Li}. Since the image in $\mathcal{X}$ of each generalized Verma summand is generated by an $A(\mathcal{W}(p))$-submodule of $\mathcal{X}_{[h_{r,p}]}$ of dimension at most $r$, while a length-$2$ self-extension of $\mathcal{W}_{r,p}$ has $2r$-dimensional lowest conformal weight space, each such image is either $0$ or isomorphic to $\mathcal{W}_{r,p}$. In particular, $\mathcal{X}$ has to be the length-$2$ quotient $\mathcal{W}_{r,p}\oplus\mathcal{W}_{r,p}$ of this direct sum of generalized Verma $\mathcal{W}(p)$-modules, and thus \eqref{eqn:Wrp_proj_exact} splits in this case as well. \end{proof} To obtain the remaining projective covers, we define $\mathcal{R}_{1,s}=\mathcal{F}(\mathcal{P}_{1,s})$ and then $\mathcal{R}_{2,s}=\mathcal{W}_{2,1}\boxtimes\mathcal{R}_{1,s}$ for $1\leq s\leq p$. With this notation, $\mathcal{R}_{r,p}\cong\mathcal{W}_{r,p}$ for $r=1,2$. To show that the modules $\mathcal{R}_{r,s}$ are projective, we will need their fusion products with $\mathcal{W}_{2,1}$ and $\mathcal{W}_{1,2}$ (see \cite[Proposition~38]{TW} where, however, the slightly different formula in the $p=2$ case is omitted): \begin{prop}\label{prop:Wp_proj_fus} For $r=1, 2$ and $1\leq s\leq p$, \begin{equation}\label{walgebra:proj:fr1} \mathcal{W}_{2,1}\boxtimes \mathcal{R}_{r,s} \cong \mathcal{R}_{3-r,s}, \end{equation} \begin{equation}\label{walgebra:proj:fr2} \mathcal{W}_{1,2}\boxtimes \mathcal{R}_{r,s} \cong \begin{cases} \mathcal{R}_{r,2} \oplus 2\cdot\mathcal{R}_{3-r,p} &\mbox{if}\; s = 1\\ \mathcal{R}_{r,s-1}\oplus \mathcal{R}_{r,s+1} &\mbox{if}\; 2\leq s \leq p-2\\ \mathcal{R}_{r,p-2} \oplus 2\cdot \mathcal{R}_{r,p} &\mbox{if}\; s = p-1\\ \mathcal{R}_{r,p-1} & \mbox{if}\; s=p \end{cases} \quad\text{if}\quad p\geq 3, \end{equation} \begin{equation}\label{walgebra:proj:fr3} \mathcal{W}_{1,2}\boxtimes\mathcal{R}_{r,s} \cong \begin{cases} 2\cdot\mathcal{R}_{r,2}\oplus 2\cdot\mathcal{R}_{3-r,2} &\mbox{if}\; s=1\\ \mathcal{R}_{r,1} &\mbox{if}\;s=2\\ \end{cases} \quad\text{if}\quad p=2. \end{equation} \end{prop} \begin{proof} The $r=1$ case of \eqref{walgebra:proj:fr1} is the definition of $\mathcal{R}_{2,s}$, and then the $r=2$ case follows using $\mathcal{W}_{2,1}\boxtimes\mathcal{W}_{2,1}\cong\mathcal{W}_{1,1}$. The $r=1$ cases of \eqref{walgebra:proj:fr2} and \eqref{walgebra:proj:fr3} follow from \begin{equation*} \mathcal{W}_{1,2}\boxtimes\mathcal{R}_{1,s}\cong\mathcal{F}(\mathcal{L}_{1,2})\boxtimes\mathcal{F}(\mathcal{P}_{1,s})\cong\mathcal{F}(\mathcal{L}_{1,2}\boxtimes\mathcal{P}_{1,s}) \end{equation*} together with \eqref{fr2}, \eqref{more1prs}, \eqref{more2prs}, and the formula $$\mathcal{F}(\mathcal{P}_{2,s}) \cong \mathcal{F}(\mathcal{L}_{2,1}\boxtimes \mathcal{P}_{1,s}) \cong 2\cdot (\mathcal{W}_{2,1}\boxtimes \mathcal{R}_{1,s})\cong 2\cdot \mathcal{R}_{2,s}.$$ Then the $r=2$ cases follow by tensoring the $r=1$ cases with $\mathcal{W}_{2,1}$ and applying \eqref{walgebra:proj:fr1}. \end{proof} Now we can show that the modules $\mathcal{R}_{r,s}$ are projective covers: \begin{thm} For $r=1,2$ and $1\leq s \leq p-1$, the $\mathcal{W}(p)$-module $\mathcal{R}_{r,s}$ is a projective cover of $\mathcal{W}_{r,s}$ in $\mathcal{C}_{\mathcal{W}(p)}$ with Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{W}_{r,s} & \\ \mathcal{R}_{r,s}: & \mathcal{W}_{3-r,p-s} \ar[ru] & & \mathcal{W}_{3-r,p-s} \ar[lu] \\ & & \mathcal{W}_{r,s} \ar[lu] \ar[ru] & \\ } \end{equation*} \end{thm} \begin{proof} We take $r=1$ first.
Recall from Theorem \ref{thm:P1s_structure} that $\mathcal{P}_{1,s}$ has Loewy diagram \begin{equation*} \xymatrixrowsep{1pc} \xymatrixcolsep{.75pc} \xymatrix{ & & \mathcal{L}_{1,s} \\ \mathcal{P}_{1,s}: & \mathcal{L}_{2,p-s} \ar[ru] & \\ & & \mathcal{L}_{1,s} \ar[lu]\\ } \end{equation*} Applying the exact functor $\mathcal{F}$ to $\mathcal{P}_{1,s}$ and using \eqref{ind:irred}, we see that $\mathcal{W}_{1, s}$ and $\mathcal{W}_{2, p-s}$ are the only composition factors of $\mathcal{R}_{1,s}$, both occurring with multiplicity $2$. We also see that $\mathcal{W}_{1,s}$ is a submodule of $\mathcal{R}_{1,s}$, and there is a surjective $\mathcal{W}(p)$-module map $\mathcal{R}_{1,s} \rightarrow \mathcal{W}_{1,s}$. To determine the Loewy diagram of $\mathcal{R}_{1,s}$, we first note that the fusion rules \eqref{moreprs} and the decomposition \eqref{dec:triplet} imply that \[ \mathcal{G}(\mathcal{R}_{1,s}) \cong \bigoplus_{n=0}^{\infty}(2n+1)\cdot\mathcal{P}_{2n+1,s}. \] Then by Frobenius reciprocity, \begin{align*} \dim\hom_{\mathcal{W}(p)}(\mathcal{W}_{1, s}, \mathcal{R}_{1,s}) & = \dim\hom_{\mathcal{W}(p)}(\mathcal{F}(\mathcal{L}_{1, s}), \mathcal{R}_{1,s})\nonumber\\ &= \dim\hom_{V_c}\left(\mathcal{L}_{1, s}, \bigoplus_{n=0}^{\infty}(2n+1)\cdot\mathcal{P}_{2n+1,s}\right) = 1, \end{align*} while \begin{align*} 2\cdot\dim\hom_{\mathcal{W}(p)}(\mathcal{W}_{2, p-s}, \mathcal{R}_{1,s}) & = \dim\hom_{\mathcal{W}(p)}(\mathcal{F}(\mathcal{L}_{2, p-s}), \mathcal{R}_{1,s})\nonumber\\ &= \dim\hom_{V_c}\left(\mathcal{L}_{2, p-s}, \bigoplus_{n=0}^{\infty}(2n+1)\cdot\mathcal{P}_{2n+1,s}\right) = 0. \end{align*} From these, we see that $\mathrm{Soc}(\mathcal{R}_{1,s}) = \mathcal{W}_{1,s}$. Next, if we apply the exact functor $\mathcal{F}$ to the exact sequence \begin{equation*} 0\longrightarrow\mathcal{L}_{2,p-s}\longrightarrow\mathcal{P}_{1,s}/\mathcal{L}_{1,s}\longrightarrow\mathcal{L}_{1,s}\longrightarrow 0, \end{equation*} we get the exact sequence \begin{equation*} 0\longrightarrow2\cdot\mathcal{W}_{2,p-s}\longrightarrow\mathcal{R}_{1,s}/\mathcal{W}_{1,s}\longrightarrow\mathcal{W}_{1,s}\longrightarrow 0. \end{equation*} This sequence does not split because by exactness of induction and Frobenius reciprocity, \begin{align*} \hom_{\mathcal{W}(p)}(\mathcal{R}_{1,s}/\mathcal{W}_{1,s}, \mathcal{W}_{2,p-s})&\cong\hom_{\mathcal{W}(p)}(\mathcal{F}(\mathcal{P}_{1,s}/\mathcal{L}_{1,s}),\mathcal{W}_{2,p-s})\nonumber\\ & \cong\hom_{V_c}\left(\mathcal{P}_{1,s}/\mathcal{L}_{1,s},\bigoplus_{n=0}^\infty (2n+2)\cdot\mathcal{L}_{2n+2,p-s}\right) =0. \end{align*} Consequently, $\mathrm{Soc}(\mathcal{R}_{1,s}/\mathcal{W}_{1,s})=2\cdot\mathcal{W}_{2,p-s}$, and we have verified the row structure of the Loewy diagram for $\mathcal{R}_{1,s}$. Moreover, all four length-$2$ subquotients indicated in the Loewy diagram for $\mathcal{R}_{1,s}$ are indecomposable because \begin{equation*} \hom_{\mathcal{W}(p)}(\mathcal{W}_{2,p-s},\mathcal{R}_{1,s})=\hom_{\mathcal{W}(p)}(\mathcal{R}_{1,s},\mathcal{W}_{2,p-s})=0. \end{equation*} This completes the verification of the Loewy diagram for $r=1$. The Loewy diagram for $\mathcal{R}_{2,s}=\mathcal{W}_{2,1}\boxtimes\mathcal{R}_{1,s}$ now follows from that of $\mathcal{R}_{1,s}$ by \cite[Proposition 2.5]{CKLR} since $\mathcal{W}_{2,1}$ is a simple current. Next, the fusion rules \eqref{walgebra:proj:fr2} and \eqref{walgebra:proj:fr3} show that each $\mathcal{R}_{r,s}$ for $1\leq s\leq p-1$ is a direct summand of $\mathcal{W}_{1,2}\boxtimes\mathcal{R}_{r,s+1}$.
Since $\mathcal{R}_{r,p}\cong\mathcal{W}_{r,p}$ is projective by Proposition \ref{prop:Wrp_proj}, and since the subcategory of projective objects in $\mathcal{C}_{\mathcal{W}(p)}$ is closed under direct summands and tensoring with rigid objects, it follows by downward induction on $s$ that each $\mathcal{R}_{r,s}$ is projective. Then the same argument as in the proof of Proposition \ref{prop:Prp-1_proj_cover} shows that it is a projective cover of $\mathcal{W}_{r,s}$. \end{proof} \subsection{The Virasoro category \texorpdfstring{$\mathcal{O}_c^0$}{Oc0} as an equivariantization} Here we prove a relation between the $V_c$-module category $\mathcal{O}_c^0$ and the $\mathcal{W}(p)$-module category $\mathcal{C}_{\mathcal{W}(p)}$ that was conjectured in \cite[Conjecture 11.6]{Ne}. We recall from \cite[Theorem 2.3]{ALM} that the full automorphism group of $\mathcal{W}(p)$ is $PSL(2,\mathbb{C})$ for any integer $p>1$. Moreover, the action of $PSL(2,\mathbb{C})$ on $\mathcal{W}(p)$ is continuous in the sense that every finite-dimensional conformal weight space of $\mathcal{W}(p)$ with the Euclidean topology is a continuous $PSL(2,\mathbb{C})$-module. The group $PSL(2,\mathbb{C})$ also acts on the category $\mathcal{C}_{\mathcal{W}(p)}$ of grading-restricted generalized $\mathcal{W}(p)$-modules by \begin{equation}\label{eqn:G-action} g\cdot(\mathcal{X},Y_\mathcal{X}) = (\mathcal{X},Y_\mathcal{X}(g^{-1}(\cdot),x)) \end{equation} for $g\in PSL(2,\mathbb{C})$. Thus we can form the $PSL(2,\mathbb{C})$-equivariantization of the category $\mathcal{C}_{\mathcal{W}(p)}$, as defined for example in \cite[Section 2.7]{EGNO}, which consists of $PSL(2,\mathbb{C})$-equivariant objects in $\mathcal{C}_{\mathcal{W}(p)}$. We will show that $\mathcal{O}_c^0$ is braided tensor equivalent to the $PSL(2,\mathbb{C})$-equivariantization of $\mathcal{C}_{\mathcal{W}(p)}$; the proof is a straightforward generalization of \cite[Theorem 4.17]{McR2} to infinite automorphism groups. First, we recall a slight variant of the definition of equivariantization that is more convenient for our purposes. We use \cite[Section 2.3]{McR2} as a reference, but note that there, equivariantizations of categories involving twisted modules for a superalgebra were considered. Here, we only need to consider untwisted modules for a vertex operator algebra, so the situation is simpler. Let $V$ be a vertex operator algebra, $G$ a complex reductive Lie group acting continuously on $V$ by automorphisms, and $\mathcal{C}$ a braided tensor category of grading-restricted generalized $V$-modules. Assume also that $\mathcal{C}$ is closed under the action of $G$ given by \eqref{eqn:G-action}. \begin{defi} The \textit{$G$-equivariantization} $\mathcal{C}^G$ of $\mathcal{C}$ is the following category: \begin{itemize} \item Objects of $\mathcal{C}^G$ are pairs $(\mathcal{X},Y_\mathcal{X};\varphi_\mathcal{X})$ where $(\mathcal{X},Y_\mathcal{X})$ is an object of $\mathcal{C}$ and $\varphi_\mathcal{X}: G\rightarrow GL(\mathcal{X})$ is a continuous group representation such that \begin{equation}\label{eqn:G-equiv_compat_cond} \varphi_\mathcal{X}(g)\cdot Y_\mathcal{X}(v,x)=Y_\mathcal{X}(g\cdot v,x)\varphi_\mathcal{X}(g) \end{equation} for all $v\in V$ and $g\in G$.
\item Morphisms from $(\mathcal{X}_1,Y_{\mathcal{X}_1};\varphi_{\mathcal{X}_1})$ to $(\mathcal{X}_2,Y_{\mathcal{X}_2};\varphi_{\mathcal{X}_2})$ in $\mathcal{C}^G$ consist of all $V\times G$-module homomorphisms $f:\mathcal{X}_1\rightarrow\mathcal{X}_2$, that is, \begin{equation*} f\circ Y_{\mathcal{X}_1}(v,x)=Y_{\mathcal{X}_2}(v,x)\circ f\qquad\text{and}\qquad f\circ\varphi_{\mathcal{X}_1}(g)=\varphi_{\mathcal{X}_2}(g)\circ f \end{equation*} for all $v\in V$, $g\in G$. \end{itemize} \end{defi} \begin{rem} The compatibility condition \eqref{eqn:G-equiv_compat_cond} implies that each $\varphi_\mathcal{X}(g)$ commutes with the action of $L(0)$ on $\mathcal{X}$. Thus the condition that $\varphi_\mathcal{X}$ be continuous simply means that each finite-dimensional conformal weight space of $\mathcal{X}$ is a continuous $G$-module. \end{rem} As explained for example in \cite[Section 2.3]{McR2}, $\mathcal{C}^G$ is a braided tensor category in the setting where $G$ is a finite group and all modules in $\mathcal{C}$ are objects of a braided tensor category of modules for the $G$-fixed-point subalgebra $V^G\subseteq V$. The same constructions work when $G$ is infinite, but we need to make sure that the action of $G$ on a tensor product $\mathcal{X}_1\boxtimes\mathcal{X}_2$ is continuous: \begin{lem}\label{lem:G-equiv_tens_prod} If $V$-modules $\mathcal{X}_1$ and $\mathcal{X}_2$ are objects of $\mathcal{C}^G$, then $\mathcal{X}_1\boxtimes\mathcal{X}_2$ is also an object of $\mathcal{C}^G$ with $G$-action $\varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}$ characterized by \begin{equation}\label{eqn:G-action_tens_prod_def} \varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}(g)\cdot\mathcal{Y}_\boxtimes(b_1,x)b_2 =\mathcal{Y}_\boxtimes(\varphi_{\mathcal{X}_1}(g) b_1,x)\varphi_{\mathcal{X}_2}(g) b_2 \end{equation} for $g\in G$, $b_1\in\mathcal{X}_1$, and $b_2\in\mathcal{X}_2$, where $\mathcal{Y}_\boxtimes$ is the tensor product intertwining operator of type $\binom{\mathcal{X}_1\boxtimes\mathcal{X}_2}{\mathcal{X}_1\,\mathcal{X}_2}$. \end{lem} \begin{proof} Using \eqref{eqn:G-equiv_compat_cond} for $\varphi_{\mathcal{X}_1}$ and $\varphi_{\mathcal{X}_2}$, it is easy to check that $$\mathcal{Y}_\boxtimes(\varphi_{\mathcal{X}_1}(g)(\cdot),x)\varphi_{\mathcal{X}_2}(g): \mathcal{X}_1\otimes \mathcal{X}_2\rightarrow(\mathcal{X}_1\boxtimes \mathcal{X}_2)[\log x]\lbrace x\rbrace$$ is an intertwining operator of type $\binom{g^{-1}\cdot(\mathcal{X}_1\boxtimes \mathcal{X}_2)}{\mathcal{X}_1\,\,\mathcal{X}_2}$ for any $g\in G$. Thus the universal property of tensor products induces a unique $V$-module homomorphism \begin{equation*} \varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}(g): \mathcal{X}_1\boxtimes\mathcal{X}_2\rightarrow g^{-1}\cdot(\mathcal{X}_1\boxtimes\mathcal{X}_2) \end{equation*} such that \eqref{eqn:G-action_tens_prod_def} holds. The definition of the vertex operator for $g^{-1}\cdot(\mathcal{X}_1\boxtimes\mathcal{X}_2)$ implies that $\varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}(g)$ satisfies \eqref{eqn:G-equiv_compat_cond} for all $g\in G$. Moreover, \eqref{eqn:G-action_tens_prod_def} and the fact that $\varphi_{\mathcal{X}_1}$ and $\varphi_{\mathcal{X}_2}$ are group homomorphisms imply that $\varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}$ defines a homomorphism from $G$ to $GL(\mathcal{X}_1\boxtimes\mathcal{X}_2)$. We still need to check that the $G$-action on each finite-dimensional conformal weight space of $\mathcal{X}_1\boxtimes\mathcal{X}_2$ is continuous.
Recall that for $b_1\in\mathcal{X}_1$, $b_2\in\mathcal{X}_2$, $h\in\mathbb{C}$, and $k\in\mathbb{N}$, the coefficient of $x^{-h-1}(\log x)^k$ in $\mathcal{Y}_\boxtimes(b_1,x)b_2$ is denoted by $(b_1)_{h,k}b_2$. Thus \eqref{eqn:G-action_tens_prod_def} implies that for each $h\in\mathbb{C}$ and $k\in\mathbb{N}$, \begin{align*} \psi_{h,k}: \mathcal{X}_1\otimes\mathcal{X}_2 &\rightarrow \mathcal{X}_1\boxtimes\mathcal{X}_2\nonumber\\ b_1\otimes b_2 & \mapsto (b_1)_{h,k}b_2 \end{align*} is a $G$-module homomorphism. Then because $G$ is a complex reductive Lie group acting continuously on the finite-dimensional conformal weight spaces of $\mathcal{X}_1$ and $\mathcal{X}_2$, each of $\mathcal{X}_1$ and $\mathcal{X}_2$, and thus $\mathcal{X}_1\otimes\mathcal{X}_2$ as well, decomposes as the direct sum of finite-dimensional irreducible continuous $G$-modules. Consequently, the image of each $\psi_{h,k}$ is a direct sum of finite-dimensional irreducible continuous $G$-modules. Finally, because $\mathcal{Y}_\boxtimes$ is a surjective intertwining operator, $\mathcal{X}_1\boxtimes\mathcal{X}_2$ is a continuous $G$-module. \end{proof} It is now easy to see using \eqref{eqn:G-action_tens_prod_def} and the vertex algebraic tensor category structure on $\mathcal{C}$ (see \cite{HLZ8} or the exposition in \cite[Section 3.3]{CKM}) that $\mathcal{C}^G$ is a tensor category. For example, if $f_1:\mathcal{X}_1\rightarrow\widetilde{\mathcal{X}}_1$ and $f_2:\mathcal{X}_2\rightarrow\widetilde{\mathcal{X}}_2$ are morphisms in $\mathcal{C}^G$, then the $V$-module homomorphism $f_1\boxtimes f_2: \mathcal{X}_1\boxtimes\mathcal{X}_2\rightarrow\widetilde{\mathcal{X}}_1\boxtimes\widetilde{\mathcal{X}}_2$ is also a $G$-module homomorphism because \begin{align*} (f_1\boxtimes f_2)\left(\varphi_{\mathcal{X}_1\boxtimes\mathcal{X}_2}(g)\cdot\mathcal{Y}_\boxtimes(b_1,x)b_2\right) & =\mathcal{Y}_\boxtimes(f_1(\varphi_{\mathcal{X}_1}(g)b_1),x)f_2(\varphi_{\mathcal{X}_2}(g)b_2)\nonumber\\ & =\mathcal{Y}_\boxtimes(\varphi_{\widetilde{\mathcal{X}}_1}(g)f_1(b_1),x)\varphi_{\widetilde{\mathcal{X}}_2}(g)f_2(b_2)\nonumber\\ & =\varphi_{\widetilde{\mathcal{X}}_1\boxtimes\widetilde{\mathcal{X}}_2}(g)\cdot(f_1\boxtimes f_2)\left(\mathcal{Y}_\boxtimes(b_1,x)b_2\right) \end{align*} for $b_1\in\mathcal{X}_1$, $b_2\in\mathcal{X}_2$, and $g\in G$. The unit object of $\mathcal{C}^G$ is $V$ with $G$-action $\varphi_V(g)=g$ for $g\in G$. Then \eqref{eqn:G-equiv_compat_cond}, \eqref{eqn:G-action_tens_prod_def}, and the definitions of the structure isomorphisms in $\mathcal{C}$ show that the unit, associativity, and braiding isomorphisms in $\mathcal{C}$ all commute with the $G$-actions on objects of $\mathcal{C}^G$ and thus define braided tensor category structure on $\mathcal{C}^G$. Now take $V=\mathcal{W}(p)$, $G=PSL(2,\mathbb{C})$, and $\mathcal{C}=\mathcal{C}_{\mathcal{W}(p)}$. In this case, recall that Lemma \ref{lem:ind_functor} and the rigidity of $\mathcal{O}_c^0$ show that induction defines an exact functor $\mathcal{F}: \mathcal{O}_c^0\rightarrow\mathcal{C}_{\mathcal{W}(p)}$. But just as explained in \cite[Section 2.3]{McR2}, induction actually defines a functor into the $PSL(2,\mathbb{C})$-equivariantization: \begin{lem}\label{lem:ind_BTF} Induction defines an exact braided tensor functor $\mathcal{F}: \mathcal{O}_c^0\rightarrow(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$.
\end{lem} \begin{proof} For an object $\mathcal{W}$ in $\mathcal{O}_c^0$, recall that $\mathcal{F}(\mathcal{W})=\mathcal{W}(p)\boxtimes\mathcal{W}$ as a generalized $V_c$-module, where $\boxtimes$ is the tensor product in $\mathrm{Ind}(\mathcal{O}_c)$. Thus $\mathcal{F}(\mathcal{W})$ admits the $PSL(2,\mathbb{C})$-action \begin{equation*} \varphi_{\mathcal{F}(\mathcal{W})}(g)=g\boxtimes\mathrm{Id}_\mathcal{W} \end{equation*} for $g\in PSL(2,\mathbb{C})$. Just as in \cite[Section 2.3]{McR2}, $\varphi_{\mathcal{F}(\mathcal{W})}(g)$ satisfies \eqref{eqn:G-equiv_compat_cond} because $g$ is an automorphism of $\mathcal{W}(p)$, but we need to check that $\varphi_{\mathcal{F}(\mathcal{W})}$ is continuous. As in the proof of Lemma \ref{lem:G-equiv_tens_prod}, we have the tensor product intertwining operator \begin{align*} \mathcal{Y}_\boxtimes: \mathcal{W}(p)\otimes\mathcal{W} & \rightarrow (\mathcal{W}(p)\boxtimes\mathcal{W})[\log x]\lbrace x\rbrace\nonumber\\ v\otimes w & \mapsto \mathcal{Y}_\boxtimes(v,x)w=\sum_{h\in\mathbb{C}}\sum_{k\in\mathbb{N}} v_{h,k} w\,x^{-h-1}(\log x)^k, \end{align*} and for any $h\in\mathbb{C}$, $k\in\mathbb{N}$, we have a $G$-module homomorphism \begin{align*} \psi_{h,k}: \mathcal{W}(p)\otimes\mathcal{W} & \rightarrow\mathcal{W}(p)\boxtimes\mathcal{W}\nonumber\\ v\otimes w & \mapsto v_{h,k} w \end{align*} where $\mathcal{W}$ is a trivial $G$-module. Since $\mathcal{W}(p)\otimes\mathcal{W}$ is a direct sum of finite-dimensional irreducible continuous $G$-modules, the same is true of the image of each $\psi_{h,k}$. Thus because $\mathcal{Y}_\boxtimes$ is a surjective intertwining operator, $\mathcal{F}(\mathcal{W})$ is a continuous $G$-module. We have now shown that $(\mathcal{F}(\mathcal{W});\varphi_{\mathcal{F}(\mathcal{W})})$ is an object of $(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$, and it is clear that if $f:\mathcal{W}_1\rightarrow\mathcal{W}_2$ is a homomorphism in $\mathcal{O}_c^0$, then $\mathcal{F}(f)=\mathrm{Id}_{\mathcal{W}(p)}\boxtimes f$ is also a homomorphism of $G$-modules and thus a morphism in $(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$. Thus induction defines a functor $\mathcal{F}: \mathcal{O}_c^0\rightarrow(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$ which is exact because $\mathcal{O}_c^0$ is rigid, as mentioned previously. See \cite[Theorems 2.11 and 2.12]{McR2} for a proof that $\mathcal{F}$ is additionally a braided tensor functor. Note that these results in \cite{McR2} do not require the group to be finite and that the braided tensor category structure on $(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$ defined in this subsection indeed agrees with that in \cite[Section 2.3]{McR2}. \end{proof} Now we can prove the main result of this section, that $\mathcal{F}$ is actually an equivalence of braided tensor categories. The proof largely repeats that of \cite[Theorem 4.17]{McR2}, but some additional care is needed because $PSL(2,\mathbb{C})$ is an infinite group: \begin{thm}\label{thm:G-equiv} The induction functor $\mathcal{F}: \mathcal{O}_c^0\rightarrow(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}$ is an equivalence of braided tensor categories. \end{thm} \begin{proof} For notational simplicity, set $G = PSL(2, \mathbb{C})$. Since $\mathcal{F}$ is a braided tensor functor by Lemma \ref{lem:ind_BTF}, we just need to show it is an equivalence of categories. 
Thus, we will show there is a $G$-invariants functor $\mathcal{I}: (\mathcal{C}_{\mathcal{W}(p)})^G\rightarrow\mathcal{O}_c^0$ such that $\mathcal{I}\circ\mathcal{F}$ and $\mathcal{F}\circ\mathcal{I}$ are naturally isomorphic to the respective identity functors. For an object $(\mathcal{X}, Y_\mathcal{X}; \varphi_\mathcal{X})$ in $(\mathcal{C}_{\mathcal{W}(p)})^{G}$, define \[ \mathcal{X}^{G} = \{b \in \mathcal{X}\,|\,\varphi_\mathcal{X}(g)b = b\; \mathrm{for\; all}\; g \in G\}. \] By \eqref{eqn:G-equiv_compat_cond}, for any $g\in G$, $v\in V_c=\mathcal{W}(p)^G$, and $b\in\mathcal{X}^G$, we have \[ \varphi_\mathcal{X}(g)\cdot Y_\mathcal{X}(v,x)b = Y_\mathcal{X}(g\cdot v,x)\varphi_\mathcal{X}(g)b= Y_\mathcal{X}(v, x)b, \] and it follows that $\mathcal{X}^{G}$ is a $V_{c}$-submodule of $\mathcal{X}$. Then since objects of $\mathcal{C}_{\mathcal{W}(p)}$ are modules in $\mathrm{Ind}(\mathcal{O}_c)$ when viewed as $V_c$-modules (see Proposition 3.1.3 and Remark 3.1.4 of \cite{CMY2}) and since $\mathrm{Ind}(\mathcal{O}_c)$ is closed under submodules, $\mathcal{X}^G$ is a module in $\mathrm{Ind}(\mathcal{O}_c)$. For a morphism $f: (\mathcal{X}_1, Y_{\mathcal{X}_1}; \varphi_{\mathcal{X}_1}) \rightarrow (\mathcal{X}_2, Y_{\mathcal{X}_2}; \varphi_{\mathcal{X}_2})$ in $(\mathcal{C}_{\mathcal{W}(p)})^{G}$, define $f^G = f|_{\mathcal{X}_1^{G}}$. Since $f$ intertwines the $G$-actions on $\mathcal{X}_1$ and $\mathcal{X}_2$, the image of $f^G$ is contained in $\mathcal{X}_2^G$ and hence \[ f^G: \mathcal{X}_1^G \rightarrow \mathcal{X}_2^G \] is a morphism in $\mathrm{Ind}(\mathcal{O}_c)$. Thus we have a $G$-invariants functor $\mathcal{I}: (\mathcal{C}_{\mathcal{W}(p)})^G\rightarrow\mathrm{Ind}(\mathcal{O}_c)$, and we will show below that the image of $\mathcal{I}$ is actually contained in $\mathcal{O}_c^0$. Now we show that for a module $\mathcal{W} \in \mathcal{O}_c^0$, we have a natural isomorphism $\mathcal{F}(\mathcal{W})^{G} \cong \mathcal{W}$. Since $\mathcal{W}(p)$ is a semisimple $G$-module, there is a $V_c$-module projection $\varepsilon_{\mathcal{W}(p)}: \mathcal{W}(p) \rightarrow \mathcal{W}(p)^G$ that is a one-sided inverse to the inclusion $\iota_{\mathcal{W}(p)}: \mathcal{W}(p)^G \rightarrow \mathcal{W}(p)$. Then recalling that $\mathcal{F}(\mathcal{W}) = \mathcal{W}(p) \boxtimes \mathcal{W}$ and $\varphi_{\mathcal{F}(\mathcal{W})}(g) = g \boxtimes \mathrm{Id}_\mathcal{W}$ for $g \in G$, we see that \[ \mathcal{F}(\mathcal{W})^G\hookrightarrow\mathcal{W}(p)\boxtimes\mathcal{W}\xrightarrow{\varepsilon_{\mathcal{W}(p)}\boxtimes\mathrm{Id}_\mathcal{W}} \mathcal{W}(p)^G\boxtimes\mathcal{W}\xrightarrow{l_\mathcal{W}} \mathcal{W} \] is a natural isomorphism, with inverse $(\iota_{\mathcal{W}(p)} \boxtimes \mathrm{Id}_\mathcal{W})\circ l_\mathcal{W}^{-1}$, just as in the proof of \cite[Theorem 4.17]{McR2}. Next, for $(\mathcal{X}, Y_\mathcal{X}; \varphi_\mathcal{X})$ in $(\mathcal{C}_{\mathcal{W}(p)})^{G}$, recall that $\mathcal{F}(\mathcal{X}^G)$ is at first an object of the category $\rep\mathcal{W}(p)$ of not-necessarily-local $\mathcal{W}(p)$-modules which are objects of $\mathrm{Ind}(\mathcal{O}_c)$ when viewed as $V_c$-modules. 
Moreover, as in the proof of Lemma \ref{lem:ind_BTF}, $\mathcal{F}(\mathcal{X}^G)$ is a semisimple $G$-module: \begin{equation*} \mathcal{F}(\mathcal{X}^G)\cong\bigoplus_{\chi\in\widehat{G}} \mathcal{W}(p)_\chi\boxtimes\mathcal{X}^G, \end{equation*} where the sum runs over the finite-dimensional irreducible continuous characters of $G$ and $\mathcal{W}(p)_\chi$ is the isotypical component of $\mathcal{W}(p)$ corresponding to $\chi$. We have a similar decomposition $\mathcal{X}=\bigoplus_{\chi\in\widehat{G}} \mathcal{X}_\chi$ because by assumption, $G$ acts continuously on the finite-dimensional conformal weight spaces of $\mathcal{X}$. Now we show that we have a natural isomorphism $\mathcal{F}(\mathcal{X}^G)\cong\mathcal{X}$. Let $\iota_\mathcal{X}: \mathcal{X}^G \rightarrow \mathcal{X}$ denote the inclusion, and let $\mu_\mathcal{X}: \mathcal{W}(p)\boxtimes\mathcal{X}\rightarrow\mathcal{X}$ denote the unique $V_c$-module homomorphism induced by the intertwining operator $Y_\mathcal{X}$. Then just as in the proof of \cite[Theorem 4.17]{McR2}, \[ \Psi_\mathcal{X} = \mu_\mathcal{X}\circ(\mathrm{Id}_{\mathcal{W}(p)} \boxtimes \iota_\mathcal{X}): \mathcal{W}(p) \boxtimes \mathcal{X}^G \rightarrow \mathcal{X} \] is a $\mathcal{W}(p)\times G$-module homomorphism, and $\Psi_\mathcal{X}$ defines a natural transformation from $\mathcal{F}\circ\mathcal{I}$ to the inclusion of $(\mathcal{C}_{\mathcal{W}(p)})^G$ into the equivariantization $(\rep\mathcal{W}(p))^G$. Moreover, since $\Psi_\mathcal{X}$ is a $G$-module homomorphism, it restricts to a map $\mathcal{F}(\mathcal{X}^G)^G\rightarrow\mathcal{X}^G$ which is an isomorphism because it amounts to the left unit isomorphism $\mathcal{W}(p)^G\boxtimes\mathcal{X}^G\rightarrow\mathcal{X}^G$ in $\mathrm{Ind}(\mathcal{O}_c)$. Thus the kernel and cokernel of $\Psi_\mathcal{X}$ are both $\mathcal{W}(p)\times G$-modules in $(\rep\mathcal{W}(p))^G$ with no $G$-invariants, and both are semisimple as $G$-modules because $\mathcal{F}(\mathcal{X}^G)$ and $\mathcal{X}$ are semisimple $G$-modules. Then the argument that concludes the proof of \cite[Theorem 4.17]{McR2} applies to show that the kernel and cokernel of $\Psi_\mathcal{X}$ are both $0$, so that $\Psi_\mathcal{X}$ is an isomorphism. It still remains to show that if $(\mathcal{X},Y_\mathcal{X};\varphi_\mathcal{X})$ is in $(\mathcal{C}_{\mathcal{W}(p)})^G$, then $\mathcal{X}^G$ is in $\mathcal{O}_c^0$. It is enough to show that $\mathcal{X}^G$ has finite length, which is equivalent to showing that any decreasing $V_c$-submodule sequence \[ \mathcal{X}^G\supseteq\mathcal{W}_0 \supseteq \mathcal{W}_1 \supseteq \cdots \supseteq \mathcal{W}_n \supseteq \cdots \] and any increasing sequence \[ \mathcal{W}_0 \subseteq \mathcal{W}_1 \subseteq \cdots \subseteq \mathcal{W}_n \subseteq \cdots \subseteq \mathcal{X}^G \] are stationary, that is, $\mathcal{W}_n = \mathcal{W}_{n+1}$ for $n$ sufficiently large (\cite[Theorem~2.1]{S}; see also \cite[Exercise~8.20]{KS}). Applying the exact functor $\mathcal{F}$ to the decreasing sequence yields a decreasing sequence of $\mathcal{W}(p)$-submodules in $\mathcal{F}(\mathcal{X}^G)\cong\mathcal{X}$. Because $\mathcal{X}$ has finite length, $\mathcal{F}(\mathcal{W}_{n}) = \mathcal{F}(\mathcal{W}_{n+1})$ for $n$ sufficiently large, which means $\mathcal{F}(\mathcal{W}_n/\mathcal{W}_{n+1}) = 0$ since $\mathcal{F}$ is exact. 
Moreover, since $\mathcal{W}(p)$ is a semisimple $V_c$-module, $\mathcal{F}(\mathcal{W}_n/\mathcal{W}_{n+1})$ contains $V_c\boxtimes(\mathcal{W}_n/\mathcal{W}_{n+1})\cong\mathcal{W}_n/\mathcal{W}_{n+1}$ as a $V_c$-submodule. So $\mathcal{W}_n/\mathcal{W}_{n+1}=0$ for $n$ sufficiently large. Similarly, the increasing sequence is also stationary. \end{proof} \begin{rem} Consider the one-dimensional torus $T^{\vee}\subseteq PSL(2,\mathbb{C})$. The fixed-point subalgebra $\mathcal{W}(p)^{T^\vee}$ is the singlet vertex operator algebra $\mathcal{M}(p)$, whose tensor categories were studied in \cite{CMY2}. Then arguments similar to those above show that induction yields a braided tensor equivalence from the $\mathcal{M}(p)$-module category $\mathcal{C}_{\mathcal{M}(p)}^0$ defined in \cite{CMY2} to the $T^\vee$-equivariantization of $\mathcal{C}_{\mathcal{W}(p)}$. In a little more detail: \begin{itemize} \item The definition of $\mathcal{C}_{\mathcal{M}(p)}^0$ in \cite[Definition 3.1.2]{CMY2}, combined with \cite[Proposition 3.2.2]{CMY2} and the argument in the proof of Lemma \ref{lem:ind_BTF}, shows that induction defines an exact braided tensor functor $\mathcal{F}: \mathcal{C}_{\mathcal{M}(p)}^0\rightarrow(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$. \item Taking $T^\vee$-invariants yields a functor $\mathcal{I}$ from $(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$ to the category $\rep^0\mathcal{M}(p)$ of generalized $\mathcal{M}(p)$-modules which are objects of $\mathrm{Ind}(\mathcal{O}_c)$ when viewed as $V_c$-modules. Induction extends to an exact functor from $\rep^0\mathcal{M}(p)$ to the $T^\vee$-equivariantization of the category $\rep\mathcal{W}(p)$ of not-necessarily-local $\mathcal{W}(p)$-modules which are objects of $\rep^0\mathcal{M}(p)$ when viewed as generalized $\mathcal{M}(p)$-modules. \item Because $\mathcal{W}(p)$ is a semisimple $\mathcal{M}(p)\times T^\vee$-module, and because objects of $(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$ are semisimple $T^\vee$-modules, the arguments in the proof of Theorem \ref{thm:G-equiv} show that $\mathcal{I}\circ\mathcal{F}$ is naturally isomorphic to the identity on $\mathcal{C}_{\mathcal{M}(p)}^0$, and that $\mathcal{F}\circ\mathcal{I}$ is naturally isomorphic to the inclusion of $(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$ into $(\rep\mathcal{W}(p))^{T^\vee}$. \item Since for any module $\mathcal{X}$ in $(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$, $\mathcal{F}(\mathcal{X}^{T^\vee})\cong\mathcal{X}$ is a $\mathcal{W}(p)$-module in $\mathcal{C}_{\mathcal{W}(p)}$, $\mathcal{X}^{T^\vee}$ is by definition an object of $\mathcal{C}_{\mathcal{M}(p)}^0$. Thus the image of $\mathcal{I}$ is actually $\mathcal{C}_{\mathcal{M}(p)}^0$. \end{itemize} Note that \cite[Conjecture 11.6]{Ne} predicted that taking $T^\vee$-invariants should yield an embedding of $(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}$ into the category of $\mathcal{M}(p)$-modules. Thus the above argument proves a strong form of this conjecture: $\mathcal{I}$ in fact yields a braided tensor equivalence with the specific subcategory $\mathcal{C}_{\mathcal{M}(p)}^0$ of $\mathcal{M}(p)$-modules. \end{rem}
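In summary, combining Theorem \ref{thm:G-equiv} with the remark above, induction and taking invariants give mutually quasi-inverse braided tensor equivalences
\begin{equation*}
\mathcal{F}: \mathcal{O}_c^0\;\rightleftarrows\;(\mathcal{C}_{\mathcal{W}(p)})^{PSL(2,\mathbb{C})}\, :\mathcal{I},
\qquad
\mathcal{F}: \mathcal{C}_{\mathcal{M}(p)}^0\;\rightleftarrows\;(\mathcal{C}_{\mathcal{W}(p)})^{T^\vee}\, :\mathcal{I},
\end{equation*}
where in each case $\mathcal{I}$ is given on objects by $\mathcal{X}\mapsto\mathcal{X}^G$ for the relevant group $G$.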
{ "redpajama_set_name": "RedPajamaArXiv" }
5,351
package model

import play.api.libs.json._

/**
  * Represents the Swagger definition for ResponseTimeMonitorData.
  * @param additionalProperties Any additional properties this model may have.
  */
@javax.annotation.Generated(value = Array("org.openapitools.codegen.languages.ScalaPlayFrameworkServerCodegen"), date = "2022-06-04T08:11:54.386355Z[Etc/UTC]")
case class ResponseTimeMonitorData(
  `class`: Option[String],
  timestamp: Option[Int],
  average: Option[Int],
  additionalProperties: JsObject
)

object ResponseTimeMonitorData {
  implicit lazy val responseTimeMonitorDataJsonFormat: Format[ResponseTimeMonitorData] = {
    val realJsonFormat = Json.format[ResponseTimeMonitorData]
    val declaredPropNames = Set("class", "timestamp", "average")
    Format(
      Reads {
        case JsObject(xs) =>
          // Split the incoming object into declared fields and everything else,
          // then nest the leftovers under "additionalProperties".
          val declaredProps = xs.filterKeys(declaredPropNames)
          val additionalProps = JsObject(xs -- declaredPropNames)
          val restructuredProps = declaredProps + ("additionalProperties" -> additionalProps)
          val newObj = JsObject(restructuredProps)
          realJsonFormat.reads(newObj)
        case _ =>
          JsError("error.expected.jsobject")
      },
      Writes { responseTimeMonitorData =>
        // Flatten "additionalProperties" back to top level when writing.
        val jsObj = realJsonFormat.writes(responseTimeMonitorData)
        val additionalProps = jsObj.value("additionalProperties").as[JsObject]
        val declaredProps = jsObj - "additionalProperties"
        val newObj = declaredProps ++ additionalProps
        newObj
      }
    )
  }
}
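// Illustrative wire format (example values are hypothetical, not from the spec):
// the three declared fields are read directly, and any remaining JSON fields
// are folded into `additionalProperties` by the Reads above, e.g.
//   { "class": "monitor", "timestamp": 1650000000, "average": 42,
//     "vendorExtension": "collected into additionalProperties" }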
{ "redpajama_set_name": "RedPajamaGithub" }
655
import math
import random

import numpy as np

from federatedml.secureprotol.affine_encoder import AffineEncoder
from federatedml.secureprotol.gmpy_math import invert, mpz


class IterativeAffineCipher(object):
    """
    Formulas: The r-th round of the encryption method is Enc_r(x) = a_r * x % n_r;
    the overall encryption scheme is Enc(x) = Enc_n o ... o Enc_1(x).
    Note: The number of key rounds supported is upper bounded by some number dependent on the key size.
    """
    def __init__(self):
        pass

    @staticmethod
    def generate_keypair(key_size=1024, key_round=5, encode_precision=2 ** 100, randomized=True):
        if randomized:
            return IterativeAffineCipher.generate_randomized_keypair(key_size, key_round, encode_precision)
        else:
            return IterativeAffineCipher.generate_deterministic_keypair(key_size, key_round, encode_precision)

    @staticmethod
    def generate_randomized_keypair(key_size, key_round, encode_precision):
        key_size_array = np.linspace(start=int(key_size / 2), stop=key_size, num=key_round)
        key_size_array = np.floor(key_size_array).astype(np.int64)
        n_array = [0 for _ in range(key_round)]
        a_array = [0 for _ in range(key_round)]
        i = 0
        for key_size in key_size_array:
            n = random.SystemRandom().getrandbits(key_size)
            # Draw a until it is a nonzero unit modulo n
            while True:
                a_ratio = random.SystemRandom().random()
                a_size = int(key_size * a_ratio)
                if a_size == 0:
                    continue
                a = random.SystemRandom().getrandbits(a_size)
                if math.gcd(n, a) == 1:
                    break
            n_array[i] = n
            a_array[i] = a
            i = i + 1

        # pick a generator and a scalar
        g = random.SystemRandom().getrandbits(key_size // 10)
        x = random.SystemRandom().getrandbits(160)
        return RandomizedIterativeAffineCipherKey(a_array, n_array, g, x, encode_precision=encode_precision)

    @staticmethod
    def generate_deterministic_keypair(key_size, key_round, encode_precision):
        key_size_array = np.linspace(start=int(key_size / 2), stop=key_size, num=key_round)
        key_size_array = np.floor(key_size_array).astype(np.int64)
        n_array = [0 for _ in range(key_round)]
        a_array = [0 for _ in range(key_round)]
        i = 0
        for key_size in key_size_array:
            n = random.SystemRandom().getrandbits(key_size)
            while True:
                a_ratio = random.SystemRandom().random()
                a_size = int(key_size * a_ratio)
                if a_size == 0:
                    continue
                a = random.SystemRandom().getrandbits(a_size)
                if math.gcd(n, a) == 1:
                    break
            n_array[i] = n
            a_array[i] = a
            i = i + 1
        return DeterministicIterativeAffineCipherKey(a_array, n_array, encode_precision)


class IterativeAffineCipherKey(object):
    def __init__(self, a_array, n_array, encode_precision=2 ** 100):
        if len(a_array) != len(n_array):
            raise ValueError("a_array length must be equal to n_array")
        self.a_array = a_array
        self.n_array = n_array
        self.key_round = len(self.a_array)
        self.a_inv_array = self.mod_inverse()
        self.affine_encoder = AffineEncoder(mult=encode_precision)

    def mod_inverse(self):
        a_array_inv = [0 for _ in self.a_array]
        for i in range(self.key_round):
            a_array_inv[i] = invert(self.a_array[i], self.n_array[i])
        return a_array_inv

    def encrypt(self, plaintext):
        pass

    def decrypt(self, ciphertext):
        pass


class RandomizedIterativeAffineCipherKey(IterativeAffineCipherKey):
    def __init__(self, a_array, n_array, g, x, encode_precision=2 ** 100):
        super(RandomizedIterativeAffineCipherKey, self).__init__(a_array, n_array, encode_precision)
        self.g = g
        self.x = x
        self.h = g * x % self.n_array[0]

    def encrypt(self, plaintext):
        return self.raw_encrypt(self.affine_encoder.encode(plaintext))

    def decrypt(self, ciphertext):
        if isinstance(ciphertext, int) and ciphertext == 0:
            return 0
        return self.affine_encoder.decode(self.raw_decrypt(ciphertext), ciphertext.mult_times)

    def raw_encrypt(self, plaintext):
        plaintext = self.encode(plaintext)
        ciphertext = RandomizedIterativeAffineCiphertext(plaintext[0],
                                                         plaintext[1],
                                                         self.n_array[-1],
                                                         self.affine_encoder.mult)
        for i in range(self.key_round):
            ciphertext = self.raw_encrypt_round(ciphertext, i)
        return ciphertext

    def raw_decrypt(self, ciphertext):
        plaintext1 = ciphertext.cipher1
        plaintext2 = ciphertext.cipher2
        for i in range(self.key_round):
            plaintext1, plaintext2 = self.raw_decrypt_round(plaintext1, plaintext2, i)
        encoded_result = RandomizedIterativeAffineCiphertext(
            cipher1=plaintext1,
            cipher2=plaintext2,
            n_final=ciphertext.n_final,
            multiple=ciphertext.multiple,
            mult_times=ciphertext.mult_times
        )
        return self.decode(encoded_result)

    def encode(self, plaintext):
        # Randomize the message as the pair (y * g, plaintext + y * h) mod n_0
        y = random.SystemRandom().getrandbits(160)
        return int(mpz(y) * self.g % self.n_array[0]), (plaintext + y * self.h) % self.n_array[0]

    def decode(self, ciphertext):
        intermediate_result = (ciphertext.cipher2 - self.x * ciphertext.cipher1) % self.n_array[0]
        if intermediate_result / self.n_array[0] > 0.9:
            # Values close to the modulus encode negative numbers
            intermediate_result -= self.n_array[0]
        return intermediate_result / ciphertext.multiple ** ciphertext.mult_times

    def raw_encrypt_round(self, plaintext, round_index):
        return RandomizedIterativeAffineCiphertext(
            plaintext.cipher1,
            int(mpz(self.a_array[round_index]) * plaintext.cipher2 % self.n_array[round_index]),
            plaintext.n_final,
            self.affine_encoder.mult
        )

    def raw_decrypt_round(self, ciphertext1, ciphertext2, round_index):
        cur_n = self.n_array[self.key_round - 1 - round_index]
        cur_a_inv = self.a_inv_array[self.key_round - 1 - round_index]
        plaintext1 = ciphertext1 % cur_n
        plaintext2 = cur_a_inv * ciphertext2 % cur_n
        if plaintext1 / cur_n > 0.9:
            plaintext1 -= cur_n
        if plaintext2 / cur_n > 0.9:
            plaintext2 -= cur_n
        return plaintext1, plaintext2


class DeterministicIterativeAffineCipherKey(IterativeAffineCipherKey):
    def encrypt(self, plaintext):
        return self.raw_encrypt(self.affine_encoder.encode(plaintext))

    def decrypt(self, ciphertext):
        if isinstance(ciphertext, int) and ciphertext == 0:
            return 0
        return self.affine_encoder.decode(self.raw_decrypt(ciphertext), mult_times=ciphertext.mult_times)

    def raw_encrypt(self, plaintext):
        ciphertext = DeterministicIterativeAffineCiphertext(plaintext, self.n_array[-1], self.affine_encoder.mult)
        for i in range(self.key_round):
            ciphertext = self.raw_encrypt_round(ciphertext, i)
        return ciphertext

    def raw_decrypt(self, ciphertext):
        plaintext = ciphertext.cipher
        for i in range(self.key_round):
            plaintext = self.raw_decrypt_round(plaintext, i)
        return plaintext

    def raw_encrypt_round(self, plaintext, round_index):
        return DeterministicIterativeAffineCiphertext(
            (self.a_array[round_index] * plaintext.cipher) % self.n_array[round_index],
            plaintext.n_final,
            self.affine_encoder.mult
        )

    def raw_decrypt_round(self, ciphertext, round_index):
        plaintext = int((mpz(self.a_inv_array[self.key_round - 1 - round_index]) * ciphertext)
                        % self.n_array[self.key_round - 1 - round_index])
        if plaintext / self.n_array[self.key_round - 1 - round_index] > 0.9:
            return plaintext - self.n_array[self.key_round - 1 - round_index]
        else:
            return plaintext


class IterativeAffineCiphertext(object):
    def __init__(self, n_final, multiple, mult_times):
        self.n_final = n_final
        self.multiple = multiple
        self.mult_times = mult_times


class RandomizedIterativeAffineCiphertext(IterativeAffineCiphertext):
    def __init__(self, cipher1, cipher2, n_final, multiple=2 ** 23, mult_times=0):
        super(RandomizedIterativeAffineCiphertext, self).__init__(n_final, multiple, mult_times)
        self.cipher1 = cipher1
        self.cipher2 = cipher2

    def __add__(self, other):
        if isinstance(other, RandomizedIterativeAffineCiphertext):
            if self.multiple != other.multiple or self.n_final != other.n_final:
                raise TypeError("Two addends must have equal multiples and n_finals")
            # Align the fixed-point scales before adding: the operand with fewer
            # mult_times is scaled up by multiple ** (difference in mult_times).
            if self.mult_times > other.mult_times:
                mult_times_diff = self.mult_times - other.mult_times
                return RandomizedIterativeAffineCiphertext(
                    cipher1=(self.cipher1 + other.cipher1 * other.multiple ** mult_times_diff) % self.n_final,
                    cipher2=(self.cipher2 + other.cipher2 * other.multiple ** mult_times_diff) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=self.mult_times
                )
            elif self.mult_times < other.mult_times:
                mult_times_diff = other.mult_times - self.mult_times
                return RandomizedIterativeAffineCiphertext(
                    cipher1=(other.cipher1 + self.cipher1 * self.multiple ** mult_times_diff) % self.n_final,
                    cipher2=(other.cipher2 + self.cipher2 * self.multiple ** mult_times_diff) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=other.mult_times
                )
            else:
                return RandomizedIterativeAffineCiphertext(
                    cipher1=(self.cipher1 + other.cipher1) % self.n_final,
                    cipher2=(self.cipher2 + other.cipher2) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=self.mult_times
                )
        elif isinstance(other, int) and other == 0:
            return self
        else:
            raise TypeError("Addition only supports RandomizedIterativeAffineCiphertext"
                            "and initialization with int zero")

    def __radd__(self, other):
        return self.__add__(other)

    def __sub__(self, other):
        return self + (other * -1)

    def __rsub__(self, other):
        return other + (self * -1)

    def __mul__(self, other):
        if isinstance(other, float) or isinstance(other, np.float32) or isinstance(other, np.float64):
            # Float scalars are scaled into the fixed-point domain, which costs one mult_times.
            return RandomizedIterativeAffineCiphertext(
                cipher1=self.cipher1 * int(other * self.multiple) % self.n_final,
                cipher2=self.cipher2 * int(other * self.multiple) % self.n_final,
                n_final=self.n_final,
                multiple=self.multiple,
                mult_times=self.mult_times + 1
            )
        elif isinstance(other, int) or isinstance(other, np.int32) or isinstance(other, np.int64):
            return RandomizedIterativeAffineCiphertext(
                cipher1=self.cipher1 * int(other) % self.n_final,
                cipher2=self.cipher2 * int(other) % self.n_final,
                n_final=self.n_final,
                multiple=self.multiple,
                mult_times=self.mult_times
            )
        else:
            raise TypeError("Multiplication only supports native and numpy int and float")

    def __rmul__(self, other):
        return self.__mul__(other)


class DeterministicIterativeAffineCiphertext(IterativeAffineCiphertext):
    def __init__(self, cipher, n_final, multiple=2 ** 23, mult_times=0):
        super(DeterministicIterativeAffineCiphertext, self).__init__(n_final, multiple, mult_times)
        self.cipher = cipher

    def __add__(self, other):
        if isinstance(other, DeterministicIterativeAffineCiphertext):
            if self.multiple != other.multiple or self.n_final != other.n_final:
                raise TypeError("Two addends must have equal multiples and n_finals")
            if self.mult_times > other.mult_times:
                mult_times_diff = self.mult_times - other.mult_times
                return DeterministicIterativeAffineCiphertext(
                    cipher=(self.cipher + other.cipher * other.multiple ** mult_times_diff) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=self.mult_times
                )
            elif self.mult_times < other.mult_times:
                mult_times_diff = other.mult_times - self.mult_times
                return DeterministicIterativeAffineCiphertext(
                    cipher=(self.cipher * self.multiple ** mult_times_diff + other.cipher) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=other.mult_times
                )
            else:
                return DeterministicIterativeAffineCiphertext(
                    cipher=(self.cipher + other.cipher) % self.n_final,
                    n_final=self.n_final,
                    multiple=self.multiple,
                    mult_times=other.mult_times
                )
        elif isinstance(other, int) and other == 0:
            return self
        else:
            raise TypeError("Addition only supports IterativeAffineCiphertext and initialization with int zero")

    def __radd__(self, other):
        return self.__add__(other)

    def __sub__(self, other):
        return self + (other * -1)

    def __rsub__(self, other):
        return other + (self * -1)

    def __mul__(self, other):
        if isinstance(other, float) or isinstance(other, np.float32) or isinstance(other, np.float64):
            return DeterministicIterativeAffineCiphertext(
                cipher=self.cipher * int(other * self.multiple) % self.n_final,
                n_final=self.n_final,
                multiple=self.multiple,
                mult_times=self.mult_times + 1
            )
        elif isinstance(other, int) or isinstance(other, np.int32) or isinstance(other, np.int64):
            return DeterministicIterativeAffineCiphertext(
                cipher=self.cipher * int(other) % self.n_final,
                n_final=self.n_final,
                multiple=self.multiple,
                mult_times=self.mult_times
            )
        else:
            raise TypeError("Multiplication only supports native and numpy int and float")

    def __rmul__(self, other):
        return self.__mul__(other)
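if __name__ == "__main__":
    # Usage sketch (not part of the original module): encrypt two values,
    # combine them homomorphically, and decrypt the result.
    key = IterativeAffineCipher.generate_keypair(key_size=1024, key_round=5)
    c1 = key.encrypt(3.5)
    c2 = key.encrypt(-1.25)
    combined = c1 + c2 * 2  # Dec(c1 + 2 * c2) should be approximately 3.5 + 2 * (-1.25)
    print(key.decrypt(combined))  # expected to be close to 1.0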
{ "redpajama_set_name": "RedPajamaGithub" }
3,105
\section{Conclusion}\label{sec:Conclusion} In this work we have demonstrated that software engineering techniques can make machine learning accessible to average users. We are able to represent ML algorithms with holes that our approach can efficiently and optimally fill with rules guided systematic search. We have also implemented a machine learning pipeline to search across different ML algorithms and transfer knowledge between models being evaluated to minimize the human effort. The ultimate objective of our research is to build an end-to-end machine learning solution that can find the best model without any human intervention. The work presented here is a very promising first step in this direction. \section{Implementation and Evaluation}\label{sec:Evaluation} In order to evaluate our proposed framework, we implemented a prototype having the following classifiers: Logistic Regression, Perceptron, Support Vector Machine (SVM) Classifier and Linear SVM Classifier. Instead of implementing these classifiers from scratch, we used \texttt{scikit-learn} \cite{pedregosa2011scikit} as the underlying models along with some basic preprocessing and grid search to find the optimal hyperparameters. Our framework is designed generically to further incorporate other classifiers such as Naive Bayes, as well as preprocessing techniques and optimization insights. In the same fashion, we can scale it for other machine learning problems such as \emph{regression}, \emph{clustering} etc. We evaluated the prototype on some common datasets from UCI Machine Learning Repository~\cite{UCI2013}. Table~\ref{tbl:testResults} shows the best model with optimal hyperparameters, accuracies and execution time of the test suit. In order to evaluate usability, we are interviewing ML practitioners to gather data on the time, effort and cost splurged in writing and debugging code. So far, we have anecdotal evidence that even experienced ML users fall into common pitfalls such as not using stratification, not identifying outliers, forgetting to normalize, or using different scales on train and test sets resulting in loss of precious time and effort. We plan to carry out a full scale qualitative and quantitative usability study and anticipate that ML users and particularly domain experts would benefit from the implementation support and ML insights provided by the framework. \vspace{-.2cm} \begin{table}[t!] \small \setlength\tabcolsep{2.1pt} \centering \setlength\extrarowheight{-1pt} \begin{tabular}{|c | c | p{5.2cm} | l |} \hline \bfseries{Dataset} & \bfseries{Best} & \bfseries{Hyperparameters} & \bfseries{Accu} \\ \hline\hline Breast Cancer & LR & solver=liblinear, max\_iter=10, penalty=l1, C=1 & 0.98\\ \hline {\multirow{2}{*}{Iris}} & {\multirow{2}{*}{LR}} & solver=newton-cg, max\_iter=100, & {\multirow{2}{*}{1}} \\ &&multi\_class=multinomial, pen=l2, C=100&\\ \hline Glass & SVC & {kernel=rbf, C=10} & 0.76\\ \hline Ionosphere & SVC & {kernel=rbf, C=10} & 0.88\\ \hline Diabetes & SVC & {kernel=linear, C=1} & 0.78\\ \hline Sonar & SVC & {kernel=rbf, C=1}& 0.82\\ \hline \end{tabular} \caption[Table caption text]{Best Models with Hyperparameters and Accuracy} \label{tbl:testResults} \vspace{-1.8em} \end{table} \section{Illustrative Examples}\label{sec:example} In this section we demonstrate how our prototype classifiers real life datasets by selecting the right model with optimal hyperparameters and pruning search space using partial-evaluation and rules. 
Suppose we want to determine labels of `Glass Identification' dataset from UCI repository~\cite{UCI2013}. We call the program as:\\ \noindent \texttt{metaClassifier(format='CSV', source='./glass.csv',} \\\texttt{\tab[2.3cm] verbose=True, hasHeader=True)} \\Some arguments like data source are mandatory while others like data format are optional which are either inferred like data format or can be set as default such as default \texttt{verbose=False}. The arguments are parsed and the program is initialized. The holes of data acquisition sketches are filled by determining the data source and format and the potential search space is built i.e. classifiers and their hyperparameters' candidate space. By default, the size of search space for classification is more than a thousand i.e. the sum of the combinations of hyperparameters of all classifiers under consideration. In case of the \texttt{SVC}, the hyperparameters' candidate space is given below having forty combinations of hyperparameters: \vspace{-.25cm} \[ CSV_{candSpace}=\begin{bmatrix} \left \{ \begin{aligned} kernel &\in \{linear, rbf, sigmoid\}\\ C &\in \{1, 10, 100, 1000, 10000\}\\ \end{aligned} \right \} \\ \left \{ \begin{aligned} kernel &\in \{poly\}\\ C &\in \{1, 10, 100, 1000, 10000\}\\ degree &\in \{3,4,5,6,7\} \end{aligned} \right \} \end{bmatrix} \] \vspace{-.2cm} The dataset is analyzed to determine that it has ten features, two hundreds and fourteen instances and six classes implying it is a multiclass dataset and prune out all classifiers and/or hyperparameters from candidate space specific to binary-class datasets like for Logistic Regression, $multi\_class\in\{multinomial\}$ instead of $multi\_class\in\{multinomial, ovr\}$, because $ovr$ repeatedly uses a binary approach to fit multiclass datasets. Next, linearly separable test' using \texttt{perceptron} is carried out. Its code is generated by filling the holes of \texttt{perceptron} with default parameters (\texttt{penalty=None, alpha=0.0001} etc.) to classify the dataset and the accuracy is computed which is $43\%$ ($<50\%$). Concluding that the data is not linearly separable, the search space is updated by removing linear classifiers such as \texttt{linearSVC}, and hyperparameters such as \texttt{liblinear} from solvers for \texttt{Logistic Regression} and \texttt{linear} from kernels of \texttt{SVC}. This reduces the search space to about $80\%$ of the original. After preliminary pruning, the program begins to search the best model and optimal hyperparameters while dynamically pruning the remaining search space. It starts from the sketch of \texttt{SVC}:\\ \centerline{\texttt{SVC(C=??, kernel=??, degree=??)}} \\by filling the holes from the above given candidate space e.g. \texttt{C=1, kernel=rbf} (\texttt{linear} kernel is pruned and \texttt{degree} is needed only if \texttt{kernel=poly}). Next it tries \texttt{kernel=rbf, C=\{10, 100\}} and determines that by increasing \texttt{C} beyond $10$, the accuracy is not increasing so rules out higher values of \texttt{C}. Same rule applies to \texttt{max\_iter} and some other numeric hyperparameters to reduce search space. Further models and their hyperparameters are tried while pruning out the search space. Search space reduces to about $60\%$ of the original and when it exhausts, the model configuration with highest accuracy is selected to predict the labels, which is \texttt{SVC(C=10, kernel=rbf)} in this case. 
Consider another example of `Wisconsin Breast Cancer Dataset` from UCI dataset repository which is linearly separable binary class dataset. Passing the linear separability test i.e. running \texttt{perceptron}, \texttt{LinearSVC} on this dataset, we can rule out all non-linear options (both classifiers and hyperparameters). \section{Future Directions} Search based software engineering can revolutionize the field of machine learning. We strongly believe that work presented in this paper will be foundational in a massive effort to democratize machine learning and enable average users and domain experts to take full advantage of the technology. We have abstracted and templatized ML algorithms thus exposing interfaces for searching it. We foresee many promising future direction arising from our work: \textbf{Self Application of Machine Learning:} One key observation of our work is that we can apply machine learning to discover and find better machine learning solutions. The curse of dimensionality and search space explosion have been a key hindrance in applying machine learning to itself. Initial approaches attempted to explore the entire space or approach it with random searching. We have presented a more systematic method that prunes the space based on human insights, which is comparable to how data scientists make decision and produce equivalent results. We believe this provides an avenue to explore other ML techniques to pick the best ML algorithm. For example, a Deep Neural Network (DNN) which decides which ML approach will work best for the given data and classification problem. \textbf{Transfer Learning among ML Algorithms:} When data scientist run a machine learning algorithm, they naturally select the parameters and hyperparameters based on earlier runs of other algorithms or prior runs on a smaller dataset. This transfer of knowledge is the key to reducing the state space. In this paper we have used heuristics to transfer information, but in the future we are working on analytical solutions to this problem. The key advantage of an analytical solution is that it allows us to initialize the parameters of the algorithms optimally thus reducing the time for algorithm convergence. \textbf{Searching for Algorithm Architecture:} We also see the possibility of extending our work to directly search the architecture of ML algorithm, chaining different algorithms and composing them like software modules. A simple example of compositional search would be to have a DNN that combines DNNs to find nose, ears, lips and eyes to find a face. This creates a possibility of building Wide Neural Networks where parts are searched in parallel. \textbf{Optimizing for Commodity Hardware:} MapReduce and Spark have demonstrated the power of distributed system in exploring large search spaces in fraction of seconds. We believe that search based ML will become main stream as we port our search to exploit large scale distributed systems. \section{Introduction} Machine Learning (ML) is a set of techniques that give computers the ability to learn specific tasks from the data without being explicitly programmed. We are living in the golden age of machine learning---ML algorithms have defeated humans in chess, learned to drive autonomously, beat humans in Jeopardy, and out performed humans in fundamentally innate tasks of image recognition as well as speech understanding. 
While most of these battles were won by well funded teams of highly trained engineers, machine learning tools remain prohibitively expensive for average users and domain experts. In this paper we present software engineering techniques that make machine learning algorithms more usable by reducing the time to specify them and making them less costly to build. Sculley et al.~\cite{Sculley14}, based on their experience at Google, observe that software engineers developing ML code often have less time to write high quality code or explore the best solutions. Fern\'{a}ndez-Delgad et al.~\cite{Fernandez14} in a large scale study of various classifiers and datasets note that even well trained data scientists choose only classifiers from a range of familiar classifiers and do not necessarily pick the best ones. Moreover, the developers are susceptible to various machine learning pitfalls that existing APIs and frameworks do not protect against~\cite{CommonErrors}. As more and more applications move to incorporate machine learning components in them, there is a need to provide better software engineering and tool support to developers. Solar-Lezama~\cite{Solar-Lezama:2008:PSS:1714168} presented the \emph{Sketching} framework to synthesize and generate code automatically from partial implementation i.e. a high level specifications with holes. To generally express the models as templates, we use a similar approach used by sketches. Search mechanisms fill the holes to specify the models for specific data and hyperparameters. Sketching uses SAT-based inductive synthesis but our framework uses heuristics guided searching to fill the holes. To reduce the search space and runtime, we exploit partial evaluation~\cite{futamura1983partial}. While searching optimal hyperparameters, we can treat a previously trained model as partial evaluated and can update it for new combinations of hyperparameters. Thornton et al. used \emph{Bayesian optimization} in Auto-Weka~\cite{thornton2013auto} to search for models and hyperparameters for a given dataset and integrated it with Weka~\cite{holmes1994weka}. The focus of their technique is to optimize and automate the model searching mechanism using a statistical approach i.e. Bayesian optimization. By contrast, we show that we can develop a comprehensive machine learning framework using software engineering techniques such as sketching, partial evaluation, and searching using rule based decision. \begin{comment} We aim to build a framework using software engineering techniques to select the best algorithm (model) for a given problem and dataset using search. We generalize the models by denoting them in templatized form, similar to the program sketches proposed by Armando~\cite{solar2013program}. The searching mechanism of our framework fills the \emph{holes} of sketches to specialize and generate code. Machine learning algorithms solving the same problem share properties. By evaluating one model, we can transfer valuable information to other models and reduce the search space. While finding the optimal hyper-parameters, we can exploit partial evaluation~\cite{futamora}. Moreover, while solving a problem using different models, we can share code and computation. We reduce our search space using some heuristics and rules, which enable our framework to generate code for best model without state space explosion. 
The advantages of the framework will be three-fold; first, it will enable the domain experts to take advantages of machine learning tools, second, it will reduce the time and budget of ML projects and third, the automatically (standardized) code generated will be less prone to error. \end{comment} In this paper we present a proof of concept implementation of our machine learning pipeline. We have selected a set of supervised\footnote{Supervised machine learning algorithms assume that the training data has class labels available} learning algorithms and stored their templates, similar to sketches with holes. The user can set up the problem by providing a data source, its format, and select the column with class labels. Our approach preprocesses the data, normalizes it, and analyzes the data for its classes and features. We use a systematic search to find the best hyperparameter configuration that fills the "holes" for each classification algorithm. We only partially evaluate the search space and employ heuristics to prune our search space. Our approach ranks the algorithms based on their accuracy and returns the top candidate. {\bfseries\emph{Contributions:}} We make various novel contributions in this work. We are the first to employ code synthesis and compiler optimization techniques to generate machine learning code. Existing model selection techniques~\cite{thornton2013auto} do not prune search space and simply attempt to find an optimum value based on a given criteria, and they do not work across various algorithms. We borrow meticulous engineering steps traditionally used in compiler optimization to prune the search space as well as transfer computation and information across algorithms. \begin{comment} We built a framework prototype for \emph{classification} problems using well-known classifiers along with data acquisition, preprocessing and data visualization techniques. We run our framework on common classification datasets from UCI machine learning repository and concluded that our ``Search Based Code Generation'' technique is useful and can be used to build a comprehensive and strong framework to solve machine learning problem without or with minimal interventions of users. Rest of the paper is organized as follows: First, we define machine learning terms we use repeatedly in the paper ~\cref{sec:Preliminaries}. Next, we demonstrate our framework using real examples in~\cref{sec:example}.~\cref{sec:Methodology} explains the methodology of our framework which searches a model for classification problems and also outlines some optimization to avoid the search space explosion and reduce run-time. In~\cref{sec:Evaluation} we evaluate our prototype framework to model classification of real datasets from UCI machine learning repository and compare our results to the solutions by humans of the same classification problems on the same datasets. In~\cref{sec:RelatedWork}, we describe the related work. In the end, we give our concluding remarks in~\cref{sec:Conclusion}. \end{comment} \begin{comment} Rest of the paper is organized as follows: First, we define machine learning terms we use repeatedly in this paper. Then we illustrate utility of our framework with an example. Subsequently, we explain our methodology and outlines some optimization to avoid the search space explosion. We present initial evaluation of our approach. Finally, we conclude and discuss future directions. \end{comment} \section{Approach}\label{sec:Methodology} \begin{figure*}[h!] 
\centering \includegraphics[width=.99\textwidth]{"pipeline"} \vspace{-.55cm} \caption{Framework Pipeline} \vspace{-.26cm} \label{fig:pipeline} \end{figure*} Our approach to develop the framework is similar to fundamental ML pipeline. Every module of pipeline automates the work to mitigate the human intervention and reduces the search space wherever possible. Figure~\ref{fig:pipeline} demonstrates our pipeline approach. Module~A {\bfseries acquires} the datasets from given source provided by a user as arguments. The test and training datasets are fed to Module~B for {\bfseries data preparation} without any human intervention to standardize the data like predicting the missing values and data transformation. After common preprocessing, the framework {\bfseries inspects} data in Module~D to get its size i.e. the number of instances, features, number of classes, and also run some simple tests like \emph{linear separability test}. Using the results of inspection and heuristics, Module~E {\bfseries updates} the search space by including or excluding the linear models and their hyperparameters---building the list of classifiers $\mathbfcal{C}$ and the list of their corresponding hyperparameters $\mathbfcal{H}$. After preparation, the labeled (training) datasets is used for {\bfseries model-selection} using {\bfseries stratified K-fold cross validation} where the framework selects the best model without user intervention. In Module~C, labeled dataset is {\bfseries split} in $k$ stratified folds i.e. $D_{labeled}\!=\!\{D_1, D_2, ..., D_k\}$. Module~F picks the classifier sketch $\mathcal{C}$ and its hyperparameters' candidate space $\mathbfcal{H}$ from search space and generates code by filling the holes of sketch and the generated model is trained using train set i.e. $D_{labeled}\backslash D_i, 1<i<k$. In Module~G, the labels are predicted for \emph{test features} i.e. $D_i$ using {\bfseries model testing}. Accuracy is computed using predicted labels and actual labels in Module~H, {\bfseries Model Evaluation}. After each model evaluation, \emph{heuristics} are gathered to update and prune the search space for example if accuracy is not increased by increasing the value of \texttt{max\_iter} in Logistic Regression, we can remove all combinations of hyperparameters with higher values of \texttt{max\_iter}. When the search space exhausts, our framework selects the model with highest accuracy to {\bfseries predict} the labels of unlabeled (test) dataset in Module~I and the predicted labels are output to user. \begin{algorithm}[t!] 
\caption{Meta Classifier} \label{algo:metaclassifier} \begin{algorithmic}[1] \Procedure{metaClassifier}{$args$} \State $options\gets\Call{parse}{args}$\label{lin:parsing} \Comment $options$ is global dictionary \State $\mathcal{D}_{train}, \mathcal{D}_{test}=\Call{acqureData}{\null}$ \label{lin:dataAcquire} \State $\Call{preprocess}{\mathcal{D}_{train}, \mathcal{D}_{test}} $ \label{lin:dataPrep} \Comment Common preprocessing \State $\Call{inspectData}{\mathcal{D}_{train}, \mathcal{D}_{test}}$ \label{lin:dataTest} \Comment Inspect and run basic test \State Build list $\mathbfcal{C}$ and corresponding $\mathbfcal{H}$ \Comment based on data tests \label{lin:buildLists} \State $results \gets \phi$ \Comment To hold results of cross validation \While{not interrupted And $\mathcal{C}\gets\mathbfcal{C}.\Call{removeHead}{\null}$} \label{lin:searchStart} \State$\mathcal{D}_{train},features=\Call{SelectFeatures}{\mathcal{C}, \mathcal{D}_{train}}$ \label{lin:featureSelection} \While{$\mathbfcal{H}_\mathcal{C}$ not exhausts} \Comment $\mathbfcal{H}_\mathcal{C}$ is the grid of $\mathcal{C}$\label{lin:hyperOpt:Start} \State Initialize $\mathcal{C}$ with $\mathcal{H}_\mathcal{C}^j$ \Comment Set $j^{th}$ combination \State $acc, std=\Call{evaluateModel}{\mathcal{C}, \mathcal{D}_{train}}$ \State $results.\Call {Add}{acc, std, \mathcal{C}, \mathcal{H}_\mathcal{C}^j, features}$ \State Update $\mathbfcal{H}_\mathcal{C}$ \EndWhile\label{lin:hyperOpt:End} \State Update $\mathbfcal{C}$ and $\mathbfcal{H}$ \EndWhile \label{lin:searchEnd} \State Get appropriate $model$ i.e with $max(accu)$ and $min(std)$ \State $y=model.\Call{predict}{\mathcal{D}_{test}}$ \label{predict:line} \label{lin:predict} \State \Return $y$ \EndProcedure \end{algorithmic} \end{algorithm} Algorithm~\ref{algo:metaclassifier} outlines our framework to solve classification problems automatically. The framework takes minimum information like data source and format, labels' information, time budget, etc. as $args$. It parses the $args$ and sets the fields of $options$ (line~\ref{lin:parsing}); a dictionary holding global variables and behavior of the framework. \subsection{Model Selection, Hyperparameters Optimization and Feature Selection} \label{metaSearch:Section} Our framework prototype treats the classification problem as a \emph{meta-search} --- the combined selection of classifier, its best hyperparameters and features selection. Suppose we have a set of classifiers, $\mathbfcal{C}=\{\mathcal{C}_1, \mathcal{C}_2,...,\mathcal{C}_n\}$ and their respective hyperparameters spaces as $\mathbfcal{H}=\{\mathcal{H}_1, \mathcal{H}_2,..., \mathcal{H}_n\}$. Algorithm~\ref{algo:metaclassifier} from line~\ref{lin:searchStart} to~\ref{lin:searchEnd} iterates over the search space while \emph{updating} and \emph{reducing}. The search continues until interruption due to resource budget constraints or the search space $\mathbfcal{S}= \sum_{i}|\mathcal{H}_i|$ exhausts. In time constraints, where $\mathbfcal{S}$ does not exhaust, to find the \emph{potentially} best \emph{model}, the searching algorithm takes some decisions based on previous observations (explained in~\cref{sec:optimizations}). In a given dataset, each feature/attribute doesn't contribute to a solution equally. Line~\ref{lin:featureSelection} selects best features of the dataset for $\mathcal{C}$. 
From line~\ref{lin:hyperOpt:Start} to \ref{lin:hyperOpt:End}, the algorithm iterates over all \emph{feasible} combinations of hyperparameters of $\mathcal{H}_i$ for a given classifier $\mathcal{C}_i$ and using \emph{stratified cross validation}, the scores (accuracy and std. deviation) are calculated and recorded in $results$. \subsection{Optimizations}\label{sec:optimizations} The na\"{i}ve searching mechanism can end up owing limited time budget and in those cases, sometime the searched model may not be the best one. To mitigate these situations, we used different software engineering techniques. In this subsection, we describe the non-exhaustive list of the optimization tactics used in our prototype framework. \vspace{3.5px} \noindent {\bfseries Sharing of Code and Computation:} Machine learning models solving similar problems like classification have tendency to share code and computation to a reasonable extent. For example, instead of loading data from either local storage or remote server for every model separately, an obvious optimization is to load data once and perform some \emph{common} preprocessing before executing some model on it. \vspace{3.5px} \noindent {\bfseries Templatization and Code Generation:} For generalization of the models, we define them as templates in the same fashion as of the sketches for program synthesis~\cite{Solar-Lezama:2008:PSS:1714168}. The partial implementation of models is generic enough to synthesize (generate) the final code with specific data and hyperparameters. \vspace{3.5px} \noindent {\bfseries Partial Evaluation:} While searching the best hyperparameters of a model, for two consecutive combinations (where usually, the value of one hyperparameter is changed), we can use the previously fitted model as a partially evaluated module and update it. For example, finding the optimal number of epochs for stochastic gradient descent used by many ML models, instead of restarting iteration and initializing coefficients to zeros, we can resume iterations by retaining the coefficients. Moreover, different hyperparameters of a model usually control independent properties of an underlying algorithm. \vspace{3.5px} \noindent {\bfseries Heuristics:} Previous execution of a model with a combination of hyperparameters may, in some cases, omit other combinations. For example, while searching for the best hyperparameters for logistic regression, if the accuracy does not improve with the increase of \texttt{max\_iter} (but keeping all other hyperparameters constant), we can skip all combinations of hyperparameters with higher values of \texttt{max\_iter}. Similarly, the observations derived from the executions of one model can lead to the \emph{prioritization} of other models. This raises the probability that the best model is selected despite interruption due to time constraints. For example, if we run \texttt{SGDClassifier} and its accuracy with \texttt{loss=perceptron} is very low, we should try classifier $Perceptron$ at the end. We continue updating our search space i.e. the list $\mathbfcal{C}$ and hyperparameters $\mathbfcal{H}$ based on the observations to reduce the search space and prioritize models. \vspace{3.5px} \noindent {\bfseries Rule based Optimization:} The knowledge and insights of the data scientists guiding the search for the best performing model can be, in fact, translated into a long rule-based list, thereby automating the human decision making process, as in rule based expert systems. 
Following are a few examples: \begin{itemize} \item Linearly Separable Test: We should start the search using SVM classifier (i.e $C_1=SVM$) without some kernel or with linear kernel. If the score/accuracy is reasonably high, we can conclude that data is potentially linearly separable and we should first try other linear classifiers such as SVM with fine tuning. \item Each hyperparameter is not used in every combination. Thus, we can ignore these to reduce the search space. For instance, in case of \texttt{scikit-learn}'s logistic regression, \texttt{n\_jobs} and \texttt{warm\_start} hyperparameters are ignored if \texttt{solver} is set to \texttt{liblinear}. Similarly, in case of \texttt{sklearn.svm.SVC}, the hyperparameter \texttt{degree} is ignored for all kernels except \texttt{kernel=poly}. \end{itemize}
{ "redpajama_set_name": "RedPajamaArXiv" }
2,754
The design, construction and test run of a solid adsorption solar refrigerator are presented. It used activated carbon/methanol as the adsorbent/adsorbate pair. The refrigerator has three major components: collector/generator/adsorber, condenser and evaporator. Its flat plate type collector/generator/adsorber used clear plane glass sheet of effective exposed area of 1.2 m2. The steel condenser tube with a square plan view was immersed in pool of stagnant water contained in a reinforced sandcrete tank. The evaporator is a spirally coiled copper tube immersed in stagnant water. Adsorbent cooling during the adsorption process is both by natural convection of air over the collector plate and tubes and night sky radiation facilitated by removing the collector box end cover plates. Ambient temperatures during the adsorbate generation and adsorption process varied over 18.5–34 °C. The refrigerator yielded evaporator temperatures ranging over 1.0–8.5 °C from water initially in the temperature range 24–28 °C. Accordingly, the maximum daily useful cooling produced was 266.8 kJ/m2 of collector area. Design, construction and test run of a solid adsorption solar refrigerator using activated carbon/methanol, as adsorbent/adsorbate pair. Available from: https://www.researchgate.net/publication/223832247_Design_construction_and_test_run_of_a_solid_adsorption_solar_refrigerator_using_activated_carbonmethanol_as_adsorbentadsorbate_pair [accessed Dec 26, 2015].
{ "redpajama_set_name": "RedPajamaC4" }
6,593
Guazzoni bezeichnet: einen ehemaligen italienischen Motorradhersteller, siehe Guazzoni (Motorrad) Guazzoni ist der Familienname folgender Personen: Enrico Guazzoni (1876–1949), italienischer Filmregisseur, Drehbuchautor und Filmproduzent
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,068
Existence is a ride travelled alone mostly, just be watchful on where are you taking your path to. dumping all the troubles or solely taking their spot. it soothes my fatigued emotion or when I'm feeling anxious. when we discover the inflection of our love's unconquerable holds. put your palm in pit alone and harmony will fill my thought. but when emotion sparks an effort, what assets they bring. Someone who replies to your mysterious facial looks with equally weird facial affects is the best kind of person throughout. You owe it through yourself to display everything you've always fantasized of implying. An illusion is always created by heart, we can transform reality by transforming our thought. Do be the moon and excite characters even if you are far away from the whole.
{ "redpajama_set_name": "RedPajamaC4" }
3,379
Home / Anniversary / Johnny Marr to help Dinosaur Jr celebrate 25th anniversary of 'You're Living All Over Me' Anniversary, Tour Dates The reunited Dinosaur Jr will celebrate the 25th anniversary of You're Living All Over Me this December with a concert in New York City that will feature a meeting of the six-string minds, with legendary Smiths guitarist Johnny Marr set to trade licks with J Mascis on "a song or two," according to the band's website. The show is set for Dec. 1 at New York's Terminal 5, and will feature Dino Jr running through its 1987 album in its entirety, followed by "a second set spanning their catalog with very special musical guests joining them onstage." Marr is the first such guest to be announced, and more are due to be revealed in the run-up to the show. In addition to the show, Dinosaur Jr also will mark the You're Living All Over Me anniversary with the release next month of a vinyl-only live album recorded on that album's tour. Dinosaur Jr tour dates: Oct. 25: Black Cat, Washington, D.C., USA Oct. 26: Jefferson Theater, Charlottesville, VA, USA Oct. 27: Union Transfer, Pennsylvania, PA, USA Oct. 31: Music Zone @ KITEC, Hong Kong, China Nov. 1: Don't Look Back, Neo Studio, Taipei, Taiwan Nov. 2: Big Cat, Nishishinsaibashi, Chuo-ku, Osaka, Japan Nov. 3: Hostess Club Weekender @ Zepp Diver City, Tokyo, Japan Nov. 22: El Teatro, Buenos Aires, Argentina Nov. 24: Primavera Fauna, Santiago, Chile Nov. 28: Pearl Street Nightclub, Northampton, MA, USA Nov. 29: The State Theatre, Portland, ME, USA Nov. 30: Paradise Rock Club, Boston, MA, USA Dec. 1: Terminal 5, New York, NY, USA PREVIOUSLY ON SLICING UP EYEBALLS: Dinosaur Jr to release 1987 live set on 'Chocomel Daze' limited-edition vinyl LP Stream: Dinosaur Jr, 'Watch the Corners' — first track off upcoming 'I Bet On Sky' Dinosaur Jr announces new album 'I Bet On Sky,' 29-date fall North American tour Stream: 'Lost' Deep Wound practice tape from 1983 — featuring J Mascis, Lou Barlow Dinosaur Jr to reissue 1st three albums in cassette box set — plus bonus Sebadoh tape Stream: Dinosaur Jr, 'I've Been Waiting For You' — unreleased Neil Young cover (1988) Dinosaur Jr reissuing 1988′s 'Bug' on purple cassette for upcoming full-album tour Tags: Dinosaur Jr, J Mascis, Johnny Marr, The Smiths, You're Living All Over Me
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,417
// Copyright (c) Lex Li. All rights reserved. // // Licensed under the MIT license. See LICENSE file in the project root for full license information. using System.Reflection; using System.Runtime.InteropServices; namespace Microsoft.Web.Administration { [Obfuscation(Exclude = true, ApplyToMembers = false)] internal class NonEmptyStringValidator : ConfigurationValidatorBase { public NonEmptyStringValidator() { } public NonEmptyStringValidator(string parameter) { } public override void Validate(object value) { var data = (string)value; if (string.IsNullOrEmpty(data)) { throw new COMException("String must not be empty\r\n"); } } } }
{ "redpajama_set_name": "RedPajamaGithub" }
1,347
\section{Conclusion and future work} \label{sec:conclusion} We have presented SmartQueue\xspace, a deep reinforcement learning query scheduler that seeks to maximize buffer hit rates in database management systems. While simple, SmartQueue\xspace was able to provide substantial improvements over naive and simple heuristics, suggesting that cache-aware deep learning powered query schedulers are a promising research direction. SmartQueue\xspace is only an early prototype, and in the future we plan to conduct a full experimental study of SmartQueue\xspace. In general, we believe the following areas of future work are promising. \vspace{1mm} \noindent \textbf{Neural network architecture.} While effective in our initial experiments, a fully connected neural network is likely not the correct inductive bias~\cite{inductive_bias_ml} for this problem. A fully connected neural network is not likely to innately carry much useful information for query scheduling~\cite{inductive_bias_rl}, nor is there much of an intuitive connection between a fully-connected architecture and the query scheduling problem~\cite{inductive_bias_dl}. The first layer of our network learns one linear combination per neuron of the entire input. These linear combinations would have to be extremely sparse to learn features like "the query reads this block, which is cached." Other network architectures -- like locally connected neural networks~\cite{locally-connected} -- may provide significant benefit. \vspace{1mm} \noindent \textbf{SLAs.} Improving raw workload latency is helpful, but often applications have much more complex performance requirements (e.g., some queries are more important than others). Integrating query priorities and customizable Service Level Agreements (SLAs) into SmartQueue\xspace by modifying the reward signal could result in an buffer-aware and SLA-compliant scheduler. \vspace{1mm} \noindent \textbf{Query optimization.} Different query plans may perform differently with different buffer states. Integrating SmartQueue\xspace into the query optimizer -- so that query plans can be selected to maximize buffer usage -- may provide significant performance gains. \vspace{1mm} \noindent \textbf{Buffer management.} SmartQueue\xspace only considers query ordering, and assumes that the buffer management policy is opaque. A larger system could consider both query ordering and buffer management, choosing to evict or hold buffered blocks based on future queries. Such a system could represent an end-to-end query scheduling and buffer management policy. \section{Preliminary Results} \label{sec:experiments} Here, we present preliminary experiments demonstrating that SmartQueue\xspace can generate query ordering that increase the buffer hit ratio and improve query execution times compared with alternative non-learned schedulers. \paragraph*{Experimental Setup} Our experimental study used workloads generated using the 99 query templates of the TPC-DS benchmark~\cite{tpcds}. We deployed a database with a size of 49GB on single node server with 4 cores, 32GB of RAM. For our experiments, we generated $1,000$ random query instances out of these 99 templates and placed them in a random order in the execution queue. The benchmark includes 165 tables and indexes, and the number of blocks for each of these ranged between $100$ and $130,0000$. However, after downsizing both the query vector and buffer state bitmaps, our representation vectors have a size of {$165 \times 1,000$}, including index tables. 
We run our experiments on PostgreSQL~\cite{url-postgres} with a shared buffer pool size of 2GB.\footnote{We configured PostgreSQL to bypass the OS filesystem cache. In future work, multiple levels of caching should be considered.} For each query, we collect its query plan without executing the query by using the \texttt{EXPLAIN} command. SmartQueue\xspace uses a fully-connected neural network. Our DRL agent was implemented with Keras~\cite{keras} and uses 2 hidden layers with 128 neurons each. We also use an adaptive learning rate optimization algorithm (Adam~\cite{adam}) and our loss function is the mean squared error. In our study, we compare SmartQueue\xspace with two alternative scheduling approaches. \emph{First-Come-First-Served (FCFS)} simply executes queries in the order they appear in the queue. \emph{Greedy} employs a simple heuristic to identify the query with the best expected hit ratio given the current contents of the buffer pool. Specifically, for each queued query it calculates the dot product of the buffer state bitmap with the data requests bitmap, essentially estimating the probability of a buffer hit for each data block request. We then order all queries based on the sum of these probabilities over all blocks and execute the query with the highest sum value. Following the execution, the new buffer state is calculated and the heuristic is applied again until the queue is empty. This greedy approach focuses on short-term buffer hit improvements; a sketch of its scoring step appears below. \begin{figure*}[t] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/e2_buffer.pdf} \caption{Average buffer hit ratio} \label{fig:e2_buffer} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/e2_latency.pdf} \caption{Query execution time} \label{fig:e2_latency} \end{subfigure} \caption{SmartQueue\xspace's effectiveness (buffer hit ratio and query completion rate) with increasing training sets. } \label{fig:no_split} \end{figure*} \paragraph*{\bf Effectiveness} First, we demonstrate that SmartQueue\xspace can improve its effectiveness as it collects more experience. In this set of experiments, we placed all $1,000$ queries in the queue and started scheduling them using SmartQueue\xspace. In the beginning, our agent makes arbitrary scheduling decisions, but as it schedules more queries, SmartQueue\xspace collects more experience from its past actions and starts improving its policy. To demonstrate that, we evaluated the learned model at different stages of its training. Figure~\ref{fig:e2_buffer} and Figure~\ref{fig:e2_latency} show how the model performs as we increase the number of training queries. In Figure~\ref{fig:e2_buffer}, we measure the average buffer hit ratio when scheduling our $1,000$ queries and we compare it with the buffer hit ratio of FCFS and Greedy (which is not affected by the number of training queries). We observe that the DRL agent is able to improve the buffer hit ratio as it schedules more queries. It outperforms the buffer hit ratio of the other two heuristics, eventually converging to a ratio that is $65\%$ higher than FCFS and $35\%$ higher than Greedy. In addition, Figure~\ref{fig:e2_latency} shows the number of executed queries over time. The results demonstrate that the DRL-guided scheduling of SmartQueue\xspace allows our approach to execute the workload of $1,000$ queries around $42\%$ faster than Greedy and $55\%$ faster than FCFS.
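For concreteness, the scoring step of the Greedy baseline described above can be written in a few lines. The sketch below assumes the buffer state and the per-query access-probability bitmaps are NumPy arrays of the same shape; the names are illustrative and not taken from our implementation.

\begin{verbatim}
import numpy as np

def greedy_pick(buffer_state, query_bitmaps):
    """Index of the queued query with the highest expected number
    of buffer hits: the largest elementwise dot product between
    the buffer bitmap and the query's request bitmap."""
    scores = [np.sum(buffer_state * q) for q in query_bitmaps]
    return int(np.argmax(scores))
\end{verbatim}

As described above, the scheduler re-scores the remaining queue after every execution, once the new buffer state is known.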
These results indicate that SmartQueue\xspace can effectively capture the relationship between the buffer pool state and data access patterns, and leverage it to better utilize the buffer pool and improve its query scheduling decisions. \paragraph*{\bf Adaptability to new queries} Next, we studied SmartQueue\xspace's ability to adapt to unseen queries. For these experiments, we trained SmartQueue\xspace by first scheduling $950$ random queries drawn from 79 TPC-DS templates. We then tested the model on 50 random queries drawn from 20 previously unseen TPC-DS templates. Figure~\ref{fig:e1_buffer} demonstrates how the average buffer hit ratio of the testing queries is affected as SmartQueue\xspace collects experience from scheduling more training queries. The graph shows that the average buffer hit ratio of the testing queries increases from 0.2 (when SmartQueue\xspace is untrained) to 0.64 (when SmartQueue\xspace has scheduled all 950 queries). Furthermore, SmartQueue\xspace outperforms FCFS and Greedy after having scheduled fewer than $500$ queries. Finally, Figure~\ref{fig:e1_latency} shows that the query latency of our testing queries keeps decreasing (and eventually outperforms FCFS and Greedy) as SmartQueue\xspace is trained on more queries. Our approach enables unseen queries to eventually be executed $11\%$ faster than FCFS and $22\%$ faster than Greedy. These results indicate that the query scheduling policy can adapt to new query templates, leading to significant performance and resource sharing improvements. \begin{figure*}[t] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/e1_buffer.pdf} \caption{Average buffer hit ratio } \label{fig:e1_buffer} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/e1_latency.pdf} \caption{Query execution time} \label{fig:e1_latency} \end{subfigure} \caption{Buffer hit ratio and latency improvement on unseen query templates and increasing training queries. } \label{fig:train_test_split} \end{figure*} \paragraph*{\bf Overhead} We also measured the training and inference time. Our proof-of-concept prototype needed 240 minutes to incorporate 950 queries into our agent (so, on average, the training overhead is $3.95$ minutes per query). This time does not include the execution time of the query. This training overhead can potentially be reduced by offloading it to another thread, introducing early stopping, or re-using previous network weights to get a good "starting point." There is no training overhead for FCFS and Greedy. The inference time of SmartQueue\xspace is $3.12$ seconds, while the inference time for Greedy is $2.52$ seconds and $0.0012$ seconds for FCFS. \section{Introduction} Query scheduling, the problem of deciding which of a set of queued queries to execute next, is an important and challenging task in modern database systems. Query scheduling can have a significant impact on query performance and resource utilization, and it may need to account for a wide range of considerations, such as cached data sets, available resources (e.g., memory), per-query performance goals, query prioritization, or inter-query dependencies (e.g., correlated data access patterns). In this work, we attempt to address the query scheduling problem by leveraging overlapping data access requests. Smart query scheduling policies can take advantage of such overlaps, allowing queries to share cached data, whereas naive scheduling policies may induce unnecessary disk reads.
For example, consider three queries $q_1, q_2, q_3$ which need to read disk blocks $(b_1, b_2)$, $(b_4, b_5)$, and $(b_2, b_3)$ respectively. If the DBMS's buffer pool (i.e., the component of the database engine that caches data blocks) can only cache two blocks at once, executing the queries in the order $[q_1, q_2, q_3]$ will result in reading 6 blocks from disk. However, if the queries are executed in the order $[q_1, q_3, q_2]$, then only 5 blocks will be read from disk, as $q_2$ will use the cached $b_2$. Since buffer pool hits can be orders of magnitude faster than cache misses, such savings can be substantial. In reality, designing a query scheduler that is aware of the current buffer pool is a complex task. First, the exact set of data blocks read by a query is not known ahead of time, and depends on the data and on query plan parameters (e.g., index lookups). Second, a smart scheduler must balance short-term rewards (e.g., executing a query that will take advantage of the current buffer state) against long-term strategy (e.g., selecting queries that keep the most important blocks cached). One could imagine many simple heuristics, such as greedily selecting the next query with the highest expected buffer usage, to solve this problem. However, conceiving a hand-designed policy that handles the complexity of the entire problem -- different buffer sizes, shifting query workloads, heterogeneous data types (e.g., index files vs. base relations), and the trade-off between short-term gains and long-term strategy -- is much more difficult. Here, we showcase a prototype of SmartQueue\xspace, a deep reinforcement learning (DRL) system that automatically learns to maximize buffer hits in an adaptive fashion. Given a set of queued queries, SmartQueue\xspace combines a simple representation of the database's buffer state, the expected reads of queries, and a deep Q-learning model to order queued queries in a way that garners long-term increases in buffer hits. SmartQueue\xspace is fully learned, and requires minimal tuning. SmartQueue\xspace custom-tailors itself to the user's queries and database, and learns policies that are significantly better than naive or simple heuristics. In terms of integrating SmartQueue\xspace into an existing DBMS, our prototype only requires access to the execution plan of each incoming query (to assess likely reads) and the current state of the DBMS buffer pool (i.e., its cached data blocks). We present our system model and formalize our learning task in Section~\ref{s:model}. We present preliminary experimental results from a proof-of-concept prototype implementation in Section~\ref{sec:experiments}, related work in Section~\ref{s:related}, and in Section~\ref{sec:conclusion} we highlight directions for future work. \section{The SmartQueue Model}\label{s:model} SmartQueue\xspace is a learned query scheduler that automatically learns how to order the execution of queries to minimize disk access requests. The core of SmartQueue\xspace is a deep reinforcement learning (DRL) agent~\cite{deep_rl} that learns a query scheduling policy through continuous interactions with its environment, i.e., the database and the incoming queries. This DRL agent is not a static model; instead, it \emph{continuously} learns from its past scheduling decisions and \emph{adapts} to new data access and caching patterns.
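As a quick sanity check, the disk-read arithmetic of the three-query example in the introduction can be reproduced with a short simulation. The sketch below assumes a two-block buffer with LRU eviction; the eviction policy is not specified above, so this is purely illustrative.

\begin{verbatim}
from collections import OrderedDict

def disk_reads(order, reads, capacity=2):
    """Count disk reads for a query order under a tiny LRU buffer."""
    buf = OrderedDict()  # cached blocks, ordered by recency
    misses = 0
    for q in order:
        for block in reads[q]:
            if block in buf:
                buf.move_to_end(block)       # buffer hit
            else:
                misses += 1                  # buffer miss: read from disk
                if len(buf) >= capacity:
                    buf.popitem(last=False)  # evict least recently used
                buf[block] = None
    return misses

reads = {'q1': ['b1', 'b2'], 'q2': ['b4', 'b5'], 'q3': ['b2', 'b3']}
disk_reads(['q1', 'q2', 'q3'], reads)  # -> 6
disk_reads(['q1', 'q3', 'q2'], reads)  # -> 5
\end{verbatim}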
Beyond adaptivity, as we discuss below, using a DRL model allows us to define a reward function and scheduling policy that capture long-term benefits versus short-term gains in disk access. Our system model is depicted in Figure~\ref{fig:system_model}. Incoming user queries are placed into an execution queue and SmartQueue\xspace decides their order of execution. For each query execution, the database collects the required \emph{data blocks} of each input base relation, where a data block is the smallest data unit used by the database engine. Data block requests are first resolved by the buffer pool. Blocks found in the buffer (\emph{buffer hits}) are returned for processing, while the rest of the blocks (\emph{buffer misses}) are read from disk and placed into the buffer pool (after possible block evictions). Higher buffer hit rates (and hence lower disk access rates) can enormously impact query execution times, but they require strategic query scheduling, as execution ordering affects the data blocks cached in the buffer pool. One tempting solution to this challenge could involve a greedy scheduler that executes the query that will re-use the maximum number of cached data blocks. While this simple approach would yield short-term benefits, it ignores the long-term impact of each choice. Specifically, while the next query for execution will maximally utilize the buffer pool contents, it will also lead to newly cached data blocks, which will affect future queries. A greedy approach fails to identify whether these newly cached blocks could be of any benefit to the yet-unscheduled queries. SmartQueue\xspace addresses this problem by training a deep reinforcement learning agent to make scheduling decisions that maximize long-term benefits. Specifically, it uses a model that simultaneously estimates and tries to improve a weighted combination of short-term buffer hits and the long-term impact of query scheduling choices. In the next paragraphs, we discuss the details of our approach: (a) the input feature vectors that capture data access requests (\emph{Query Bitmap}) and buffer state (\emph{Buffer Bitmap}), and (b) the formalized DRL task. \paragraph*{Buffer Bitmap} One input to the DRL model is the state of the buffer pool, namely which blocks are currently cached in memory. The buffer state $B$ is represented by a bitmap where rows represent base relations and columns represent data blocks. The $(i,j)$ entry is set to 1 if the $j$-th block of relation $i$ is cached in the buffer pool and is set to zero otherwise. Since the number of blocks of any given relation can be very high and differs across relations, each row vector $F_i$ is downsized by calculating a simple moving average over its block entries. Specifically, if $D_i$ is the downsized row of a relation $i$ and $F_i$ is the full-size row, we average $F_i$ over windows of size $w_i = \lfloor |F_i| / |D_i| \rfloor$: \begin{equation} D_{ij} = \frac{1}{w_i} \sum_{k=j \cdot w_i}^{(j+1) \cdot w_i - 1} F_{ik} \end{equation} (A short sketch of this downsizing appears below.) \paragraph*{Query Vector} The second input to the DRL model is the data block requests of each query in the queue. Specifically, given a query $q$, we generate a vector that indicates the data blocks to be accessed by $q$ for each base relation in the database. To implement this, SmartQueue\xspace collects the query plan of $q$, and approximates the probability of each table's data block being accessed.
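A minimal NumPy sketch of the windowed-average downsizing defined above; the truncation of any trailing partial window is our own simplification, and we assume the full row is at least as long as the target.

\begin{verbatim}
import numpy as np

def downsize(full_row, target_len):
    """Moving-average downsizing of a per-relation row vector."""
    w = len(full_row) // target_len           # window size, as in the equation
    trimmed = np.asarray(full_row[:w * target_len], dtype=float)
    return trimmed.reshape(target_len, w).mean(axis=1)
\end{verbatim}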
Our approach handles requests for index files and base relations similarly, as both types of blocks will be cached in the buffer pool. The query vector is downsized in the same way as the buffer bitmap. A full table scan of a base relation $i$ indicates that all data blocks of the given relation will be accessed, and therefore each cell of the $i$-th row vector has the value of 1. For indexed table scans, we calculate the number of tuples to be accessed based on the selectivity of the index scan. If the index scan is feeding a loop-based operator (e.g., a nested loop join), the selectivity is adapted accordingly to account for any iterations over the relation. We assume the relation is uniformly stored across data blocks and therefore, if $x\%$ of the tuples of a base relation are to be selected by an indexed operation, we set the access probability of each data block of the relation to $x\%$. Similarly, we assume that the indexed operation reads $x\%$ of the index's blocks. We note that much more sophisticated probabilistic models could be used, but for this preliminary work we use this simple approximation. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/system-model.pdf} \vspace{-2mm} \caption{SmartQueue\xspace's system model} \label{fig:system_model} \vspace{-5mm} \end{figure} \paragraph*{Deep Q-Learning} SmartQueue\xspace uses deep Q-learning~\cite{deep_learning} to decide which query to execute next. As with any deep reinforcement learning system, SmartQueue\xspace is an agent that operates over a set of states $S$ (buffer pool states) and a set of actions $A$ per state (candidate queries to execute next). SmartQueue\xspace models the problem of query scheduling as a Markov Decision Process (MDP)~\cite{rl_book}: by picking one query from the queue to execute, the agent transitions from the current buffer pool state to a new one (i.e., a new set of cached data blocks). Executing a new query on the current buffer state provides the agent with a reward. In our case, the reward of an action is the buffer hit ratio of the executed query, calculated as $\frac{\mbox{\textit{buffer hits}}}{\mbox{\textit{total block requests}}}$. The goal of the agent is to learn a \emph{scheduling policy} that maximizes its total reward. This is a continuous learning process: as more queries arrive and the agent makes more scheduling decisions, it collects more information (i.e., the context of each decision and its reward) and adapts its policy accordingly. The scheduling policy is expressed as a function $Q(S_t,A_t)$ that outputs a \emph{Q-value} for taking an action $A_t$ (i.e., a query to execute next) on a buffer state $S_t$. Given a state $S_t$ and an action $A_t$, the Q-value $Q(S_t, A_t)$ is calculated by adding the maximum reward attainable from future buffer states to the reward for achieving the current buffer state, effectively influencing the current scheduling decision by the potential future reward. This potential reward is a weighted sum of the expected buffer hit ratios of all future scheduling decisions starting from the current buffer state. Formally, after each action $A_t$ on a state $S_t$ the agent learns a new policy $Q^{new}(S_t, A_t)$ defined as: \begin{equation} Q^{new}(S_t, A_t) = Q(S_t, A_t) + \alpha\left[R_{t} + \gamma \max_{a}{Q(S_{t+1},a)} - Q(S_t, A_t)\right] \end{equation}
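For intuition, a minimal sketch of this update rule is shown below, with the Q-function kept as a plain lookup table for clarity; SmartQueue\xspace itself approximates $Q$ with the neural network described in Section~\ref{sec:experiments}. The parameters $\alpha$ and $\gamma$ are discussed next.

\begin{verbatim}
def q_update(Q, s, a, reward, s_next, next_actions,
             alpha=0.1, gamma=0.9):
    """One temporal-difference update of the scheduling policy.
    Q maps (state, action) pairs to Q-values."""
    best_future = max((Q.get((s_next, a2), 0.0)
                       for a2 in next_actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_future - old)
\end{verbatim}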
The parameter $\gamma$ is the discount factor, which weighs the contribution of short-term vs. long-term rewards. Lowering the value of $\gamma$ diminishes the contribution of future rewards (favoring queries that will make use of the current buffer state), while raising it increases that contribution (favoring queries that will allow long-term increased usage of the buffer). The parameter $\alpha$ is the learning rate or step size. This simply determines to what extent newly acquired information overrides old information: a low learning rate implies that new information should be treated skeptically, and may be appropriate when a workload is mostly stable but contains some outliers. A high learning rate implies that new information is more fully trusted, and may be appropriate when query workloads smoothly change over time. Since the above is a recursive equation, training starts with arbitrary assumptions for all $Q$-values (and hence arbitrary initial scheduling decisions). However, as more experience is collected through the execution of incoming queries, the network likely converges to the optimal policy~\cite{dqn}. \section{Introduction} Query optimization is an important task for database management systems. Work on query optimization has a long history~\cite{systemr}. Despite decades of study, the most important elements of query optimization -- cardinality estimation and cost modeling -- have proven difficult to crack~\cite{howgood}. Several recent works~\cite{deep_card_est, deep_card_est2, qo_state_rep, rejoin, sanjay_wat, neo, learn_cost} have applied machine learning techniques to these stubborn problems. While all of these new solutions demonstrate remarkable results, they suffer from fundamental limitations that prevent them from being integrated into a real-world DBMS. Most notably, these techniques (including those coming from the authors of this paper) suffer from three main drawbacks. \begin{enumerate} \item{\textbf{Data}: most proposed machine learning techniques require an impractical amount of training data. For example, ML-powered cardinality estimators require gathering precise cardinalities from the underlying database, a prohibitively expensive operation in practice (this is why we wish to estimate cardinalities in the first place). Reinforcement learning techniques must process thousands of queries before outperforming traditional optimizers.} \item{\textbf{Changes}: while performing an expensive training operation once may already be impractical, changes in data or schema may make the situation worse. Learned cardinality estimation techniques must be retrained when data changes, or risk becoming stale. Many proposed reinforcement learning techniques assume that both the data and the schema remain constant.} \item{\textbf{Catastrophe}: learning techniques can outperform traditional optimizers on average, but often perform catastrophically (e.g., a 100x regression in query performance) in the tail, especially when training data is sparse. While some approaches offer statistical guarantees of their dominance in the average case~\cite{skinnerdb}, such failures, even if rare, are unacceptable in many real-world applications.} \end{enumerate} We propose a new class of learned query optimizers designed to remedy these problems, based on multiplexing a family of simple query optimizers. Early results indicate that our approach can (1) outperform traditional query optimizers with minimal training data ($\approx 100$ query executions), (2) maintain this advantage even in the presence of data and schema changes, and (3) \emph{never} incur a catastrophic execution.
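To preview the mechanism before describing the design, the per-query arm selection amounts to something like the sketch below. The $\epsilon$-greedy exploration schedule is our own illustrative choice (this paper does not fix an exploration strategy), and \texttt{predict\_latency} stands in for the learned model described next.

\begin{verbatim}
import random

def pick_plan(plans, predict_latency, epsilon=0.1):
    """Bandit-style selection over candidate plans, one plan per
    simple optimizer; predict_latency is the learned model."""
    if random.random() < epsilon:
        return random.randrange(len(plans))       # explore
    return min(range(len(plans)),                 # exploit
               key=lambda i: predict_latency(plans[i]))
\end{verbatim}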
Our new design rests on the observation that \textbf{writing a good query optimizer is hard, but writing a simple query optimizer is comparably easy.} For example, writing a query optimizer that generates left-deep join trees with index nested loop joins (INLJ), or maximally-parallel join trees with hash joins (HJ), is a matter of transcription from a textbook. While such simple optimizers still require cardinality estimates, they require radically less complexity in their cost models and plan enumerators. At a high level, our approach assumes a family of simple optimizers and treats each as an arm in a contextual multi-armed bandit problem. Our system learns a model that predicts which simple optimizer will lead to good performance for a particular query. When a query arrives, our system selects a simple optimizer, executes the resulting query plan, and observes a reward. Over time, our system refines its model to more accurately predict which simple optimizer will most benefit an incoming query. For example, our system can learn to use a left-deep INLJ plan for highly selective queries, and to use a bushy HJ plan for less selective queries. We assume that no simple optimizer ever generates a catastrophic query plan, and thus our system cannot ever select a catastrophic plan. The core of our approach is a learned tree convolution model~\cite{tree_conv}. Upon receiving a new query, we generate query plans from each simple optimizer. Each node in the query plan tree is represented by a vector containing an estimated cardinality and a one-hot encoding of the query operator. Then, a tree convolutional neural network (TCNN) is used to predict the performance of each query plan. Note that the TCNN's predictions only need to be precise enough to choose a good simple optimizer. We show preliminary results using the JOB dataset~\cite{howgood} on PostgreSQL in Figure~\ref{fig:wall}. Each boxplot shows the distribution of regret, the difference between the query time achieved and the time achieved by the optimal simple optimizer. The left side shows PostgreSQL (note that the PostgreSQL optimizer never beats the optimal simple optimizer), and the right side shows 25 iterations (of 50 queries each) of our system. Over time, our system outperforms the PostgreSQL optimizer. Compared to other learned systems, our system requires very little training data: good results are observed after seeing only 100 queries. Because of our reliance on a family of simple query optimizers, our system never produces catastrophic query plans. Finally, because each simple optimizer is capable of handling schema changes and shifts in data distribution, there is hope that our system can handle these changes as well. Ongoing work involves testing this assumption, as well as performing a full experimental analysis of our system. \section{Related work} \label{s:related} Prior work on query scheduling has focused on query parallelism~\cite{q-cop}, elastic cloud databases~\cite{azar, cost_wait, leitner, wisedb-cidr, pmax, sqlvm, nashdb}, meeting SLAs~\cite{icbs,sla-tree,wisedb-vldb,slos,perfenforce_demo,sloorchestrator,activesla,smartsla}, or cluster scheduling~\cite{decima,opennebula,step}. In terms of buffer pools and caching, most prior work has focused on smart cache management~\cite{cache-augment,cache-tables} (i.e., assuming the query order is fixed and choosing which blocks to evict or replace), or on (memory) cache-aware algorithms~\cite{mem-cache}.
Here, we take the flipped approach, in which we assume the buffer management policy is fixed and the query order may be modified (e.g., in batch processing). More broadly, this work follows recent trends in integrating machine learning components into systems~\cite{pillars}, especially database systems. Machine learning techniques have also been applied to query optimization~\cite{neo, skinnerdb, qo_state_rep}, cardinality estimation~\cite{deep_card_est2, naru, plan_loss}, cost modeling~\cite{learn_cost}, data integration~\cite{termite, deep_entity}, tuning~\cite{ml_tuning}, and security~\cite{sql_embed}.
{ "redpajama_set_name": "RedPajamaArXiv" }
563
Question*: If a community owner sells one or more homes (e.g. those received via abandonment or pre-abandonment) with the help of a Mortgage Loan Originator ("MLO") working for a third party mortgage banker or mortgage broker, after the buyer's installment contract is completed and signed, can the community owner then collect the payments himself? In other words, now that the MLO has complied with the SAFE Act, can this receivable be returned to the community owner to collect the monthly payments? My concern is that someone might say that since the owner is now receiving the payments, he is engaged in the business of "loan servicing" (even though it's his own home; he's not in the lending or servicing business; and he is not receiving or expecting compensation for the act of servicing). It could pose a real financial hardship on community owners if they had to pay a third party for servicing that they can do themselves. The primary reason park owners do this is to fill vacant homes, not to make big money on the sale itself. Comment: The above question supplements the FAQs posted by MHCO regarding the SAFE Act earlier this month. According to the Oregon Department of Community and Business Services ("DCBS"), the Act applies to manufactured community owners who sell homes acquired following abandonment or pre-abandonment. Accordingly, an owner who provides financing by carrying back an installment contract will have to either become licensed as a Mortgage Loan Originator ("MLO") or hire (as an employee or independent contractor) a third party MLO to perform the credit component of the transaction.
{ "redpajama_set_name": "RedPajamaC4" }
8,001
David W. Panuelo, born in April 1964 in the Caroline Islands, is a Micronesian statesman. He has been President of the Federated States of Micronesia since 11 May 2019.

Biography
Born in what was then the Trust Territory of the Pacific Islands, under United States administration, he studied at Eastern Oregon State College in Oregon, where he obtained a Bachelor of Arts in political science in 1987. In 1988 he undertook professional training in Australia and, from that same year, was employed by the Department of Foreign Affairs of the Federated States of Micronesia. He was appointed deputy chief of the diplomatic mission to Fiji.

A member of Congress representing Pohnpei since the 2011 elections, he was re-elected in the 2019 general election, defeating Peter Christian. On 11 May 2019, he succeeded Christian, being elected President of the Federated States of Micronesia by the Congress. Defeated in the March 2023 legislative elections, he cannot seek a second term as president, and his term will end in May.

Political positions
In response to the storming of the United States Capitol by supporters of Donald Trump in January 2021, President Panuelo published an open letter "to the people and government of the United States of America", strongly condemning Donald Trump: "President Donald J. Trump has openly solicited acts of domestic terrorism against the people and government of the United States of America. [...] [T]he people of Micronesia have seen the President of the United States reject democracy and democratic principles, and embrace fascism, by openly calling on his supporters to overturn the will of the American people and the democratic process."

On 8 February 2021, jointly with the presidents of the four other sovereign Micronesian states (Kiribati, the Marshall Islands, Nauru and Palau), he announced that his country would leave the Pacific Islands Forum, judging that the organisation showed too little consideration for the countries of Micronesia. In February 2022, the heads of state of the five Micronesian countries announced that they were suspending their withdrawal from the organisation. The five presidents agreed to propose reforms to the Forum and asked it to adopt them by June at the latest. On 7 June, following mediation led by Fiji and New Zealand, the Micronesian leaders confirmed that they would remain members of the Forum. In return, they obtained an institutionalised rotation of the post of secretary-general between Micronesia, Melanesia and Polynesia, a guarantee that the post would go to a Micronesian candidate in 2024, and the creation of a post of commissioner for maritime affairs, to be based in a Micronesian state and first held by a Micronesian.

In February 2022, in response to the Russian invasion of Ukraine, the Federated States of Micronesia became the first country to formally break off diplomatic relations with Russia, which had been established in 1999. David Panuelo notified the Russian embassy in Manila, which is accredited to the FSM. President Panuelo explained that his country condemned "in the strongest possible terms these shocking acts of tyranny" by Russia, "which cause global instability and the loss of lives and freedoms for the Ukrainian people".
Saying he was aware that his government's position was only a "meagre consolation" for Ukrainians, he affirmed his solidarity with Ukraine, "a country which, like ours, adheres to democracy, democratic principles and the rule of law". At the end of March, at a time when China had refused to take a position on the conflict, he publicly called on his counterpart Xi Jinping to ask Russia to cease its war against Ukraine.

In March 2022, he wrote an open letter to the Prime Minister of the Solomon Islands, Manasseh Sogavare, expressing his concern over the strategic partnership agreement signed between the Solomon Islands and China, whose text was not made public by the Solomon Islands government but which would reportedly allow it to call on Chinese military forces to maintain order in the Solomons.

In May 2022, ahead of a meeting between Chinese Foreign Minister Wang Yi and the foreign ministers of most of the small Pacific island states, President Panuelo wrote to his Pacific counterparts to express his concern about China's proposed multilateral agreement on internal security. He wrote that the agreement would make the Pacific island countries dependent on China, would affect their sovereignty, and would increase the risk of a regional conflict between China on one side and the United States, Japan, Australia and New Zealand on the other.

On 9 March 2023, the day after his defeat in the legislative elections (and thus knowing that his presidential term would end in May with no possibility of renewal), David Panuelo wrote a letter to the Speaker of the federal Congress, Wesley Simina, to the governors of the states and to the speakers of the state legislatures, in which he recounted: "I have been directly threatened with physical harm by representatives of the People's Republic of China acting in their official capacity." He further accused China of acts of espionage in his country, of surveillance measures against him personally, and of bribing Micronesian elected officials, to whom representatives of the Chinese embassy allegedly handed envelopes of cash. He accused Chinese vessels of mapping the maritime resources of Micronesia's exclusive economic zone without authorisation, and of threatening the Micronesian boats that approached to inquire about their activities. Finally, he accused the Chinese ambassador of harassing him and his ministers by telephone, first to get them to authorise the entry of Chinese workers into the country during the Covid-19 pandemic in the Federated States of Micronesia, and then to get them to order Chinese vaccines against the pandemic, despite his government's explicit refusal; his Health Minister Marcus Samo, his Foreign Minister Kandhi Elieisar and Panuelo himself all had to change their telephone numbers. Asserting that China seeks to secure Micronesia's diplomatic support and to compromise the country's sovereignty, David Panuelo recommended to Congress that the country establish official diplomatic relations with Taiwan, and therefore break off relations with the People's Republic of China. He stated that he had met the Taiwanese Minister of Foreign Affairs, Joseph Wu, the previous month, and had obtained a commitment that Taiwan would pay the Federated States of Micronesia US$ in development aid in exchange for diplomatic recognition, followed by US$ per year.
References

See also
External links
Official website of the Presidency of the Federated States of Micronesia

Related articles
List of presidents of the Federated States of Micronesia
2019 Micronesian general election
List of current state leaders

Federated States of Micronesia
President of the Federated States of Micronesia
Born in April 1964
Pohnpei (state)
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,447
Domien Ingels (Ghent, 23 July 1881 – Bachte-Maria-Leerne, 16 November 1946) was a Belgian sculptor and painter. He is known for several statues of the Ros Beiaard in Ghent: Ingels sculpted the horse, while Aloïs De Beule sculpted the men.

Belgian sculptor
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,900
Q: Not able to access Microsoft Teams APIs I am working on a POC where I want to access a Microsoft Teams API, i.e. https://graph.microsoft.com/beta/me/joinedTeams, but I am getting the error below. Error details are as follows. { "error": { "code": "", "message": "Authorization has been denied for this request.", "innerError": { "request-id": "ac2efa19-dc29-4573-9ece-ba98b564818e", "date": "2018-02-16T12:55:15" } } } I have granted the permissions below from Microsoft Azure for my registered application. Bookings.Manage.All Bookings.Read.All Bookings.ReadWrite.All BookingsAppointment.ReadWrite.All Calendars.Read Calendars.Read.Shared Calendars.ReadWrite Calendars.ReadWrite.Shared Contacts.Read Contacts.Read.Shared Contacts.ReadWrite Contacts.ReadWrite.Shared Device.Command Device.Read EAS.AccessAsUser.All email Files.Read Files.Read.All Files.Read.Selected Files.ReadWrite Files.ReadWrite.All Files.ReadWrite.AppFolder Files.ReadWrite.Selected Financials.ReadWrite.All Mail.Read Mail.Read.Shared Mail.ReadWrite Mail.ReadWrite.Shared Mail.Send Mail.Send.Shared MailboxSettings.Read MailboxSettings.ReadWrite Notes.Create Notes.Read Notes.Read.All Notes.ReadWrite Notes.ReadWrite.All Notes.ReadWrite.CreatedByApp offline_access openid People.Read profile Sites.Manage.All Sites.Read.All Sites.ReadWrite.All Tasks.Read Tasks.Read.Shared Tasks.ReadWrite Tasks.ReadWrite.Shared User.Read User.ReadBasic.All User.ReadWrite UserTimelineActivity.Write.CreatedByApp I can see the above permissions when I decode the access token. I have gone through Microsoft Teams (beta) API: Looks like you may not have the permissions for this call. Please modify your permissions, and I have already granted permissions as per that post, but I am still getting the same error. Here is a screenshot. Thanks A: The permissions required for getting the joined teams are User.Read.All and User.ReadWrite.All. Please go through the link for more information about the joined teams graph API call.
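For reference, once a delegated token carrying User.Read.All has been acquired (for example via an OAuth authorization-code flow, which is assumed to have happened already), the call itself looks like the following Python sketch; the token value is a placeholder.

import requests

ACCESS_TOKEN = "<delegated token with User.Read.All>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/beta/me/joinedTeams",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
for team in resp.json().get("value", []):
    print(team.get("displayName"))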
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,949
Champagne Comedy – The Late Show
Category: TV

Sports Fever! S01E04 Ratings
By Matt F in News, TV
Well done to Santo Cilauro, Sam Pang and Ed Kavalee for getting to their fourth episode on air at such a premium start of 10:46pm. With the extra dosage of Channel 9's cricket coverage, which we can't get enough of, and more soccer, we had some extra sports covered. This included tryouts for swimming in the Olympics …
Tags: Channel 7, Ed Kavalee, ratings, Rob Sitch, Sam Pang, Santo Cilauro, Sports Fever!

Judith Lucy on In Gordon Street Tonight
I may not have published this here in time to give you warning (but if you follow this site on Twitter, it was promoted!), but if you didn't have your eyes on ABC TV last night, you would've missed Judith Lucy having a chat to Adam Hills on In Gordon Street Tonight. Lucy sat comfortably …
Tags: Adam Hills In Gordon Street Tonight, Judith Lucy

By Matt F in TV
Source: au.tv.yahoo.com/plus7/sports-fever
Sancho, Stan and Ted graced an average 182,000 television screens last night at around 10:37pm (probably an extra 1 million viewers on a two-minute Optus time delay are missing somewhere). A wonderful show: even if you don't follow any type of sport, Sports Fever! makes it extremely entertaining. This week, baseball, parts of …
Tags: Channel 7, Ed Kavalee, ratings, Sam Pang, Santo Cilauro, Sports Fever!

Rob Sitch on In Gordon Street Tonight
Former Spicks And Specks host Adam Hills is back with the second season of In Gordon Street Tonight. The first episode was launched tonight (Wednesday 8th February) with the first guests Rob Sitch and Josh Lawson, promoting Any Questions For Ben. Sitch briefly spoke about how they had to alter a park to get the autumn …
Tags: Adam Hills In Gordon Street Tonight, Any Questions For Ben, Josh Lawson, Rob Sitch

Source: au.tv.yahoo.com/plus7/sports-fever
With a slightly earlier start of 10:40pm (technically it was 10:43pm as per my watch), the perfectly timed second episode of Sports Fever! with "Santo, Sam and Ted" raked in 200,000 late night viewers. Sure, it's a little down from their premiere first episode last week, but… well… I have nothing. It was …
Tags: Channel 7, ratings, Sports Fever!

Santo, Sam & Ed's Sports Fever! (SSESF) S01E01 Ratings
Source: au.tv.yahoo.com/plus7
Well, after a few false starts with the scheduled time (originally 10:30pm, then moved to 10:45pm, only to have Jason Segel and Neil Patrick Harris still teasing the kids about how their dad met their mother), Santo, Sam & Ed's Sports Fever! finally got the ball rolling a few minutes before 11pm. The …
Tags: Channel 7, Ed Kavalee, Sam Pang, Santo Cilauro, Sports Fever!

Judith Lucy – Nothing Fancy Interview
By Matt F in Books, TV
Judith Lucy, the "Fifth Beatle" to The Late Show in 1993, is on tour with her latest show Nothing Fancy. In a recent interview with the Sydney Morning Herald, Lucy talks about her previous stage shows, her 2011 TV show on the ABC Spiritual Journey and her 2008 book The Judith Lucy Alphabet. Here's a tiny excerpt …
Tags: interview, Judith Lucy, Spiritual Journey

Santo, Sam & Ed are goooooo!!!
UPDATE: SPORTS FEVER WILL BEGIN AT 10:45PM MONDAY JANUARY 30, instead of the previous 10:30PM time.
Get your Diego Maradonuts out and park your bum on the lounge for a late night live show tradition you haven't done for 20 years! Sadly, I'm not talking about The Late Show… It's Santo, Sam & Ed's Sports …
Tags: Channel 7, Ed Kavalee, Sam Pang, Santo Cilauro, Sports Fever!, Working Dog

Heckle! Show us your tusks!
If you ever went to a recording of The Joy Of Sets, you'd know that the entire studio audience was primarily made up of Get This listeners. (Weekend listeners were registered to go, but they never turned up, as it was filmed on a Friday.) During the early days of filming, Tony Martin and Ed Kavalee …
Tags: Get This, The Joy Of Sets

FlashBackChat
By Matt F in TV, Video
Thanks to forum regular Mason Hell-Cat / Flemishdog and the beauty of YouTube, want to know what the fuss was all about before The Late Show appeared? The D-Generation got their televisual start with a sketch comedy show on the ABC in 1986, with cast members Rob Sitch, Santo Cilauro, John Harrison, Magda Szubanski, Marg …
Tags: Backchat, D-Generation
This is a fan site only, written by and for the fans who actively support The Late Show and the original cast's current activities, and this site has been online in some form since 1996. We have nothing to do with the Australian Broadcasting Corporation, which originally aired the series - all comments, citations and fan-produced analyses on the site are our own. Got an issue with anything on the main site or in the discussion forum? Let me know at kimgilmour (at) gmail (dot) com.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,793
Space Syntax was developed in the 1970s by Bill Hillier at the Bartlett School, University College London (UCL). Initially, Space Syntax was used exclusively as an analysis and planning tool for urban-scale scenarios. Today it is used across all architectural scales, from rooms to buildings and large cities. Space Syntax incorporates various analytical methods such as isovists (visibility graph analysis). One limitation of this analysis method is that it currently works only in plan and cannot be utilized for the analysis of vertical communication patterns. Together with researchers at the Centre for Knowledge Architecture in Dresden, we have developed a tool that overcomes this limitation by extending the scientific analysis of isovists into the third dimension. For example, this allows us to analyse the visual connectivity across multiple levels within a high-rise building through the introduction of a vertical void. The aim was to develop a communicative zone for interaction, exchange and distraction by maximizing the vertical view axis. Communication is an essential factor in the development of innovation in knowledge-driven businesses and institutions. Through an exchange of ideas and information, the range of competencies and ideas among staff can be greatly amplified. This contributes to the long-term performance of the company. Since 1992, HENN has collaborated with Prof. Thomas Allen of the Sloan School of Management at the Massachusetts Institute of Technology (MIT), Cambridge, MA, in the research field of the organisation and architecture of innovation. Netgraphing is an analysis tool to visualise communication networks and patterns. It maps the existing structure and intensity of communication patterns in a company and displays the normally invisible flow of information exchange. The data of individual communication flows can be amalgamated to produce a collective image for the entire business. This is subsequently categorized and sorted according to specific parameters. Hillier, B. and Hanson, J. (1984), The Social Logic of Space, Cambridge: Cambridge University Press. Hillier, B. (1999), Space is the Machine: A Configurational Theory of Architecture, Cambridge: Cambridge University Press. Hillier, B. (1983), 'Space Syntax: A Different Urban Perspective', Architects Journal, vol. 178, no. 48, Nov. 30, pp. 47-63.
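To illustrate the idea of a vertical isovist concretely, the sketch below estimates the fraction of 3D sample points visible from an observer position by ray casting. It is purely illustrative and is not the HENN/Dresden tool; the geometry intersection test is left as an assumed callback that a real implementation would back with a spatial index over the building model.

def vertical_isovist_fraction(origin, targets, is_blocked):
    """Fraction of 3D sample points visible from an observer.
    origin: (x, y, z) tuple; targets: sequence of (x, y, z) points;
    is_blocked(p, q): True if the sight line from p to q hits geometry."""
    visible = sum(1 for t in targets if not is_blocked(origin, t))
    return visible / len(targets)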
{ "redpajama_set_name": "RedPajamaC4" }
7,338
/*****
Class: Pulley.View
Extends: Backbone.View
Notes: All view classes should extend this class. It adds additional methods that are needed throughout the app.
*****/
registerNamespace('Pulley.view');
Pulley.View = Backbone.View.extend({
    //Override Backbone.View vars
    tagName: 'div',
    className: 'view',
    //model: null, // Backbone.Model
    template: Handlebars.compile('<div class="view">Template goes here</div>'),
    //template: Handlebars.compile($("#view-template").html()),
    //el
    //$el
    //attributes
    defaults: {
        //id: null, // integer
    },
    /*events: {
        'click .openbutton': 'open'
    },*/
    //Module vars
    stateContainer: null, // The container that state views are inserted into. Defaults to el if undefined.
    setWidth: null, // Number
    setHeight: null, // Number
    _locks: [], // Array of GUID Numbers. Note: defined on the prototype, so it is shared by all instances until reassigned.
    previousState: null, // Pulley.view.State
    currentState: null, // Pulley.view.State
    states: [], // Array of Pulley.view.States
    initialize: function (options) {
        //Valid options: model, collection, el, id, className, tagName, attributes and events
        //Unnecessary to call super.
        this.type = this.__proto__.type;
        Pulley.Class.applyAttributesToObject(options, this);
        if(!this.children){
            this.children = {};
        }
        //Keep all active controllers in the window object, so we can find them if needed. They are automatically removed on destroy.
        //Use the method c('myFoo') to find controllers by id.
        this._createGlobalReference();
        //Mark the view as initialized, to prevent it from being auto-initialized again.
        this.initialized = true;
        //this.$el.attr('data-initialized','true');
        //this.$el.attr('data-cid', this.cid);
        this.render();
    },
    render: function () {
        //Pulley.View.prototype.render.apply(this, arguments); // Call super.
        if(!this.drawn){
            this.drawn = true;
            var html = this.template(this);
            this.setElement($(html)[0]); //Calls Pulley.View.setElement() to replace the reference and the element on stage.
            //this.$el.html(html); //Replace element's contents
            Pulley.View.autoinitializeViews(this.el); // Auto-initialize children.
            //var fooEl = this.$('.foo')[0];
            //this.children.foo = window._views[fooEl.id];
        }
        return this; //Enable chained calls.
    },
    setElement: function(el) {
        if(this.$el){
            this.$el.replaceWith(el); //Replace element on the stage first, since Backbone doesn't do this.
        }
        Backbone.View.prototype.setElement.apply(this, arguments); //Then update our reference and event binding.
        return this;
    },
    //Define module methods
    setModel: function (model) {
        this.model = model;
        this.render();
        if(this.model instanceof Backbone.Model || this.model instanceof Backbone.Collection) {
            this.listenTo(this.model, 'change', this.render);
        }
    },
    _createGlobalReference: function () {
        if(!window._views){
            window._views = [];
            window.getView = window.c = Pulley.View.getView;
            window.addView = Pulley.View.addView;
        }
        window.addView(this);
    },
    _defineDeclarativeStates: function () {
        //You can automatically create states by defining them in HTML.
        //This works well if you simply want to switch between views, without using transitions or change anything else.
        //Declare a state like this:
        // Example: <div data-state-id='home' data-state-implementation-method='setState_stateId'></div>
        var _this = this; //Fix: _this was used below without being defined.
        var stateCandidates = _this.$('[data-state-id]').toArray();
        if(stateCandidates.length){//Quickly check if there are any HTML-defined states, before searching the whole tree in detail.
            var declarativeStates = findStatesInChild(_this.el);
            _this.states = _this.states.concat(declarativeStates);//Add these states, in addition to those defined in script.
        }
        function findStatesInChild(element){
            var states2 = [];
            for(var i=0; i<element.children.length; i++){
                var childElement = element.children[i];
                if(childElement.hasAttribute('data-state-id')){//If this element is a state, add it to the array.
                    var stateId = $(childElement).attr('data-state-id');//String
                    var stateLabel = $(childElement).attr('data-state-label');//String
                    var stateType = $(childElement).attr('data-state-type');//String (Enum)
                    var stateImplementationMethodName = $(childElement).attr('data-state-implementation-method');//String
                    if(stateImplementationMethodName){
                        var stateImplementationMethod = _this[stateImplementationMethodName];//Function
                        if(!stateImplementationMethod){
                            console.error('No function with this name found.');
                        }
                    }
                    var state = new Pulley.view.State({
                        id:stateId,
                        label:stateLabel,
                        type:stateType,
                        implementation:stateImplementationMethod,
                        view:childElement
                    });
                    //Replace the view with its View if it has one.
                    var childElementVC = c(childElement.id);
                    if(childElementVC){
                        state.view = childElementVC;
                    }
                    states2.push(state);
                }else if(!childElement.hasAttribute('data-initialized')){//Don't look for states inside of other Views.
                    var childStates = findStatesInChild(childElement);
                    states2 = states2.concat(childStates);//Add child states to the list of states.
                }
            }
            return states2;
        }
    },
    reset: function (resetModel) { //(Boolean):void
        //logMethod(this, 'reset', arguments);
        if(resetModel){
            this.destroy();
            this.init();
        }else{
            //this.setState(null, stateModel, false);
        }
    },
    focus: function () { //():void
        //logMethod(this, 'focus', arguments);
    },
    setSize: function (w, h) { //(int, int):void
        //logMethod(this, 'setSize', arguments);
        if(w >= 0){
            this.$el.width(w);
        }
        if(h >= 0){
            this.$el.height(h); //Fix: previously called width(h) for the height argument.
        }
        if(this.children){
            for(var i in this.children){
                var child = this.children[i];
                if(child){
                    if(child.setSize){
                        child.setSize();
                    }
                }
            }
        }
    },
    isLocked: function () {
        if(this._locks.length){
            return true;
        }else{
            return false;
        }
    },
    lock: function (id) {
        if(!id){
            console.error('You must provide a lock id.');//The GUID helps ensure that locks are programmed cleanly, and makes it easy to locate uncleaned locks.
        }
        this._locks.push(id);
        if(this._locks.length == 1){
            this.$el.addClass('locked');
        }
    },
    unlock: function (id) {
        if(this._locks.length <= 0){ //Fix: compare the array's length, not the array itself.
            console.error('The app is not locked. Make sure that each lock() and unlock() call has a pair.');
        }
        var foundAndRemoved = null;
        for(var i in this._locks){
            if(this._locks[i] == id){
                this._locks.splice(i, 1);//Remove that lock.
                foundAndRemoved = true;
            }
        }
        if(!foundAndRemoved){
            console.error('No lock with id '+id+' was found.');
        }
        if(this._locks.length == 0){
            this.$el.removeClass('locked');
        }
    },
    /***** STATE MANAGEMENT *****/
    setState: function (stateArg, stateModel, useTransition, onComplete) { //(State or Number, Boolean, Function):void
        /*
        The stateArg can be:
            a. an instance of Pulley.view.State.
            b. a State's id (String). i.e. 'dashboard'
            c. a path of multiple state id's. i.e. 'dashboard/executive'
        **********
        Use this method to manage the state of the Pulley.View. This is better than having custom goToSection/Part/Page methods.
        Instructions:
        1. Add the class 'view__state' to each element that you want to represent a state. Everything in this state will be automatically turned on and off. It also needs a unique class that matches the stateId.
            Example: <div class='view__state Foo__state1'></div>
        2. Define each state as an enumerable string constant, using the stateId.
            Example: Foo.STATE__STATE1 = 'state1';
        3. Define a dictionary of methods for each state, in the init() method.
            Example: this._stateMethods = {
                Foo.STATE__STATE1: setState_stateId
            }
        4. Implement these methods to set the view for this state. This method should show/hide children or siblings of the state (not the state itself), and implement transitions. While this could all be defined inside of an extension of the setState() method, it is better to break it up into more manageable pieces, and not worry about the method overrides.
            Example: this.setState_stateId = function(useTransition, reverse, onComplete){//(Boolean, Boolean, Function):void
                var _this = this;
                if(useTransition){
                    $(someChild).transition({opacity:1, duration:1000, complete:onComplete});
                }else{
                    $(someChild).css({display:'block', opacity:1});
                    onComplete();
                }
            }
        */
        var State = Pulley.view.State;
        var _this = this;
        logMethod(this, 'setState', arguments);
        var state;
        var subState;
        if(isPath(stateArg)){
            //If a path is provided, find the matching state, and create a substate string to pass on down.
            var pathParts = parsePath(stateArg);
            if(pathParts && pathParts.length){ //Fix: guard against an empty path before indexing into it.
                state = this.getState(pathParts[0]);
                if(pathParts.length > 1){
                    subState = pathParts.slice(1).join('/');
                }
            }
        }else if(stateArg){
            //Find the state if a number, string, or state are provided.
            state = this.getState(stateArg);
            if(!state){
                console.error('invalid state');
            }
        }else{
            //Use the default state if none specified.
            state = this.states[0];
        }
        if(!state){
            return;
        }
        var reverse = null;
        var stateIsChanging = (this.currentState != state);
        if(stateIsChanging){//If the state is changing...
            this.previousState = this.currentState;//Set the current state to the previous state.
            this.currentState = state;//Define the new state
            reverse = this._areStatesInReverseOrder(this.previousState, this.currentState);//Determine if it is moving to an earlier state.
        }
        //Hide the old state, and show the new state. Also set opacity to 0 on hide so it is ready for fade in.
        if(!useTransition){//If not using a transition, then this can handle the showing/hiding for you.
            //Hide the old state
            if(stateIsChanging){
                if(this.previousState){
                    var previousViewElement = (this.previousState.view instanceof Pulley.View)? this.previousState.view.$el[0] : this.previousState.view;
                    if(previousViewElement.parentElement){
                        previousViewElement.parentElement.removeChild(previousViewElement);
                    }
                }
            }
            //Create the view if it doesn't yet exist.
            if(!this.currentState.view){
                this.currentState.view = new this.currentState.viewClass();
            }
            //Set the state's model, if desired.
            if(stateModel && this.currentState.view.setModel){
                this.currentState.view.setModel(stateModel);
            }
            //Show the new state.
            var currentViewElement = null;
            if(this.currentState.view instanceof Pulley.View){
                //If it is a Pulley.View instance.
                currentViewElement = this.currentState.view.el;
            }else{
                //Otherwise, assume it is an element.
                currentViewElement = this.currentState.view;
            }
            if(!this.stateContainer){
                this.stateContainer = this.el;
            }
            this.stateContainer.appendChild(currentViewElement);
            //Set the substate of the new state. (Pass a null substate to reset to the default state.)
            if(this.currentState.view instanceof Pulley.View){
                this.currentState.view.setState(subState);
            }
            this.trigger(Pulley.View.STATE_CHANGED);//, {newState: _this.currentState, oldState:_this.previousState});//I am not passing through the states as data, because it may not have been the state of the current level. It may be substates that changed. These can still be referenced via the instance vars.
        }
        //Set the size of the currentState.
	if(this.currentState.view){//If the current state has a view,
		if(this.currentState.view.setSize){//and that view is a Pulley.View,
			window.requestAnimationFrame(function(){//set its size, since it is now on the stage.
				if(_this.currentState.view){
					if(_this.currentState.view.setSize){
						_this.currentState.view.setSize();
					}
				}
			});
		}
	}
	//Call the custom implementation method.
	if(this.currentState.implementation){
		//this.currentState.implementation(false, reverse, onComplete);//Calling the method this way causes scope to be lost.
		this.setSize();
		this.currentState.implementation.call(this, useTransition, reverse, done);//Use the "call" function to pass the right scope through as the first parameter. On the receiving end, the first parameter is omitted.
	}else{
		done();
	}
	function done(){
		_this.render();
		_this.setSize();
		//Focus the new view.
		if(_this.currentState.view){
			if(_this.currentState.view.focus){
				_this.currentState.view.focus();
			}
		}
		//Stop listening to the previous view.
		if(stateIsChanging){
			if(_this.previousState){
				if(_this.previousState.view){
					if(_this.previousState.view instanceof Pulley.View){
						_this.stopListening(_this.previousState.view, Pulley.View.STATE_CHANGED, _this._currentState__onStateChanged);
					}
				}
			}
		}
		//Listen to the new view.
		if(_this.currentState.view){
			if(_this.currentState.view instanceof Pulley.View){
				_this.listenTo(_this.currentState.view, Pulley.View.STATE_CHANGED, _this._currentState__onStateChanged);
			}
		}
		if(onComplete){
			onComplete();
		}
	}
},
_currentState__onStateChanged: function (event, data) {
	//var _this = event.data.scope;
	this.trigger(Pulley.View.STATE_CHANGED);//This effectively bubbles the event up.
},
getStateIndex: function (state) { // (State):int
	for(var i=0; i<this.states.length; i++){
		if(this.states[i] == state){
			return i;
		}
	}
	return -1;
},
getPreviousStateByOrder: function (excludeModalStates) { // Not to be confused with this.previousState.
	if(excludeModalStates == null){
		excludeModalStates = true;
	}
	var currentStateIndex = this.getStateIndex(this.currentState);
	for(var i = currentStateIndex-1; i >= 0; i--){
		var previousState = this.states[i];
		if(excludeModalStates){
			if(previousState.type != Pulley.view.State.TYPE__MODAL){
				return previousState;
			}
		}else{
			return previousState;
		}
	}
	return null;
},
getNextStateByOrder: function (excludeModalStates) {
	if(excludeModalStates == null){
		excludeModalStates = true;
	}
	var currentStateIndex = this.getStateIndex(this.currentState);
	for(var i = currentStateIndex+1; i<this.states.length; i++){
		var nextState = this.states[i];
		if(excludeModalStates){
			if(nextState.type != Pulley.view.State.TYPE__MODAL){
				return nextState;
			}
		}else{
			return nextState;
		}
	}
	return null;
},
getLastState: function (excludeModalStates) {
	if(excludeModalStates == null){
		excludeModalStates = true;
	}
	for(var i=this.states.length-1; i>-1; i--){//Go through the states backwards.
		var state = this.states[i];
		if(excludeModalStates){
			if(state.type != Pulley.view.State.TYPE__MODAL){
				return state;
			}
		}else{
			return state;
		}
	}
},
getState: function (stateArg) { // (String|Number|State):State
	var state;
	if(isString(stateArg)){
		for(var i in this.states){
			state = this.states[i];
			if(state.id == stateArg){
				return state;
			}
		}
	}else if(isNumber(stateArg)){
		state = this.states[stateArg];
		return state;
	}else if(stateArg instanceof Pulley.view.State){
		state = stateArg;
		return state;
	}
	var doError = false;
	if(doError){
		console.error('Invalid argument provided.');
	}else{
		var notFoundState = new Pulley.view.State({
			id: '404',
			label: 'Page Not Found',
			//viewClass: Leviathan.Lotan.View,
			view: $('<div>404: Page not found</div>')[0]
		});
		return notFoundState;
	}
},
_areStatesInReverseOrder: function(state1, state2){
	var value = null;
	var forceReverse = null;
	/*if(this.states.length > 1){ //This is commented out, because it is not clear if/when it is needed.
		var lastState = this.states[this.states.length-1];
		var secondLastState = this.states[this.states.length-2];
		forceReverse = (this.currentState == lastState && this.previousState != secondLastState);//If this is the last state, and the old state is not the second-last state, then it needs to play the transition in reverse.
	}*/
	if(forceReverse){
		value = true;//This is only used when moving in reverse from the next state back to the last state.
	}else{
		value = (this.getStateIndex(state1) > this.getStateIndex(state2));
	}
	return value;
},

/***** /STATE MANAGEMENT *****/

});
Pulley.View.type = Pulley.View.prototype.type = 'Pulley.View';

//STATIC VARS
Pulley.View.STATE_CHANGED = 'stateChanged';

//STATIC METHODS

//This goes through the HTML of a DOM element and initializes any views it declares.
//Views are declared in HTML like this: <div data-view="My.namespace.ClipVC">stuff</div>. The data-view attribute holds the namespace of the view class to instantiate.
Pulley.View.autoinitializeViews = function (containerElement) {
	//logMethod(this, '_initializeViews', arguments);
	var elementsToInitialize = _getUninitializedViewsWithinElement(containerElement);
	var views = [];
	for(var i in elementsToInitialize){
		var el = elementsToInitialize[i];
		var $el = $(el);
		var viewControllerName = $el.attr('data-view');
		var viewControllerClass = Pulley.View.getViewByNamespace(viewControllerName);
		if(viewControllerClass){
			var args = {
				el: el
			};
			var model = $el.attr('data-model');
			if(model) args.model = JSON.parse(model);
			var settings = $el.attr('data-settings');
			if(settings) args.settings = JSON.parse(settings);
			var view = new viewControllerClass(args);//Pass in the element, so the view has a reference to it.
			views.push(view);
		}
	}
	return views;
	function _getUninitializedViewsWithinElement(containerElement){
		var elementsWithControllers = $(containerElement).find('*[data-view]').toArray();
		var result = [];
		for(var i in elementsWithControllers){
			var el = elementsWithControllers[i];
			var elementIsAlreadyInitialized = ($(el).attr('data-initialized') == 'true');
			if(!elementIsAlreadyInitialized){
				result.push(el);
			}
		}
		return result;
	}
}
Pulley.View.getViewByNamespace = function(namespaceString){//String, example: 'Pulley.view.controls.NavBarVC'
	var namespacesArray = namespaceString.split('.');//Array, example: ['Pulley','view','controls','NavBarVC'].
	var parentNamespace = window;
	var controller = null;
	for(var i=0; i<namespacesArray.length; i++){
		var childNamespace = namespacesArray[i];
		if(i < namespacesArray.length - 1){
			parentNamespace = parentNamespace[childNamespace];
			if(!parentNamespace){
				throw new Error('Invalid namespace: '+namespaceString);
			}
		}else{
			controller = parentNamespace[childNamespace];
			break;
		}
	}
	return controller;
}
Pulley.View.getView = function (elementOrClientId) {
	/*for(var i in window._views){
		var viewController = window._views[i];
		if(viewController.cid == cid){
			return viewController;
		}
	}*/
	var elementClientId = null;
	if(elementOrClientId instanceof Element){
		var el = elementOrClientId;
		elementClientId = $(el).attr('data-cid');
	}else{
		elementClientId = elementOrClientId;
	}
	var view = window._views[elementClientId];
	return view;
}
Pulley.View.addView = function (view) {
	view.$el.attr('data-cid', view.cid);
	window._views[view.cid] = view;
}

/***** End Class: Pulley.View *****/


/*****
Class: State
Extends: Class
Notes: Represents one state (e.g. a page or modal) of a Pulley.View. Pulley.View instances manage a list of these via setState() and the getState methods.
*****/
Pulley.view.State = function (attributes, options) {
	this.id = null; // String referenced by JavaScript, and the URL hash.
	this.label = null; // String used in the app.
	this.type = null; // String of enumerable 'type' (see below).
	this.implementation = null; // Function that lays out the state.
	this.viewClass = null; // View Class
	this.view = null; // View Instance or DOM Element

	//Constructor
	for(var attr in attributes){
		var value = attributes[attr];
		this[attr] = value;
	}

	//Assign defaults.
	if(!this.id && !(this.id == 0)){
		console.error('You must provide an id to create a State.');
	}
	if(!this.type) this.type = Pulley.view.State.TYPE__PAGE;//'type' here is the state-type enum; the class identifier lives on the constructor and prototype below.
	if(!this.view && !this.viewClass){
		console.error('You must provide a view or viewClass to create a State.');
	}
};

//EXTEND
Pulley.view.State.type = Pulley.view.State.prototype.type = 'Pulley.view.State';

//STATIC VARS
Pulley.view.State.TYPE__PAGE = 'page';//Default
Pulley.view.State.TYPE__MODAL = 'modal';
Pulley.view.State.TYPE__TRANSITION = 'transition';

//STATIC METHODS
//Pulley.view.State.staticMethod = function () {}

/***** End Class: Pulley.view.State *****/
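/*****
Usage sketch
Notes: The block below is an illustrative, commented-out example of wiring these classes together. The markup, the 'My.app.DashboardVC' class, and the saveDashboard() function are hypothetical names used for demonstration only; they are not part of Pulley. It assumes the view's init() gathers its data-state-id children into this.states (as findStatesInChild() does above).
*****/
/*
1. Declare a view and its states in HTML. Each element with a data-view attribute is picked up by autoinitializeViews(), and each data-state-id child becomes a Pulley.view.State.
<div id="dashboard" data-view="My.app.DashboardVC">
	<div data-state-id="summary" data-state-label="Summary"></div>
	<div data-state-id="detail" data-state-label="Detail" data-state-type="modal"></div>
</div>

2. Bootstrap the page by initializing every uninitialized element that declares a data-view.
var views = Pulley.View.autoinitializeViews(document.body);
var dashboard = views[0];

3. Navigate by state id, by index, or by a path that drills into substates. (The path form requires the child view to define states of its own.)
dashboard.setState('summary');
dashboard.setState('detail', null, true, function(){//With a transition and a completion callback.
	console.log('Transition complete.');
});
dashboard.setState('detail/comments');//'detail' is resolved here; 'comments' is passed down to the child view's setState().

4. Lock the view while async work is in flight. Each lock() call needs a matching unlock() with the same id.
var lockId = 'save-operation';
dashboard.lock(lockId);
saveDashboard(function(){
	dashboard.unlock(lockId);
});
*/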
{ "redpajama_set_name": "RedPajamaGithub" }
2,025