Q: How to get the values of a form without an id. I'm trying to extract the values of a form in an HTML page using JavaScript.
This is the function I am currently using; it looks up the form by its id:
<!DOCTYPE html>
<html>
<body>
<form id="frm1">
<tr>
<td nowrap="nowrap">
<label>Number of Panels</label>
</td>
<td>
<input class="display" type="text" size="8" readonly="readonly" value="20"/>
</td>
</tr>
<tr> <br>
Last name: <input type="text" name="lname" value="Duck"><br><br>
</form>
<p>Click "Try it" to display the value of each element in the form.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
function myFunction() {
var x = document.getElementById("frm1");
var text = "";
var i;
for (i = 0; i < x.length; i++) {
text += x.elements[i].value + "<br>";
}
document.getElementById("demo").innerHTML = text;
}
</script>
</body>
</html>
After clicking the "Try it" button, the output will be:
20
Duck
My question is: the actual page doesn't have an id on the form, like so:
<!DOCTYPE html>
<html>
<body>
<form>
<tr>
<td nowrap="nowrap">
<label>Number of Panels</label>
</td>
<td>
<input class="display" type="text" size="8" readonly="readonly" value="20"/>
</td>
</tr>
<tr> <br>
Last name: <input type="text" name="lname" value="Duck"><br><br>
</form>
<p>Click "Try it" to display the value of each element in the form.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
function myFunction() {
var x = document.getElementById("frm1");
var text = "";
var i;
for (i = 0; i < x.length; i++) {
text += x.elements[i].value + "<br>";
}
document.getElementById("demo").innerHTML = text;
}
</script>
</body>
</html>
How can I get the value of this form that doesn't have an Id?
Cheers!
A: You can just do:
var x = document.forms[0];
For individual inputs, you can also use document.getElementsByTagName("input")[index].
A: In modern browsers (and IE8), you can use document.querySelector to use any CSS selector to find an element.
In your case, for instance, if the form has an action attribute (say, action="form_action.asp"), you could use
var x = document.querySelector('[action="form_action.asp"]');
...to look it up by the value of that attribute. If the form has no distinguishing attributes at all, document.querySelector('form') returns the first form in the document. The rest of your function doesn't change.
querySelector finds the first matching element. If you want a list of all matching elements, you can use querySelectorAll. In your case, for instance, if you want a list of the input elements inside the form, you could do this:
var elements = document.querySelectorAll('[action="form_action.asp"] input');
e.g.:
function myFunction() {
var elements = document.querySelectorAll('[action="form_action.asp"] input');
var text = "";
var i;
for (i = 0; i < elements.length; i++) {
text += elements[i].value + "<br>";
}
document.getElementById("demo").innerHTML = text;
}
\section{I\MakeLowercase{o}T Application Scenario}
\label{sec:iot_example_application}
In Figure~\ref{fig:iot-overview-berlin}, we present an integrated public transport system of Berlin as a representative IoT application scenario.
The components in this scenario are either stationary or mobile.
Vehicles (red and yellow boxes), i.e., taxis, buses, subways, and trains, move around the city and carry a set of sensors and a simple processing unit.
Each unit collects vehicle data (e.g., routing, maintenance information, and occupancy/usage) as well as data from the environment (e.g., traffic, road conditions, and weather).
The base stations, processing nodes, and dispatch station are stationary components.
Base stations (green triangles) are distributed across the city and consist of antennas, network routers, and compute and storage capacity.
Processing nodes (green circles) are distributed within the city to gather data from several base stations and apply more complex processing.
The centralized dispatch station represents the endpoint for all data and merges data from the fog and the cloud with stored and external data.
Users manage public transport through the dispatch station.
This IoT scenario requires a massively distributed system with continuous data producers as well as transient and permanent, distributed compute and storage capabilities.
The environment in this scenario differs fundamentally from current cloud-based data processing architectures.
In particular, vehicles move within the city and interact with multiple antennas, which transmit data to base stations.
Due to the dynamic nature, vehicles may encounter temporary connection losses or outages (red vehicle), e.g., when they are outside of transmission ranges.
Furthermore, all vehicles move at different speeds, on different roads/tracks, and are potentially equipped with different hardware.
User queries addressing only a subset of the vehicles do not require collecting all sensor data from all vehicles at every transmission interval.
This represents a major characteristic that is crucial for enabling large-scale IoT applications.
As a result, a fog requires continuous adaptation to a dynamic environment with respect to faults and changes in the availability, amount, type, capacity, and location of data and compute nodes.
Furthermore, on the sensor level, a system has to continuously adapt the sensor reads depending on a dynamic query workload.
Despite the distributed nature, it must be possible to manage the system through a centralized, global view and execute continuous as well as ad-hoc data analytics.
This includes the entire data analysis pipeline, from information extraction to integration and model building using machine learning, signal processing, and other advanced analytics.
From a user perspective, this system may assist the public transport dispatcher to schedule new vehicles or reroute vehicles in case of outages or increased passenger demand.
This results in a feedback loop that may change the physical fog architecture.
Furthermore, this architecture allows for enriching real-time data with external sources, e.g., air pollution measurements, event calendars, area crowdedness, or knowledge bases.
The characteristics of this application are representative of many IoT scenarios, including Industry 4.0, smart homes, smart grids, smart cities, and participatory sensing applications.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/new-berlin-example-compact.pdf}
\vspace{-0.6cm}
\caption{IoT application scenario.}
\vspace{-0.6cm}
\label{fig:iot-overview-berlin}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduced NebulaStream, a general-purpose, end-to-end data management system for the IoT.
We showed that current systems are not yet ready for the upcoming challenges of the IoT era.
We highlighted the system design of the NebulaStream platform and its design principles.
The goal of our envisioned design is to handle the heterogeneity, unreliability, and elasticity of a unified sensor-fog-cloud environment.
Furthermore, we revealed upcoming research challenges and outlined possible solutions.
Finally, we presented first results that motivate the need for a new system design for upcoming IoT applications.
With our \mbox{NebulaStream Platform}, we aim to enable emerging IoT applications in different domains.
\section{Introduction}
\label{sec:intro}
Over the last decade, the amount of produced data has reached unseen magnitudes.
Recently, the International Data Corporation~\cite{dataAge} estimated that by 2025 the global amount of data will reach 175ZB and that 30\% of these data will be gathered in real-time.
In particular, the number of IoT devices is expected to grow to as many as 20 billion connected devices by 2025~\cite{gartner}.
At the same time, devices such as embedded computers or mobile phones continuously increase their processing capabilities.
This trend enables the exploitation of their computing and communication capabilities, as they become objects of common use.
As a result, the IoT is one of the fastest emerging trends in the area of information and communication technology~\cite{IOTVison}.
\begin{figure}[t]
\includegraphics[scale=.2]{figures/throughput-latency.png}
\vspace{-5mm}
\caption{IoT application using a cloud-centric SPE.}
\label{fig:throughput-latency}
\vspace{-7mm}
\end{figure}
The explosion in the number of connected devices triggers the emergence of novel data-driven applications.
These applications require low latency, location awareness, widespread geographical distribution, and real-time data processing on potentially millions of distributed data sources.
To enable these applications, a data management system needs to leverage the capabilities of IoT devices.
However, today's data management systems are not yet ready for these applications as they embrace either the cloud or the fog computing paradigm.
Systems based on the cloud paradigm, e.g., Flink~\cite{Stratosphere}, Spark~\cite{zaharia2016apache}, and Kafka Streams~\cite{kstreams2018}, do not exploit the full capabilities of IoT devices.
To implement IoT applications, these systems require sensor data to be collected centrally in a data center prior to processing.
This centralized processing paradigm presents a bottleneck for upcoming IoT applications, which need to process data from millions of distributed sensors.
In Figure~\ref{fig:throughput-latency}, we showcase the impact of this bottleneck by executing an IoT application scenario using a cloud-based approach and reporting the average processing latency.
To this end, we scale the number of IoT data producers from 1 to 80.
Each producer generates data at a constant rate of 50K records/sec.
Producers send their data over a gateway to a Kafka cluster with five nodes.
Inside the same cloud environment, we set up a Flink cluster with eight nodes (cloud nodes are connected through a 1 Gbit Ethernet connection).
Our Flink job reads data from Kafka and executes a 10-second tumbling window aggregation that counts distinct events.
We let the experiment run for 10 minutes and measure the end-to-end processing latency following the methodology introduced by Karimov et al.~\cite{KarimovRKSHM18}.
Our experiment shows that latency increases as we increase the number of producers.
Our cloud-based IoT application scenario can sustain up to 20 producers with constant latency.
Beyond this point, our application saturates and latency increases gradually.
This effect intensifies for more IoT producers and results in a continuously increasing backlog within Kafka.
Overall, our experiment shows that a centralized cloud approach does not scale and that future IoT applications thus require a new system design.
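The benchmark job itself runs on Flink, but its core aggregation logic can be illustrated standalone. The following Java sketch (timestamps and keys are made up for illustration; this is not the Flink API) assigns each event-time record to a 10-second tumbling window and counts distinct keys per window:

```java
import java.util.*;

public class TumblingDistinctCount {
    static final long WINDOW_MS = 10_000; // 10 s tumbling windows, event time in ms

    // Maps an event timestamp to the start of its tumbling window.
    static long windowStart(long tsMs) {
        return tsMs - (tsMs % WINDOW_MS);
    }

    // Counts distinct keys per window, mimicking the benchmark's aggregation.
    static Map<Long, Integer> distinctPerWindow(List<Map.Entry<Long, String>> events) {
        Map<Long, Set<String>> windows = new TreeMap<>();
        for (Map.Entry<Long, String> e : events) {
            windows.computeIfAbsent(windowStart(e.getKey()), w -> new HashSet<>())
                   .add(e.getValue());
        }
        Map<Long, Integer> counts = new TreeMap<>();
        windows.forEach((w, keys) -> counts.put(w, keys.size()));
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<Long, String>> events = List.of(
            Map.entry(1_000L, "a"), Map.entry(2_000L, "a"),
            Map.entry(9_999L, "b"), Map.entry(10_000L, "a"));
        System.out.println(distinctPerWindow(events)); // {0=2, 10000=1}
    }
}
```

In the actual experiment, this per-window state lives inside Flink operators and grows with the number of producers, which is one reason latency degrades once the cluster saturates.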
In contrast, systems based on the fog computing paradigm, e.g., Frontier~\cite{Frontier} and CSA~\cite{Streaming_IOT_Survey}, exploit the processing capabilities of edge devices, i.e., devices that are physically closer to the data sources.
These devices apply data reduction techniques, e.g., pre-selection or pre-aggregation, to reduce data volume as early as possible in the processing pipeline, i.e., close to the sensor.
However, fog computing systems only scale within the fog and do not exploit the virtually unlimited resources of modern cloud infrastructures (e.g., Amazon Web Services or Microsoft Azure).
Data management systems for wireless sensor networks (WSNs), e.g., TinyDB~\cite{tinyDB}, exploit small battery-powered sensors to create a network of nodes to capture physical phenomena, such as earthquakes or volcanic eruptions.
These systems apply acquisitional query processing techniques to optimize the execution for battery lifetimes and deploy a small set of specialized queries to capture the physical phenomena.
However, WSN systems only scale within the sensor networks and do not exploit the resources of the attached cloud and fog environments.
In particular, they do not consider offloading computation to external nodes and do not provide general-purpose query execution capabilities.
Overall, there is no general-purpose, end-to-end data management system for a unified sensor-fog-cloud environment with functionality similar to production-ready systems such as Flink or Spark.
To enable future IoT applications, a data management system for the IoT has to combine the cloud, the fog, and the sensors in a single unified platform to leverage their individual advantages and enable cross-paradigm optimizations (e.g., fusing, splitting, or operator reordering).
From a system point of view, this unified environment imposes three unique characteristics that are not supported by state-of-the-art data management systems.
\textbf{Heterogeneity:} A unified environment consists of a highly heterogeneous hardware landscape.
The processing nodes range from low-end battery-powered sensors (e.g., Mica Motes) over system-on-a-chip devices (e.g., Raspberry Pis) to high-end rack-scale servers.
In particular, cloud infrastructures consist of homogeneous node setups, whereas the fog contains heterogeneous, low-end computing devices.
Furthermore, WSNs consist of highly specialized battery-powered sensors.
To exploit the individual capacities of each node, an IoT data management system has to take their individual capabilities into account, especially their resource restrictions.
However, current data management systems abstract from the underlying hardware with virtual machines and managed runtimes.
These abstractions hinder the exploitation of specialized instructions and processing units and prevent important optimizations.
\textbf{Unreliability:} A unified environment has to handle different levels of runtime dynamics.
The fog introduces a highly dynamic runtime environment with unreliable nodes that might change their geo-spatial position, i.e., resulting in many transient errors or changes in latency/throughput.
WSNs exacerbate this highly dynamic runtime even further by turning off sensors temporarily to save energy and allowing reads only according to a dedicated read schedule.
In contrast, a cloud infrastructure is a relatively stable environment where node failures are rare.
However, current approaches for load balancing, fault tolerance, and correctness concentrate on only one particular environment and thus miss important cross-paradigm optimization potential.
\textbf{Elasticity:} In a unified environment, data move from the sensors via intermediate nodes to the cloud, and finally to the consumer, e.g., a user device or another system.
The fog topology is commonly built as a tree-like network topology~\cite{bonomi2012fog, hong2013mobile} with several dataflow paths.
Data processing in the fog topology has to be network-aware because only nodes on the path from the sensors to the cloud can participate.
Furthermore, in a WSN, all sensors send their data to the next sensor in range until all data end up at the root of the network.
In contrast, in the cloud, every node has access to all data, e.g., via a distributed file system such as HDFS.
However, current approaches allow optimizations, scaling, and load balancing only within nodes of the same environment and thus miss important cross-paradigm optimization potential.
Overall, a unified environment introduces an unprecedented combination of characteristics, i.e., hardware heterogeneity, unreliable nodes, and changing network topologies.
This new set of characteristics enables new cross-paradigm optimizations, which are crucial to support upcoming IoT applications over millions of sensors.
In this paper, we propose \textit{NebulaStream} (NES), a novel data processing platform that addresses the above-mentioned heterogeneity, unreliability, and scalability challenges and enables effective and efficient data management for the IoT.
In particular, NES copes with these unique characteristics as follows.
First, NES copes with heterogeneity by maximizing \textit{sharing of results} and \textit{efficiency of computing} to significantly reduce the amount of data transferred and to exploit hardware capabilities efficiently.
Second, NES addresses unreliability by applying \textit{dynamic decisions} and \textit{incremental optimizations} during runtime to be as flexible as possible.
Third, NES enables elasticity by designing each node to react \textit{autonomously} to a wide range of situations during runtime.
With NES, we enable future IoT applications by unifying sensors, fog, and cloud in one general-purpose, end-to-end data management platform.
Our early experiments show that NES reduces the amount of data and sensor reads up to 90\%, increases node throughput and decreases energy consumption on low-end devices by up to two orders of magnitude, and processes queries with low latency even in the presence of many node failures.
The remainder of the paper is structured as follows.
We show a typical IoT application scenario in Section~\ref{sec:iot_example_application}.
In Section~\ref{sec:neb_overview}, we describe the NebulaStream platform, discuss its design principles, and provide initial performance results.
Finally, we survey related work in Section~\ref{sec:sota} and conclude in Section~\ref{sec:conclusion}.
\section{NebulaStream Platform}
\label{sec:neb_overview}
In this section, we present the NebulaStream (NES) platform.
First, we describe the common topology of IoT application scenarios and highlight its novelty (Section~\ref{sub:nes_topology}).
After that, we identify key design principles for an IoT data management system (Section~\ref{sub:design_principles}) and later describe how NES implements them (Section~\ref{sub:arch_overview}).
Finally, we discuss challenges for an IoT data management system and how NES addresses them (Section~\ref{subsec:nes_challenges}).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/fog-topology-compact.pdf}
\vspace{-0.6cm}
\caption{Multi-layer NES Topology.}
\vspace{-0.6cm}
\label{fig:overview}
\end{figure}
\subsection{NES Topology}
\label{sub:nes_topology}
In Figure~\ref{fig:overview}, we present a multi-layer NES Topology that is common in today's IoT infrastructures~\cite{bonomi2012fog}.
This figure presents the dataflow from the sensors to the cloud.
The basic assumptions in this topology are three-fold.
First, all data might reach the \textit{Cloud Layer}.
Second, devices on the path from the sensors to the cloud are able to apply processing.
Third, the Cloud Layer is able to apply remaining processing, i.e., representing a fall-back mechanism.
In contrast, all other nodes can only access data that are routed through them, and their storage and processing capabilities determine the operations they can apply.
The data are routed among the three layers as follows.
On the \textit{Sensor Layer}, millions of sensors produce data without processing them.
However, NES is able to schedule the sensor reads depending on the query, e.g., increasing read frequency or omitting reads.
Sensors provide two data access patterns: pull-based and push-based.
Each sensor is connected to at least one low-end node in the \textit{Fog Layer}, which is responsible for this sensor (so-called \textit{Entry Node}).
In the \textit{Fog Layer}, NES processes data as they flow from Entry Nodes to Exit Nodes. During processing, nodes may change their geo-spatial position.
The data transfer is orchestrated by \textit{Routing Nodes}, such as routers or switches.
The data processing capabilities on Routing Nodes are restricted and the provided functionality is highly vendor-dependent~\cite{DPI,lerner}.
In general, the storage and processing capabilities of nodes increase significantly in the NES Topology with each hop towards the Cloud Layer.
After leaving the Fog Layer through an \textit{Exit Node}, data enter the Cloud Layer.
The Cloud Layer provides virtually unlimited scaling of compute and storage.
In IoT application scenarios, this layer will perform the remaining computation and output the data to the user.
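The two sensor access patterns mentioned above, pull-based and push-based, could be sketched as a minimal sensor interface. This is an illustrative assumption, not the NES API: pull lets the engine request the latest reading on demand, while push lets the sensor deliver readings to a registered callback.

```java
import java.util.function.Consumer;

public class SensorAccess {
    // Pull-based: the engine asks the sensor for its current value.
    interface PullSensor { double read(); }

    // Push-based: the sensor delivers new values to a registered callback.
    interface PushSensor { void onReading(Consumer<Double> callback); }

    // A simple in-memory sensor supporting both patterns.
    static class DemoSensor implements PullSensor, PushSensor {
        private double value;
        private Consumer<Double> callback = v -> {};

        public double read() { return value; }                  // pull path
        public void onReading(Consumer<Double> cb) { callback = cb; }

        void newMeasurement(double v) {                         // hardware event
            value = v;
            callback.accept(v);                                 // push path
        }
    }
}
```

Under this sketch, on-demand scheduling would use the pull path for query-driven reads, while continuous streams would arrive via the push path.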
An alternative approach to this centralized design would allow each node in the fog to function as a potential sink.
Thus, users would submit their queries directly through their device and each device would represent an exit node in the topology.
In this decentralized design, each device will be responsible for answering the submitted user query.
This design naturally supports geo-spatial query processing as most users are potentially only interested in data produced nearby.
Exploring the design space of a centralized vs. a decentralized design is one major future challenge.
The NES Topology introduced in Figure~\ref{fig:overview} represents a fundamentally new and unique set of characteristics and requirements compared to common cloud infrastructures.
First, query processing and operator placement have to be network-aware.
The main query optimization goal is to find an efficient route through the Fog Layer that reduces data volumes as early as possible without violating any Service-Level Agreement (SLA) while fulfilling Quality of Service (QoS) constraints.
Second, the NES Topology is highly heterogeneous and many nodes have only limited processing capabilities.
In particular, nodes in the lower parts of the Fog Layer are restricted in storage and processing capabilities.
Furthermore, processing has to trade-off between energy consumption and performance.
Third, the Fog Layer is highly unreliable compared to the homogeneous and relatively stable Cloud Layer.
To support mobility and related aspects, the system has to take the characteristics of each individual environment into account.
Fourth, the volume and velocity of sensor data represent an external factor.
As a result, the entire system has to revolve around sensor data that are injected by the outside world.
With \mbox{NES}, we build a platform that creates a federation of sensors, fog, and cloud, which enables big data acquisition and analysis.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/software-architecture-compact.pdf}
\vspace{-0.4cm}
\caption{NES architecture overview.}
\vspace{-0.5cm}
\label{fig:sw_overview}
\end{figure}
\subsection{NES Design Principles}
\label{sub:design_principles}
NES is a platform for future IoT applications that copes with the unique set of characteristics of a unified environment.
For individual layers, different approaches were proposed over the last decades.
However, combining all of them into a single system is the major challenge that we address with NES.
To handle millions of sensors and thousands of queries, we base the system design of NES on the following design principles:
\begin{enumerate}[topsep=4pt, itemsep=-4pt, leftmargin=*]
\item \textbf{Dynamic Decisions:} NebulaStream never expects a static behavior or conditions in any component.
\item \textbf{Autonomous Processing:} NebulaStream equips compute nodes with all logic necessary to act as autonomously as possible.
\item \textbf{Incremental Optimizations:} NebulaStream optimizes a network of active queries in incremental steps rather than through traditional full re-optimization or batched changes.
\item \textbf{Maximize Sharing:} NebulaStream shares data and processing wherever possible, i.e., on windows (stream slicing), among queries (multi-query optimization), on sensor data (acquisitional query processing), and on operator level (code optimization).
\item \textbf{Maximize Efficiency:} NebulaStream applies hardware-tailored code generation to exploit the underlying hardware efficiently.
\item \textbf{SLA Centric Processing:} NebulaStream's primary goal is to match user-provided SLAs and QoS constraints with available resources.
\item \textbf{Ease of Use:} NebulaStream enables users to choose their preferred programming environments and models, without worrying about system-internals and performance implications.
\end{enumerate}
\subsection{NES Architecture}
\label{sub:arch_overview}
In Figure~\ref{fig:sw_overview}, we present the architecture of \mbox{NebulaStream}.
In general, we design NES with a centralized deployment process and a decentralized run-time re-optimization.
In particular, we envision a \textit{logically} centralized deployment process in which one central instance has control over the deployment.
However, this logically centralized instance can be distributed among multiple region coordinators to form a hierarchy of coordinators.
In the future, we envision moving towards a decentralized deployment process that enables every device to submit queries and receive results in a timely manner.
In the current design, users interact with NES through one of the provided APIs to send queries to the \textit{NES Coordinator}~\makecircled{1}.
Our current APIs allow specifying dataflow programs, similarly to the APIs of streaming systems like Flink, Spark, and Storm.
The NES Coordinator consists of several components that orchestrate query processing.
The \textit{NES Query Manager} is responsible for creating logical query plans from user requests~\makecircled{2}.
Additionally, this component maintains \textit{logical streams} that represent logical views over sensors, e.g., a logical stream \textit{cars} could combine sensor inputs from multiple cars into one consistent stream.
The \textit{NES Topology Manager} orchestrates the NES Topology, which consists of workers and sensors.
During startup, each device registers itself and provides information, such as resource capabilities and network topology information.
However, to reduce the complexity of optimization decisions, NES follows the idea of introducing \textit{zones} that aggregate a sub-tree or geo-spatial region of the topology into one node.
Thus, the optimizer treats a zone as one node which transparently abstracts from the dynamic behavior inside the zone.
As a result, a topology may consist of a hierarchy of zones, which simplifies the global optimization process.
The efficient assembly of zones is one future research challenge for NES.
The \textit{NES Optimizer} provides the assignment of a logical query plan (created by the NES Query Manager) to the current NES Topology plan~\makecircled{3} (maintained by the NES Topology Manager).
This assignment defines the \textit{NES Execution Plan (NES-EP)}.
The assignment process introduces a large optimization search space, e.g., operators can be assigned top-down, bottom-up, or by other assignment strategies.
The \textit{NES Deployment Manager} takes the NES-EP~\makecircled{4}, disassembles it into Node Execution Plans (Node-EPs), deploys them to the nodes in the NES Topology, i.e., into either the Fog or the Cloud Layer, and sets up the sensors~\makecircled{5}.
This deployment is performed incrementally and requires rerouting data on different dataflow paths.
Note that this deployment process has to handle a gap between optimization and deployment time.
That is, optimization is based on a snapshot of the topology, while deployment has to take the then-current topology into account.
Therefore, the deployment process in this highly dynamic execution environment introduces many interesting research challenges, such as the partial deployment of plans and the partial re-optimization of sub-plans.
The \textit{NES Monitor} constantly collects feedback from the NES Topology~\makecircled{6} and maintains statistics and current resource utilization for the NES Topology Manager~\makecircled{7}.
To improve operator placement, the NES Optimizer requests these statistics and current resource utilization from the \textit{NES Monitor}~\makecircled{7}.
However, maintaining a centralized, coherent view over a large and highly dynamic topology is a major research challenge.
First, the NES Optimizer has to be aware that the topology data are potentially outdated and thus has to optimize accordingly, e.g., by providing a set of alternative plans.
Second, the collection of monitoring data and the maintenance of statistics have to take the current system load into account and thus must be prioritized lower than data transfers answering user queries.
Third, we envision a decentralized run-time re-optimization process that is triggered by the nodes themselves.
To this end, NES nodes first attempt to address a change locally, then communicate with their neighboring nodes, and finally request support from a central coordinator.
In Figure~\ref{fig:node_overview}, we show the components of the node engine, which is deployed on all devices of the NES Topology.
The \textit{NES Node Engine} is responsible for communicating with the NES Coordinator, accepting Node-EPs and control messages, as well as setting up the input sources, output sinks, and other components.
The incoming queries are Node-EPs, which contain a partial subtree of the overall NES-EP.
The Node-EP is compiled by the local query compiler and later injected into the processing tasks.
As input, the NES Node Engine receives data from the network, e.g., from another node, or directly from an attached sensor.
As output, the NES Node Engine either sends data over the network or triggers an action on an attached device, e.g., controlling an actuator such as a light switch.
The \textit{Execution Engine} orchestrates the processing inside each NES Node Engine.
The central unit of work is one task that combines $n$ input buffers, $m$ output buffers, and the execution of the specified operators~\cite{QTM}.
The processing in NES is \textit{source-driven} and applies the following sequence of steps on each incoming buffer.
First, the engine assembles the tasks by embedding the executable and allocates all required input, intermediate, and output buffers.
After that, the engine enqueues the tasks in one of the processing queues.
Finally, each thread in the \textit{Thread Pool} dequeues one task, processes it, and either enqueues the result buffer into an output queue or triggers an action.
This highly dynamic design enables high resource utilization but also introduces a dynamic execution order, which poses new challenges for the system design.
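This source-driven task model can be sketched roughly as follows. The sketch is a hypothetical simplification (buffers are plain integer arrays and the operator is a placeholder function, unlike NES's compiled operators): tasks couple input buffers with an operator, worker threads dequeue tasks from a shared queue, and result buffers go to an output queue.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

public class NodeEngineSketch {
    // A task couples input buffers with the operator to apply to them.
    record Task(List<int[]> inputs, Function<int[], int[]> operator) {}

    private final BlockingQueue<Task> taskQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<int[]> outputQueue = new LinkedBlockingQueue<>();
    private final ExecutorService pool;

    NodeEngineSketch(int threads) {
        pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        Task t = taskQueue.take();            // dequeue one task
                        for (int[] buf : t.inputs())          // process each input buffer
                            outputQueue.put(t.operator().apply(buf));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();       // shutdown signal
                }
            });
        }
    }

    void submit(Task t) { taskQueue.add(t); }
    int[] takeResult() throws InterruptedException { return outputQueue.take(); }
    void shutdown() { pool.shutdownNow(); }
}
```

Because any idle thread may pick up the next task, completion order is not guaranteed, which illustrates the dynamic execution order mentioned above.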
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/node-architecture-compact.pdf}
\vspace{-0.5cm}
\caption{NES Node Engine.}
\vspace{-0.5cm}
\label{fig:node_overview}
\end{figure}
In addition to processing components, each NES Node Engine contains dedicated components for local and neighboring optimizations, windows, routing, sensors, state, and run-time re-optimization.
As a result, we drastically reduce the complexity of the query compiler and increase maintainability and separation of concerns in NES.
In particular, NES compiles only the \textit{hot} code fragments and links other functionalities as pre-compiled components (following Neumann et al.~\cite{neumann2011efficiently}).
Overall, it is a design decision in NES to equip the NES Node Engine with all necessary components to enable it to be as autonomous as possible.
In particular, we give each node all means to make as many decisions as possible in a decentralized and independent manner.
This design follows the Borealis design~\cite{borealis} and handles transient changes locally and permanent changes globally.
In NES, we envision a system design with autonomous nodes and a simple coordinator to mitigate potential bottlenecks in large scale environments.
\subsection{NES Solutions for IoT Challenges}
\label{subsec:nes_challenges}
Based on the unique characteristics highlighted in Section~\ref{sec:intro} and the IoT application scenario presented in Section~\ref{sec:iot_example_application}, we outline five main challenges for an IoT data management system. In the following, we discuss these challenges and propose our solutions.
\subsubsection{C1 - Heterogeneity, Distribution, and Volume of Data At-Rest and Data In-Motion}
NebulaStream's goal is to scale to thousands of queries and millions of sensors.
In the IoT, data are generated by many distributed sources such as sensors or streams of other systems.
A particular challenge originates from handling the sheer number of diverse data sources, potentially in the millions.
These sources differ in their characteristics, ranging from millions of small sensor streams to a few large streams from sources such as click-streams or auctions.
The accessibility of sources under security and privacy constraints, as well as efficient access paths, requires solutions completely different from what today's big data processing systems provide.
For example, an IoT infrastructure enables new solutions for security and privacy as it allows local pre-processing of data next to the generation, e.g., inside a house or building.
This enables a scenario where only authorized or anonymized data are sent to the central cloud.
As a result, we can enable users to have full control of their own data.
Overall, these characteristics imply research questions with respect to scalability, efficiency, integration, security, privacy, and interoperability.
To support this extreme diversity in NES, we follow the \textit{Maximize Sharing} design principle (Section~\ref{sub:design_principles}) and apply data sharing techniques on three different levels.
First, on the query level, NES exploits data sharing among multiple streaming queries as proposed by Karimov et al.~\cite{AStream}.
Second, on the operator level, NES slices data streams and exploits data sharing on stream aggregations as proposed by Traub et al.~\cite{GeneralStreamSlicing}.
Third, on the sensor level, NES applies \textit{Acquisitional Query Processing} (ACQP)~\cite{tinyDB} and \textit{On-Demand Scheduling} of sensor reads and data transmissions~\cite{OnDemandDataAcc}. These techniques limit data acquisition to data points which are required for answering user queries.
By combining the introduced techniques in NES, we attempt to drastically reduce the amount of acquired, transferred, and processed data; thus, enabling IoT applications with thousands of queries over millions of sensors.
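The operator-level sharing can be illustrated with a minimal Python sketch of stream slicing (the class and its interface are illustrative, not the NES implementation): tuples are pre-aggregated into non-overlapping slices, and concurrent windows over the same stream are answered by combining slices instead of re-aggregating every tuple.

```python
from collections import defaultdict

class SliceAggregator:
    """Minimal sketch of shared slicing for tumbling-window sums.

    Tuples are pre-aggregated into non-overlapping slices; queries with
    different window sizes (multiples of the slice size) share the slices
    instead of aggregating every tuple per window.
    """

    def __init__(self, slice_size_ms):
        self.slice_size = slice_size_ms
        self.slices = defaultdict(int)  # slice index -> partial sum

    def insert(self, ts_ms, value):
        # Each tuple is touched once, for its slice only.
        self.slices[ts_ms // self.slice_size] += value

    def window_sum(self, window_start_ms, window_size_ms):
        # A window result is composed from its pre-aggregated slices.
        first = window_start_ms // self.slice_size
        count = window_size_ms // self.slice_size
        return sum(self.slices[first + i] for i in range(count))
```

For example, a 2-second and a 3-second tumbling window over the same stream both reuse the same 1-second slices.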
\begin{figure}[t]%
\centering%
\includegraphics[scale=.21]{figures/sense_example.png}%
\vspace{-0.2cm}
\caption{NES data reduction on the sensor level.}%
\vspace{-0.5cm}
\label{fig:sense_example}%
\end{figure}
Figure~\ref{fig:sense_example} presents an initial experiment that demonstrates the potential savings of data reduction techniques in NES on the sensor level.
We use the New York taxis data set~\cite{nyt}, derive routes for each taxi trip, and replay the routes of all taxis on Raspberry Pis, which represent sensor nodes located in taxis.
As a baseline, we use a common IoT setup where sensor nodes stream current values to a central SPE in the cloud, without any knowledge about the executed queries.
In contrast to this cloud-centric IoT setup, NES combines cloud and fog nodes as well as sensor nodes in taxis in one system to allow for holistic optimizations.
We show three example queries in an SQL-like notation.
The queries include an outlier detection (Query~\ref{lst:q1}), an airport attendance monitoring (Query~\ref{lst:q2}), and a top three query for the longest ongoing trips (Query~\ref{lst:q3}).\\
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{HTML}{C42043}
\definecolor{backcolour}{HTML}{F2F2F2}
\definecolor{bookColor}{cmyk}{0,0,0,0.90}
\color{bookColor}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{codepurple},
numberstyle={},
stringstyle=\color{codepurple},
basicstyle=\footnotesize\ttfamily,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=2pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
}
\lstset{style=mystyle}
\lstset{emph={%
AHEADLIMIT, DELAYLIMIT%
},emphstyle={\color{codepurple}\bfseries}%
}%
\renewcommand{\lstlistingname}{Query}
\begin{lstlisting}[
language=sql,
numbers=none,
caption={Journeys leaving the New York area and journeys without passengers. Checked every 2 seconds.},
label=lst:q1,
captionpos=b]
SELECT ts, medallion, trip_id, latitude, longitude, distance, passenger_count
FROM stream(taxis, 2000)
WHERE journey_flag=TRUE &&
(latitude<40.249448 || latitude>41.381560 || longitude<-74.820611 || longitude>-71.848319 || distance=0 || passenger_count=0); -- NY area
\end{lstlisting}
\begin{lstlisting}[
language=sql,
numbers=none,
caption={Returning the number of passengers in the airport zone. Updated every 5 seconds.},
label=lst:q2,
captionpos=b]
SELECT ts, sum(passenger_count)
FROM stream(taxis, 5000)
WHERE (40.536532<latitude AND latitude<40.745906) && (-73.946390<longitude AND longitude<-73.609759) --airport
GROUP BY ts AHEADLIMIT 100 DELAYLIMIT 100;
\end{lstlisting}
\begin{lstlisting}[
language=sql,
numbers=none,
caption={Returning the top three longest ongoing trips. Updated every second.},
label=lst:q3,
captionpos=b]
SELECT ts, latitude, longitude, trip_distance
FROM stream(taxis, 1000)
WHERE journey_flag = TRUE
ORDER BY trip_distance DESC LIMIT 3;
\end{lstlisting}
\color{black}
We modify the data acquisition process for all three queries such that only required data are sampled and transmitted.
In particular, we can interleave data gathering operations (i.e., sensor reads) with data processing (e.g., filters)~\cite{tinyDB}.
Theoretically, the system has to read all sensors specified in the \textit{select} clause at the frequency specified in the \textit{from} clause.
However, the filter predicates in the \textit{where} clause allow for preventing sensor reads and data transmissions for tuples that are filtered out.
For instance, in Query~\ref{lst:q1} and Query~\ref{lst:q3}, we first check the journey flag.
If the value is \texttt{false}, we do not read any other sensor.
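This acquisitional strategy can be sketched as follows; `read_sensor` and the attribute names are hypothetical stand-ins for the node's physical sensor interface, not an API of NES:

```python
def acquire(read_sensor, guard=("journey_flag",), rest=("latitude", "longitude")):
    """Sketch of ACQP-style gated sensor reads (assumed interface).

    `read_sensor(name)` performs one physical sensor read. The cheap
    guard predicate is evaluated first; the remaining sensors are only
    read when the tuple can still pass the filter.
    """
    tuple_ = {name: read_sensor(name) for name in guard}
    if not tuple_["journey_flag"]:
        return None  # tuple is filtered out; skip all remaining reads
    tuple_.update({name: read_sensor(name) for name in rest})
    return tuple_
```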
Another important optimization is to adjust sampling \linebreak rates continuously and to prevent data transmissions based on the observed sensor values~\cite{OnDemandDataAcc}.
For example, in Query~\ref{lst:q1} and Query~\ref{lst:q2}, we can use the current position of the taxi to calculate the earliest time when the taxi could leave New York or enter the airport area.
Thus, we know upfront that no tuple will pass the filter for that time span and do not have to read or evaluate sensor values for that time.
In addition, in Query~\ref{lst:q2}, we specify a tolerance for sensor read times (\textit{ahead} and \textit{delay} limit), which saves data transmissions when multiple queries request values from the same sensor.
We apply user-defined sampling functions to adjust sampling rates continuously, apply read time tolerances, and schedule sensor reads, respectively~\cite{OnDemandDataAcc}.
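The geo-based read suppression used for Query 1 and Query 2 can be sketched as follows (the function names and the assumed maximum speed are illustrative): from the taxi's current position, the earliest possible arrival time at the filter region bounds how long sensor reads and transmissions can be skipped.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suppressed_read_interval(taxi_lat, taxi_lon, zone_lat, zone_lon,
                             max_speed_kmh=120.0):
    """Earliest time (seconds) until the taxi could reach the zone center.

    Until then, no tuple can pass the geo filter, so sensor reads and
    transmissions for this predicate can be skipped.
    """
    distance = haversine_km(taxi_lat, taxi_lon, zone_lat, zone_lon)
    return distance / max_speed_kmh * 3600.0
```

A taxi in Manhattan, roughly 20 km from the airport zone, could thus skip reads for on the order of ten minutes under this bound.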
In Figure~\ref{fig:sense_example}, we show that the saved traffic between the fog and the cloud layer is significant for all queries using these optimizations.
\subsubsection{C2 - Heterogeneity, Distribution, and Volume of Compute}
NebulaStream's goal is to exploit the hardware resources of millions of heterogeneous devices efficiently.
A particular challenge originates from the potentially millions of compute devices that are found in a fog topology.
These devices have a diverse set of capabilities, with respect to storage, processing, and interconnect.
The devices range from small battery-powered sensors with no compute capabilities (beyond simple filtering) and an unreliable temporary connection to large compute clusters with huge storage, InfiniBand interconnects, and thousands of compute cores.
These characteristics imply challenges with respect to security, permission management, and efficient and effective resource utilization.
\begin{figure}[t]
\centering
\begin{subfigure}[c]{0.4\linewidth}
\centering
\includegraphics[scale=.21]{figures/compiler_evaluation.png}%
\subcaption{Throughput.}
\label{fig:pi_eval_throughput}%
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.5\linewidth}
\centering
\includegraphics[scale=.18]{figures/compiler_energy_evaluation.png}%
\vspace{-1mm}
\subcaption{Energy Efficiency.}
\label{fig:pi_eval_energy}%
\end{subfigure}
\vspace{-0.3cm}
\caption{YSB on RaspberryPi 3B+.}%
\vspace{-0.3cm}
\label{fig:pi_eval}%
\end{figure}
To support this heterogeneity in NES, we follow the \textit{Maximize Efficiency} design principle (Section~\ref{sub:design_principles}).
In particular, we apply two techniques.
First, we use query compilation, the leading paradigm for achieving high resource utilization in data-at-rest processing \cite{neumann2011efficiently}.
In NES, we transfer this approach to the special semantics of fog and stream processing.
In particular, NES generates specialized code depending on the actual query, hardware, and data characteristics~\cite{zeuch2019analyzing}.
Furthermore, NES distributes query optimization and code generation between the central coordinator and the local node engine.
On the coordinator, NES performs global query optimizations (e.g., operator reorder) and splits the query into segments for individual devices.
On the node engine, the query compiler produces hardware-tailored code to exploit the available capabilities most efficiently.
Our experiment in Figure~\ref{fig:pi_eval} evaluates the throughput and energy efficiency of the Yahoo Streaming Benchmark (YSB) on a RaspberryPi 3B+ using Python, Flink, a hand-opti\-mized Java program, and NES, respectively.
The YSB simulates a real-word stream processing task and consists of a filter and a windowed aggregation~\cite{yahooB}.
We implement the YSB with a one second tumbling window and 10000 campaigns based on the codebase provided by Grier et al.~\cite{grier2016extending}.
Our results show that hardware-tailored code generation is essential to efficiently utilize resources, especially for low-end devices.
In Figure~\ref{fig:pi_eval_throughput}, we present the maximal throughput of the four different YSB implementations.
NES outperforms all other systems by at least 10x and is the only system that is able to reach a throughput of more than 10 million tuples per second.
All other systems suffer from the high overhead of the underlying managed runtime.
This overhead is significant on low-end devices like the RaspberryPi.
Furthermore, through code generation, NES reduces the energy consumption per device and thus requires less energy to achieve the same performance.
In Figure~\ref{fig:pi_eval_energy}, we evaluate the energy efficiency of the four different YSB implementations.
To this end, we define energy efficiency as the required energy in millijoules per processed record.
Our results show that NES requires around 0.0003 millijoules per tuple, which is an 80x improvement compared to the Python implementation.
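The metric itself is simple arithmetic; a small helper clarifies the unit conversion (the input numbers in the example are illustrative, not measurements from the experiment):

```python
def energy_per_record_mj(avg_power_watt, duration_s, records_processed):
    """Energy efficiency as millijoules per processed record.

    power [W] * time [s] = energy [J]; scale to mJ and divide by the
    number of records processed in that time span.
    """
    return avg_power_watt * duration_s * 1000.0 / records_processed
```

For instance, a device drawing 3 W for 10 s while processing 100 million tuples yields 0.0003 mJ per tuple.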
In the future, we will further investigate the trade-off between energy consumption and performance as one major research question for NES.
Especially for battery-powered sensors, code generation enables a higher operation time and thus reduced maintenance and replacement costs.
\begin{figure}[t]
\begin{center}%
\includegraphics[scale=.2]{figures/performance-figure.png}
\vspace{-2.5mm}
\caption{Gathering coherent snapshots from sensor nodes.}%
\vspace{-9mm}
\label{fig:scalability_example}%
\end{center}%
\end{figure}
As a second technique, we utilize in-network processing inside the Fog Layer to reduce the computation required at the Cloud Layer.
In Figure~\ref{fig:scalability_example}, we present an example query that gathers values from up to 1000 nodes and joins them to coherent snapshots.
A snapshot is coherent if all sensor values contained in the snapshot have been read at the same time.
In practice, snapshots are often incoherent, because the times of sensor reads are not perfectly aligned among all distributed nodes.
In addition, clock deviations among sensor nodes lead to undetected incoherence, which potentially causes application failures such as false correlations.
We use the techniques which were introduced in the \linebreak SENSE System~\cite{traub2019sense} to ensure scalability and to mitigate incoherence.
SENSE arranges sensor nodes in data gathering pipe\-lines, which join tuples incrementally (decentralized join) and ensure coherence.
In contrast to a centralized join, the Cloud Layer only joins the results of the pipelines instead of all individual sensor measures.
This prevents a central bottleneck at the Cloud Layer and ensures high throughput when gathering values from a large number of sensors.
As shown in Figure~\ref{fig:scalability_example}, a centralized join causes a drastic throughput decay when the number of nodes increases.
In contrast, by utilizing the available computing resources on the path from the sensors to the Cloud Layer, NES achieves almost constant throughput and addresses coherence issues.
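The decentralized gathering can be sketched in a few lines; the per-hop merge below is a strong simplification of the SENSE pipelines (no coherence checks, illustrative data model):

```python
def pipeline_join(pipelines):
    """Sketch of decentralized snapshot gathering (SENSE-style, assumed).

    Each pipeline merges its sensors' readings hop by hop into one
    partial snapshot; the cloud layer then only joins one pre-joined
    tuple per pipeline instead of every individual sensor measurement.
    """
    snapshot = {}
    for pipeline in pipelines:
        partial = {}
        for sensor_id, value in pipeline:  # incremental, per-hop merge
            partial[sensor_id] = value
        snapshot.update(partial)  # cheap final join over pipeline results
    return snapshot
```

With $p$ pipelines of $n$ sensors each, the cloud joins $p$ tuples instead of $p \cdot n$ individual measurements.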
By applying hardware-tailored code generation and in-network processing, NES exploits the available compute resources most efficiently and allows for balancing computational demands and energy consumption.
\subsubsection{C3 - Spontaneous, Potentially Unreliable \\ Connectivity between Data and Compute}
NebulaStream's goal is to detect and compensate potentially unreliable nodes in the Fog and Sensor Layer without impacting consistency and availability.
A particular challenge originates from the need to manage data and compute together, as most applications will consist of ad-hoc or standing streaming queries.
Furthermore, some compute units may be connected via Wi-Fi, mobile, or satellite networks with intermittent connectivity and unreliable connections.
In contrast to a homogeneous and relatively stable cloud environment, a heterogeneous and volatile fog environment has to handle frequent transient failures.
Furthermore, WSNs are even more prone to transient failures due to their battery-powered low-end devices and vulnerable radio transmission.
Failures in the fog and in WSNs occur due to numerous reasons, most notably hardware errors, software errors, congestion that results in back-pressure (straggler nodes), inadequate resource allocation, and transient connection loss.
Furthermore, devices continuously refresh their connections while moving and create ad-hoc connections that result in an unpredictable communication pattern \cite{IOTVison}.
This requires special solutions to deal with the intermittent availability of resources, both with respect to data and code management.
The resulting challenges require changes in areas such as adaptivity, synchronization across devices, consistency, transaction management, recovery, and fault-tolerance.
\begin{figure}[t]%
\centering
\includegraphics[scale=.2]{figures/figure_ft.png}
\vspace{-3mm}
\caption{Evaluation of fault tolerance mechanisms.}
\vspace{-7.5mm}
\label{fig:failure_example}%
\end{figure}
Common cloud-centric SPEs handle node failures using a stop-the-world recovery protocol~\cite{carbone2017flink,DBLP:reference/db/BalazinskaHS18}.
When an error occurs, the system stops the entire processing and redeploys a new query plan.
In contrast, NES adopts a fine-grained recovery protocol, i.e., NES restarts only the operator instances involved in a failure.
To assess the performance of both protocols, we implement them in Flink and run the comparison on a simulated IoT environment.
This environment comprises 8 servers, each equipped with Intel Xeon E5620 CPUs, 32 GB of RAM, and a 1 Gbit/s network.
In Figure~\ref{fig:failure_example}, we show the end-to-end processing latency of both protocols while randomly terminating compute nodes (indicated by the black vertical lines).
As shown, the stop-the-world protocol cannot recover from high transient error rates as the latency constantly increases.
In contrast, the fine-grained recovery protocol restarts failed operators without halting the entire query.
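The difference between the two protocols can be sketched with a toy model (operator identifiers and the state encoding are hypothetical, not NES's internal representation):

```python
def recover(plan, failed_ops, fine_grained=True):
    """Toy contrast of stop-the-world vs. fine-grained recovery.

    `plan` maps operator id -> state ('running' | 'failed').
    Stop-the-world restarts every operator in the plan; fine-grained
    recovery restarts only the failed instances and leaves healthy
    operators untouched.
    """
    restarted = []
    for op in plan:
        if not fine_grained or op in failed_ops:
            restarted.append(op)
            plan[op] = "running"
    return restarted
```

Under fine-grained recovery, the healthy operators keep processing, which avoids the latency spikes of a full redeployment.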
To achieve reliability in an unreliable environment, we apply the \textit{Dynamic Decisions}, \textit{Autonomous Processing} and \textit{Incremental Optimizations} design principles in NES.
Because a central component cannot keep up with the pace of failures in a dynamic environment, we apply a diverse set of techniques in NES.
On each layer of the NES Topology, we apply different failure recovery approaches; thus, providing different guarantees.
On the Sensor Layer, NES substitutes missing sensor values from broken sensors with nearby sensors, if applicable, or buffers the values during transient connection loss \cite{tinyDB}.
On the Fog Layer, NES extends the Frontier approach~\cite{Frontier}, which sends data through multiple network paths to achieve fault-tolerance.
Furthermore, data are buffered by upstream operators and replayed in case of an error.
On the Cloud Layer, NES extends existing fault-tolerance approaches, e.g., global checkpointing and message broker with fine-grained operator reconfiguration~\cite{VenturaPhDWork}.
By extending and combining existing approaches on different levels of the NES Topology into a unified fault-tolerant solution, we attempt to handle spontaneous, potentially unreliable connectivity of IoT infrastructures.
\subsubsection{C4 - Diversity in Programming and \\ Management Environments}
NebulaStream's goal is to support a diverse set of data processing workloads specified in different query languages and following different processing models (e.g., relational, linear, or graph algebra).
A particular challenge originates from IoT applications that require a combination of different data-oriented programming paradigms.
Possible workloads range over the entire data management pipeline, from information extraction over information integration to model building and inference.
In particular, running AI/ML/Data Science algorithms in the fog enables direct feedback loops between the digital and the physical world.
These workloads include potentially iterative algorithms mixing relational, linear, and graph algebra, and may run on top of continuous data streams or finite data sets.
This diversity presents challenges with respect to 1) holistic, optimizable, intermediate representations, 2) efficient and scalable physical operators across all paradigms that can be mixed and matched, and 3) the combination of domain-specific and generic query languages that offers a sufficiently powerful yet optimizable interface to a data engineer.
Furthermore, the programming and reasoning about sensors and actuators in such a distributed, diverse setting entails a huge challenge with respect to both scalability and ease of use.
To support diverse workloads in NES and create a large community with diverse users from different fields, we envision an \textit{easy-to-use} interface.
In particular, we attempt to allow users to choose their preferred programming environments and models without the need to take system-internals and performance implications into account.
To enable this diversity, we build on top of existing frameworks, such as Weld~\cite{palkar2017weld}, Arc~\cite{kroll2019arc}, Emma~\cite{Alexandrov2016}, and LARA~\cite{kunft2019optend} to represent diverse queries in a unified intermediate representation, our so-called \textit{Nebular-IR}.
The Nebular-IR allows us to perform optimizations across operators, processing models, and language boundaries.
The optimizations range from high-level optimizations on the operator plan level (e.g., placement, ordering, fusion \cite{hirzel2014acatalog}) to low-level optimizations on the instruction level (e.g., branch conversion across operators).
One particular challenge for the Nebular-IR is to handle and optimize UDFs.
In particular, most data processing systems treat UDFs as black boxes and thus provide only basic optimizations to plans containing them.
However, in NES we first analyze UDFs to perform high-level optimizations on the IR (e.g., operator reordering~\cite{hueske2012opening}).
After that, we fuse operators across UDF-boundaries and generate compact machine code.
This allows NES to achieve high code efficiency among different UDFs.
From a management point of view, centrally managing the system in a heterogeneous distributed setup introduces challenges from areas such as data collection, response time, and fault-tolerance.
To this end, NES provides a management view with a centralized, homogeneous interface, automatic distribution and parallelization, and means to adaptively detect and react to changes in the environment.
Although the management is performed centrally, parts of the system require a decentralized design.
By providing a central management view as well as an intermediate representation in NES, we support a diverse set of data processing workloads specified in different query languages and following different processing models.
\subsubsection{C5 - Constant Evolution under \\ Continuous Operation}
\label{sub:c5_constant_evolution_under_continuous_operation}
NebulaStream's goal is to support continuous operations while the topology and user workloads change constantly.
A particular challenge originates from a changing topology where new devices join the fog/WSN and existing devices get phased out or change their geo-spatial position.
Additionally, the workloads continuously change as users submit, update, or delete queries.
Furthermore, to enable time-sensitive processing, nodes must behave dynamically and autonomously during runtime, to capture and react to changes in velocity, volume, and variety.
Managing and reacting to changes in a robust way while the system is in continuous operation presents drastic challenges to the software architecture and fabric of an IoT data management system.
To support such a highly dynamic environment in NES, we apply the \textit{Autonomous Processing}, \textit{Dynamic Decisions}, and \textit{Incremental Optimizations} design principles.
First, NES equips the compute nodes with all necessary components to autonomously react to a wide range of situations.
We enrich the Node-EPs with several alternative routes and different options.
As a result, if a node detects changes in velocity, volume, or variety, it reacts dynamically at runtime.
To this end, nodes require mechanisms to cope with a highly dynamic environment either locally, by interacting with nodes in the neighborhood, or by reaching out to a global coordinator.
The possible design space for these changes includes reducing the sampling rate, dropping packets, changing the operator order or algorithm, or rerouting data streams.
Second, each software component in NES is designed to allow for the ever-changing network topology and query workloads and to handle some degree of bounded staleness.
We expect that this dynamicity will result in a complete redesign of many components and will require new algorithms and protocols.
In particular, we plan to incorporate the actor model \cite{DBLP:journals/pacmpl/BernsteinBBCFKK17} to capture the dynamic behavior between moving devices.
In this model, each device represents either a client, worker, source, or coordinator actor.
Using the actor model, we make sure that each device is always in a valid state and that each device can react to a wide range of events autonomously, e.g., lost connection or coordinator change.
We plan to use the actor model for coordination between actors, e.g., sending queries or reacting to node failures.
Due to the high message overhead of the actor model, we plan to offload data transfer to a more lightweight mechanism, e.g., ZMQ\footnotemark or RabbitMQ\footnotemark.
\footnotetext[1]{https://zeromq.org/}
\footnotetext[2]{https://www.rabbitmq.com/}
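A minimal sketch of the envisioned actor roles follows; the message kinds and reactions are assumptions for illustration, not NES's actual protocol:

```python
import queue

class NodeActor:
    """Minimal actor sketch with a sequentially processed mailbox.

    Each device acts as a client, worker, source, or coordinator actor
    and reacts to events such as a lost connection or a coordinator
    change autonomously, without blocking the rest of the system.
    """

    def __init__(self, role):
        self.role = role  # 'client' | 'worker' | 'source' | 'coordinator'
        self.coordinator = None
        self.mailbox = queue.Queue()

    def send(self, message):
        self.mailbox.put(message)

    def run_once(self):
        kind, payload = self.mailbox.get_nowait()
        if kind == "coordinator_change":
            self.coordinator = payload  # switch to the new coordinator
            return "ok"
        if kind == "connection_lost":
            # Buffer locally and retry instead of failing the query.
            return "buffering"
        return "ok"
```

Because each actor consumes its mailbox sequentially, the device is always in a valid state between messages.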
Third, we apply incremental optimizations such that NES modifies a stateful execution plan of a running query in incremental steps rather than in one large change.
With each incoming or modified query as well as with each change in data velocity, volume, or variety, NES converges to the optimal NES-EP.
Furthermore, we introduce continuous feedback loops between the NES Coordinator and the NES Node Engines in different layers to enable a central management in a heterogeneous distributed setup.
In addition, NES re-optimizes the query execution based on dynamic changes in the workload and environment in an asynchronous process.
The trade-off between a centralized orchestration in a coordinator and decentralized decisions in the nodes remains an open research question for the future.
By defining feedback loops between its components and by performing changes incrementally and autonomously, we attempt to make NES resilient against constantly changing user workloads and network topologies.
Overall, NebulaStream addresses all challenges of an IoT data management system presented in Section~\ref{subsec:nes_challenges} by combining existing approaches with new solutions.
To this end, NebulaStream's goals are to handle heterogeneous and distributed data sources and formats, to utilize available resources efficiently, to cope with unstable network topologies, and to provide multiple query and processing models.
We envision that NES's unique features make it an attractive platform for future IoT application scenarios.
\section{Acknowledgments}
This work was funded by the EU projects E2Data (780245), DFG Priority Program "Scalable Data Management for Future Hardware" (MA4662-5), FogGuru (Horizon 2020 under Marie Skłodowska-Curie grant agreement No 765452), the German Ministry for Education and Research as BBDC~II (01IS18025A), and by the German Federal Ministry for Economic Affairs and Energy as Project ExDra (01MD19002B).
Bonaventura Del Monte is partially funded by the German Ministry for Education and Research as Software Campus 2.0 (01IS17052).
We thank Julius Hülsmann for his support with the experiments on decentralized joins and Vianney de Cibeins for his support with the experiments on data reduction techniques.
Furthermore, we thank Eleni Tzirita Zacharatou and Xenofon Chatziliadis for the valuable input and discussions.
\bibliographystyle{abbrv}
\balance
\section{State-of-the-art Systems}
\label{sec:sota}
In this section, we group existing approaches and outline how they address IoT data management challenges.
\vspace{1mm}
\subsection{Cloud-centric IoT data processing}
\label{subsec:cloud_rel_work}
The first group of approaches relies on the cloud to process IoT data centrally.
Mobile Cloud Computing (MCC) outsources data storage and processing from devices to the cloud.
In this scenario, a pool of sensors gathers and sends data directly to a cloud infrastructure for further processing~\cite{aws_iot_analytics,azure_iot_hub}.
Example applications following this approach are camera surveillance~\cite{nest,netatmo}, wearable cognitive assistance~\cite{glass, hololens}, and smart city monitoring~\cite{fogConsortium2017visual, fogConsortium2017smartcity}.
As soon as data reach the cloud, common SPEs, such as Apache Flink~\cite{yang2017flink,piasecki2018Flink,sfikas2019Flink} and Apache Pulsar~\cite{kjerrumgaard2018pulsar2,bock2018pulsar1} process the incoming streams.
Based on this infrastructure, cloud providers offer services to deploy and manage data streams.
The cloud-centric processing of sensor data enables elastic scaling of compute and storage resources once data reach the cloud.
However, this neglects the resources provided by sensors and intermediate nodes (\textbf{C1},\textbf{C2}).
Although these systems offer fault-tolerance and dynamic scaling (addressing \textbf{C3},\textbf{C5}) in the cloud, they do not provide them across a unified sensor-fog-cloud environment.
In NES, we extend existing work in the area of stream processing to incorporate IoT specific requirements. In particular, we enable cross-paradigm optimization, in-network processing, and hardware-tailored code generation.
\vspace{1mm}
\subsection{Edge-Aware IoT data processing}
\label{subsec:edge_rel_work}
With the concept of Mobile-Edge Computing (MEC), cloud providers address the limitations of cloud-centric approaches by implementing \textit{hub devices} to extend their IoT services \cite{Streaming_IOT_Survey,amazon_greengrass,azure_iot}.
Hub devices are placed at the edge of the fog topology and act as local control centers which are close to the sensors.
They gather data from attached sensors, perform simple processing steps, and do not require a stable connection to a cloud infrastructure.
Although MEC and MCC improve scalability with respect to the number of sensors (addressing \textbf{C1}), they do not focus on efficient resource utilization across heterogeneous devices (\textbf{C2}).
In particular, hub devices do not enable cooperative processing across the whole topology.
Furthermore, these approaches offer fault-tolerance only between hub-devices and the cloud but still require a stable connection between sensors and the hub-device (partially addressing \textbf{C3}).
Additionally, these approaches do not address dynamic changes in the topology (\textbf{C5}).
Ryden et al.~\cite{ryden2014nebula} introduce a distributed data and resource management framework.
They leverage distributed in-situ data and computing resources on edge nodes only for batch processing.
Their system supports the combination of dedicated and voluntary resources under a unified infrastructure while ensuring high availability (addressing \textbf{C5}, partially addressing \textbf{C1}).
However, their framework neither exploits hardware heterogeneity for efficient code computation nor supports a multi-programming environment (\textbf{C2, C4}).
In NES, we support streaming queries in a unified sensor-fog-cloud environment that is able to exploit fog devices and sensors to optimize query execution in a holistic way.
\subsection{Fog-aware IoT data processing}
Two data processing systems utilize the fog as the underlying infrastructure.
O'Keeffe et al.~\cite{Frontier} propose Frontier, a distributed and resilient data processing system for fog devices.
Frontier aims to handle a large number of sensors and to achieve reliability.
To this end, it exploits the processing capability of the fog by distributing queries over a topology (addressing \textbf{C1}).
It replicates operators to neighboring nodes to recompute intermediate results and to cope with device failures (addressing \textbf{C3}).
However, Frontier does not address the efficient utilization of heterogeneous devices, diversity in programming environments, and adaptability to the constant evolution of the fog (\textbf{C2},\textbf{C4},\textbf{C5}).
Finally, it does not consider the exploitation of cloud resources.
Zhitao et al.~\cite{Streaming_IOT_Survey} extend Cisco's Connected Streaming Analytics platform (CSA) for IoT processing.
CSA utilizes Cisco network hardware to enable in-network processing (partially addressing \textbf{C1},\textbf{C2}).
However, CSA does not address potentially unreliable connections, the dynamic evolution of the fog, and provides only an SQL-like interface (\textbf{C3},\textbf{C5},\textbf{C4}).
In NES, we build on top of these approaches and combine the possible compute and storage capacities of the fog and the cloud.
Besides Frontier and CSA, additional research has been conducted on individual challenges in fog computing, which we will leverage in NES.
Janssen et al.~\cite{janssen2018scheduling} propose operator placement techniques to partition queries across a fog topology (addressing \textbf{C1}).
Park et al.~\cite{park2018StreamBoxTz} exploit special capabilities of IoT hardware to improve efficiency and security~(addressing \textbf{C2}).
Kang et al.~\cite{kang2017neurosurgeon} and Grulich et al.~\cite{grulich2018collaborative} propose solutions to partition the inference of deep neural networks across fog topologies to improve scalability (addressing \textbf{C1}).
\subsection{Data Processing in Sensor Networks}
Sensor networks (SNs) target a particular sub-area of the IoT~\cite{tinyDB,Cougar}.
In particular, these systems focus on distributed processing in a wireless sensor network~\cite{WSN_SURVEY}.
A major goal is resilience to intermittent and changing network connectivity.
To this end, sensor nodes form a network to transfer sensor values through multiple hops to a root node and perform in-network data processing.
Approaches in this area tackle efficiency (addressing \textbf{C2}) by optimizing computation for battery lifetime and enable filtering and aggregation queries over sensor data~\cite{tinyDB}.
Moreover, they provide support for a dynamic execution environment (addressing \textbf{C5}).
However, these approaches do not support more complex and general workloads, which combine multiple queries, languages, and algebras (\textbf{C4}).
In addition, they do not provide strong fault-tolerance and correctness guarantees (\textbf{C3}).
In NES, we leverage concepts from sensor networks and integrate them seamlessly across the Sensor, Fog, and Cloud Layers, resulting in a unified environment.
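The in-network aggregation performed by such sensor networks can be illustrated with a minimal sketch (not taken from NES or TinyDB; all names and the tree layout are hypothetical): each node merges the partial aggregates received from its children with its own reading and forwards a single record toward the root, so the root reconstructs a network-wide aggregate without seeing every raw value.

```python
# Illustrative sketch of TinyDB-style in-network aggregation over a
# routing tree. Each node sends one (count, sum) partial aggregate
# upstream instead of forwarding every raw reading.

def aggregate(node, readings, children):
    """Return the (count, sum) partial aggregate for the subtree at `node`."""
    count, total = 1, readings[node]           # this node's own reading
    for child in children.get(node, []):       # merge child partial aggregates
        c_count, c_total = aggregate(child, readings, children)
        count += c_count
        total += c_total
    return count, total                        # single record sent upstream

if __name__ == "__main__":
    # Root node 0 with two relay nodes, each serving leaf sensors.
    children = {0: [1, 2], 1: [3, 4], 2: [5]}
    readings = {0: 20.0, 1: 22.0, 2: 21.0, 3: 19.0, 4: 23.0, 5: 25.0}
    count, total = aggregate(0, readings, children)
    print(count, total / count)  # node count and network-wide average
```

Because (count, sum) pairs compose associatively, each hop transmits a constant-size record regardless of subtree size, which is what makes the approach attractive for battery-constrained nodes.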
Rabbi Borrows a Chopper and Rescues Stranded in Nepal
April 29, 2015 By eJP
Satellite phones provided by Chabad were key to locating the missing
Photo courtesy Chabad.org/Nepal
By Faygie Levy
When Chabad of Nepal got word that about 50 Israelis were stuck in several remote villages with no food, electricity or water, they sprang into action to try to reach them.
A rescue mission to deliver food and a satellite phone to them by motorcycle yesterday ended after a 10-hour journey: the roads were blocked, and there was no way to get through.
Earlier today, they tried again. This time, Rabbi Chezky Lifshitz – co-director of Chabad of Nepal with his wife, Chani – took to the skies himself in a Nepali helicopter to reach the stranded.
Satellite phones are handed out to backpackers when they leave the Chabad House so they can stay in touch, and they have become a vital link in recent days. Photo courtesy Chabad.org/Nepal.
Since the quake first happened, Lifshitz has been communicating with many of the Israelis stuck in the mountain regions in remote areas like Dhunche and Syrabrubesi, using satellite phones equipped with GPS that the hikers are carrying. He then relays their locations to the Nepalese government.
It was an earlier tragedy that made the use of the satellite phones possible. The family of Nadav Shoham, a hiker killed last year in a freak blizzard, donated the phones to ensure other hikers could be reached in case of emergency.
When the rabbi touched down in Dhunche with food and water, he found the group cold, tired and hungry. The relief was clear on their faces as they gathered up their personal belongings and backpacks. Most were eager to head back home, though the Chabad House in Kathmandu would be the next best thing in the short term.
Twenty-five were airlifted to Kathmandu, and about the same number remain in various regions of the mountains waiting for their chance to be rescued. Bad weather delayed some helicopter rescues yesterday near Mount Everest, and some families are still waiting to be reunited with their loved ones and to know they are safe and sound.
(While the rabbi was out rescuing the stranded, his wife Chani was serving up 2,000 meals to Nepalis back at the Chabad center in Kathmandu and his children arrived in Israel and were being hosted at the home of Israel's President Reuven Rivlin.)
When Rabbi Lifshitz returned to the Chabad House at 9 p.m. this evening, he was not alone; he arrived with a group of rescued Israeli hikers in tow.
"With the kindness of G-d, they succeeded in saving 25 friends," his wife wrote on their Facebook page.
"They are weak after full days [without supplies] in the wilderness. We brought them all to the Chabad center. The crew here greeted them with tremendous emotion. They've now had a hot meal, and received warm clothing and supplies. They are all here for the night."
In response to the photos of the rescue operation, one woman posted: "Thank you very much for extracting my brother and his girlfriend, there are no words to thank you for all you are doing."
Tomorrow promises to be another day, another chance to reach those desperate to get home, another chance for more families to be reunited even if it's through photos online.
courtesy Chabad.org/News
Inside the Now
Contents
1. Preface
2. The Way In
3. Now I See
4. Notes
Preface
_I nside the Now_ is a book in two parts: the first is an autobiographical prologue ( _The Way In_ ) and the second is a profound contemplation on time, love, and happiness ( _Now I See_ ). Thich Nhat Hanh wrote this book in the summer of 2013, while he was staying at the European Institute of Applied Buddhism in Waldbröl, Germany.
_Now I See_ is an extended free verse poem about time and what it means to be fully present in the here and now. Thay (as Thich Nhat Hanh is known to his students) takes as his inspiration a series of lines about the passing of time from Vietnam's most famous epic poem, _The Tale of Kiều_, in the same way that the thirteenth-century Japanese Zen Master Dōgen took lines from an earlier Chinese poem as the inspiration for his own great contemplation on time, "Being Time" (_Uji_).
For many readers, this will be the first encounter with _The Tale of Kiều_, an historical tale that is part romance and part tragedy, set in a medieval era of Confucianism and warlords. It is the story of a beautiful young woman, Kiều, who experiences great love but also suffers immense misfortune and hardship. It is Kiều's love and her suffering that lead her, ultimately, to deep understanding and insight. Thay has selected a few vivid moments from _The Tale of Kiều_ that reveal something universal about our experience of time, love, and happiness. For readers who are interested, a brief plot summary of _The Tale of Kiều_ and context for the quotes are provided in the Notes.
You may have already experienced how in moments of a strong feeling of love—whether it is the love of deep friendship, the love between parents and children, or the love of an intimate relationship—you touch "the now" with greater vividness and intensity. When you can look into a loved one's eyes, hold one another close, and see and know one another completely, you may feel that time stands still.
In _Now I See_ Thay shows us how any moment, including a moment experienced alone with our wonderful planet, can have this same intensity and quality of deep love and connection. Each and every moment is already more beautiful than we could ever have imagined. We just need to learn how to see it.
_Now I See_ begins by establishing that there is no such thing as a heaven where everything is pure and blissful. As Thay has often taught, there cannot be a lotus without the mud. In the same way, we cannot have happiness without suffering: happiness is born from understanding and transforming suffering. The wonderful present moment is, therefore, a place where we know how to embrace and understand our suffering and difficulties. Using the eloquent poetry of _The Tale of Kiều_, as well as that of renowned Zen masters and his own poetry, Thay shows us how important it is to come back to the now in order to truly cultivate joy, take care of suffering, and generate understanding, love, compassion, and insight.
Perhaps Thay was inspired by _The Tale of Kiều_ because Kiều's suffering resonates closely with the suffering that he and his loved ones experienced in Vietnam under decades of colonialism, occupation, violence, and war. _Now I See_ is a kind of lotus that has bloomed from the mud of that suffering.
After completing the manuscript for _Now I See_ , Thay wrote the autobiographical introduction, which he called _The Way In._ It is a "prequel" to _Now I See_ , describing his years in Vietnam from 1949 until his exile in 1966. These were challenging and formative years for Thay, as a young monk, poet, scholar, and community-builder struggling to develop in Vietnam a Buddhism relevant to the suffering of his time.
Thay shares intimately about his deep interconnection with his fellow monks and poets, teachers, friends, and students; and we learn how poetry, writing, art, and the close bonds of brotherhood and sisterhood nourished and sustained their spirits. Thay introduces us to those who inspired him, those who supported him, and those whose lives were taken by the ravages of war. The intense experiences of life, love, and loss described in _The Way In_ shine light on the insights into time and interbeing presented in _Now I See._ Thay's autobiographic writing reveals how understanding, love, compassion, and insight are not abstract ideas, but energies which can be generated in real-life situations, no matter how difficult they may be.
In _The Way In_ Thay shows us how poetry can be both a song of insight and an eloquent voice for change. When one of Thay's poems was first published in _The New York Review of Books_ in 1966, it dominated the front page and helped foster a national discussion about the terrible costs of war. His indefatigable efforts to write bravely and eloquently, sometimes at the risk of his own life, can inspire us to discover ways to contribute a spiritual voice to the issues of our own times—to the challenges of war and violence, hatred and discrimination, and the devastation of our beautiful planet.
In the decades since 1966, Thay has developed, from his roots in Vietnamese Zen, profoundly effective practices of mindfulness and meditation that go beyond boundaries of nationality and faith, to touch peace and healing, and cultivate compassion and insight. If in the 1950s and '60s he was tirelessly searching, in _Now I See_ he reveals, with profound simplicity and eloquence, what he was looking for and what he has found.
There is no separation between Thay's spirituality, his poetry, and his deep aspiration to engage with and transform the suffering in the world. Just as his early, profoundly spiritual poems and writings called for change in Vietnam, so too does this new poetic contemplation, _Now I See_ , call for deep, personal transformation in each one of us who reads it.
This book reveals two different voices: the voice and poetry of Thay as a young monk in _The Way In_ , and the clear and direct voice of a Zen master in _Now I See_ as he challenges us to open our hearts, seize the moment, and truly touch the now.
We hope that this deeply personal and intimate journey of spiritual discovery will show you the way to touch the now so deeply that you will be able to see who you really are, see those you truly love, and together touch the ultimate.
Sister True Dedication
Plum Village, 2015
The Way In
Poetic Inspiration
For Thích Nhất Hạnh
As a bridge of sympathy between Spirituality and Poetry
From Trụ Vũ
_My head cushioned on illusory dreams_
_I carry the soul of poetry in the garment of heaven and earth_
_Sleeping amidst fallen autumn leaves_
_Ground surges up to touch sky_
_Who let autumn recede into the distance?_
_What is that song rising from the sea?_
_Who painted grey clouds on the canvas of space?_
_One golden leaf falling, is enough to disturb my heart_
_As I hold in my hand the seasons of creation_
_And befriend earth and sky_
_Life sleeps deeply under my feet_
_Body bound tight under ancient earth_
_From a thousand directions comes wind of sky and sea_
_Blowing far and high the wings of a solitary bird_
_I return, at one with emptiness_
_The poet's inspiration will endure a thousand years . . ._
—Trụ Vũ
When I was twenty-three years old, I met two impoverished young poets, Trụ Vũ and Quách Thoại, at the Source of Awakening Temple in the port district of Saigon, where I was living as a young monk. It was the autumn of 1949, and for three years our whole country had been engulfed in a terrible war between the French colonial forces struggling to reclaim the country for France, and the resistance fighters battling to win Vietnam's independence.
The poets had come to live at the temple and teach Vietnamese literature to the novices, in order to have somewhere to stay and something to eat. The Dragon River Press had recently agreed to publish a collection of my poems, _Reed Flute in the Autumn Twilight._ It was my first book. They had given me fifty author copies as payment, which I shared amongst my friends, so I didn't have any left to offer to our two young poets.
Then, one afternoon I was out teaching novices at the Responsive Radiance Temple, on the road the French colonists called Rue de Lorgeril, when Trụ Vũ came in looking for me. He had found a copy of _Reed Flute in the Autumn Twilight_ at the Dragon River Press, and had taken it to Tao Đàn Garden to lie down on the grass and read. He fell asleep and upon waking, the poem "Poetic Inspiration" came to him. He went straight to find me and offer it to me. Born of this heartfelt exchange, a profound connection developed between us.
Trụ Vũ prefaced the poem with the following dedication: "For _Thích Nhất Hạnh—As a bridge of sympathy between Spirituality and Poetry_." But, I wondered, is such a bridge really necessary?
Isn't poetry already spirituality, and spirituality already poetry?
A few months later, in my collection of poems _The Golden Light of Spring_ , I included two lines on the interconnection and deep interbeing of poetry and Buddhism in a passage about the Buddha's passing:
_May the radiant light of the golden path—source of poetry—_
_Illumine the depths of the darkest night._
Song of Eternity
Of the two poets, we didn't see Quách Thoại as much as Trụ Vũ. Although his writing revealed a strong and bold spirit, his physical constitution was weak, and within a few years he had succumbed to tuberculosis. Amongst his manuscripts we found the following poem, "A Dahlia Flower," which touched me deeply:
_Standing quietly by the fence_ ,
_you smile your wondrous smile._
_I am speechless, and my senses are filled_
_by the sound of your beautiful song_ ,
_beginningless and endless._
_I bow deeply to you._
A few years later, Quách Thoại's beautiful, miraculous flower reappeared in my poem "April":
_. . . The sun is up._
_One of your tiny petals carries a dewdrop_
_imitating the sun, shining forth._
_The forest doesn't seem to know you are there_ ,
_although you have already begun to sing your immortal song._
_A song that sounds as if it has been there forever_
_in the solemn atmosphere of the deep forest . . ._
The Golden Path
When I was young, two of the principal sources of my inspiration to become a monk were the Zen Master Mật Thể and the author Nguyễn Trọng Thuật, who wrote the celebrated book _The Watermelon_, one of the first novels ever written in Vietnamese. Nguyễn Trọng Thuật also wrote a deeply inspiring history of the Zen lineage in Vietnam, which was serialized in the _Torch of Wisdom_ magazine. Zen Master Mật Thể was a brilliant scholar, with a great vision for the future of Buddhism. His seminal book _The Spring of Ethics_ was published in 1942, the year I became a novice. In it, Mật Thể advocated that Buddhism's mission should be to bring about "a new spring" for our country. He believed that Buddhist spirituality and ethics could open a new path that would liberate humanity from the depths of doubt, despair, and depravity. Of all my elders, he was the one I felt closest to in my deep desire to become a monk. He too wrote poetry—poetry that was very gentle and pure.
_The moon is shining after the rain_
_The yard fragrant with perfumed breeze_
_The bell resounds in the evening silence_
_Asking whose souls have awakened_
—Zen Master Mật Thể
Thầy Mật Thể ordained at the age of twelve at the Bamboo Forest Monastery in Huế. At that time, Miss Đạm Phương, one of Vietnam's first feminists and an important writer, used to come to the temple to teach the monks literature, as well as the modern Vietnamese script. In those days, many monks could read and write better in Sino-Vietnamese, using Chinese characters, than with the new system, using the Roman alphabet. But as Thầy Mật Thể had already mastered both Vietnamese and Chinese, he was able to learn French, which then allowed him to read the most up-to-date Buddhist research being published by French scholars. He later studied at the Xiao Shan Buddhist Institute in China, and even before receiving the _bhikshu_ precepts, he had already become a respected professor of Buddhism and a published scholar on the history of Buddhism in Vietnam.
The Heart of Eternity
I feel very fortunate that, as a monastic, I have been close to many wonderful elder and younger brothers. We loved one another even more than we loved our own blood brothers. We lived, studied, and practiced together happily in the spirit of true and wonderful brotherhood. We all loved Buddhism and we all loved our country. All of us wanted to contribute something, whether great or small, to help resist the French occupation. Many of us shared a deeply rooted faith that it should be possible to create a kind of Buddhism that would respond to the needs of our country, that could be applied in daily life, and that could liberate us from our suffering.
Yet for eight years, from 1946 to 1954, the war with the French—"the dirty war," as Jean-Paul Sartre would call it—continued to rage around us. The walls of our temple in Huế were riddled with bullet holes; and each day when night would come, the ambience of war and death would return. The sound of gunfire and explosions could be heard in all directions, and bullets flew over our roof. French soldiers would raid our temples, searching for resistance fighters or food, demanding that we hand over the last of our rice.
Monks were killed, even though they were unarmed. My elder brother, Trí Thuyên, was shot dead by the French in the Diamond Mountain Pagoda. Several of my fellow novices, close friends I had studied with at the Báo Quốc Institute of Buddhist Studies in Huế, were also killed. My younger brother, Tâm Thường, was shot dead on the Mountain of the Immortals, right beside the Zen Lineage Temple. Young Minh Tâm was shot while out in the rural district of Phong Điền, and young Tánh Huyền was shot behind the Auspicious Cloud Temple. Brother Châu Quang was shot right in the city center of Huế . . .
We secretly organized memorial ceremonies for our brothers. We kept a portrait of Thầy Trí Thuyên and, below it, these four lines by Thầy Trừng Ân:
_In ancient times were we not together_
_Lighting the fragrant brazier, vowing under the tower of clarity?_
_But now, with the land still not at peace_
_Where do you wander lost? Your image here endures._
On the anniversary of the death of our younger brother Tâm Thường, I wrote this poem:
_Tâm Thường, my friend, in this early morning mist_,
_Can you hear the wind calling amongst the thousand pines?_
_On the Mountain of the Immortals, the ancient pagoda is obscured_
_Clouded by gunsmoke and the haze of war_
_Have you seen the bluebird land by the forest_
_Uttering without cease its grief-stricken cry?_
_When will it ever stop?_
_Do you hear the heroic strains of music_
_Full of noblest will and strongest resolve?_
_The source of poetic inspiration rises_
_as a joyous cry from centuries past._
_In spirit, at the source of the sublime way_ ,
_One path, two aspirations raised together_
_We go on . . . Remember the spring evenings of years gone by_
_When everyone gathered together_
_Around the warm hearth, fragrant with sandalwood._
_We go out . . . Tender blows the wind over mountains and rivers_
_Spring falls on a thousand radiant trees_
_The forceful call of the spirit rises in tumult:_
_"Out now, go! The darkness has prevailed too long_ ,
_Find the ancient ruins hidden under the distant ocean!_
_Bring back the light to the places of shadow!"_
_We step out, hearts still young and full of elation_
_Look at the floating clouds, and the far-off misty mountains and rivers._
_On the long path, I am impassioned_
_"The heart of that day will be the heart of eternity."_
_My friend, the mirror of time was never broken_
_In the heart of anyone. The source of life is infinite still._
_This evening, lighting the incense, the smoke surges up_ ,
_Go, my friend, and visit those who dwell in the undiscovered country!_
"The heart of that day will be the heart of eternity" are the words of the vow we chose to patiently pursue, and the ideal of service by which we swore to live. We knew that the spirit of poetic inspiration, the heart of spirituality, and the mind of love could not be extinguished by death. If those who went ahead were cut down, then those who followed would continue them.
Around 1950, three of us young monks who had been living at the Source of Awakening Temple in Saigon moved out to Vietnam's central highlands to start a community of young monastics at the Miraculous Brightness Pagoda on the Dalat plateau. We had a deep need to come together with others to nourish ourselves, and were eagerly searching for a way to make Buddhism relevant to the immediate needs of the people. There, with the wholehearted support of the young abbot, we established an Institute of Buddhist Studies for monks and nuns, and set up a new middle school and primary school for children—the first private Buddhist schools in Vietnam run by monastics.
A New Spring
Finally, after a decisive defeat at the battle of Điện Biên Phủ in 1954, the French were forced to surrender their colonies in Vietnam. The three-month-long Geneva Conference concluded with a treaty declaring a cease-fire and the division of the country into North and South. The news that the country would be partitioned shook the whole population. Many young monastics were in a state of shock and confusion. In 1954 the new Buddhist administration of South Vietnam called me back from the central highlands to Saigon, and asked me to help renew and modernize the program of studies and practice for the young generation of monks and nuns. In addition to the classes, we soon set up a student association and began publishing a magazine called _The New Lotus Season._ We initiated the "Engaged Buddhism" movement, having first introduced the term in a series entitled _A New Perspective on Buddhism_, which was featured on the front page of _The Democrat_ newspaper. We wanted to offer a new kind of Buddhism—a Buddhism that could act as a raft, to save the whole country from the desperate situation of conflict, division, and war.
I was a young Dharma teacher and my students in Saigon were just like my younger brothers and sisters. We shared a common vision and purpose and, even now, whenever I think of them, I still feel so much gratitude—for the love between teacher and student, and for the spirit of brotherhood and sisterhood that we shared. This love has endured, and our intimate friendship has been able to nourish every other kind of love. Looking back today, I am very grateful to have always had a good connection with the younger generation.
The spirit of our Engaged Buddhism was the continuation of Zen Master M t Th 's vision, but it was still too radical for the majority of the elders in the Buddhist establishment. They dismissed many of our ideas, and steadily began to silence our voices. We felt at a loss. We were young, we had no position or temple of our own. How could we realize our dreams? Still, we refused to give up hope. We continued to write articles, publish books, and sow the seeds of a new kind of unified Buddhism that could respond to the needs of the people, and the needs of our time.
Within a few years, Zen Master Mật Thể passed away. He was only forty-nine. Although he was not able to realize his dream in his lifetime, that dream lived on in us, and lives on in us still. After his passing, I took on his only disciple, Thầy Châu Toàn, whom I loved as a younger brother. Perhaps I loved him even more than a blood brother, because we shared the same dream and aspiration. This was true with all my other brothers. It's strange, but we were never angry with each other and we never quarrelled—perhaps because we had such deep faith in one another.
Forest Refuge
During those difficult years, together with a number of close friends and students, I did everything I could to create a grassroots Buddhism that could respond to the challenges of the times; offering chants and prayers was not enough to stop the war. But we were accused of sowing seeds of dissent, our magazines were eventually closed down, and the setbacks we were facing in the struggle for peace wore us down. It was then that some of us suddenly had the idea of building a practice center, a place of refuge to nourish and heal ourselves after periods of intense activity. It would be a chance for us to develop concrete Buddhist practices to offer to the people of our country, to heal our wounds, refresh our spirits, and give all of us the strength we needed to continue to help change the situation.
In August 1957 we sought and found some land in the mountainous Đại Lão Forest, a remote and quiet place with plenty of space, clear streams, and paths for walking. I remember the very first time I drove up the dirt road into the deep and mysterious forest. I was with Sister Diệu Âm, our fearless and compassionate elder sister, who actively supported our vision with immense faith and trust. In that moment, as we saw the forest for the first time, I knew we were seeing the future. There we established Phương Bối (the Fragrant Palm Leaves Hermitage), and gradually began to develop it into a practice community. We had the idea of planting persimmon trees and selling the fruit to sustain the community, and we planned to call it Persimmon Village.
But within four years we were forced to abandon our refuge and scatter once more to the winds. The government suspected us of clandestine activities and made it impossible for us to stay. Some of us fled to Saigon and others were forced into a strategic hamlet nearby, set up by government troops for "protection."
Our Plum Village Practice Center in France is a new manifestation of the spirit of Phương Bối. So too was Bát Nhã Monastery, which was established in the hills not far from Phương Bối. There, from 2005 to 2009, over four hundred of my young Vietnamese monastic disciples built a thriving community of practice. Although the government—fearing the monastery's growing popularity—shut it down in 2009 and forcibly disbanded the community, the young Bát Nhã monks and nuns continue to embody the spirit of both Bát Nhã and Phương Bối in our practice centers all over the world.
Poetry for Peace
In 1964, I was invited to become editor in chief of the new weekly magazine _The Sound of the Rising Tide_, the official publication of the newly established Vietnamese Unified Buddhist Church in Saigon. I asked my younger brother Thầy Châu Toàn to be editorial secretary, and he in turn invited the acclaimed poet Vũ Hoàng Chương to be responsible for the poetry pages. At that time we had also just established Fragrant Palm Leaves Press, which was already publishing many influential books by scholars and artists in the capital.
Vũ Hoàng Chương had recently published the poem "Fire of Compassion" in honor of Venerable Thích Quảng Đức, our revered elder whose love, courage, and hope was so great that he set himself on fire in order to call the world's attention to the suffering of the Vietnamese people. Vũ Hoàng Chương was something of a poet laureate for the South, and although he later became widely known for his "drunken poetry" and passionate, inebriated lifestyle, Buddhism was still his primary inspiration. His poems of the 1960s, written when we were all collaborating on the magazine, were infused with the purity, hope, and peace of meditation.
_Buddha's heart stirs with love for this life of sorrow._
_Transforming his body into snow falling from the four quarters_ ,
_He becomes a lotus of one hundred petals, a thousand-meter tree._
_All bitterness soothed away by a single drop of stillness._
_The Sound of the Rising Tide_ soon began reporting on the Buddhist community's efforts to bring about peace and reunify the country. Vũ Hoàng Chương would provide poems to accompany the photos and news. He and Thầy Châu Toàn worked very closely and they would go together to gather news about the monks' ongoing hunger strike at the Vietnam National Temple (Quốc Tự) in protest against the government's oppression. From time to time, when Vũ Hoàng Chương came over to visit us at the Bamboo Forest Monastery, he would also bring us a poem. Vũ Hoàng Chương was outspoken and defiant—qualities that would cost him his freedom and his life within a few months of the Communist regime coming to power in 1975.
During the time we were all working on the magazine in Saigon, I was living in a little thatched hermitage in the grounds of the Bamboo Forest Monastery, about an hour's motorbike ride from the city center. Thầy Châu Toàn was also living there, and every day he would travel in to the magazine offices by moped. The abbot Thầy Dương Bổn, Thầy Châu Toàn, and I had all been novices together in Huế twenty years earlier. My two brothers made the monastery into a wonderful, happy place for us all to take refuge in. Thầy Châu Toàn was truly an artist and had a real talent for making beautiful flower arrangements, and Thầy Dương Bổn was an excellent cook and would often treat us to his famous green jackfruit dish. Every week we would come together to practice sitting meditation, walking meditation, Dharma discussion, and silent meals, and we would imagine the future together. Many university students—including sister Phượng, who later became Sister Chân Không—would also join us and sometimes ask me to recite poetry for them.
It was around this time that the pioneering literary journal _Creativity_ launched "free poetry" ( _poésie libre_ ), a kind of poetry that broke free from the very strict traditional rules of meter and rhyme. My poems on war and peace, many of which were also written in free verse, were being published in the poetry pages of _The Sound of the Rising Tide._ When a collection was printed in 1965, government police came to seize them from the bookstores, but they had all already been sold. They were read and heard by many Vietnamese people, and sometimes they were sung with guitar accompaniment at student meetings, just as songs of protest were being sung in the United States. Many of them were denounced as "anti-war poems"—interestingly by both sides fighting in the war.
_The Sound of the Rising Tide_ soon became the most popular Buddhist weekly in Vietnam. Fifty thousand copies were printed every week, and they were delivered by plane to Huế and Danang to meet demand. We heard that our esteemed poetry editor Vũ Hoàng Chương had remarked how strange it was that my "peace poems" in _The Sound of the Rising Tide_ were by far the best poems of the free poetry movement, even though I never said they were "free poetry."
The lines below are typical of those poems—they are excerpted from "Our Green Garden," which was translated into English and published in _The New York Review of Books_ in 1966.
_Fires spring up like dragon's teeth at the ten points of the universe._
_A furious acrid wind sweeps them toward us from all sides._
_Aloof and beautiful, the mountains and rivers abide._
_All around, the horizon burns with the color of death._
_As for me, yes, I am still alive_ ,
_But my body and the soul in it writhe as if they too had been set afire._
_My parched eyes can shed no more tears._
_Where are you going this evening, dear brother, in what direction?_
_The rattle of gunfire is close at hand._
_In her breast, the heart of our mother shrivels and fades like a dying flower._
_She bows her head, the smooth black hair now threaded with white._
_How many nights, night after night, has she crouched wide awake_ ,
_Alone with her lamp, praying for the storm to end?_
_Dearest brother, I know it is you who will shoot me tonight,_
_Piercing our mother's heart with a wound that can never heal._
_O terrible winds that blow from the ends of the Earth_
_To hurl down our houses and blast our fertile fields!_
_I say farewell to the blazing, blackening place where I was born._
_Here is my breast! Aim your gun at it, brother, shoot!_
_I offer my body, the body our mother bore and nurtured._
_Destroy it if you will_ ,
_Destroy it in the name of your dream_ ,
_That dream in whose name you kill._
_Can you hear me invoke the darkness:_
_"When will these sufferings end_ ,
_O darkness, in whose name you destroy?"_
_Come back, dear brother, and kneel at our mother's feet._
_Don't make a sacrifice of our dear green garden_
_To the ragged flames that are carried into the front yard_
_By wild winds from far away._
_Here is my breast. Aim your gun at it, brother, shoot!_
_Destroy me if you will_
_And build from my carrion whatever it is you are dreaming of._
_Who will be left to celebrate a victory made of blood and fire?_
In 1964, the same year I began editing _The Sound of the Rising Tide_ , we also established the School of Youth for Social Service (SYSS) and founded Vạn Hạnh University. Our vision for the university was to revive the open-minded spirit of the educational system of Vietnam's ancient dynasties, to free young minds from dogmatic studies, and to teach them the qualities of understanding, love, and trust that could save our country. We assigned a different brother to lead each branch of our work. One elder brother became rector of the university; another, director of Fragrant Palm Leaves Press; another, Thầy Thanh Văn, became director of the SYSS; and my young brother Thầy Châu Toàn continued as editor of _The Sound of the Rising Tide_ magazine. Thầy Thanh Văn became director of the SYSS when he was only twenty-four years old. He was a very gentle and very brave young monk, and he directed the thousands of young people working in our village reconstruction programs with deep insight, calm, and compassion. When he was accidentally killed by a drunk American soldier driving a military truck, Thầy Châu Toàn was asked to take over as director of the SYSS.
But within barely two years of starting all these initiatives, I was exiled from Vietnam for daring to publicly call for a cease-fire and peace negotiations. It was the summer of 1966. The SYSS began facing significant financial and legal difficulties, and we had to struggle from a distance to mobilize support in order to help the school continue. The SYSS had already realized many remarkable projects: offering relief for war victims; taking care of orphans of the war; establishing villages for victims of the war so that they would have a fixed place to live; and building locally administered "pilot" villages to demonstrate the ability of the people to self-organize.
I can now see clearly that everything that came to be was a continuation, a new manifestation, of what had come before. The Bamboo Forest Monastery in Saigon, our refuge and base during our years of intense activity in the 1960s, was the continuation of the Bamboo Forest Monastery in Huế, where we had studied and practiced as novices in the 1940s under the inspiring presence of Thầy Mật Thể. My beloved younger brother Thầy Châu Toàn, with his sincere heart of service, was the continuation of his teacher, Thầy Mật Thể, who had been the first to give us the vision that such an engaged Buddhism was even possible. You cannot take Thầy Mật Thể out of Thầy Châu Toàn, nor either of them out of me. We inter-are. The same is true for all my elder and younger brothers, and for each one of my students.
Exiled from my country, I could not be present the day that Thầy Thanh Văn passed away. The day Thầy Châu Toàn passed away I also could not be present. It was only in 2005 that I was allowed to return to Vietnam, after forty years had slipped away. I offered incense for both of them at the Bamboo Forest Monastery and at the Floating Cloud Temple.
Today all of us are continued in our younger generation of monastics, present all around the world. We are continued, too, in our poetry.
Love, Poetry, and Time
Nguyễn Du (1766–1820) is perhaps the greatest of all Vietnamese poets. His epic poem _The Tale of Kiều_ holds a special place in the hearts of the Vietnamese people, and even illiterate farmers can still recite entire passages from memory. For many Vietnamese now living, _The Tale of Kiều_ has become a powerful metaphor for the suffering of the Vietnamese people and their homeland.
The tale is the story of a highly intelligent, talented, and beautiful, but desperately unlucky young woman named Kiều, who endures fifteen years of tragedy and misfortune. Twice she is forced into prostitution and twice into servitude. Yet despite encountering immense hardship, suffering, and despair, she is able again and again to find the strength to trust in the power of love. And it is this deep love—not only for her family, for her first love Kim Trọng, and for the rebel hero Từ Hải, but also for many thousands of soldiers whose lives she spares at a terrible cost to herself—that eventually leads her to peace.
Nguyễn Du vividly captures intense moments of understanding and insight between Kiều and her lovers. I read Nguyễn Du when I was still young, and was very surprised that such an imposing and solemn Confucian scholar could write such ardent and romantic lines of poetry, expressing all the passion, the recklessness, and the folly of youth. Nguyễn Du witnessed the ugliest and most hateful aspects of a corrupt society undergoing total collapse, but at the same time he was able to see that poetry embodying the virtues of beauty, nobility, and spiritual purity could perhaps help save even such a society from disaster.
The many and varied words that Nguyễn Du uses in _The Tale of Kiều_ to describe the experience of time inspired me to write the following poetic contemplation, _Now I See._ It is a deep meditation on time, love, and happiness. Dear friends, please read just one short section at a time, exactly as you would if you were reading _Being Time_ , by the great Zen Master Dōgen.
Now I See
For so long until now, I could not see. Why not? I may have been searching for a long time, but I couldn't yet see. Perhaps it's because I'm not searching anymore that I now begin to see. And what is it that I see? What is it that I've been searching for?
Maybe what I've been searching for is myself. I want to know who I am. I want to know: _Who_ is the one practicing meditation? _Who_ is the one trying to look deeply? _Who_ is the one reciting the Buddha's name?
Polin Temple
Lantau Island, Hong Kong
Zen Masters and Pure Lands
Fifty years ago, on the wall of Polin Temple on Lantau Island, Hong Kong, I saw this poem:
_If there is a Land, then it cannot be described as Pure._
_What is the use of words and expressions?_
_If the Buddha says there is no self_
_Who then is the Zen master?_
There is no such thing as a Pure Land. If there is life in that land, then that land cannot really be called pure. People in such a Pure Land have to eat, and if they eat, they have to defecate. In that land, there must be meditation halls, dining halls, and also toilets. If there are toilets, then it is no longer pure. So as soon as you say the two words "Pure Land," you are wrong. _What then is the use of words and expressions?_ Yet followers of the Pure Land school of Buddhism believe that reciting the Buddha's name over and over again will help them be reborn in the Pure Land, a kind of beautiful Kingdom of Heaven, after they die.
If you are looking for a Pure Land or a Kingdom of Heaven to go to after you die, then you are caught. If you follow the Zen school of Buddhism, and you think you need to look for a master from whom to receive transmission, you are equally caught.
If what the Buddha taught about nonself is true, then who is the Zen master? Is the Zen master who is teaching you a self? Who are you? Who is the Zen master? The Buddha has made it very clear: there is no self. Who is the one reciting the Buddha's name? It is me. Yet I do not really know who I am—that is why I am searching for myself.
"Who is the one reciting the Buddha's name?" is a koan. At first it may seem that I already know who the Buddha is, but what I don't yet know is who I am. In fact, if I really knew who the Buddha was, then I would already know who I am. I have been searching for the Buddha. And I have been searching for myself. Only now that I have stopped searching, do I begin to see.
Back to the Now
Where was I looking for the Buddha? Where was I looking for myself? Somewhere in the past? In the future? But the past is already gone and the future is not yet here. Past and future are both illusions. They are only ideas. Only the present is real. Only the now is real. That is why I have to come back to the now if I want to really see the Buddha and really see myself.
_We can see it only if we go back to the now._
_It is that simple._
The now is the only moment when and where you can find what you have been looking for. You have been searching for Nirvana. You have been looking for God. You have been looking for enlightenment, for awakening. You have been looking for the Pure Land, and for your true nature of no birth and no death.
_It turns out that everything you have been looking for is already there in the present moment._
_And the secret of the finding is to go back to the now._
Only Now Do I See
_Only now do I see_ is part of a line from _The Tale of Kiều_ by Nguyễn Du, Vietnam's most well-known epic poem. It can be translated literally as: _arriving in the now, I see the here._ Only in _the now_ can you see _the here._
_The here_ represents space, and _the now_ represents time. _The now_ is encountering _the here._ Is it possible to detach _the here_ from _the now?_ Is it possible to take space out of time? Are they two different things, or are they the same thing?
_Only now do I see what is here before me_ ,
_Yet from the first my heart had seen for sure the days to come_.
Kiều is addressing the rebel warrior Từ Hải, her great love, as he returns to her after a long year away with a huge new army under his command.
He is now a man of great power and fortune, a rebel king. This does not surprise Kiều. It is not only now that she can see who he really is; she could already see his greatness and heroic future the very first time they met. "When I first saw you, even though I had not yet seen you as a king, I already knew you were a king. I didn't need to see you with a hundred thousand soldiers at your command to really know who you were. I could see them in your future, and I was sure of what I saw."
It is obvious that when you look deeply into the now, you can already see the future. That is why _the now_ and _the here_ are so important. Looking deeply into _the here_ and _the now_ you can see all the ten directions as well as the past, the present, and the future. The ten directions _are_ the three times themselves. The poet understood that we have to come back to _the now_ in order to see _the here_ , and coming back to _the here_ allows us to see _the now._ Arriving in the now, I see the here. Isn't this truly wonderful?
Time Itself
_Hiện pháp_ in Vietnamese, or _dṛṣṭa dharma_ in Sanskrit, means "that which is now being seen." That which is now being seen is _time itself._ That which is being seen is the present moment—it is the now.
_What you see is yourself._
Life Itself
What is there that you see? First of all, it is your body, the miraculousness of which we have not even begun to measure: these two shining eyes capable of beholding moon and stars, these two legs, still strong enough to climb a mountain.
_How many such wonders have you not yet truly seen?_
You know that you have a body, but when you busy yourself with your computer for hours on end, you completely forget that you have a body.
When you remember to breathe in and out mindfully, your mind comes back to your body and back to the present moment, back to the now. In the present moment, the first thing you encounter is your body. Getting in touch with your body you see the history of life—you see your parents and your ancestors in you, not only human ancestors but also animal, plant, and mineral ancestors. They are all alive and fully present in every cell of your body. You can also see your spiritual ancestors in you. You can see Mother Earth and Father Sun in you.
_Looking into your body, you will discover that you are not a separate self, cut off from everything else, but that you are a continuously flowing stream—the stream of life itself._
Your Body Encompasses the Whole Cosmos
_To see a World in a Grain of Sand_
_And a Heaven in a Wild Flower,_
_Hold Infinity in the palm of your hand,_
_And Eternity in an hour_.
The one contains the all. Your body can tell you everything there is to know about the cosmos, boundless space, and time without end. You will see that _the here_ is also _the there_ , and that _the now_ carries within itself the span of eternity, including the past and the future. Eternity is there to be touched in each moment. Both sun and moon, all the stars and all the black holes, can nestle comfortably inside a tiny grain of sand.
_The entire cosmos can sing to us_
_with the voice of a wild flower._
Finding Each Other in the Now
_In the now we see each other clearly_ ,
_Outside of this moment, will everything be remembered_
_only as a dream?_
From the very first moment they see each other, Kiều and the handsome scholar Kim Trọng fall desperately in love, but the customs of Confucian society of that time make it very difficult for them to meet. Kim Trọng moves to a house close by and eventually they arrange to meet secretly when Kiều's family is out. They are able to spend a single magical afternoon together. Kiều returns home at dusk, before her family can discover her absence. Finding her family has not yet returned, she cannot resist going back to be with Kim Trọng a little longer. Under the light of the rising moon, she creeps out once more, and the sound of her footsteps on the gravel path wakes Kim Trọng. In his half-sleeping, bewildered state, he asks whether he is dreaming or if she has really come back. Kiều's reply reveals deep insight: " _In this moment_ , I am seeing you, and you are seeing me; but," she says, "who knows whether _outside of this moment_ everything will be no more than a dream?"
Only in the here and the now do we have a chance to see each other clearly. Outside of the now there is only illusion. The now carries within itself true life, along with all its wonders. Your beloved is one of those wonders. It is only in the frame of the now that you can recognize the presence of your beloved.
Do you possess the now? If you don't have the now, how can you love? This is why every breath and every step that you make must bring you back to the now. If you don't have the now, then you don't have anything—not even yourself; everything is, and will be, no more than a dream.
_It is only in the now that we can recognize each other's presence. Outside of the now, everything is as insubstantial as smoke._
The first time he met Kiều, the warrior Từ Hải invited her to come close to him, to see him clearly:
_Come here, come close, and look again_.
It's true that if we want to see each other clearly then we have to look right into the heart of the moment. Kiều looked, and she was able to see very deeply. She could see the dragons and clouds in Từ Hải's future. She could see his great heart and his deepest aspiration. Have you truly seen your beloved? Has your beloved truly seen you?
Looking at one another deeply, we can see each other's deepest aspirations.
_When we understand_
_each other's deepest aspirations,_
_we become soulmates._
That Moment Is Now
The now can be the most beautiful moment. It's so beautiful that you can hardly believe it's real.
But reality is as it is. The present moment is more beautiful than any kind of dream. This moment is no dream. This is reality. Pinch yourself—doesn't it hurt? You are not dreaming. You are fully awake. You have had so many dreams, but no dream is as beautiful as the reality that is unfolding itself to you in the here and the now.
Mother Earth is a bodhisattva of infinite beauty. Mother Earth has never been as beautiful as she is now. We may be inspired to write poem after poem in her praise. Every one of her four seasons is beautiful. Yet there is no poet, no painter, no composer, no architect, no mathematician as talented as Mother Earth herself. Mother Earth is the mother of all buddhas and bodhisattvas. She is the mother of all saints and holy people. A white crane, a limpid creek, a cherry tree in blossom, a serene moonlit night, a mighty snow-capped peak—all bear witness to her splendor.
Mother Earth has brought you to life and she is you. You are as beautiful as she is, because you are her. Your nature is her nature—the nature of no birth, no death, no coming, no going, no being, no nonbeing, no sameness, no otherness. You are the green willow, you are the yellow chrysanthemum, you are the red rose, you are the violet bamboo swaying in the wind.
You are invited to come back to the now, and you will be in touch with her. You will find in this very moment everything that you have ever been looking for.
_The now embraces all the whens_
_and all the might have beens._
_With awe I realize that long-awaited moment is now._
_I see clearly all before me, but still suspect it is a dream_.
Kiều's family finally finds her after fifteen years of separation, fifteen years of untold suffering and despair. Kiều had been taken far from home and, unable to return, had given up hope of ever seeing her family again. She simply cannot believe her eyes when they suddenly appear. Kiều cannot believe that the moment she's been dreaming of for so long is here, now.
Every moment is like that, if we can see it clearly. You discover that the present moment, this very now, is already more beautiful than you could ever have imagined. You doubt that it's real because it's so unbelievably beautiful; you think you must be dreaming. What more are you looking for? What more do you need to attain? You already are what you want to become. You already have everything you need to be happy.
There is no way to the Pure Land; the Pure Land is the way. There is no way to the Kingdom of God; the Kingdom is the way. The Pure Land and the Kingdom are available in every step.
_There is no Pure Land,_
_there is no Kingdom outside of the now._
Only When...
Dear one, do not seek happiness in the future. Do not wait for that day, do not wait for a distant future _then_ . . . Do not say that happiness will be possible _only when_ you have this or that. What is it you are looking for? What is it you are waiting for? Is it fame? Is it wealth? Is it power? Is it sex? Or is it just distraction from the emptiness inside? Do not think that you will be truly happy _only when_ you have obtained these things. Do not wait for _then._
Look around. There are plenty of people who have all of these things, but they do not have peace of mind, they are still not happy. They never feel they have enough because the well of desire is bottomless. If we are thirsty but we keep eating salt, we will only get more and more thirsty. We need to know the practices of _having as few desires as possible_ and of "I have enough." When we can see that in this very moment _we already have enough_ , our thirst is quenched, our craving is calmed, and true happiness becomes possible.
At one moment in _The Tale of Kiều_ the brave warrior Từ Hải confides to Kiều that, even though he loves her deeply, he feels they cannot go on living together quietly forever. "All we have now is each other," he says. "I haven't yet made a name for myself, I haven't made my fortune. I need to make my way in the world, to achieve something truly great. _Only when_ I have a hundred thousand troops under my command and can welcome you home as my queen will I be truly happy."
Because of this _only when_ , Từ Hải decides to leave Kiều alone for an entire year. She begs him to let her accompany him on his journey, but he refuses.
_When a hundred thousand men have I_ ,
_When drums and banners shake the earth, shadow the sky_ ,
_When all around acknowledge my greatness_ ,
_Only then will I again receive you by my side_.
Our beloved ones are there. They are our partner, our friend, our child, or our parent. And yet we have the feeling that just being together is not enough; we need something more. We feel the need to go out in search of success, achievement, more money, or more status to bring back to offer our dear ones in order to make them happy, to make them proud, to earn their love. The _then_ becomes the condition of the _now._
Many of us think that only once we have this or that, only once the situation changes, only _then_ can we be happy. We do not recognize our happiness in the _now_ , and we seek it in the _then._ We have the idea that happiness lies in some future moment. We say to one another, "We have to wait, my beloved, and _then_ . . ." And while we busy ourselves trying to bring about that _then_ , we abandon our loved ones in the _now_. We sacrifice the _now_ which is so precious, for the _then_ which never comes.
_The then always belongs to the future._
_It is an illusion that can never become reality._
Looking in the Same Direction
Someone said that to love each other is not just to sit there and look at each other, but to look in the same direction. Is this true? If we both look in the same direction, what then is that direction? Is it the direction of power, fame, and wealth? That would mean that our love alone does not satisfy us—our love is not enough for our happiness. If that direction is the direction of the television, then that is truly a tragedy.
In the beginning, when we first fell in love, we only needed to look at each other to be happy. Now, looking at each other we are no longer happy, because we have hurt each other so many times. Looking at the television is just a way of covering up the suffering and the loneliness each of us feels inside.
If the direction we are looking in is the direction of our ideals, of our deepest aspirations, then what are our ideals, our aspirations? Clearly they're not fame or wealth—because those are not true ideals. And yet very often ideals that appear beautiful may conceal within them a deeper desire for fame and profit. We deceive ourselves; we deceive others. We deceive ourselves with the good, the true, the beautiful; with justice, equality, and brotherhood; with humanism or socialism.
Why do we not feel at peace just sitting and looking at each other? Looking at each other we discover the wonders of each other, and we learn to treasure each other. Looking at each other we recognize the wonders of the here and the now. Looking at each other we can see each other's concerns and aspirations, as well as each other's fears, suffering, and loneliness. When we see and understand the pain and the suffering in ourselves and in the other person, understanding and compassion in us begin to grow. These are the energies that have the power to heal and transform us. This is the secret to nourishing our love. When we look at the world, we see that nothing can survive without food. The same is true with love. However beautiful our love is, it is impermanent. We need to learn how to feed our love with the energy of understanding and compassion. Only when we know how to look deeply at each other, and how to look deeply at ourselves, can we generate these two precious energies.
When we know how to nourish our love, we can heal ourselves and heal those around us. When love grows, it naturally embraces more and more. If your love is true love, then it will continue to grow until it includes all people and all species. Your love will become a river, wide enough to nourish not only you and your beloved, but the whole world. This is love without limits, a heart without boundaries, and without discrimination. It is unlimited compassion, unlimited loving kindness. It brings joy to everyone. Nothing and no one is excluded from this love—that is why it is called the love of limitless inclusiveness.
Love that doesn't grow is love that is already beginning to die. That is why we should look at each other deeply—to help our love grow. Looking into the suffering of our beloved one, we will see the suffering of all living beings and our compassion will begin to grow. Our compassion will become as powerful as thunder, and our loving kindness will be like the rain—refreshing drops that can penetrate into the hearts of all beings. A cloud may look very gentle and soft, but it can produce powerful thunder.
_True love is never weak._
_Great compassion is also great courage._
Dwell Peacefully
If you are restless, if you are not able to sit peacefully and with stability, it is because you are not established in the now. Restlessness is the disease of our times, and the more we try to fill it with the consumption of things—such as food and drink, movies, websites, books, or games—the more the emptiness grows and the more restless we become. We should remind each other that the now is the only thing that is solid and real.
_The now is a remarkable, fascinating, and beautiful place—the foundation of all time and space._
All you need to do is to focus your attention on your in-breath and out-breath, recognize it, and smile to it. Being aware that you are breathing in means you are really there. Your presence is a wonder and a miracle. Breathing like that, you bring your mind back to your body and become truly present in the now. Treasuring that moment, you dwell in peace and freedom. Each breath is a miracle. Each breath has the power to nourish and to heal.
Breathing mindfully, dwelling in peace and freedom, you see yourself as the wondrous _Dharmakaya_ , you see yourself as the lover of the cosmos. You see yourself as Mother Earth and Father Sun.
Walking in the Now
If when you walk you are harried or discontented, and your steps are not solid, it is because you are still searching for something in the past or in the future. You are not aware that what you are searching for is already there in the present moment. If each step you make brings you back to the present, then that step will become as solid as Mother Earth herself. Making a step like that, it's as though a lotus is blooming under your foot. You walk in freedom, peace, and contentment. You will be one of the most beautiful people on Earth, thanks to your ability to dwell peacefully in the now.
You don't need to look for anything else, because you yourself are the object of all searching. If you have not realized this yet, then even though everything around you may be peaceful and safe, you still will not feel safe and at ease. Looking at the beautiful, silent moon you will wonder why your heart is still not at peace.
Some people possess something very special: they have the now in their heart. When we have a chance to sit close to such a person, we feel so peaceful. They radiate an energy of peace that penetrates us deeply. Whenever we have a chance to walk alongside them, we can feel this subtle source of peace and joy. Their steps are peaceful and free, and that helps us to walk with peace and freedom.
You too can walk like this. Walk as if you do not need to get anywhere—as if you are arriving with every step. Each step can bring you back to the island within—back to the wonderful present moment, back to the now.
Walking together like this, we feel like drops of water flowing in a vast and gentle river. The drop of water does not need to do anything. The drop is embraced by the river and transported to the ocean of the present moment.
Drinking Clouds
Every time you drink your tea, you have an eternity to enjoy drinking your tea. The clouds stop running, the wind stops flying, and time stands still. The clouds are present in your tea. The wind is present in your tea. I am also in your tea.
_Drink your tea as if you are drinking clouds._
What Are You Looking For?
_If long beneath the sea I sought a pin_ ,
_Was it for love's true gold, or only the fickle flowers and moon of lust?_
You have been searching for something, my dear one. Searching for so long in vain. It's as though you've been looking for a needle at the bottom of the ocean. What is it you've been looking for? Were you looking for just another fleeting illusion of happiness—or were you looking for true love? Yes, the love you are looking for is the love of solid rock and gold—true love—not mere lust and sensual pleasure.
What is true love? Where can you find it? Are you able to find understanding and love within you? Do you really need for _someone else_ to love and understand you? If you are so starved of love and understanding, then who will ever be able to understand and love you?
_If you cannot understand yourself, if you cannot love yourself, how will you ever be able to understand or love another person? And how will you ever allow another person to love you?_
After years of searching, Kim Trọng finally finds his true love Kiều far from home. Fifteen years have slipped away since their one magical day together, when they pledged their deep vow of eternal love to one another under the light of the moon. Kiều has been saved from taking her own life and is now a nun. Although she has suffered so much, she has been able to learn a lot from her suffering. Perhaps now she even understands life and death better than the learned scholar Kim Trọng. Having long given up all hope of being reunited, Kiều now finds her joy immeasurable. But the peace and purity she has now attained is so precious to her that she is adamant they must keep their love as pure and unconsummated as it was on that moonlit night. She is able to help Kim Trọng realize that if his love has endured all these years of searching, it was not because of the fleeting passions of lust, but because of something much deeper and more precious—the solid gold of true love.
The Magic Spyglass
In order to recognize your true love, you will need a magic spyglass. With this spyglass, you will be able to recognize your soulmate. Without it, you wouldn't be able to recognize your soulmate even if he or she were sitting right in front of you.
Your magic spyglass is something you make yourself, with your _kung fu_ , your daily practice. You have to listen to yourself. You need to recognize the suffering within you—and to see the ways it carries within itself the suffering of your father, your mother, your ancestors, and your people. As you come to understand the suffering within you, the energy of compassion will be born in your heart. It will calm you and begin to heal you. You will feel light and peaceful.
True love has the power to heal and transform.
_Embracing your suffering and listening to it,_
_you will start to understand it._
You will find the roots of your anxiety and be able to identify your deepest aspiration. You will see yourself more clearly, understand yourself better, and become your own true love. This is the magic spyglass that will allow you to recognize your beloved, your soulmate, the one you have always been searching for.
This Is It
_For so long have I awaited this day_.
The opportunity that you have been waiting for is right here in the present moment. Each step is that opportunity; each breath is that opportunity—an opportunity for you to go back to the now and stop your endless wandering and _waiting for that day to come._
_The day that you've been waiting for is today; the moment that you've been waiting for is this very moment._
You must pierce the veil of time and space in order to come to the here and the now.
No matter what your circumstances are, that opportunity is there for you. In the now, you will find what you have been looking for.
We Still Have The Now
Please do not say that the moment has passed, that now it is too late. Perhaps in the past there were difficulties between you and your loved one, divisive words, and mistaken perceptions. You were not able to see each other clearly and recognize each other's true presence. These obstacles are the clouds that shroud the moon, the mist that obscures the flowers. You think that everything has fallen apart—that the moon has waned, the flowers have withered, and that you have lost each other.
But in the present moment, with the energy of mindfulness and concentration, you will be able to clear away the misunderstandings, the anger, the sadness, and suspicion of days gone by. It is exactly in this very moment, today, that the work must be done. You are still alive! Treasure the reality that you are still alive. Do not allow your afflictions, craving, anger, and despair to overwhelm you. Live the moment that life is offering you. Sit down quietly and meditate to look deeply and sweep away the wild imaginings, the prejudices, and the wrong perceptions of the past. Part the clouds and uncover the brightly shining moon in the vast sky; dissolve the mist to find the dazzling flowers and fresh new buds.
You still have each other. Nothing has passed away and nothing is lost. That is because the now is still with us—because today is still here. The moon will be brighter than it was, the flowers fresher than before, because now you know how to pierce the veil of mist at the gate and roll back the clouds to reveal the vast open sky. Life is still there, waiting for us this very day, more so than ever before.
_This day today is still everything._
_Heaven yet preserves for us this day,_
_Mist melts from the gate, clouds furl up in the sky,_
_Flowers once withered are fresher now than ever before,_
_The waned moon now brighter still than moons of yore_.
Interbeing
Beloved one, you are not something that has been created—you did not come into the realm of being from the realm of nonbeing. You are a wonderful manifestation, like a pink cloud on the top of a mountain, or a mysterious moonlit night. You are a flowing stream, the continuation of so many wonders. You are not a separate self. You are yourself, but you are also me. You cannot take the pink cloud out of my fragrant tea this morning. And I cannot drink my tea without drinking my cloud.
I am in you and you are in me. If we take me out of you, then you would not be able to manifest as you are manifesting now. If we take you out of me, I would not be able to manifest as I am manifesting now. We cannot manifest without one another. We have to wait for each other in order to manifest together.
In the Bible it is said that God gave the command, "Let there be light!"
I imagine the light must have replied, "But I have to wait, my Lord."
"What are you waiting for?" God asked.
"I am waiting for the darkness so that we can manifest together."
"But darkness is already there," said God.
"In that case," said the light, "I am already there, too."
We cannot exist, we cannot _be_ , by ourselves alone. We can only _inter-be_ , like the left and the right, above and below, good and evil, creator and created. The lover and the beloved are of the same nature, they manifest at the same time. There cannot be a lover if there is no one to love. You cannot take one out of the other, just as you cannot take the left out of the right, the inside out of the outside. Both the lover and the beloved are, by nature, empty.
A flower is made only of non-flower elements. A Buddha is made only of non-Buddha elements. The one who bows and the one who is bowed to are contained within each other. That is why, my beloved one, you should know that your beloved is already in you. You should not try to look for him or for her outside yourself. You are empty; that is why love is possible. If there is no emptiness, then there is nothing at all.
_It is only thanks to emptiness that everything_
_can manifest. Self-nature is an illusion._
The Happiest Moment in Your Life
Has the happiest moment of your life come yet? If the most fulfilling, uplifting moment of exaltation has already happened once, it can happen many times more. But how can we help that moment come more often, especially when we want it to? If in the last thirty years that moment has not come, then it's not likely to come in the next thirty years and perhaps it never will. Don't just dream about it. The secret is to produce that moment ourselves. When? Right in this moment.
You need to _wake up_! Wake up to the wonders within you and around you. If you know the way, then any moment of your life can become the happiest moment of your life.
Dear one, please come, and lean your arms on this windowsill. Look out. Do you see the wonderful immensity of space in front of you? Do you see the vast blue ocean? Can you see the wings of the seagull playing with the sunlight? Looking out, you see immensity; but looking in, you also see immensity. The world inside is as vast as the world outside. In fact, reality transcends both the notions of inside and outside. This special window is everywhere—it is in you and in me. It helps us to see the miraculous world of no birth and no death, no coming and no going.
On Top of the World
The Zen Master Không Lộ found a wonderful place to live in the mountains where he could enjoy the wilderness day and night. There were times when he would climb up to the top of a nearby mountain peak. Standing there all alone, he would let forth a wild yell.
The whole cosmos responds to that yell; and the sound freezes the whole cosmos. It is a wonderful moment, a most fulfilling moment, a most satisfying moment. Any moment can be a moment like that if you know how to handle the now, if you have the time, and if you take the opportunity.
Không Lộ had limitless time and countless opportunities, because he had the now. Every one of his moments was an opportunity.
_There are times I climb to the mountain top,_
_and let forth a howl that freezes the cosmos_.
_There are times_ . . . which times? _Being Time_ . . . what time?
_Each moment can be all the moments;_
_each moment is an opportunity waiting to be seized._
Không Lộ's yell is still reverberating—it is still heard now and for all eternity.
Time is for Being
Dōgen begins his poem "Being Time" with these lines:
A former Buddha once said in verse:
_There is a time I stand on top of a soaring mountain peak_
_There is a time I walk in the deepest ocean abyss_
_There is a time I have three heads and eight arms_
_There is a time I have a golden body, eight or sixteen feet tall_
_There is a time I am a monk's staff or fly whisk_
_There is a time I am a pillar or a stone lantern_
_There is a time I am a certain Mr. Dupont or Mr. Smith_
_There is a time I am the vast sky and the great Earth_
Who is the ancient Buddha mentioned by Dōgen? And what does it mean, "there is a time?" Each line begins with the characters , "being time," meaning, "there is a time," or, "there are times."
To be standing on the peak of a mountain and let forth a wild yell that shakes the heavens—that is so wonderful. To be a deity with three heads and eight arms as often seen in the Hindu tradition can be equally wonderful. And to be a Buddha with a golden body of sixteen feet is marvellous. But to be a pillar or a lantern? To be you or me, or to be a certain Monsieur Dupont or Mister Smith? Yes, it is equally wonderful. Because the Kingdom of God can be found in even the tiniest flower, in a bullfrog, or in the mud that nourishes the lotus.
To see oneself as the vast sky and great Earth is to attain nirvana, the ground of no birth and no death.
Time is for being, and for being anything.
When you see the mountain, you are the mountain, the mountain is you. The mountain is time. The mountain is the now. In Sanskrit, the word for "now" is _dṛṣṭa dharma_, "that which is now being seen."
When you see the great Earth, you are the great Earth. The great Earth is time. The great Earth is the now.
_You cannot take anything out of anything._
Space Outside of Space
Zen Master Không Lộ and Zen Master Dōgen both possess the now, and so they also have the here. Not only do they enjoy the here and the now, but they are _themselves_ the here and the now. Time, space, matter, and consciousness are not four separate things.
When you learn to live in the now, you see that your being can be limited neither by the space of a physical body nor by the time of a life span. If the world of a person were only one hundred years, then even the sky seen from Không Lộ's peak could never be vast enough. But when you live deeply the now, you have the opportunity to liberate yourself from time and enter time outside of time, and space outside of space.
Chorus
_I will never grow up_
_no matter how long I live._
_Just yesterday, I saw a band_
_of golden butterflies fluttering above our garden._
_The mustard greens were bursting with bright yellow flowers._
_Mother and sister, you are always with me._
_The gentle afternoon breeze is your breathing._
_I am not dreaming of some distant future._
_I am back. Someone is singing._
_My hand touches the old gate,_
_and I ask, "What can I do to help?"_
_The wind replies,_
_"Smile. Life is a miracle._
_Be a flower._
_Happiness is not built of bricks and stones."_
_My mother's hair is fresh and long._
_It touches her heels._
_The dress my sister hangs out to dry_
_is still sailing in the wind_
_over our green yard._
_It was an Autumn morning_
_with a light breeze._
_I am really standing in our backyard—_
_the guava trees, the fragrance of ripe mangoes_ ,
_the red maple leaves scurrying about_
_like little children at our feet._
_A song drifts from across the river._
_Bales of silky, golden hay_
_traverse the bamboo bridge._
_Such fragrance!_
_As the moon rises above_
_the bamboo thicket,_
_we play together_
_near the front gate._
_I am not dreaming._
_This is a real day, a beautiful one._
_Do we want to return to the past_
_and play hide-and-seek?_
_We are here today,_
_and we will be here tomorrow._
_This is true._
_Come, you are thirsty._
_We can walk together_
_to the spring of fresh water._
_The chrysanthemum is smiling to you._
_Don't dip your hands into cement and sand._
_The stars never build prisons for themselves._
_Let us sing with the flower and the morning birds._
_Let us be fully present._
_I know you are here because I can look into your eyes._
_And bring mother. I want to see her._
_I will sing for you, my dear sister_ ,
_and your hair will grow as long as mother's_.
NOTES
. The first pilot project of the SYSS was started in January 1964. Living there were the Venerable nun T nh Nguy n, the novice Nh t Trí, sisters Trà Mi, Ph ng Th i, Phùng Th ng, and Cao Ng c Thanh, and the brothers Lê Kh c Tích and Tâm Quang. From September 1964 onward, there was sister Ph ng, and brothers Tr n T n Trâm, Lê Thành Nguyên, and H v n Quy n. The year 1965 was the first year we began to enroll students, and in September the first term began for the SYSS as a branch of the V n H nh Institute of Education.
. The Tale of Kiều is considered to be the supreme achievement of Vietnamese literature. Written in the early nineteenth century, the story is known by all Vietnamese people, memorized in its entirety by some, and often quoted in everyday conversation. In 1,627 spare couplets, the author vividly brings to life a story of love, fate, and tragedy, which is at the same time a biting commentary on the turbulent political situation of Vietnam at the turn of the eighteenth century. The poem is rich in allusions and references to classical Chinese literature while also celebrating the richness and depth of the Vietnamese language in all its diversity of rhythm and tone.
Kiều is a beautiful and talented young girl who falls in love with a handsome scholar, Kim Trọng. They are separated just after they have pledged troth to each other, and she is forced to sell herself into marriage with an older man in order to save her family from destitution.
She discovers to her horror that she has been tricked and has been married to a pimp. She is first saved from the brothel by a weak-willed young man who falls passionately in love with her and arranges for her escape. But he is already married, and his wife plots revenge on Kiều. She briefly becomes a nun but the wife's plot eventually forces her back into prostitution. She is only released when the young hero Từ Hải recognizes in her a kindred soul and buys back her freedom. They have a short period of happiness before he leaves her to make his name. He returns as a powerful rebel general and he sends out in search of all her former tormentors to mete out justice. Some are forgiven and some are executed. Kiều later persuades him to put down his arms and give himself up to the king's men, in order to save the lives of many thousands of soldiers; but he is betrayed and killed. Devastated, Kiều tries to commit suicide in a river but is saved from drowning by the nun Giác Duyên, her friend and mentor. She ordains as a nun for the second time and, fifteen years after she was first sold away, is finally reunited with her family and with her first love, Kim Trọng.
Her family persuades her to honor her original pledge to Kim Trọng and perform a marriage ceremony with him, but she pleads with him not to sully the purity of their love with sensual desire, and she lives out the rest of her life peacefully in chastity. There is a bilingual Vietnamese and English edition of The Tale of Kiều by Nguyễn Du available from Yale University Press (New Haven, 1987).
. Dōgen Zenji was a Japanese Zen Buddhist teacher born in Kyōto in the thirteenth century. He founded the Sōtō school of Zen in Japan after travelling to China where he lived for several years while receiving training at a number of Chan monasteries. He wrote prolifically and his most famous work is called the _Shōbōgenzō._ It is broken up into ninety-five shorter pieces or episodes. The piece focusing on the nature of impermanence and time, called the "Uji," translates from Japanese into something like "for the time being" or "being time." Dōgen wrote "Uji" in the early winter of 1240 when he was forty-one years old. The trilingual edition that inspired Thay is _Uji, être-temps_ ( _Being-Time_ ) by Dōgen, translated by Eidō Shimano and Charles Vacher (Paris: Encre Marine, 1997).
_. —Who is the one reciting the Buddha's name?_ One of the most widely used koans in the Chinese Zen school.
. ( _Tale of Kiều_, lines 2283–2284). We may rewrite the first line, inserting the implied words: , which in English gives, _Arriving at the now, we begin to see the here._
. Opening verse from _Auguries of Innocence_ , William Blake, 1803. Blake's lines are very similar to those written by Vietnamese Zen Master Khánh Hỷ (1067–1142) during the Lý Dynasty: _All Heaven and Earth balanced on the tip of a hair, Both Sun and Moon in a mustard seed contained._
_. (Tale of Kiều_, lines 443–444).
_. Lại đây, xem lại cho gần (Tale of Kiều_, line 2195). Từ Hải has heard rumours of Kiều's beauty and seeks her out in the pleasure-house where she has been captured for a second time in prostitution. He is immediately smitten, and assures her how fortunate she is to have found a real man who will take care of her as she deserves. But, trapped and powerless, Kiều retorts that although she craves to be able to entrust her heart to someone, she does not have that kind of freedom. He begs her to look him in the eye, closely, to see for herself what kind of man he is.
_. Tấn Dương shall see a dragon in the clouds_ ( _Tale of Kiều_, line 2198). When Kiều looks into Từ Hải's eyes she sees the clouds and dragons of _Tấn Dương._ This deeply auspicious sign makes reference to a prophecy made about the brilliant imperial future of a four-year-old boy in seventh-century China. He grew up to be a powerful rebel general and succeeded in establishing the T'ang Imperial Dynasty.
_. _ ( _Tale of Kiều_, lines 2201–2202). _Hearing Kiều's words he nodded with pleasure, And smiling said, "How many in life can see another's soul?"_ In their intense first encounter, Từ Hải recognizes that Kiều is the only person to have ever truly seen and understood him. He falls deeply in love and soon buys her freedom from prostitution. He has found his soulmate.
_. _ ( _Tale of Kiều_, lines 3015–3016). Kiều has been separated from her family and her first love, Kim Trọng, for fifteen years, since she first sold herself away in order to save her father from the debtor's prison. Twice she has been sold into prostitution, twice a slave. At every opportunity she has tried to escape and return to her family but, alone and strikingly beautiful, she does not get far before she is caught and deceived again.
After her great love, hero, and protector Từ Hải was tragically betrayed and killed in part because of her, Kiều threw herself into the river to drown. She is rescued by a nun and they live quietly together in a temple by the river. Kiều enjoys the peace but is bitterly homesick and fears that she will never see her family again. But hearing of the end of the war, Kiều's family and Kim Trọng set off in search of her.
Rumors take them to the river where she is said to have drowned, and there they finally find Kiều living as a nun. When Kiều sees them coming, she cannot believe her eyes. That moment is too good to be true. Every night for fifteen years, she had dreamed of being reunited, and every night she had despaired of ever finding her family again. Now, in this moment, suddenly all those whom she loves the most in the world are right there in front of her. It is the moment she has been dreaming of, and she cannot believe it's true. _ —With awe I realize_ that _long-awaited moment is_ now, is a most wonderful phrase.
_. _ ( _Tale of Kiều_, lines 2223–2226). Từ Hải abandons Kiều to seek fame and glory, ignoring her pleas to let her accompany him.
_. L'expérience nous montre qu'aimer ce n'est point nous regarder l'un l'autre mais regarder ensemble dans la même direction—Experience shows us that to love is not just to look at each other, but to be looking out together in the same direction._ From _Wind, Sand, and Stars_ , Antoine de St-Exupéry, 1939.
_. Dharmakaya_ literally means the "body" ( _kaya_ ) of the Buddha's teachings ( _Dharma_ ), the way of understanding and love. In Mahayana Buddhism the words have come to mean "the essence of all that exists." To see yourself as the Dharmakaya means to see yourself as one with the cosmos.
_. _ ( _Tale of Ki u_, lines 3177–3178).
_. _ ( _Tale of Kiều_, line 315). Since the first time he caught a glimpse of the young, beautiful, and innocent Kiều, Kim Trọng has longed for the day he can see her again. Now that long-awaited moment has come, and he persuades her to stay and talk to him.
_. _ ( _Tale of Kiều_, lines 3123–3126). Reunited at last, Kim Trọng tries to persuade Kiều that it is not too late to realize their vow pledged under the glowing moon and enjoy deep happiness together. They still have this moment; nothing has been lost. The moon is still as bright, and their love still as fresh as it ever was.
_. The one who bows and the one who is bowed to are both by nature empty._ This is the first line of a verse which is recited by monks and nuns in the Buddhist tradition before bowing to the Buddha. Everyone, including the Buddha is full of the whole cosmos, and empty of only one thing: a separate self. To say a person is "empty" is to say they are empty of a separate self.
. The twelfth-century Vietnamese Zen Master Không Lộ belonged to the tenth generation of the Vô Ngôn Thông lineage, and he passed away in 1141.
. These are two lines from a poem by Zen Master Không Lộ. The phrase "there are times," or "being time," is made up of two Chinese characters, , which is also the title of Dōgen's famous essay written in 1240, a century later. Both Không Lộ in the twelfth century, and Dōgen in the thirteenth century, took the phrase, , "being time," as their inspiration.
. This eight-line verse is by Chinese Master Yao Shan.
. The famous thirteenth-century Vietnamese Zen Master Tuệ Trung wrote a beautiful poem about enjoying "space outside of space" . He was a member of the great dynastic family of Trần, but he chose to withdraw from court life and live in a hermitage where he could devote himself entirely to spiritual practice. He became a great lay Zen master and poet. In one of his most beautiful and simple verses, he describes how he would take up his bamboo staff, set out from his little hut, and climb the mountain to "go and enjoy space outside of space" .
. Excerpted from "Butterflies over the Golden Mustard Fields," Thich Nhat Hanh, 1963. See _Call Me By My True Names: The Collected Poems of Thich Nhat Hanh_ (Berkeley: Parallax Press, 1999).
Parallax Press
P.O. Box 7355
Berkeley, California 94707
Parallax Press is the publishing division of Unified Buddhist Church, Inc.
© Copyright 2015 by Unified Buddhist Church
All rights reserved
eISBN: 978-1-937006-80-8
Cover and text design by Jess Morphew
Chinese calligraphy for "being time" on page 140 is by Venerable Kai Ly.
All other calligraphy is by Thich Nhat Hanh.
The brush stroke images on pages 56, 88, and 109 are from iStock.
All other brush stroke images are from Shutterstock.
Library of Congress Cataloging-in-Publication Data
Nhất Hạnh, Thích, author.
Inside the now: meditations on time / Thich Nhat Hanh.
pages cm
Summary: "For the first time Thich Nhat Hanh shares his inspiration and experience of living in stillness and timelessness. Written to pull you into the moment as he sees it, Inside the Now offers teachings inspired by the spirit of poetry. More personal than the majority of his writing, Inside the Now shares the Zen Master's experience using poetry and meditation to endure and move beyond violence and oppression. Inspired by Being Time by Zen Master Dogen, Thich Nhat Hanh shares short meditations along with revelations from his past to give the reader a sense of entering a space of timelessness. In these meditations, he reveals his own doubts and his own searching."-- Provided by publisher.
1. Time--Religious aspects--Buddhism. 2. Spiritual life--Buddhism. 3. Meditation--Buddhism. I. Title.
BQ9800.T5392N45446 2015
294.3'442--dc23
2015034198
1 2 3 4 5 / 19 18 17 16 15
Parallax Press is a nonprofit publisher, founded and inspired by Zen Master Thich Nhat Hanh. We publish books on mindfulness in daily life and are committed to making these teachings accessible to everyone and preserving them for future generations. We do this work to alleviate suffering and contribute to a more just and joyful world.
To learn more about our books, click here
Want to connect with like-minded readers? Check out our book club and our blog for reader's guides and the latest news from Parallax Press.
Subscribe to our newsletter to receive special deals, news from our authors, and inspiration.
Visit us on Facebook
Follow us on Twitter
Add us on Goodreads
To reach mindful living communities following the tradition of Thich Nhat Hanh please contact:
Plum Village
13 Martineau
33580 Dieulivol, France
**plumvillage.org**
Blue Cliff Monastery
3 Mindfulness Road
Pine Bush, NY 12566
**bluecliffmonastery.org**
Deer Park Monastery
2499 Melru Lane
Escondido, CA 92026
**deerparkmonastery.org**
Magnolia Grove Monastery
123 Towles Rd.
Batesville, MS 38606
**magnoliagrovemonastery.org**
_The Mindfulness Bell_ , a journal of the art of mindful living in the tradition of Thich Nhat Hanh, is published three times a year by Plum Village. To subscribe or to see the worldwide directory of Sanghas, visit **mindfulnessbell.org.**
Q: Divide an array into separate arrays in PHP
I have an array like this:
array(5) {
  [0]=> array(1) {
    [0]=> int(1)
  }
  [1]=> array(1) {
    [0]=> int(2)
  }
  [2]=> array(1) {
    [0]=> int(3)
  }
  [3]=> array(1) {
    [0]=> int(4)
  }
  [4]=> array(1) {
    [0]=> int(5)
  }
}
how can I divide it into 5 separate arrays?
(That is, split the array into as many arrays as its length.)
This is my code:
$temp = array();
function toArr(){
return func_get_args();
}
//{('a',1),('b',2),('c',3),('d',4),('e',5)}
$a = array ('a','b','c','d','e');
$b = array(1,2,3,4,5);
$c = array_map ('toArr',$a,$b);
$collection1 = array_slice($c, 0, 1, true);
$collection2 = array_slice($c, 1, 1, true);
$collection3 = array_slice($c, 2, 1, true);
$collection4 = array_slice($c, 3, 1, true);
$collection5 = array_slice($c, 4, 1, true);
$temp[] = $collection1;
$temp[] = $collection2;
$temp[] = $collection3;
$temp[] = $collection4;
$temp[] = $collection5;
$jsondata = json_encode($temp);
echo $jsondata;
This is the output:
[[["a",1]],{"1":["b",2]},{"2":["c",3]},{"3":["d",4]},{"4":["e",5]}]
I want to have something like this:
[["a",1],["b",2],["c",3],["d",4],["e",5]]
A: Not sure what you're trying to achieve but you can use a loop and access each array, in this example as the variable $a. Have a read of http://php.net/manual/en/control-structures.foreach.php
foreach($arr as $a) {
//do what you want with $a
print_r($a);
}
A: $array =
array(
'0' => array( '0' => 1 )
,'1' => array( '0' => 2 )
,'2' => array( '0' => 3 )
,'3' => array( '0' => 4 )
,'4' => array( '0' => 5 )
);
foreach( $array as $key => $value )
{
/*
* Variable variables
*/
${'array' .$key} = $value;
}
var_dump( $array0 );
var_dump( $array1 );
var_dump( $array2 );
var_dump( $array3 );
var_dump( $array4 );
Result
array (size=1)
0 => int 1
array (size=1)
0 => int 2
array (size=1)
0 => int 3
array (size=1)
0 => int 4
array (size=1)
0 => int 5
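A: A note on the JSON output in the question: the stray object keys (`{"1":["b",2]}` and so on) come from the fourth argument of `array_slice` (`$preserve_keys = true`). `json_encode` only emits a JSON array when the PHP array's keys are sequential integers starting at 0; any preserved key breaks that and forces an object. The following sketch shows one way to get the desired output, using `array_map(null, ...)` to zip the two arrays (it does the same job as the `toArr()`/`func_get_args()` helper from the question):

```php
<?php
$a = array('a', 'b', 'c', 'd', 'e');
$b = array(1, 2, 3, 4, 5);

// Zip the two arrays into pairs; array_map(null, ...) replaces the
// toArr()/func_get_args() helper from the question.
$c = array_map(null, $a, $b);

// $c already has sequential integer keys 0..4, so json_encode emits
// a JSON array of arrays rather than a mix of arrays and objects.
echo json_encode($c); // [["a",1],["b",2],["c",3],["d",4],["e",5]]
```

If you do need `array_slice`, either drop the `true` flag or wrap the result in `array_values()` before encoding, so the keys are re-indexed from 0.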
Just a block off the strip, this infrastructure (IFX) focused "oasis in the desert" brings together leaders from the overlapping worlds of hardware, software, and connectivity. IFX is presented by Open19 Foundation founding member Packet and features talks and presentations from leaders in this space, as well as private meeting spaces, a beer garden, local fare, inspired art, relaxing games, and WiFi. Attendance is free.
Open19 Foundation President Yuval Bachar will be speaking about Open19 and the edge on Thursday, Nov. 29 at 11:30 a.m. in the Big Tent. Click here to view the full schedule.
Flex will also be showcasing its Open19 cage and compute blades in the Infrastructure Hardware Showcase.
\section*{Abstract}
Xenon can produce general anesthesia.
Its main protein target is the N-methyl-D-aspartate receptor, an ionotropic channel playing a pivotal role in the function of the central nervous system.
The molecular mechanisms allowing this noble gas to have such
a specific effect remain obscure, probably as a consequence of the lack of structural data at the atomic
level of detail.
Herein, as a result of five independent molecular dynamics simulations, three different binding sites were found for xenon in the glycine binding domain of the
N-methyl-D-aspartate receptor.
The absolute binding free energy of xenon in these sites
ranges between -8 and -14 kJ$\cdot$mole$^{-1}$. However,
it depends significantly upon the protein conformer chosen
for performing the calculation, suggesting that larger values
could probably be obtained, if other conformers were considered.
These three sites are next to each other,
one of them being next to the glycine site.
This could explain why the F758W and F758Y mutations can prevent competitive inhibition by xenon
without affecting glycine binding.
\vskip 1cm
\textbf{Keywords}: Xenon, NMDA receptor, Molecular Dynamics, Absolute Binding Free Energy.
\section*{Introduction}
Xenon can produce general anesthesia without causing undesirable side effects \cite{Erdmann:90,Morita:03}.
This is likely due to the fact that
xenon potently inhibits the excitatory N-methyl-D-aspartate receptor (NMDAR) \cite{Lieb:98}, an ionotropic channel which is a coincidence detector able to detect synchronicity
in synaptic depolarization events \cite{Sjostrom:06,Kohr:06}.
Interestingly, xenon has other attractive pharmacological properties \cite{Maze:03,Franks:10}. For instance, it provides neuroprotection against hypoxia--ischemia \cite{Dickinson:10}.
Though the main protein target of xenon has been known for twenty-five years \cite{Lieb:98},
the molecular mechanisms allowing this noble gas to have such specific effects remain obscure, probably as a consequence of the lack of structural data at the atomic level of detail. Indeed, though numerous crystal structures of proteins with bound xenon atoms have been obtained \cite{Petsko:84,Fourme:98,Abraini:07,Colloch:16}, in the case of the NMDAR, no such data is presently available in the Protein Data Bank \cite{RCSB}.
Thanks to computational approaches, the location of the xenon binding site(s) in the NMDAR has nevertheless been investigated a few times. Using a Grand Canonical Monte Carlo method, it was first proposed to be within the glycine binding site itself \cite{Franks:07}. However,
two mutations in this site \cite{1PBQ,Zefirov:03}, namely, F758W and F758Y, were next found to prevent
competitive inhibition by xenon without affecting glycine binding \cite{Dickinson:12}.
In a subsequent study of the ligand binding domains of NMDAR, performed using docking, molecular dynamics (MD) simulations and absolute binding free energy (ABFE) calculations, four putative xenon binding sites were identified, one nearby the glutamate site, the three others far from the agonist binding sites \cite{Tang:10}, that is, all far from F758.
Though the effects of the F758W and F758Y mutations could prove to have an allosteric character \cite{Jane:12,Paoletti:13}, that is, to be long-range ones, it is also possible that both previous studies have missed the actual binding site(s) of xenon in the NMDAR.
For instance, both studies were performed starting from
crystal structures of holo forms (PDB 1PBQ \cite{1PBQ} or 2A5T \cite{2A5T}), while xenon could for instance inhibit glycine binding by stabilizing an apo form.
Also, in the later study, putative xenon binding sites were first located using standard docking methods \cite{Tang:10}. But since xenon is an apolar atom, it can bind to any large enough hydrophobic cavity, whose size and shape can in turn fluctuate, as a consequence
of protein intrinsic flexibility \cite{Mouawad:06,Okuno:18,Blondel:19}, which is usually little accounted for in most docking protocols \cite{Abagyan:08,Wolfson:10}.
On the other hand, it has been shown that it is nowadays possible to dock accurately small ligands on proteins using explicit solvent MD simulations \cite{Shaw:11c,Zacharias:14,Lau:18,Caflisch:18,Mondal:21}, that is, with a much more detailed and accurate energy function, without making any assumption about the location of the binding site or about the rigidity of the protein backbone.
So, the main goal of the present work was to locate the binding site(s) of xenon in the NMDAR, using explicit solvent MD simulations only. Then, in order to obtain a lower bound for the affinity of xenon for the putative binding sites thus found, state-of-the-art ABFE calculations were performed.
The NMDAR is an obligatory heterotetramer usually made of
two GluN1 and two GluN2 subunits, glycine binding to the GluN1 subunits while glutamate binds to the GluN2 ones \cite{Farrant:01}. Since xenon is expected to bind in the vicinity of
the glycine binding site \cite{Franks:07,Dickinson:12},
only the glycine binding domain of the NMDAR, for which a high resolution crystal structure of an apo form is available \cite{4kcc}, is considered hereafter.
\section*{Methods}
\subsection*{Multiple sequence alignment}
Starting from the sequence of the glycine binding domain of the GluN1 subunit of the rat NMDAR (gene GRIN1), as found in PDB 4KCC \cite{4kcc},
977 sequences were retrieved from the Uniprot database \cite{Uniprot}, and a multiple alignment was performed, using Clustal$\Omega$ \cite{Clustalo}. Then, 648 sequences more than 66\% identical (87 $\pm$ 7\%, on average) to the complete (without any gap --see below) rat sequence of the glycine binding domain were retained for further analysis. These sequences come from 237 species such as human, chicken, or the African clawed frog.
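The retention criterion above can be sketched as follows; this is an illustrative sketch, not the actual analysis script, and the function names and toy sequences are ours:

```python
def percent_identity(seq_a, seq_b):
    """Percent identity between two rows of a multiple alignment,
    counting only columns where neither sequence has a gap."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    if not pairs:
        return 0.0
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

def filter_alignment(reference, alignment, threshold=66.0):
    """Keep aligned sequences at least `threshold` % identical to the reference."""
    return [s for s in alignment if percent_identity(reference, s) >= threshold]
```

For example, `percent_identity("MKTLV", "MKALV")` is 80.0; applied to the full alignment, such a filter would retain 648 of the 977 sequences.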
\subsection*{Molecular dynamics simulations}
\label{sec:md}
Molecular dynamics simulations were performed with gromacs \cite{Gromacs} version 2018.6, using the Amber 99SB-ILDN forcefield \cite{Shaw:10amb} and the TIP3P water model \cite{Jorgensen:83}.
Short range electrostatic and van der Waals interactions were cut off at 1.2 nm, long-range electrostatics being handled through the particle mesh Ewald method \cite{Darden:93}. The xenon Lennard-Jones interaction parameters are:
$\sigma_{Xe}$ = 0.4063 nm, $\epsilon_{Xe}$ = 2.35 kJ $\cdot$ mole$^{-1}$ \cite{Masters:15,Vrabec:19}.
Missing loops 441-448 and 491-495 were added to the apo conformation of the glycine binding domain of the rat NMDAR (PDB 4KCC \cite{4kcc}, R=1.89{\AA}), using the REMODEL protocol of ROSETTA \cite{Rosetta:15}. Then, ten xenon atoms were randomly added around the protein, so as to be at least 0.5 nm away from any heavy protein or other xenon atom.
Next, together with the 226
water molecules found in the crystal structure, the protein was embedded in a water box, with its boundaries at least 1 nm away from the protein, the box volume being 707 nm$^3$. Finally, sodium and chloride ions were added so as to neutralize the charge of the system and to reach a salt concentration of 150 mmol$\cdot$L$^{-1}$,
as often done \cite{Okuno:18,Schulten:09}.
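As a rough order-of-magnitude check (ours, not part of the original setup protocol), the number of salt ion pairs corresponding to this concentration in a 707 nm$^3$ box can be estimated directly, before accounting for the extra counterions needed to neutralize the protein charge:

```python
AVOGADRO = 6.022e23  # mol^-1

def ion_pairs(conc_mol_per_l, box_volume_nm3):
    """Number of salt ion pairs for a target concentration in a given box.
    1 nm^3 = 1e-24 L."""
    return conc_mol_per_l * AVOGADRO * box_volume_nm3 * 1e-24

print(round(ion_pairs(0.150, 707)))  # about 64 NaCl pairs
```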
The system was then relaxed using steepest descent minimization, with harmonic restraints (force constant of 1000 kJ$\cdot$mol$^{-1}\cdot$nm$^{-2}$) on protein heavy atoms, until a maximum force threshold of 1000 kJ$\cdot$mol$^{-1}\cdot$nm$^{-1}$ was reached.
The solvent was next equilibrated at 300 K, first during one ns with control of both volume and temperature, using the Berendsen thermostat \cite{Berendsen:84} and a coupling constant of 0.1 ps, then during another ns with control of both pressure and temperature, using the Parrinello-Rahman barostat \cite{Parrinello:81} and a coupling constant of 2 ps.
Finally, restraints were removed and 500 ns of simulation at 300 K were
performed, with a timestep of 2 fs and all bonds constrained using the LINCS
algorithm \cite{LINCS}, 1000 frames being saved for further analysis.
Five MD simulations (coined A to E) are considered hereafter, each of them starting with a different configuration of the ten xenon atoms around the NMDAR.
\begin{figure}[t!]
\hskip 0.3 cm
\includegraphics[width=7.0 cm]{fig_scheme_abfe.pdf}
\caption[]{
Thermodynamic cycle used to compute the absolute binding free energy of xenon.
Top left: the xenon atom (filled circle) is first annihilated (empty circle). A harmonic spring is then added in order
to restrain its motion. Bottom: the harmonic spring is linked to a protein atom.
Right: the xenon atom is restored. Then, the harmonic spring is removed.
}
\label{Fig:dgcycle}
\end{figure}
\subsection*{Absolute binding free energy}
$\Delta G^0_{binding}$, the absolute binding free energy (ABFE) of xenon in the NMDAR, was calculated through a thermodynamic cycle (see Fig. \ref{Fig:dgcycle}), that is, instead of obtaining it directly, the calculation was performed in several steps, so that \cite{Shankar:86,Smith:96}:
\begin{multline}
\Delta G^0_{binding} = \Delta G^{solv}_{annihil} + \Delta G^{solv}_{restr} + \Delta G^0_{transf} \\
+ \Delta G^{prot}_{creation} + \Delta G^{prot}_{unrestr}
\notag
\end{multline}
where $\Delta G^{solv}_{annihil}$ is the free energy of transfer of the xenon atom from the bulk to the gas, that is, the opposite of its solvation free energy, $\Delta G^{solv}_{restr}$, the free energy cost for adding a harmonic restraint, $\Delta G^0_{transf}$, the free energy of transfer of the restrained, annihilated, xenon atom from the bulk to the protein, where it is linked to a given protein atom, $\Delta G^{prot}_{creation}$ being the free energy of transfer of the restrained xenon atom from the gas to the protein, while $\Delta G^{prot}_{unrestr}$ is the free energy cost for removing the harmonic restraint.
$\Delta G^{solv}_{annihil}$ can be obtained from experimental data, namely: -5.4 kJ$\cdot$mole$^{-1}$ \cite{Marcus:84,Nau:18}.
On the other hand:
\[
\Delta G^0_{transf} = \textnormal{0}
\]
while \cite{Smith:96b,McCammon:97,Ren:17}:
\[
\Delta G^{solv}_{restr} = -RT \ln \left[ C_0 \left( \frac{2 \pi RT}{k_{restr}} \right)^{\frac{3}{2}} \right]
\]
where $k_{restr}$ is the force constant of the harmonic restraint, $T$, the temperature, $R$, the molar gas constant, $C_0$ being the standard state concentration (1 mol$\cdot$L$^{-1}$).
Herein, $k_{restr}=$ 418 kJ$\cdot$mole$^{-1}\cdot$nm$^{-2}$ and $T$ = 300 K, so:
$\Delta G^{solv}_{restr} = \textnormal{7.8 kJ}\cdot\textnormal{mole}^{-1}$.
The last two terms,
$\Delta G^{prot}_{creation}$ and $\Delta G^{prot}_{unrestr}$, were calculated, as well as their estimated statistical uncertainty, using the Bennett acceptance ratio method \cite{Bennett:76,Boresch:11,Mobley:15}, as implemented in the g\_bar tool of gromacs \cite{Gromacs}.
To do so, for each calculation, 21 MD simulations were performed as described above,
each with a given value of the xenon-protein interaction parameter, $\lambda = \{ 0.0, 0.05 \cdots 0.95, 1.0 \}$,
except that a harmonic restraint was added, namely, an intermolecular bond of type 6, between the xenon and a protein heavy atom, ten ns of sampling being performed after the first two ns of equilibration, the last five ns being used for the actual free energy calculation.
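The Bennett acceptance ratio estimator itself can be sketched in a few lines (a self-contained illustration, not the gromacs g\_bar implementation): with equal numbers of forward and reverse work samples, in units of $RT$, $\Delta F$ solves $\sum_i f(W_{F,i} - \Delta F) = \sum_j f(W_{R,j} + \Delta F)$, with $f$ the Fermi function. Here it is solved by bisection and checked on synthetic Gaussian work distributions that satisfy the Crooks relation:

```python
import math
import random

def fermi(x):
    return 1.0 / (1.0 + math.exp(x))

def bar(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-6):
    """Bennett acceptance ratio free energy difference (units of RT),
    for equal numbers of forward and reverse work samples."""
    def imbalance(df):
        # Monotonically increasing in df, so bisection applies.
        return (sum(fermi(w - df) for w in w_forward)
                - sum(fermi(w + df) for w in w_reverse))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic check: Gaussian work distributions obeying the Crooks relation
# have mean dF + sigma^2/2 (forward) and -dF + sigma^2/2 (reverse).
rng = random.Random(0)
true_df, sigma, n = 2.0, 1.0, 20000
w_f = [rng.gauss(true_df + 0.5 * sigma**2, sigma) for _ in range(n)]
w_r = [rng.gauss(-true_df + 0.5 * sigma**2, sigma) for _ in range(n)]
print(bar(w_f, w_r))  # close to 2.0
```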
In order to avoid singularities when the xenon atom is uncoupled ($\lambda = 0$), a soft-core potential was used \cite{Vangunsteren:94}, with a default radius of 0.3 nm, the power in the soft-core potential being set to 1.0.
In practice, the $\lambda$ interaction parameter was controlled through the vdw-lambdas and bonded-lambdas keywords.
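The $\lambda$ schedule as written spans 0 to 1 in steps of 0.05, that is, 21 states yielding 20 adjacent intervals for the BAR analysis. A fragment such as the following could generate the corresponding mdp lines (keyword names as mentioned above; the rest of the input file is omitted):

```python
lambdas = [i * 0.05 for i in range(21)]  # 0.0, 0.05, ..., 1.0
schedule = " ".join(f"{lam:.2f}" for lam in lambdas)

# Both the van der Waals coupling and the restraint are driven by lambda:
print(f"vdw-lambdas    = {schedule}")
print(f"bonded-lambdas = {schedule}")
```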
\section*{Results}
\subsection*{MD docking}
\begin{figure}[t!]
\includegraphics[width=8.0 cm]{fig_xe_site1a3_dist_of_t.pdf}
\caption[]{
Distance between a xenon atom and an atom of the glycine binding domain of the NMDA receptor, as a function of time.
a) and b) Distance to the carbonyl oxygen of Ala 515 (site I);
c) and d) Distance to the $\gamma$-oxygen of Ser 779 (site II);
e) and f) Distance to the sulfur atom of Met 512 (site III);
a) Simulation A; b) and f) Simulation D; c) Simulation C; d) Simulation E; e) Simulation B.
The arrows indicate the time frames chosen for the calculations of the absolute binding free energy of xenon.
}
\label{Fig:dxeoft}
\end{figure}
For each MD simulation, distances between each xenon atom and all protein heavy atoms were monitored. As shown in Figure \ref{Fig:dxeoft}, six xenon atoms were found close (less than 6 {\AA} away) to a given protein atom for more than 100 ns, and up to the end of the MD simulation.
Interestingly, this corresponds to only three different binding sites (coined I to III), depicted in Figure \ref{Fig:sitesxe}, each of them being reached twice, two of them (sites I and III) being reached by two different xenon atoms during the same MD simulation (simulation D; Fig. \ref{Fig:dxeoft}b and \ref{Fig:dxeoft}f).
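The criterion used above, a xenon atom staying within 6 {\AA} of a given protein atom for more than 100 ns through the end of the trajectory, can be expressed as a simple scan over a distance time series (a sketch with illustrative names and frame spacing):

```python
def final_bound_stretch(times_ns, dists_nm, cutoff_nm=0.6):
    """Length (ns) of the stretch during which the distance stays below
    `cutoff_nm` from some frame until the end of the trajectory."""
    for t, d in zip(reversed(times_ns), reversed(dists_nm)):
        if d >= cutoff_nm:
            return times_ns[-1] - t
    return times_ns[-1] - times_ns[0]  # bound for the whole trajectory

def is_stable_binding(times_ns, dists_nm, cutoff_nm=0.6, min_ns=100.0):
    return final_bound_stretch(times_ns, dists_nm, cutoff_nm) >= min_ns

# Toy trace: unbound (~2 nm) for 250 ns, then bound (~0.4 nm) until 500 ns
times = [0.5 * i for i in range(1001)]
dists = [2.0 if t < 250 else 0.4 for t in times]
print(is_stable_binding(times, dists))  # True
```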
\begin{figure}[t!]
\includegraphics[width=7.7 cm]{fig_md_3sitexe.pdf}
\caption[]{
The three sites of xenon in the glycine binding domain of the NMDA receptor, as observed during MD simulations.
Top: site I (left: at t=392.5 ns of simulation A; right: at t=266.5 ns of simulation D);
middle: site II (left: at t=492.5 ns of simulation C; right: at t=313.0 ns of simulation E);
bottom: site III (left: at t=461.5 ns of simulation B; right: at t=430.0 ns of simulation D). The xenon atom is depicted as a sphere. Drawn with Chimera \cite{Chimera}.
}
\label{Fig:sitesxe}
\end{figure}
Note that the sidechain of F758 is involved in site I.
Note also that the three xenon binding sites are next to each other.
Indeed, the sidechain of L462 is involved in both sites I and II,
while L466 is shared by sites II and III (see Fig. \ref{Fig:sitesxe}).
On average, each xenon atom finds its binding site in $t_{fpt} \approx$ 200 ns (see Fig. \ref{Fig:dxeoft}), meaning that $k_{on}$, the association rate constant, is of the order of \cite{Shaw:11c,Shaw:11d}:
\[
k_{on} \approx \frac{1}{ t_{fpt} \textnormal{[Xe]}} = 2 \times 10^{8} \textnormal{ s$^{-1} \cdot$ M$^{-1}$}
\]
where $t_{fpt}$ is the mean first passage time, [Xe] being the xenon concentration during the MD simulations (25 mmol$\cdot$L$^{-1}$).
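The order-of-magnitude estimate above follows directly once the units are made explicit ($t_{fpt}$ in seconds, [Xe] in mol$\cdot$L$^{-1}$):

```python
t_fpt = 200e-9   # mean first passage time: ~200 ns, in s
xe_conc = 0.025  # xenon concentration in the box: 25 mmol/L, in mol/L

k_on = 1.0 / (t_fpt * xe_conc)
print(f"{k_on:.1e} s^-1 M^-1")  # prints 2.0e+08 s^-1 M^-1
```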
\begin{table*}[t!]
\addtolength{\leftskip} {5mm}
\caption{Absolute binding free energy of xenon (kJ$\cdot$mole$^{-1}$) in the three sites of the NMDA receptor found through MD docking. For each MD simulation, the time frame with the most representative xenon site configuration was chosen. During each thermodynamic cycle, the distance between the xenon and the closest protein heavy atom was restrained.}
\label{Table:dG}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Site & MD & Time (ns) & Restraint & Distance (nm) & $\Delta G^{prot}_{creation}$ & $\Delta G^{prot}_{unrestr}$ & $\Delta G^0_{binding}$ \\
\hline
\multirow{2}{*}{I} & A & 392.5 & \multirow{2}{*}{A515:O} & \multirow{2}{*}{0.38} & -5.5 $\pm$ 0.3 & -0.64 $\pm$ 0.04 & -2.6 $\pm$ 0.4 \\
& D & 266.5 & & & -11.0 $\pm$ 0.5 & -0.36 $\pm$ 0.02 & -7.9 $\pm$ 0.5 \\
\hline
\multirow{2}{*}{II} & C & 492.5 & \multirow{2}{*}{S779:O$_\gamma$} & \multirow{2}{*}{0.38} & -17.0 $\pm$ 1.3 & -0.17 $\pm$ 0.01 & -13.7 $\pm$ 1.3 \\
& E & 313.0 & & & -11.3 $\pm$ 0.6 & -0.16 $\pm$ 0.00 & -8.0 $\pm$ 0.6 \\
\hline
\multirow{2}{*}{III} & B & 461.5 & \multirow{2}{*}{M512:S$_\delta$} & \multirow{2}{*}{0.41} & -15.5 $\pm$ 0.8 & -0.30 $\pm$ 0.01 & -12.3 $\pm$ 0.8 \\
& D & 430.0 & & & -11.3 $\pm$ 0.5 & -0.30 $\pm$ 0.04 & -8.1 $\pm$ 0.5 \\
\hline
\end{tabular}
\end{table*}
\subsection*{ABFE calculations}
In order to assess the quality of the ABFE calculations analyzed hereafter, that is, the quality of both the force field used and the protocol considered (see Methods), $\Delta G^{solv}_{annihil}$, the free energy of transfer of the xenon atom from the bulk to the gas, was determined.
The result obtained, namely:
\[
\Delta G^{solv}_{annihil} = \textnormal{-4.3} \pm \textnormal{0.2 kJ}\cdot\textnormal{mole}^{-1}
\]
happens to be 20\% smaller in magnitude than the experimental value, namely, -5.4 kJ$\cdot$mole$^{-1}$ \cite{Marcus:84,Nau:18}. Such a small, though significant, difference could be due to the choice of the water model, namely, TIP3P (see Methods). Indeed, results in fair agreement with experimental data have been obtained with the same xenon interaction parameters and either the SPC/E \cite{Masters:15} or the TIP4P/2005 models \cite{Vrabec:19}.
However, the choice of the xenon parameters themselves, or of the mixing rules assumed for the description of xenon-water interactions \cite{Masters:15,Toennies:86}, may also play a role in this discrepancy.
For the ABFE calculations in the three putative xenon binding sites
obtained by MD docking, six representative NMDAR conformers were selected as follows.
First, for each MD simulation and each binding site,
the protein heavy atom closest to the xenon, on average, was identified, the
distance between both atoms being restrained during the ABFE calculation, so as to enforce the average value found during the corresponding MD simulation (see Table \ref{Table:dG}). Then, the three protein heavy atoms from other residues with the shortest distance fluctuations with respect to the xenon atom were identified. Next, the MD time frame for which the distances between the xenon and these four protein atoms are the closest to their average values was picked (indicated by arrows in Fig. \ref{Fig:dxeoft}).
As shown in Table \ref{Table:dG}, starting from these six NMDAR conformers,
the range of $\Delta G^0_{binding}$ values obtained is rather large, namely, between -2.6 $\pm$ 0.4 (site I, simulation A) and -13.7 $\pm$ 1.3 kJ$\cdot$mole$^{-1}$ (site II, simulation C). However, values around -8 kJ$\cdot$mole$^{-1}$ were obtained for each of the three binding sites (simulations D and E), suggesting that this significant variability is due to local conformational changes, probably in the vicinity of the xenon atoms. Indeed, as shown in Figure \ref{Fig:sitesxe}, several sidechains in the xenon binding sites happen to be in different rotameric states during the pair of ABFE calculations performed for a given site, notably F758 (top of Fig. \ref{Fig:sitesxe}).
Note that such a dependence of the result of ABFE calculations upon the rotameric state of a sidechain in a binding site has already been observed \cite{Gapsys:21}. Note also that this means that larger $\Delta G^0_{binding}$ values could probably be obtained, if other NMDAR conformers were considered.
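As a consistency check (ours, not part of the original analysis), the $\Delta G^0_{binding}$ values of Table \ref{Table:dG} can be recovered by summing the terms of the thermodynamic cycle, using the simulated $\Delta G^{solv}_{annihil}$ = -4.3 kJ$\cdot$mole$^{-1}$, $\Delta G^{solv}_{restr}$ = +7.8 kJ$\cdot$mole$^{-1}$ and $\Delta G^0_{transf}$ = 0:

```python
DG_ANNIHIL = -4.3  # kJ/mol, simulated annihilation term (see above)
DG_RESTR = 7.8     # kJ/mol, analytical restraint term

# (site, MD): (dG_prot_creation, dG_prot_unrestr, reported dG0_binding)
table = {
    ("I",   "A"): (-5.5,  -0.64,  -2.6),
    ("I",   "D"): (-11.0, -0.36,  -7.9),
    ("II",  "C"): (-17.0, -0.17, -13.7),
    ("II",  "E"): (-11.3, -0.16,  -8.0),
    ("III", "B"): (-15.5, -0.30, -12.3),
    ("III", "D"): (-11.3, -0.30,  -8.1),
}

for (site, md), (creation, unrestr, reported) in table.items():
    dg = DG_ANNIHIL + DG_RESTR + creation + unrestr  # dG0_transf = 0
    # Each sum matches the reported value to within rounding
    assert abs(dg - reported) < 0.06, (site, md, dg)
```

All six table entries close the cycle to within 0.05 kJ$\cdot$mole$^{-1}$, i.e. to rounding precision.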
\subsection*{Conservation of the binding sites}
\begin{table}[t!]
\caption{Conservation of the xenon binding sites in the glycine binding domain of the NMDA receptor.}
\label{Table:cons}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{Residue$^a$} & PDB & \multirow{2}{*}{Site} & Conservation \\
& identifier$^b$ & & (\%) \\
\hline
Cys 459 & 67 & I & 100.0 \\
Leu 462 & 70 & I+II & 100.0 \\
Leu 466 & 74 & II+III & 100.0 \\
Phe 472 & 80 & III & 100.0 \\
Met 512 & 120 & III & 99.7 \\
Ala 515 & 123 & I & 99.7 \\
Phe 758 & 250 & I+Gly & 99.9 \\
Ile 760 & 252 & III & 99.9 \\
Ile 776 & 268 & II & 99.9 \\
Ser 779 & 271 & II & 97.2 \\
\hline
\end{tabular}
$^a$Uniprot rat (or human) sequence.\\
$^b$PDB 4KCC.\\
\vskip -0.5cm
\end{table}
The only presently available crystal structure of the apo form of the NMDAR glycine binding domain has been obtained with the rat sequence \cite{4kcc}.
Note that, for this domain, the rat and human sequences are identical. More generally, as shown in Table \ref{Table:cons}, all residues shown in Figure \ref{Fig:sitesxe} are highly conserved,
suggesting that the three xenon binding sites could prove to be involved in the function of the NMDAR.
\section*{Conclusion}
As a result of five independent MD docking simulations,
three different binding sites were found for xenon in the
glycine binding domain of the NMDAR.
Each site was found twice, that is, during
two different simulations, two sites being found
during a single one (see Fig. \ref{Fig:dxeoft}).
All three xenon binding sites are next to each other,
each of them having at least a residue in common with
another one (see Fig. \ref{Fig:sitesxe}).
Interestingly, site I is next to the glycine binding site,
the sidechain of F758 being involved in both binding sites.
This could explain why the F758W and F758Y mutations can prevent
competitive inhibition by xenon
without affecting glycine binding \cite{Dickinson:12}.
For these sites, values of the absolute binding free energy of xenon
up to -14 kJ$\cdot$mole$^{-1}$ were obtained (see Table \ref{Table:dG}).
However, since, for a given site, results were found to vary
significantly when the ABFE calculations
were performed for different protein conformers, larger values could probably be obtained if other conformers of the glycine binding domain of the NMDAR were considered.
On the other hand, since xenon could also compete
with glycine allosterically \cite{Jane:12,Paoletti:13},
the complete structure of the NMDAR also needs to be considered.
Work is in progress along these lines.
Grade 6-8–The Crusaders–often icons of valor and derring-do in Robin Hood legends–were harbingers of terror and imminent death for medieval Jews. Elvina, granddaughter of the revered 11th-century French rabbi and commentator Rashi, takes center stage in this novel in which readers experience the period through Jewish eyes. The otherwise fascinating plot proceeds somewhat erratically in chapters that alternate, but not consistently, between the 12-year-old's first-person communications with her imaginary guardian angel, Mazal, and a third-person narrative. While her mother is helping a friend through childbirth, Elvina becomes the woman of the house, frustratingly trying to balance her love of reading and writing and long, philosophical talks with her beloved grandfather with female responsibilities that she really doesn't want. When news arrives that the illiterate Crusaders are heading through Troyes en route to Jerusalem to destroy the infidels (Muslims), the Jews know that they are in equal danger. While their leaders try to appease the marauders, Elvina stumbles on a young deserter who is hiding because he wants to study with priests rather than fight. Through a series of believable coincidences, the act of helping him becomes the catalyst for saving the community. Though occasionally uneven writing (or translating) keeps readers from being riveted, descriptions of traditions are well done and informative.
Part of the excitement of our homify 360° segments is to see the stunning locations that some of the homes are built in. Yes it's true, a lot of these structures get to enjoy truly breathtaking views, whether it's on a beach, on a cliff overlooking a cityscape, or in a lush field with a dense forest in the background.
Today's discovery certainly has one of those spectacular views that re-affirm the popular saying "location, location, location!", as it presents not only a picturesque landscape with fresh greens, but also rolling hills for a backdrop.
Regarding the house, it flaunts a minimalist-meets-modern style which dabbles quite deliciously in the neutral colour palette, resulting in a look that is clean, subtle and oh-so stylish!
We won't use the word "plain" to describe the house's façade, but rather "subtle". The subtle look of the house keeps it simple and modern, with select polished surfaces to ensure it still catches the eye.
Flaunting a double-storey build, the house has various heights and entrances, giving it a dynamic design which certainly captures interest. And there's no overlooking the lawn paved with pathways, which provides a welcoming entrance to the home.
A picture-perfect backdrop is framed beautifully in this image, located at the rear side of the house. Here we get to admire a gazebo, a cool blue swimming pool, a tranquil fountain, lush gardens, and, of course, the rolling hills in the background.
Right next to the gazebo, we locate another free-standing structure: a guest unit, ensuring that visitors get their fair share of privacy and space (not to mention fascinating views).
This double-storey structure is treated to curved roof shingles in a terracotta tone, wrought-iron designs for the railings, majestic glazing for the windows and doors and its own private yard.
A simple cascading water feature fantastically enhances the serenity and calmness of this outdoor space while also adding an element of splashing fun.
How many people are fortunate enough to enjoy such a view on a daily basis?
Of course expert care went into the garden as well, ensuring that the pool is not the only attraction here in the back yard.
Stepping stones lead us upwards on the sloping landscape towards the rear entrance of the house. And the perfect amount of flowers, trees and shrubs have been included, ensuring that they enhance and not conquer the delightful view.
But can the fabulous style of the outdoors be matched on the inside? Well, this image seems to say "yes", as an elegant staircase with wrought-iron railings and gentle curves ensure a strong sense of sophistication for the interiors.
In addition, a modern water feature with spherical stone sculptures brings fluidity to this living area, while a focal wall composed of stone cladding enhances the rustic cosiness of the space.
A kitchen decked out in monochrome hues and contemporary style gets to bathe in a delicious amount of natural light streaming in through the glass doors and windows. This beautifully enhances the polished tiled floors and countertops, not to mention the stainless steel surfaces adorning the appliances.
Beautiful, definitely, but is it practical? Of course: look at all the cabinets offering storage space, all the countertops with ample working areas, and the available floor space to insert a striking dining-room set.
Get the kitchen of your dreams by checking out our range of kitchen planners here on homify.
No surface in here gets a dull look, not even the ceilings. At the top of the staircase, we are greeted by a starry false ceiling with LED lights. The low ceiling gives the first floor a cosy feel and makes the spaciousness of the house feel less bare.
A truly remarkable approach to design and space, considering that not one bright colour was spotted anywhere in the entire house!
Speaking of which, let's see How colours influence your bedroom.
We love it, but can't speak for everyone. Tell us what you think of this house!
By Kirsty Bell
Do three recent German museum shows suggest a haptic antidote to our screen-based lives?
Gunta Stölzl, 5 Chöre, 1928.
In the second half of 2013 alone there have been at least six exhibitions relating to textiles. Why now, and when did this trend begin? Renowned textile collector Seth Siegelaub, who died unexpectedly last year, has had a marked influence, founding the Centre for Social Research on Old Textiles (CSROT) in 1986. Stuff Matters, an exhibition of historic textiles from his collection in early 2012 at London's Raven Row, was a front-runner of the current trend. At the same time, there has been an upsurge of interest in the use of fabric in contemporary art making. Does this, like the similar recent turn to ceramics, suggest a haptic antidote to our daily lives with their constant digital-feed and screen fixation? Or is it rather the continuation of an interest in the overlapping areas of fabric production and abstraction that has its roots in the early 20th century?
Chiharu Shiota, In Silence, 2009, Black wood, burned piano and chairs, Installationview Kunstmuseum Wolfsburg
Three recent exhibitions in German museums consider this overlap both historically and in relation to contemporary work. Though each takes a slightly different approach to the theme, there are many crossovers: in argument, as well as in the artists and even works presented. For Rike Frank and Grant Watson, curators of Textiles: Open Letter. Abstractions. Textiles. Art at the Museum Abteiberg in Mönchengladbach, textiles are situated 'between applied and "free" artistic practice, between craft and art' and are 'associated with female work, industrial labour, commodity, and trade.' It is this agile state of contingency that makes the study of textiles so broad and so tricky to categorize. Frank and Watson's exhibition grew out of a research project, borrowing its title from an essay by Bauhaus weaver Anni Albers. Choosing to focus solely on the woven structure, they constructed a cohesively argued exhibition that was surprisingly non-medium specific, with books (from Siegelaub's collection), photography, video and drawing providing relief from the haptic assault of stoff, while articulating the essential relationships between these different media. Still, hapticity was the point, conveyed as much by Beryl Korot's 1977 three-channel video installation of Jacquard looms in action, as by Albers' actual weaving from 1927, Lenore Tawney's 1962 sculptural hangings, or floor-based woven nets from 2011 by Leonor Antunes. The show itself was just one facet of an ongoing discussion, however – accompanied by a series of talks in Mönchengladbach and a symposium in Bilbao – and will travel to the Generali Foundation in September 2014, to be concluded with a comprehensive publication.
Sophie Taeuber-Arp, Komposition Aubette , c.1928, Embroidery, 76 × 54 cm, Kunstmuseum Wolfsburg
A weighty catalogue was published concurrent with the opening of the Kunstmuseum Wolfsburg's Art & Textiles. Fabric as Material and Concept in Modern Art from Klimt to the Present last October, but could not compete with the material richness of the exhibition itself: an ambitious, essayistic show packed with fascinating works that offered a sweeping historical perspective on the relation between textiles, art and daily life and the breadth of influence that textiles have had: early textile samples by William Morris; small, sumptuous paintings of decorated interiors by Éduard Vuillard; the recreation of the 1927 Café Samt & Seide by Ludwig Mies van der Rohe and Lilly Reich; a massive hanging sisal sculpture by Magdalena Abakanowicz and Gerhard Richter's digitally woven recreation of an abstract painting, were interspersed with historic textile samples from ancient Egypt, ninth century China, 15th century Peru and the Congo in the 1800s. Women were well represented throughout, fortunately not just in the section dedicated to 'feminist art' (under the cringeworthy title 'Spiderwomen'), while alignments were proposed between interests in canvas as raw material that emerged in the 1960s with Lucio Fontana or Blinky Palermo, and present day engagements by artists like Sergej Jensen. The erudite enthusiasm of curator Markus Brüderlin was infectious until the final section, which, through an overblown installation by video artist Peter Kogler, attempted to portray the 'world wide web' as 'a kind of weaving loom of the internet age.' The parallels that are doubtless there to be drawn would be more convincingly conveyed by artists on the cusp of this burgeoning Internet generation – like Nick Relph for example whose recent show at the Chisenhale Gallery presented a room of hand-woven panels as potential synonyms for the digital screens that dominate our lives.
Sheela Gowda, Behold (Schaue) , 2009, Mixed media, Installationview Museum Abteiberg
Rosemarie Trockel, Alighiero e Boetti, Palermo, Polke, Fontana and Jensen – all of whom were included in the Wolfsburg show – cropped up again in To Open Eyes at the Kunsthalle Bielefeld, the exhibition title this time borrowed from Josef Albers. With the exception of Trockel, whose works were tucked away in a neighbouring annex, these were shown in a central first floor exhibition space in an all-male line-up strangely at odds with the argument made on the floor above about the marginalization of work made by female artists in the early and mid 20th century. On the second floor, a visually stunning multi-coloured Jacquard weaving by Bauhaus weaver Gunta Stölzl alongside a rarely shown Albers weaving were juxtaposed with a wonderful group of small-scale weavings by American artist Sheila Hicks from the 1960s to now. The inter-generational leap from works made in the 1920s to those in the '60s went unremarked, however – symptomatic of a show which brought together a wide span of material with little further explanation. The generative contingency that contributes to the richness of textiles as subject matter necessitates the establishment and delivery of an argument in an exhibition of this kind; there is too much political history, gender politics, and social context at stake to skim over such connections. As Siegelaub put it, there is an intimate relation between textiles and society, and this marks it out as a medium of particular fascination and endurance.
Kirsty Bell is a freelance writer and contributing editor of frieze, based in Berlin, Germany.
First published in Issue 13
Mar – Apr 2014
'Conflicts and Adaptations. Estonian Art of the Soviet Era (1940–1991)'
Art Museum of Estonia
Dea Trier Mørch
Louisiana Museum of Modern Art
'Touch: Saastamoinen Foundation Art Collection'
'Bryk & Wirkkala Visible Storage'
'I'm a Believer. Pop Art and Contemporary Art'
Lenbachhaus München
Lizzie Fitch and Ryan Trecartin
Fondazione Prada
Museum Abteiberg
Chen Ching-Yuan
mor charpentier
'Who Are You? Two centuries of portraits'
Neue Galerie Graz, Universalmuseum Joanneum
Rebecca Warren
Maureen Paley
\section{Introduction}
\hspace*{0.5cm}
The incidence of colorectal cancer (CRC) has been ranked third in the world for many years.
Therefore, how to prevent CRC is an important global issue. Studies have pointed out that 95\% of CRC cases develop from colorectal adenomatous polyps.
The resection of colorectal adenomatous polyps can greatly reduce the incidence of CRC.
Therefore, it is very important to have a colonoscopy on a regular basis, as well as early intervention and treatment.
\vspace{4mm}
At present, the best way to prevent CRC is to undergo regular colonoscopies and have any detected polyps resected.
With the emergence and popularization of painless colonoscopy, people's acceptance of the examination is increasing.
However, the detection of polyps was performed manually by endoscopists in the past, which is a time-consuming task that greatly depends on the doctor's experience and ability.
Early segmentation methods\cite{auto1,auto2,auto3} are based on extracting features such as color, patterns, etc., and then using a classifier to distinguish polyps from their surroundings.
However, these methods still have a high rate of missed detections. The position, size, and color of each polyp differ, so it is very difficult to segment polyps automatically and accurately.
\vspace{4mm}
In recent years, CNNs have grown rapidly, with breakthroughs in various imaging tasks. Polyp segmentation has also benefited\cite{fully1,fully2}.
For this task, FCN\cite{FCN,fully1,fully2}, U-Net\cite{y-net,u-net}, U-Net++\cite{unet1,unet2}, DoubleU-Net\cite{jha2020doubleu} and the ResUNet series\cite{jha2019resunet++, yang2019road, jha2020real}, etc., have achieved good results compared to the early methods.
Most polyp regions can be segmented well, but there are still many problems, such as imprecise boundary segmentation, missed small polyps, and fragmented predictions over large regions.
Moreover, the inference time of these networks is usually long, and training is relatively time-consuming.
\vspace{4mm}
We propose HarDNet-MSEG based on the backbone of HarDNet68\cite{chao2019hardnet}.
With a simple encoder-decoder\cite{badrinarayanan2017segnet} architecture, it achieves excellent accuracy and efficient inference time for related benchmarks such as Kvasir-SEG, CVC-ColonDB, etc.
\section{Related work}
\hspace*{0.5cm}Since the emergence of LeNet\cite{lecun2015lenet} in 1998, CNN has grown rapidly and has been used in different computer vision fields. Among them, the task of image segmentation is widely used in medical imaging.
\vspace{4mm}
In 2015, Long et al. first introduced fully convolutional networks (FCN)\cite{FCN} for the task of image segmentation.
An end-to-end trained convolutional neural network is used to classify each pixel in an image.
Since then, the convolutional neural network has flourished in the field of image segmentation.
In the same year, U-Net\cite{u-net} introduced at MICCAI has been widely used in the field of medical imaging.
Through a fairly symmetrical U-shaped encoder-decoder architecture, combined with skip connections at different scales to integrate deep and shallow features, it has now become a baseline network architecture for most medical imaging semantic segmentation.
U-Net++\cite{unet1,unet2} then expanded the original U-shaped architecture with more skip connections and convolutions to achieve deep layer aggregation\cite{yu2018deep}, solving the problem that edge information and small objects are easily lost during the down-sampling and up-sampling of a deep network.
\vspace{4mm}
In recent years, the use of a better CNN backbone, or the introduction of additional modules like spatial pyramid pooling\cite{he2015spatial}, attention modules\cite{chen2016attention,fu2019dual}, etc., have achieved very good results in medical imaging semantic segmentation.
Examples of the former include ResUNet\cite{yang2019road,jha2020real}, ResUNet++\cite{jha2019resunet++}, and DoubleU-Net\cite{jha2020doubleu}.
By integrating a better CNN backbone with a U-shaped structure, the entire network gains stronger recognition capability, a larger receptive field, and multi-scale information integration.
The second approach is to insert additional modules: for example, DoubleU-Net\cite{jha2020doubleu} uses ASPP\cite{chen2018encoder} between the encoder and the decoder, which helps handle different object scales and improves accuracy, while PraNet\cite{pranet} adds an RFB\cite{liu2018receptive} module to the skip connections to capture more visual information from features at different scales.
In recent years, attention has also been widely used in computer vision, especially for semantic segmentation, which requires detailed edge information at the pixel level. Examples include PraNet\cite{pranet}, PolypSeg\cite{zhong2020polypseg} and ABC-Net\cite{fang2020abc}.
After adding different context modules, they all obtain good results in medical image segmentation.
Context modules such as the Spatial Attention Module\cite{chen2017sca} and Channel Attention Module\cite{chen2017sca} reduce inference speed, but on the other hand they are very effective at improving accuracy and making boundary segmentation more precise.
\vspace{4mm}
The HarDNet-MSEG we proposed uses HarDNet\cite{chao2019hardnet} as the backbone and is designed with an encoder-decoder architecture.
It achieves state-of-the-art accuracy on CVC-ColonDB, EndoScene, ETIS-Larib Polyp DB, CVC-ClinicDB, and Kvasir-SEG, and at the same time has an efficient inference speed.
In addition, we have also tried to add additional modules such as RFB, ASPP, Attention, etc. to our network architecture to further improve the accuracy.
\newpage
\begin{figure}[htb]
\centering
\includegraphics[scale=0.35]{mseg.png}
\caption{
HarDNet-MSEG overview.
The encoder part consists of HarDNet68,
and the decoder part is using partial decoder.}
\label{fig:HarDNet - MED}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{blk}
\caption{HarDNet Block overview.}
\label{fig:hardblk}
\end{figure}
\section{HarDNet-MSEG}
\vspace{4mm}
\hspace*{0.5cm}Figure 2 depicts the architecture of our proposed HarDNet-MSEG. It consists of an encoder backbone and a decoder.
\subsection{Backbone : HarDNet}
\vspace{4mm}
\hspace*{0.5cm}HarDNet\cite{chao2019hardnet} improves on the original dense block of DenseNet\cite{huang2017densely}, as illustrated in Figure 3. Considering the impact of memory traffic on model design, it reduces shortcuts to increase inference speed, and at the same time widens the channels of the key layers to make up for the loss of accuracy.
It also uses a small number of 1x1 convolutions to increase computational density.
\vspace{4mm}
Through this design, it not only achieves a 30\% inference time reduction compared with DenseNet\cite{huang2017densely} and ResNet\cite{he2016deep}, but also reaches higher accuracy on ImageNet\cite{deng2009imagenet}.
On the other hand, FC-HarDNet70\cite{chao2019hardnet} also reaches the state of the art in image segmentation on the Cityscapes dataset\cite{cordts2016cityscapes}. Therefore, we use HarDNet68 as the model backbone for colorectal polyp semantic segmentation.
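The harmonic connection pattern that replaces DenseNet's dense shortcuts can be sketched as follows. This is an illustrative sketch of the rule described in the HarDNet paper (the function name is ours): layer k takes input from layer k - 2^n whenever 2^n divides k, with layer 0 being the block input.

```python
def hard_block_links(layer):
    """Indices of the earlier layers feeding layer `layer` in a harmonic
    dense block: layer k takes input from layer k - 2**n whenever 2**n
    divides k (layer 0 is the block input)."""
    links = []
    n = 0
    while 2 ** n <= layer:
        if layer % (2 ** n) == 0:
            links.append(layer - 2 ** n)
        n += 1
    return links

for k in range(1, 7):
    print(k, hard_block_links(k))
```

Each layer keeps only a logarithmic number of shortcuts instead of DenseNet's linear growth, which is what reduces memory traffic.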
\subsection{Cascaded Partial Decoder}
\vspace{4mm}
\hspace*{0.5cm}Many well-known medical image segmentation networks are modified versions of U-Net, and our design initially went in this direction as well.
However, to balance inference time and performance, we do not use HarDBlock (HarDBlk) in the decoder, which differs from FC-HarDNet.
\vspace{4mm}
We follow the cascaded partial decoder\cite{wu2019cascaded}.
That work found that shallow features have high resolution and occupy computing resources, while deep features can also represent the spatial details of the shallow features relatively well.
So we discard the shallow features and spend more computation on the deeper features.
At the same time, feature maps at different scales are aggregated by adding appropriate convolutions and skip connections.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.53]{rfb}
\caption{RFB Module overview.}
\label{fig:rfb}
\end{figure}
\subsubsection{RFB Module}
\vspace{4mm}
\hspace*{0.5cm}Figure 4 shows a Receptive Field Block\cite{liu2018receptive}, which strengthens the deep features learned by a lightweight CNN backbone.
Using multiple branches of convolutions with different kernel sizes and dilated convolution layers, it generates features with different receptive fields; afterwards, a 1x1 convolution merges these features into the final representation.
\vspace{4mm}
We add this module to the skip connections, following \cite{wu2019cascaded}, so that we can enlarge the receptive fields of the feature maps at each resolution.
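The effect of dilation on receptive field size can be sketched numerically. The branch configurations below are illustrative, not the exact RFB design: a k x k convolution with dilation d covers an extent of k + (k-1)(d-1), and stacking stride-1 convolutions adds their extents minus one.

```python
def dilated_kernel_extent(k, d):
    """Effective spatial extent of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def branch_receptive_field(layers):
    """Receptive field of a stride-1 stack of (kernel, dilation) convs."""
    rf = 1
    for k, d in layers:
        rf += dilated_kernel_extent(k, d) - 1
    return rf

# Illustrative branches: pairing a plain conv with an increasingly
# dilated 3x3 conv rapidly grows the receptive field.
branches = [
    [(1, 1)],           # 1x1 branch
    [(3, 1)],           # plain 3x3
    [(3, 1), (3, 3)],   # 3x3 followed by 3x3 with dilation 3
    [(5, 1), (3, 5)],   # 5x5 followed by 3x3 with dilation 5
]
for b in branches:
    print(b, branch_receptive_field(b))
```

Merging such branches gives the block access to several receptive field sizes at once, at a fraction of the cost of one large dense kernel.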
\subsubsection{Dense Aggregation}
\vspace{4mm}
\hspace*{0.5cm}We perform aggregation by element-wise multiplication, as shown in Figure 5.
After being up-sampled to the same scale, a feature map is multiplied with the input feature map of the corresponding scale.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{agg}
\caption{Aggregation Module overview.}
\label{fig:AGG}
\end{figure}
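A minimal sketch of this aggregation step, in pure Python with nearest-neighbour up-sampling (the actual model uses interpolation and learned convolutions; the helper names are ours):

```python
def upsample_nearest(x, scale):
    """Nearest-neighbour up-sampling of a 2-D feature map (list of rows)."""
    out = []
    for row in x:
        wide = [v for v in row for _ in range(scale)]
        out.extend([wide] * scale)
    return out

def aggregate(deep, shallow):
    """Up-sample the deeper (lower-resolution) map to the shallower map's
    size, then combine the two by element-wise multiplication."""
    scale = len(shallow) // len(deep)
    up = upsample_nearest(deep, scale)
    return [[a * b for a, b in zip(r1, r2)] for r1, r2 in zip(up, shallow)]

# A 4x4 deep feature map aggregated with an 8x8 one:
deep = [[2.0] * 4 for _ in range(4)]
shallow = [[3.0] * 8 for _ in range(8)]
print(aggregate(deep, shallow)[0][0])  # 6.0
```

Multiplication (rather than addition) lets the deep, semantically strong map gate the high-resolution map, suppressing background activations.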
\newpage
\section{Experiments}
\hspace*{0.5cm}
We used the training data from \cite{pranet} and \cite{jha2020real} because both have excellent performance in polyp segmentation. The training data and training methods used in the two articles differ, so to reduce the variable factors, we follow the training method proposed in each article respectively, and then compare accuracy and inference speed with other models.
\begin{center}
\tabcolsep=3pt
\begin{table}[ht]
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& \textbf{mIoU} & \textbf{mDice} & \textbf{F2-score} & \textbf{Precision} & \textbf{Recall} & \textbf{Overall Acc.} & \textbf{FPS} \\ \hline
U-Net & 0.471 & 0.597 & 0.598 & 0.672 & 0.617 & 0.894 & 11 \\ \hline
ResUNet & 0.572 & 0.690 & 0.699 & 0.745 & 0.725 & 0.917 & 15 \\ \hline
ResUNet++ & 0.613 & 0.714 & 0.720 & 0.784 & 0.742 & 0.917 & 7 \\ \hline
FCN8 & 0.737 & 0.831 & 0.825 & 0.882 & 0.835 & 0.952 & 25 \\ \hline
HRNet & 0.759 & 0.845 & 0.847 & 0.878 & 0.859 & 0.952 & 12 \\ \hline
DoubleUNet & 0.733 & 0.813 & 0.820 & 0.861 & 0.840 & 0.949 & 7.5 \\ \hline
PSPNet & 0.744 & 0.841 & 0.831 & 0.890 & 0.836 & 0.953 & 17 \\ \hline
DeepLabv3+[ResNet50] & 0.776 & 0.857 & 0.855 & 0.891 & 0.8616 & 0.961 & 28 \\ \hline
DeepLabv3+[ResNet101] & 0.786 & 0.864 & 0.857 & 0.906 & 0.859 & 0.961 & 17 \\ \hline
U-Net[ResNet34] & 0.810 & 0.876 & 0.862 & {\color[HTML]{FE0000} \textbf{0.944}} & 0.860 & 0.968 & 35 \\ \hline
{\color[HTML]{FE0000} \textbf{HarDNet-MSEG}} & {\color[HTML]{FE0000} \textbf{0.848}} & {\color[HTML]{FE0000} \textbf{0.904}} & {\color[HTML]{FE0000} \textbf{0.915}} &
0.907 &
{\color[HTML]{FE0000} \textbf{0.923}} &
{\color[HTML]{FE0000} \textbf{0.969}} &
{\color[HTML]{FE0000} \textbf{86.7}} \\ \hline
\end{tabular}
}
\renewcommand{\arraystretch}{1.3}
\caption{Quantitative results on the Kvasir dataset (training/testing split: 880/120), showing the performance on different metrics and the inference speed evaluated on a GeForce RTX 2080 Ti GPU. The other evaluation scores are taken from \cite{jha2020real}.}
\end{table}
\end{center}
\subsection{Dataset }
\vspace{4mm}
\hspace*{0.5cm}We used the datasets proposed in the two papers mentioned earlier, namely Kvasir-SEG, CVC-ColonDB, EndoScene, ETIS-Larib Polyp DB, and CVC-ClinicDB, and we make a detailed comparison with other SOTA models on these datasets.
\subsection{Training setting and policy}
\vspace{4mm}
\hspace*{0.5cm}The two articles are based on different splits of the training data, so we ran a separate experiment for each training setting; the details of the experiments are explained below.
\vspace{4mm}
For \cite{jha2020real}, 880 images of Kvasir-SEG are used for training, and the other 120 images are used for testing. It uses augmentations such as random rotation, horizontal flip, and vertical flip. Our training input size is 512x512. We train our model with the SGD optimizer for 100 epochs with the learning rate set to 1e-2. The results compared to \cite{jha2020real} are in Table 1. HarDNet-MSEG shows the best accuracy on most metrics, and its inference speed is much faster than the others.
\vspace{4mm}
In \cite{pranet}, 1450 training images without any augmentation are used, including 900 images from Kvasir-SEG and 550 images from CVC-ClinicDB, and the testing set consists of the 5 datasets mentioned above. Our training input size is 312x312. We train our model with the Adam optimizer for 100 epochs with the learning rate set to 1e-4. The quantitative results for the 5 datasets are shown in Table 2 (Kvasir-SEG) and Table 3 (ETIS, CVC-ClinicDB, CVC-ColonDB and EndoScene). We achieve the best performance in mean Dice and mIoU on each dataset, with the fastest inference speed (88 FPS).
\begin{table}[htb]
\label{tab:my-table}
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
&
\textbf{mDice} &
\textbf{mIoU} &
\textbf{wfm} &
\textbf{Sm} &
\textbf{MAE} &
\textbf{maxEm} &
\textbf{FPS} \\ \hline
U-Net &
0.818 &
0.746 &
0.794 &
0.858 &
0.055 &
0.893 &
53 \\ \hline
U-Net++ &
0.821 &
0.743 &
0.808 &
0.862 &
0.048 &
0.910 &
25 \\ \hline
ResUNet-mod &
0.791 &
n/a &
n/a &
n/a &
n/a &
n/a &
n/a \\ \hline
ResUNet++ &
0.813 &
0.793 &
n/a &
n/a &
n/a &
n/a &
n/a \\ \hline
SFA &
0.723 &
0.611 &
0.67 &
0.782 &
0.075 &
0.849 &
40 \\ \hline
PraNet &
0.898 &
0.840 &
0.885 &
0.915 &
0.030 &
0.948 &
66 \\ \hline
{\color[HTML]{FE0000} \textbf{HarDNet-MSEG}} &
{\color[HTML]{FE0000} \textbf{0.912}} &
{\color[HTML]{FE0000} \textbf{0.857}} &
{\color[HTML]{FE0000} \textbf{0.903}} &
{\color[HTML]{FE0000} \textbf{0.923}} &
{\color[HTML]{FE0000} \textbf{0.025}} &
{\color[HTML]{FE0000} \textbf{0.958}} &
{\color[HTML]{FE0000} \textbf{88}} \\ \hline
\end{tabular}
}
\renewcommand{\arraystretch}{2}
\caption{Quantitative results on Kvasir, compared with the SOTA, using the same training script as the released code of PraNet. The inference speed is measured at 312x312 resolution on a GeForce RTX 2080 Ti GPU.}
\end{table}
\begin{table}[htb]
\label{tab:my-tabletab}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
&
\multicolumn{2}{c|}{\textbf{ClinicDB}} &
\multicolumn{2}{c|}{\textbf{ColonDB}} &
\multicolumn{2}{c|}{\textbf{ETIS}} &
\multicolumn{2}{c|}{\textbf{CVC-T}} \\ \cline{2-9}
\multirow{-2}{*}{} &
\textbf{mDice} &
\textbf{mIoU} &
\textbf{mDice} &
\textbf{mIoU} &
\textbf{mDice} &
\textbf{mIoU} &
\textbf{mDice} &
\textbf{mIoU} \\ \hline
U-Net &
0.823 &
0.755 &
0.512 &
0.444 &
0.398 &
0.335 &
0.71 &
0.627 \\ \hline
U-Net++ &
0.794 &
0.729 &
0.483 &
0.410 &
0.401 &
0.344 &
0.707 &
0.624 \\ \hline
ResUNet-mod &
0.779 &
n/a &
n/a &
n/a &
n/a &
n/a &
n/a &
n/a \\ \hline
ResUNet++ &
0.796 &
0.796 &
n/a &
n/a &
n/a &
n/a &
n/a &
n/a \\ \hline
SFA &
0.700 &
0.607 &
0.469 &
0.347 &
0.297 &
0.217 &
0.467 &
0.329 \\ \hline
PraNet &
0.899 &
0.849 &
0.709 &
0.640 &
0.628 &
0.567 &
0.871 &
0.797 \\ \hline
{\color[HTML]{FE0000} \textbf{HarDNet-MSEG}} &
{\color[HTML]{FE0000} \textbf{0.932}} &
{\color[HTML]{FE0000} \textbf{0.882}} &
{\color[HTML]{FE0000} \textbf{0.731}} &
{\color[HTML]{FE0000} \textbf{0.660}} &
{\color[HTML]{FE0000} \textbf{0.677}} &
{\color[HTML]{FE0000} \textbf{0.613}} &
{\color[HTML]{FE0000} \textbf{0.887}} &
{\color[HTML]{FE0000} \textbf{0.821}} \\ \hline
\end{tabular}
}
\renewcommand{\arraystretch}{2}
\caption{More results on CVC-ClinicDB, CVC-ColonDB, ETIS, and CVC-T, comparing with the SOTA. Among them, CVC-T is the testing data for EndoScene.}
\end{table}
\subsection{Metrics}
\hspace*{1.5cm}Mean Dice $=\frac{2*tp}{2*tp+fp+fn}$
\hspace*{1.2cm}mIoU $=\frac{tp}{tp+fp+fn}$
\vspace{4mm}\newline
\hspace*{2.2cm}Recall $=\frac{tp}{tp+fn}$
\hspace*{1.5cm}Precision $=\frac{tp}{tp+fp}$
\vspace{4mm}\newline
\hspace*{2.8cm}F2 $=\frac{5p*r}{4p+r}$
\hspace*{2.3cm}Acc. $=\frac{tp+tn}{tp+tn+fp+fn}$
\vspace{4mm}
\hspace*{0.5cm}We mainly use the metrics from Kvasir's official website as the basis for comparison, namely mean Dice and mean IoU, but we also report the other metrics mentioned in the two articles so that we can show our advantages more clearly.
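Written out in code, the metrics above are as follows (tp, fp, fn, tn are pixel-level counts; the function names are ours):

```python
def dice(tp, fp, fn):
    """Mean Dice: 2*tp / (2*tp + fp + fn)."""
    return 2 * tp / (2 * tp + fp + fn)

def iou(tp, fp, fn):
    """Intersection over union: tp / (tp + fp + fn)."""
    return tp / (tp + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f2(p, r):
    """F2 score: an F-beta score with beta=2, weighting recall higher."""
    return 5 * p * r / (4 * p + r)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)
```

Note that Dice and IoU are monotonically related (Dice = 2*IoU / (1 + IoU)), so a model that ranks best on one also ranks best on the other.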
\subsection{Training and Inference Environment Setting}
\vspace{4mm}
\hspace*{0.5cm}To show our advantage in speed, we compare with other well-known models. The platform we use for evaluation is listed below.
\vspace{2mm}{\footnotesize{\textbf{Intel i9-9900k CPU, GeForce RTX 2080 Ti, Pytorch: 1.6 and CUDA: 10.2}}}
\section{Conclusion}
\hspace*{0.5cm}HarDNet-MSEG achieves the SOTA on all five challenging datasets. It is the only network that achieves over 0.90 mean Dice on Kvasir-SEG (0.912 in the comparison with \cite{pranet} and 0.904 in the comparison with \cite{jha2020real}). It is 1.3 times faster than PraNet and more than 2 times faster than the other models. We achieve this with a simple encoder-decoder architecture, without any of the attention modules used in \cite{pranet} and \cite{fang2020abc}. See Figure 6 for some inference results on Kvasir-SEG; our model produces better boundaries and more accurate predictions.
\vspace{4mm}
Again, this shows that HarDNet\cite{chao2019hardnet} is an efficient backbone not only for classification and detection, but also for medical image segmentation. We hope this study can help push the frontier of medical imaging and contribute to the application of CNNs in this field.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.25]{infer.png}
\caption{Inference results of Kvasir-SEG.}
\label{fig:kva}
\end{figure}
\section*{Acknowledgements}
\hspace*{0.5cm}This research is supported in part by a grant from the
Ministry of Science and Technology (MOST) of Taiwan. We thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources; without them this research would not have been possible. We would also like to thank Mr. Ping Chao for many fruitful discussions.
\input{ref.bbl}
\bibliographystyle{plain}
Q: Error Processing Cube TFS I am getting the following error when processing the cube in team foundation server.
TF221122: An error occurred running job Incremental Analysis Database Sync for team project collection or Team Foundation server TEAM FOUNDATION.
I have recently moved and restored the team foundation server.
My problem is that there is no server called 'TEAM FOUNDATION'. I think I have used TFSConfig to configure something incorrectly.
Here is the xml output for the GetProcessStatus.
I have cut some out for brevity. I am pretty convinced that the problem is that Instance Name='TEAM FOUNDATION' is incorrect. Should it be a machine name?
<WarehouseProcessingStatus xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Controller/03">
<RequestTimeUtc>2012-03-04T07:07:31.4633418Z</RequestTimeUtc>
<WarehouseProcessingOnlineStatus>Stopped</WarehouseProcessingOnlineStatus>
<AnalysisProcessingOnlineStatus>Stopped</AnalysisProcessingOnlineStatus>
<JobProcessingStatus>Idle</JobProcessingStatus>
<JobsRunning>0</JobsRunning>
<JobsQueued>0</JobsQueued>
<Instance Name="TEAM FOUNDATION" JobProcessingStatus="Idle" JobsRunning="0" JobsQueued="0">
<Jobs>
<Job Name="Common Structures Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:53:16.94Z" ExecutionStartTimeUtc="2012-03-04T06:53:17.433Z" EndTimeUtc="2012-03-04T06:53:17.45Z" Result="Blocked">
<ResultMessage>[Common Structures Warehouse Sync]: ---> TF221107: Reporting for Team Foundation Server cannot execute job Common Structures Warehouse Sync for team project collection TEAM FOUNDATION because the warehouse is offline. Use the Team Foundation Administration Console to start reporting.</ResultMessage>
</LastRun>
</Job>
<Job Name="Full Analysis Database Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:03:49.833Z" ExecutionStartTimeUtc="2012-03-04T06:03:52.7Z" EndTimeUtc="2012-03-04T06:04:06.917Z" Result="Failed">
<ResultMessage>[Full Analysis Database Sync]: ---> AnalysisDatabaseProcessingType=Full, needCubeSchemaUpdate=True. ---> Microsoft.TeamFoundation.Server.WarehouseException: TF221122:
An error occurred running job Full Analysis Database Sync for team project collection or Team Foundation server TEAM FOUNDATION. ---> Microsoft.TeamFoundation.Server.WarehouseException: Failed to Process Analysis Database 'Tfs_Analysis'. ---> Microsoft.TeamFoundation.Server.WarehouseException: Internal error: The operation terminated unsuccessfully.
Server: The operation has been cancelled.
OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Tfs_AnalysisDataSource', Name of 'Tfs_AnalysisDataSource'.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Test Plan', Name of 'Test Plan' was being processed.
Errors in the OLAP storage engine: An error occurred while the 'Test Plan ID' attribute of the 'Test Plan' dimension from the 'Tfs_Analysis' database was being processed.
</Job>
</Jobs>
</Instance>
<Collections>
<Collection Name="Upgrade Projects" JobProcessingStatus="Idle" JobsRunning="0" JobsQueued="0">
<Jobs>
<Job Name="Build Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:35:09.837Z" ExecutionStartTimeUtc="2012-03-04T06:35:10.157Z" EndTimeUtc="2012-03-04T06:35:10.317Z" Result="Succeeded" />
</Job>
<Job Name="Common Structures Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:35:09.823Z" ExecutionStartTimeUtc="2012-03-04T06:35:10.147Z" EndTimeUtc="2012-03-04T06:35:10.643Z" Result="Succeeded" />
</Job>
<Job Name="Test Management Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:47:02.343Z" ExecutionStartTimeUtc="2012-03-04T06:47:03.313Z" EndTimeUtc="2012-03-04T06:47:03.41Z" Result="Succeeded" />
</Job>
<Job Name="Version Control Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:35:09.837Z" ExecutionStartTimeUtc="2012-03-04T06:35:10.163Z" EndTimeUtc="2012-03-04T06:35:10.297Z" Result="Succeeded" />
</Job>
<Job Name="Work Item Tracking Warehouse Sync" JobProcessingStatus="Idle">
<LastRun QueueTimeUtc="2012-03-04T06:35:09.837Z" ExecutionStartTimeUtc="2012-03-04T06:35:10.15Z" EndTimeUtc="2012-03-04T06:35:10.573Z" Result="Succeeded" />
</Job>
</Jobs>
</Collection>
</Collections>
</WarehouseProcessingStatus>
A: I finally figured this out.
You can manually process the cube by logging into Analysis Services, right-clicking on TFS_Analysis and selecting 'Process'.
It turned out that the user specified in the Tfs_AnalysisDataSource could not be correctly impersonated. I don't know why; it was a valid user with admin privileges on the machine.
I changed the Analysis Services service to use a domain account, and changed the data source to use the service account.
This finally got the cubes processing. The Instance Name='...' that I thought was the problem above was a red herring.
Q: How to create a custom tag in app.config Afternoon,
After searching around and finding very little on this, I thought I will post the question I want answered.
As the question states I want to have a custom tag in a app.config file.
<appSettings>
<add key="exam" value="pp"/>
<add key="exam" value="ss" />
</appSettings>
<!-- This is the custom tag I want to have -->
<WebPoints>
<host name="Main" value="URL" port="80" sslPort="633"></host>
</WebPoints>
Of course, when I run the code it complains that the config hasn't been initialised correctly.
I gather that what I want is a lot more work than it should be, but I'd like to know how much work there is.
Thanks
A: To create a custom tag in app.config (or web.config) you can use a custom configuration section.
C# code
using System;
using System.Collections;
using System.Text;
using System.Configuration;
using System.Xml;
namespace Samples.AspNet
{
public class PageAppearanceSection : ConfigurationSection
{
// Create a "remoteOnly" attribute.
[ConfigurationProperty("remoteOnly", DefaultValue = "false", IsRequired = false)]
public Boolean RemoteOnly
{
get
{
return (Boolean)this["remoteOnly"];
}
set
{
this["remoteOnly"] = value;
}
}
// Create a "font" element.
[ConfigurationProperty("font")]
public FontElement Font
{
get
{
return (FontElement)this["font"];
}
set
{
this["font"] = value;
}
}
// Create a "color element."
[ConfigurationProperty("color")]
public ColorElement Color
{
get
{
return (ColorElement)this["color"];
}
set
{
this["color"] = value;
}
}
}
// Define the "font" element
// with "name" and "size" attributes.
public class FontElement : ConfigurationElement
{
[ConfigurationProperty("name", DefaultValue="Arial", IsRequired = true)]
[StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\", MinLength = 1, MaxLength = 60)]
public String Name
{
get
{
return (String)this["name"];
}
set
{
this["name"] = value;
}
}
[ConfigurationProperty("size", DefaultValue = "12", IsRequired = false)]
[IntegerValidator(ExcludeRange = false, MaxValue = 24, MinValue = 6)]
public int Size
{
get
{
return (int)this["size"];
}
set
{
this["size"] = value;
}
}
}
// Define the "color" element
// with "background" and "foreground" attributes.
public class ColorElement : ConfigurationElement
{
[ConfigurationProperty("background", DefaultValue = "FFFFFF", IsRequired = true)]
[StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
public String Background
{
get
{
return (String)this["background"];
}
set
{
this["background"] = value;
}
}
[ConfigurationProperty("foreground", DefaultValue = "000000", IsRequired = true)]
[StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
public String Foreground
{
get
{
return (String)this["foreground"];
}
set
{
this["foreground"] = value;
}
}
}
}
web.config
<configuration>
<!-- Configuration section-handler declaration area. -->
<configSections>
<sectionGroup name="pageAppearanceGroup">
<section
name="pageAppearance"
type="Samples.AspNet.PageAppearanceSection"
allowLocation="true"
allowDefinition="Everywhere"
/>
</sectionGroup>
<!-- Other <section> and <sectionGroup> elements. -->
</configSections>
<!-- Configuration section settings area. -->
</configuration>
Dense foam provides sound reduction under engineered and laminate flooring; closed-cell foam for moisture protection; suitable for radiant-heat floors; mold and mildew resistant; anti-crush technology for performance and long life; 2mm-thick foam cushions flooring; includes adhesive strip and 3" overlap film for linking multiple rolls. Coverage: 100 square feet. STC: 66, IIC: 69.
Super Felt is a 4mm-thick insulating underlayment made to reduce noise and help cushion flooring; by absorbing noise instead of deflecting it, Super Felt provides a deep, rich sound. This felt underlayment is made from recycled fibers compressed using a high-heat manufacturing process; the film overlay protects engineered and laminate flooring from any moisture from the subfloor, and includes an adhesive strip and 2-1/2" overlap for linking multiple rolls. Keeps floors warm in winter and cool in summer; for use over concrete and wood subfloors. Coverage: 100 square feet per roll. STC: 66, IIC: 67; Rating: 21, R-Value: 58.
This high pressure laminate, plank design flooring has aluminum oxide surface with hand scraped finish, HDF core construction, squared edges and ends. INSTALLATION: Floating or Easy Lock. Can be used in commercial or residential areas. AC Rating: AC 4; Warranty: 25 years residential; Plank size: 6-1/2"W x 48"L x 1/2"T. Each box covers 17.36 sq. ft. 8 planks per box.
DCP-LETT-821

# To Emma Darwin [3–4 February 1845]1

[Down]

Monday night

My dear Wife

Now for my day's annals— In the morning I was baddish, & did hardly any work & was as much overcome by my children, as ever Bishop Coplestone2 was with Duck.3 But the children have been very good all day, & I have grown a good deal better this afternoon, & had a good romp with Baby—4 I see, however, very little of the Blesseds— The day was so thick & wet a fog, that none of them went out, though a thaw & not very cold; I had a long pace in the Kitchen Garden: Lewis5 came up to mend the pipe & paper the W.C. in which apartment there was a considerable crowd for about an hour, when Mr Lewis & his son William, Willy Annie, Baby & Bessy6 were there. Baby insisted on going in, I daresay, greatly to the disturbance of Bessy's delecacy— Lewis from first dinner to second dinner was a first-rate dispensary, as they never left him— They, also, dined in the Kitchen, and I believe have had a particularly pleasant day.—

I was playing with Baby in the window of the drawing-room this morning, & she was blowing a feeble fly (fry) & blew it on its back, when it kicked so hard, that to my great amusement Baby grew red in the face, looked frightened & pushed away from the window.— The children are growing so quite out of all rule in the drawing-room, jumping on everything & butting like young bulls at every chair & sofa, that I am going to have the dining-room fire lighted tomorrow & keep them out of the drawing-room. I declare a months such wear, wd spoil every thing in the whole drawing-room.—

I read Whately's Shakspeare7 & very ingenious & interesting it is—and what do you think Mitford's Greece8 has made me begin, the Iliad by Cowper,9 which we were talking of; & have read 3 books with much more pleasure, than I anticipated.— I have given up acids & gone to puddings again.—

Tuesday morning— I am impatient for your letter this morning to hear how you got on.— I asked Willy how Baby has slept & he answered "she did not cry not one mouthful". My stomach is baddish again this morning & I almost doubt, whether I will go to London, tomorrow; if I do you won't hear. Poor Annie has had a baddish knock by Willie's ball in her eye.—it is swelled a bit, but not otherwise bad.

C. D.

Your cap cannot ⟨be⟩ found anywhere: Jane says you took one. 9/10 of the snow is gone & the children are going out. Very many thanks for your letter10

## Footnotes

1. Date based on nn. 7 and 8, below, and on Henrietta Litchfield's statement, before her transcription of parts of this letter, that Emma went to Maer in February 1845 (Emma Darwin 2: 92). Emma's diary records that she was away between 31 January and 11 February.
2. Edward Copleston.
3. Henrietta Litchfield notes, 'This must be some family joke. Bishop Copleston had been a friend of Sir James Mackintosh.' (Emma Darwin 2: 93).
4. Henrietta Emma Darwin, born 25 September 1843.
5. John Lewis was a carpenter in Down village (Post Office directory of the six home counties 1845).
6. Elizabeth Harding, nursery maid at Down House (see Emma Darwin 2: 80–1).
7. T. Whately 1785. The London Library borrowing list records that CD borrowed Thomas Whately's book on 30 January and returned it on 27 March 1845 (London Library Archives).
8. Mitford 1784–1818. Volumes two and three of William Mitford's History of Greece were borrowed from the London Library on 9 January and returned on 27 March 1845 (London Library Archives).
9. Cowper 1791.
10. The final paragraph was written in pencil.

## Summary

News of the children and books he is reading.

## Letter details

Letter no.: DCP-LETT-821
From: Darwin, C. R.
To: Darwin, Emma
Sent from: Down
Physical description: 4pp
Sustainability / Responsible business practices
We are committed to responsible business practices
Rigorous ethical standards, supply chain integrity and partnerships are fundamental for us.
We are committed to global sustainability initiatives
We commit to the UN Sustainable Development Goals through our Sanoma Sustainability Strategy. The Sanoma Code of Conduct encompasses the Ten Principles of the UN Global Compact on human rights, labour, environment and anti-corruption. Sanoma is a signatory of the UN Global Compact. Sanoma reinforces its climate action by committing to the Science Based Targets initiative.
Sanoma Code of Conduct outlines our business practices
In its business, Sanoma follows laws and regulations applicable in its operating countries, ethical guidelines set by its Code of Conduct as well as the Group's internal policies and standards. The Sanoma Code of Conduct outlines the shared ethical standards for our employees and our business partners. The Code acts as an umbrella for all policies and standards within Sanoma. The Sanoma Code of Conduct encompasses the Ten Principles of the UN Global Compact on human rights, labour, environment and anti-corruption.
Sanoma Code of Conduct
Sanoma Supplier Code of Conduct
Our policies guide our daily work
We are committed to complying with the legislation of each country in which we operate, and require the same from our partners. In addition to legislation, we follow sustainability principles that are embedded into our policies. Our people train their ethical behaviour on a continuous basis through our compulsory Code of Conduct e-learning and other sustainability related policy e-learnings to ensure compliance with our policies.
Our policies and training
We have zero tolerance for misconduct
The Sanoma-WhistleB reporting hotline enables Sanoma group employees, customers and business partners to report suspicions of misconduct confidentially and anonymously. Violations of the Code of Conduct, or any related policy or law, are encouraged to be reported through Sanoma's externally hosted, independent whistle-blowing hotline. Through this early warning system we foster high business ethics, maintain customer and public trust, and reduce risks for misconduct.
Report concern using the Sanoma-WhistleB
Our sustainability targets
We maintain rigorous ethical standards and responsible business practices
Sanoma develops sustainability in compliance with the legislation applicable to business activities in the learning and media industry. The company's internal control, risk management and governance support the management of sustainability. In addition to legislation, we follow sustainability principles that are embedded into our policies and development is guided by Sanoma group policies, guidelines and commitments.
We constantly develop responsibility in our supply chain
To provide products and services to our customers, we cooperate with a vast number of suppliers and vendors every day. Suppliers include transportation and distribution services, raw materials and supplies, royalties, printing and paper. We also use vendors for providing consultants.
We expect ethical and responsible conduct from our suppliers. Ensuring a sustainable supply chain begins from selecting suppliers. Our Know Your Counterparty (KYC) process identifies the risks of doing business with third parties by looking at their ownership, activities and role.
Sanoma Supplier Code of Conduct (the Supplier Code) sets out the ethical standards and responsible business principles our suppliers are required to comply with and expected to also apply to their employees, affiliates and sub-contractors. All new suppliers go through Sanoma's source-to-contract solution, which incorporates the Supplier Code as a mandatory step for successful selection.
The Sanoma-WhistleB reporting hotline enables Sanoma Group employees, customers and business partners to report suspicions of supply chain misconduct confidentially and anonymously.
Our good financial performance and position support sustainable development
During the past five years, Sanoma has focused its business on its strongholds and divested businesses in which it did not have a leading position or a sustainable competitive advantage. Successful strategy execution has also strengthened the Group's financial position, performance and ability to distribute positive economic impact. Our good financial performance and position support sustainable development and the economic added value we create in society.
Annually, we also support carefully selected partners with donations to strengthen our positive impact on society and local communities. Sanoma's Annual General Meeting decides the amount of our non-profit donations and authorises the Board of Directors to decide on the contributions. Consistent with our ethical standards, we are transparent about our contributions and do not make donations to political movements or representatives nor to purposes that are unethical or illegal. We comply with applicable laws and regulations in making donations and ensure that there is no misuse or corrupt purposes.
Supporting Sustainable Development Goals
Our sustainability actions for responsible business practices support these UN SDGs.
8 Decent work and economic growth
17 Partnerships for the goals
What are UN SDGs?
We support sustainable economic growth through our role as employer and tax-payer throughout Europe. Our employees range from diverse backgrounds and we offer young employees opportunities to develop.
We engage with our stakeholders and suppliers to meet our sustainability goals. We also annually support carefully selected partners with donations to strengthen our positive impact on society and local communities.
Read more about the United Nation's Sustainable Development Goals (SDGs) on Commitments.
Our business is built on long-term consumer trust and fair business practices. Our actions are guided by the United Nation's Global Compact's ten principles.
Q: iframe src with anchor doesn't work
I have the following embedded iframe
<iframe width="100%" height="400" src="reallylongpage.html" />
reallylongpage.html has 2 anchors #top and #bottom
I want my iframe to load reallylongpage.html at the bottom of the page so I did
<iframe width="100%" height="400" src="reallylongpage.html#bottom" />
But this has the undesirable effect of scrolling both the iframe AND the parent page. The parent page shouldn't scroll at all. This happens in Chrome and Firefox.
here is an example with full code
parent.html
<html>
<body>
<div style="padding:100 200;">
<iframe width="100%" height="400" src="child.html#bottom" ></iframe>
</div>
<div>1<br>2<br>3<br>4<br>5<br>6<br>7<br>8<br>9<br>10<br>11<br>12<br>13<br>14<br>15<br>16<br>17<br>18<br>19<br>20<br>21<br>22<br>23<br>24<br>25<br>26<br>27<br>28<br>29<br>30<br></div>
</body>
</html>
child.html
<html>
<body>
<a name="top" href="#bottom">go to bottom</a><br>
1<br>2<br>3<br>4<br>5<br>6<br>7<br>8<br>9<br>10<br>11<br>12<br>13<br>14<br>15<br>16<br>17<br>18<br>19<br>20<br>21<br>22<br>23<br>24<br>25<br>26<br>27<br>28<br>29<br>30<br>
<a name="bottom" href="#top">go to top</a>
</body>
</html>
this is what I want it to look like
this is what I get instead
A: This appears to be the de facto behavior in browsers (At least I couldn't find any written standard about anchors and scrolling).
The browser tries its best to scroll all windows until the desired fragment is visible. (You'll notice this even when you click on the "got to top" link and also if you add "padding-bottom: 3000px;" to the div in your example.)
Consider using jQuery's scrollTo plugin which actually manipulates scroll position of the appropriate container for you.
To demonstrate with your own example:
Hosted Demos:
With jQuery scrollTo
Without jQuery scrollTo
Full Source:
parent.html
<!doctype html>
<html>
<head>
<title>parent</title>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="jquery.scrollTo-min.js"></script>
<script type="text/javascript">
$(function(){
var iframe = $('iframe');
iframe.load(function(){
iframe.scrollTo('a[name=bottom]');
});
});
</script>
</head>
<body>
<div style="padding:100px 200px 3000px;">
<iframe width="100%" height="400" src="child.html"></iframe>
</div>
<div>1<br>2<br>3<br>4<br>5<br>6<br>7<br>8<br>9<br>10<br>11<br>12<br>13<br>14<br>15<br>16<br>17<br>18<br>19<br>20<br>21<br>22<br>23<br>24<br>25<br>26<br>27<br>28<br>29<br>30<br></div>
</body>
</html>
child.html
<!doctype html>
<html>
<head>
<title>child</title>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="jquery.scrollTo-min.js"></script>
<script type="text/javascript">
$(function(){
$('a').click(function(){
$(window).scrollTo('a[name=' + this.hash.substring(1) + ']');
return false;
});
});
</script>
</head>
<body>
<a name="top" href="#bottom">go to bottom</a><br>
1<br>2<br>3<br>4<br>5<br>6<br>7<br>8<br>9<br>10<br>11<br>12<br>13<br>14<br>15<br>16<br>17<br>18<br>19<br>20<br>21<br>22<br>23<br>24<br>25<br>26<br>27<br>28<br>29<br>30<br>
<a name="bottom" href="#top">go to top</a>
</body>
</html>
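For completeness, here is a plugin-free sketch of the same idea in plain JavaScript (my own variant, not from the original answer): it scrolls only the iframe's own window once it loads, so the parent document never moves. It assumes the frames are same-origin (cross-origin access to `contentWindow.document` throws) and that the anchor is still `name="bottom"`.

```javascript
// Pure helper: strip the leading "#" from a location hash ("#bottom" -> "bottom").
function anchorName(hash) {
  return hash.charAt(0) === '#' ? hash.slice(1) : hash;
}

// DOM wiring -- only runs in a browser (skipped when there is no document,
// e.g. under Node).
if (typeof document !== 'undefined') {
  var frame = document.querySelector('iframe');
  frame.addEventListener('load', function () {
    var doc = frame.contentWindow.document;
    var target = doc.querySelector('a[name="bottom"]');
    if (target) {
      // Scrolling the frame's own window moves only that viewport,
      // never the parent page.
      frame.contentWindow.scrollTo(0, target.offsetTop);
    }
  });
}
```

The same `anchorName` helper could also replace `this.hash.substring(1)` in the child page's click handler.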
A: Well, it's just a confused browser. I took a quick look and found that your name="bottom" anchor has nothing beyond it. The browser needs to focus on that element and place it at the top of its window, but there was nothing inside the iframe that would let it scroll #bottom to the top, so it scrolled the parent instead, just enough to bring #bottom into view.
I have set up a page with your code at http://jsfiddle.net/VGN2k/ to illustrate what I am saying. Check it out.
What I have done is add a bunch of <br/> tags after "#bottom" in order to create space; now the browser has room inside the iframe to bring #bottom to the top.
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{0ex \@plus .0ex}%
{\normalfont\normalsize\bfseries}}
\setcounter{section}{0}
\@addtoreset{equation}{section}
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem*{lemma*}{Lemma}
\theoremstyle{remark}
\newtheorem{remark}{Remark}
\newtheorem{example}{Example}
\newtheorem{examples}[theorem]{Examples}
\makeatletter
\def\@maketitle{\newpage
\null
\vskip 2em
\begin{center}
\vskip 3em
{\Large\bf \@title \par}
\vskip 1.5em
{\normalsize
\lineskip .5em
\begin{tabular}[t]{c}\@author
\end{tabular}\par}
\vskip 2em
\end{center}
\par
\vskip 2.5em}
\makeatother
\newcommand{\mc}[1]{{\mathcal #1}}
\newcommand{\mf}[1]{{\mathfrak #1}}
\newcommand{\mb}[1]{{\mathbb #1}}
\newcommand{\bb}[1]{{\mathbb #1}}
\newcommand{\bs}[1]{{\boldsymbol #1}}
\newcommand{\ol}[1]{\,\overline {\!#1\!}\,}
\newcommand{\ul}[1]{\,\underline {\!#1\!}\,}
\renewcommand{\epsilon}{\varepsilon}
\newcommand\tint{{\textstyle\int}}
\definecolor{light}{gray}{.9}
\newcommand{\pecetta}[1]{
$\phantom .$
\bigskip
\par\noindent
\colorbox{light}{\begin{minipage}{14cm}#1\end{minipage}}
\bigskip
\par\noindent
}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\thalf}{\tfrac{1}{2}}
\newcommand{\zp}{\ZZ_{>0}}
\newcommand{\spr}{s^{\prime}}
\newcommand{\la}{\lambda}
\newcommand{\La}{\Lambda}
\newcommand{\al}{\alpha}
\newcommand{\osp}{\mathrm{osp} \,}
\newcommand{\wg}{\widehat{\fg}}
\newcommand{\wh}{\widehat{\fh}}
\newcommand{\wrh}{\widehat{\rho}}
\newcommand{\thl}{\Theta_{\lambda}}
\newcommand{\tz}{(\tau, z)}
\newcommand{\tzt}{(\tau, z, t)}
\newcommand{\tzzt}{(\tau, z_1, z_2, t)}
\newcommand{\tzzzt}{(\tau, z_1, z_2, z_3, t)}
\newcommand{\tuvt}{(\tau, u, v, t)}
\newcommand{\LLa}{L(\Lambda)}
\newcommand{\tot}{\frac{\tau}{2}}
\newcommand{\tof}{\frac{\tau}{4}}
\newcommand{\tch}{\tilde{\ch}}
\newcommand{\tph}{\tilde{\Phi}}
\newcommand{\tps}{\tilde{\Psi}}
\newcommand{\tef}{\tilde{F}}
\newcommand{\tbe}{\tilde{B}}
\newcommand{\HR}{\mathrm{HR}}
\setcounter{section}{+1}
\begin{document}
\title{A remark on boundary level admissible representations}
\author{Victor G.~Kac\thanks{Department of Mathematics, M.I.T,
Cambridge, MA 02139, USA. Email: kac@math.mit.edu ~~~} \
and Minoru Wakimoto\thanks{Email: ~~wakimoto@r6.dion.ne.jp~~~}}
\maketitle
Recently a remarkable map between 4-dimensional superconformal field theories and vertex algebras has been constructed \cite{BLLPRV15}. This has led to new insights in the theory of characters of vertex algebras. In particular it was observed that in some cases these characters decompose in nice products
\cite{XYY16}, \cite{Y16}.
The purpose of this note is to explain the latter phenomena. Namely, we point out that it is immediate by our character formula \cite{KW88}, \cite{KW89} that in the case of a \textit{boundary level} the characters of admissible representations of affine Kac-Moody algebras and the corresponding $W$-algebras
decompose in products in terms of the Jacobi form $ \vartheta_{11} \tz. $
We would like to thank Wenbin Yan for drawing our attention to this question.
Let $ \fg $ be a simple finite-dimensional Lie algebra over $ \CC, $ let $ \fh $ be a Cartan subalgebra of $ \fg, $ and let $ \Delta \subset \fh^* $ be the set of roots. Let $ Q = \ZZ \Delta $ be the root lattice and let $ Q^* = \{ h \in \fh \ | \ \al (h) \in \ZZ \mbox{ for all } \al \in \Delta \} $ be the dual lattice.
Let $ \Delta_+\subset \Delta $ be a subset of positive roots, let $ \{ \al_1, \ldots, \al_\ell \} $ be the set of simple roots and let $ \rho $ be half of the sum of positive roots. Let $ W $ be the Weyl group. Let $ \bl $ be the invariant symmetric bilinear form on $ \fg, $ normalized by the condition $ (\al|\al) = 2 $ for a long root $ \al, $ and let $ h^\vee $ be the dual Coxeter number ($ = \half $ eigenvalue of the Casimir operator on $ \fg $). We shall identify $ \fh $ with $ \fh^* $ using the form $ \bl. $
Let $ \wg = \fg [t, t^{-1}] + \CC K + \CC d $ be the associated to $ \fg $ affine Kac-Moody algebra (see \cite{K90} for details), let $ \wh = \fh + \CC K + \CC d $ be its Cartan subalgebra. We extend the symmetric bilinear form $ \bl $ from $ \fh $ to $ \wh $ by letting $ (\fh | \CC K + \CC d) =0, (K|K) = 0, (d|d) = 0, (d|K)= 1, $ and we identify $ \wh^* $ with $ \wh $ using this form. Then $ d $ is identified with the $ 0^{th} $ fundamental weight $ \La_0 \in \wh^*, $ such that $ \La_0 |_{\fg [t, t^{-1}]+ \CC d} =0, \La_0 (K) = 1, $ and $ K $ is identified with the imaginary root $ \delta \in \wh^*. $
Then the set of real roots of $ \wg $ is $ \hat{\Delta}^{\re} = \{ \al + n \delta |\, \al \in \Delta, n \in \ZZ \}$ and the subset of positive real roots is $ \hat{\Delta}^{\re}_+ = \Delta_+ \cup \{\al + n \delta |\, \al \in \Delta, n \in \ZZ_{\geq 1} \} $. Let $\hat{\rho}=h^\vee \Lambda_0 +\rho$. Let
\[ \hat{\Pi}_u = \{ u \delta - \theta, \al_1, \ldots, \al_\ell \}, \]
where $ \theta \in \Delta_+ $ is the highest root, so that $ \hat{\Pi}_1 $ is the set of simple roots of $ \wg. $ For $ \al \in \hat{\Delta}^{\re} $ one lets $ \al^\vee = 2 \al / (\al|\al). $ Finally, for $ \beta \in Q^* $ define the translation $ t_\beta \in \End \wh^* $ by
\[ t_\beta (\la) = \la + \la (K) \beta -
((\la |\beta) + \half \la (K) |\beta|^2) \delta .
\]
Given $ \La \in \wh^* $ let $ \hat{\Delta}^\La = \{ \al \in \hat{\Delta}^{\re}
|\, (\La|\al^\vee) \in \ZZ \} $. Then $ \La $ is called an \textit{admissible} weight if the following two properties hold
\begin{enumerate}
\item[(i)] $ (\La + \wrh | \al^\vee) \notin \ZZ_{\leq 0}$ for all $ \al \in \hat{\Delta}_+, $
\item[(ii)] $ \QQ \hat{\Delta}^\La = \QQ \hat{\Delta}. $
\end{enumerate}
If instead of (ii) a stronger condition holds:
\begin{enumerate}
\item[(ii)$ ' $] $ \varphi (\hat{\Delta}^\La) = \hat{\Delta} $ for a linear
isomorphism $ \varphi : \wh^* \rightarrow \wh^*, $
\end{enumerate}
then $ \La $ is called a \textit{principal} admissible weight. In \cite{KW89} the classification and character formulas for admissible weights are reduced to those for principal admissible weights. The latter are described by the following proposition.
\begin{proposition}
\label{prop1}
\cite{KW89} Let $ \La $ be a principal admissible weight and let $ k = \La (K) $ be its level. Then
\begin{enumerate}
\item[(a)] $ k$ is a rational number with denominator $ u \in \ZZ_{\geq 1}, $ such that
\begin{equation}
\label{1}
k + h^\vee \geq \frac{h^\vee}{u} \mbox{ and } \gcd (u, h^\vee) = \gcd (u, r^\vee) = 1,
\end{equation}
where $ r^\vee = 1 $ for $ \fg $ of type A-D-E, = 2 for $ \fg $ of type B, C, F, and = 3 for $ \fg = G_2. $
\item[(b)] All principal admissible weights are of the form
\begin{equation}
\label{2}
\La = (t_\beta y). (\La^0 - (u-1) (k+h^\vee)\La_0),
\end{equation}
where $ \beta \in Q^*, y \in W $ are such that $ (t_\beta y) \hat{\Pi}_u \subset \hat{\Delta}_+, \La^0$ is an integrable weight of level $ u(k+h^\vee)-h^\vee, $ and dot denotes the shifted action: $ w.\La = w(\La + \wrh) - \wrh. $
\item[(c)] For $ \fg = s\ell_N $ all admissible weights are principal admissible.
\end{enumerate}
\end{proposition}
Recall that the normalized character of an irreducible highest weight $ \wg $-module $ \LLa $ of level $ k \neq -h^\vee $ is defined by
\[ \ch_\La (\tau, z, t) = q^{m_\La} \tr_{\LLa} e^{2 \pi i h} \]
where
\begin{equation}
\label{3}
h = -\tau d + z + tK, \ z \in \fh, \ \tau, t \in \CC, \ \Im \tau > 0, \ q = e^{2 \pi i \tau},
\end{equation}
and $ m_\La = \frac{|\La + \wrh |^2}{2 (k+h^\vee)} -\frac{\dim \fg}{24} $ (the normalization factor $ q^{m_\La} $ ``improves'' the modular invariance of the character).
In \cite{KW89} the characters of the $ \wg $-modules $ \LLa $ for arbitrary admissible $ \La $ were computed, see Theorem 3.1, or formula (3.3) there for another version in case of a principal admissible $ \La. $
In order to write down the latter formula, recall the normalized affine denominator for $ \wg: $
\[ \hat{R} (h) = q^{\frac{\dim \fg}{24}} e^{\wrh(h)} \prod_{n=1}^{\infty} (1-q^n)^\ell \prod_{\al \in \Delta_+} (1-e^{\al(z)}q^n) (1-e^{-\al(z)} q^{n-1}). \]
In coordinates \eqref{3} this becomes:
\begin{equation}
\label{4}
\hat{R} \tzt = (-i)^{|\Delta_+|} e^{2 \pi i h^\vee t} \eta (\tau)
^{\half(3\ell - \dim \fg)}
\prod_{\al \in \Delta_+} \vartheta_{11} (\tau, \al(z)),
\end{equation}
where
\[ \vartheta_{11} \tz =- i q^{\frac{1}{12}} e^{-\pi i z} \eta (\tau) \prod_{n=1}^{\infty} (1-e^{-2 \pi i z} q^n)(1-e^{2 \pi i z} q^{n-1}) \]
is one of the standard Jacobi forms $\vartheta_{ab}$, $a,b=0$ or 1
(see e.g., Appendix to \cite{KW14}), and $\eta(\tau)$ is the Dedekind eta function.
For a principal admissible $ \La, $ given by \eqref{2}, formula (3.3) from \cite{KW89} becomes in coordinates \eqref{3}:
\begin{equation}
\label{5}
(\hat{R} \ch_\La) \tzt = (\hat{R} \ch_{\La^0}) \left( u \tau, y^{-1} (z + \tau \beta), \frac{1}{u} (t + (z|\beta) + \frac{\tau |\beta|^2}{2}) \right).
\end{equation}
It follows from \eqref{5} that if $ \La^0 = 0 $ in \eqref{2} (so that $ \ch_{\La^0} =1 $), which is equivalent to
\begin{equation}
\label{6}
k + h^\vee = \frac{h^\vee}{u} \mbox{ and } \gcd (u, h^\vee) = \gcd (u, r^\vee) = 1,
\end{equation}
the (normalized) character $ \ch_\La $ turns into a product. The level $ k, $ defined by \eqref{6}, is naturally called the \textit{boundary principal admissible} level in \cite{KRW03}, see formula (3.5) there. We obtain from Proposition \ref{prop1}, \eqref{4} and \eqref{5}
\begin{proposition}
\label{prop2}
\begin{enumerate}
\item[(a)] All boundary principal admissible weights are of level $ k, $ given by \eqref{6}, and are of the form
\begin{equation}
\label{7}
\La = (t_\beta y). (k \La_0),
\end{equation}
where $ \beta \in Q^*, y \in W $ are such that $ (t_\beta y) \hat{\Pi}_u \subset \hat{\Delta}_+. $ In particular, $ k \La_0 $ is a principal admissible weight of level \eqref{6}.
\item[(b)] If $ \La $ is of the form \eqref{7}, then
\[ \ch_\La \tzt = e^{2 \pi i (kt + \frac{h^\vee}{u}(z | \beta))} q^{\frac{h^\vee}{2u}|\beta|^2} \left( \frac{\eta (u \tau)}{\eta (\tau)}\right)
^{\half(3\ell - \dim \fg)} \prod_{\al \in \Delta_+} \frac{\vartheta_{11} (u \tau, y(\al) (z + \tau \beta))}{\vartheta_{11} (\tau, \al (z))}. \]
\end{enumerate}
\end{proposition}
\begin{remark}
\label{rem1}
For the vacuum module $ L(k\La_0) $ of the boundary principal admissible
level $k$ the character formula from Proposition \ref{prop2}(b) becomes
\[ \ch_{k\La_0} \tzt = e^{2 \pi i kt} \left( \frac{\eta(u \tau)}{\eta (\tau)}\right)^{\half(3\ell - \dim \fg)} \prod_{\al \in \Delta_+} \frac{\vartheta_{11} (u \tau, \al(z))}{\vartheta_{11} (\tau, \al(z))}. \]
\end{remark}
\begin{example}
\label{ex1}
Let $ \fg = s \ell_2, $ so that $ h^\vee = 2. $ Then the boundary levels are $ k = \frac{2}{u}-2, $ where $ u $ is a positive odd integer, and all admissible weights are
\[ \La_{k,j} : = t_{-\frac{j}{2} \al_1} . (k \La_0) = (k + \frac{2j}{u}) \La_0 - \frac{2j}{u} \La_1, \ j = 0,1, \ldots, u-1, \]
and the character formula from Proposition \ref{prop2}(b) becomes:
\begin{equation}
\label{8}
\ch_{\La_{k,j}} = e^{2 \pi i (kt - \frac{j}{u}z)} q^{\frac{j^2}{2u}} \frac{\vartheta_{11} (u \tau, z-j\tau)}{\vartheta_{11} \tz}.
\end{equation}
For $ u = 3 $ and 5 some of these formulas were conjectured in \cite{Y16}.
\end{example}
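As an elementary sanity check of formula \eqref{8} (ours, not part of the original argument), take $ u = 1, $ so that $ k = 0 $ and $ j = 0. $ Then \eqref{8} reduces to
\[ \ch_{\La_{0,0}} = \frac{\vartheta_{11} \tz}{\vartheta_{11} \tz} = 1, \]
in agreement with the fact that $ L(0) $ is the trivial module, whose normalized character equals $ q^{m_0} $ with $ m_0 = \frac{|\wrh|^2}{2h^\vee} - \frac{\dim \fg}{24} = \frac{1}{8} - \frac{1}{8} = 0 $ for $ \fg = s\ell_2. $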
\begin{example}
\label{ex2}
Let $ \fg = s\ell_N, $ so that $ h^\vee = N, $ let $ N>1 $ be odd, and let $ u =2 $. Then the boundary admissible level is $ k = -\frac{N}{2}, $ and the boundary admissible weights of the form $ t_\beta . (k\La_0) $ are:
\[ \La_{N,p} = -\frac{N}{2} \La_p, \ p = 0,1, \ldots, N-1, \]
where $ \La_p $ are the fundamental weights of $ \wg. $
Letting $ z = \sum_{i =1}^{N-1} z_i \bar{\La}_i, $ where $ \bar{\La}_i $ are the fundamental weights of $ \fg, $ the character formula from Proposition \ref{prop2} (b) becomes:
\[
\begin{aligned}
&\ch_{\La_{N,p}} \tzt = i^{p(N-p)} e^{-\pi i Nt} \left( \frac{\eta (2 \tau)}{\eta (\tau)}\right)^{- \frac{(N-1)(N-2)}{2}} \\
&\times \frac{ \displaystyle \prod_{\substack{ 1 \leq i \leq j < p \\ \mbox{ or } p < i \leq j < N}} \vartheta_{11} (2 \tau, z_i + \ldots + z_j) \prod_{1 \leq i \leq p \leq j < N} \vartheta_{01} (2 \tau, z_i + \ldots + z_j)}{\displaystyle \prod_{1 \leq i \leq j < N} \vartheta_{11} ( \tau, z_i + \ldots + z_j)},
\end{aligned} \]
where
\[ \vartheta_{01} \tz = \prod_{n =1 }^{\infty} (1-q^n) (1-e^{2 \pi i z} q^{n-\half}) (1-e^{-2 \pi i z} q^{n-\half}). \]
This follows from Proposition \ref{prop2}(b) by applying to $ \vartheta_{11} $ an elliptic transformation (see e.g. \cite{KW14}, Appendix).
In particular
\[ \ch_{-\frac{N}{2} \La_0} = e^{- \pi i Nt} \left(\frac{\eta (2 \tau)}{\eta (\tau)}\right)^{-\frac{(N-1)(N-2)}{2}} \prod_{1 \leq i \leq j < N} \frac{\vartheta_{11} (2 \tau, z_i + \ldots + z_j)}{\vartheta_{11} (\tau, z_i+ \ldots + z_j)}. \]
The latter formula was conjectured in \cite{XYY16}.
\end{example}
\begin{remark}
\label{rem2}
For principal admissible weights $ \La = (t_\beta y). (k\La_0) $ and $ (t_{\beta'} y'). (k\La_0) $ of boundary level $ k = \frac{h^\vee}{u}-h^\vee $ the $ S $-transformation matrix $ (a(\La, \La')), $ given by \cite{KW89}, Theorem 3.6, simplifies to
\[ a(\La, \La') = | Q / uh^\vee Q^* |^{-\half} \epsilon(yy') \prod_{\al \in \Delta_+} 2 \sin \frac{\pi iu (\rho | \al)}{h^\vee} e^{-2\pi i \left((\rho|\beta + \beta') + \frac{h^\vee (\beta| \beta')}{u}\right)} . \]
\end{remark}
\begin{remark}
\label{rem3}
If $ \fg = s \ell_2 $ and $ k $ is as in Example \ref{ex1}, then
\[ a (\La_{k,j}, \La_{k,j'}) = (-1)^{j + j'} e^{- \frac{2 \pi i jj'}{u}} \frac{1}{\sqrt{u}} \sin \frac{u \pi }{2}. \]
One can compute fusion coefficients by Verlinde's formula:
\[ N_{\La_{k, j_1}, \La_{k, j_2}, \La_{k, j_3}} = (-1)^{j_1+j_2+j_3} \mbox{ if } j_1+j_2+j_3 \in u \ZZ, \mbox{ and } = 0 \mbox{ otherwise}. \]
\end{remark}
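For instance (a direct application of the above formula, ours rather than the original text's), for $ u = 3 $ one finds
\[ N_{\La_{k,1}, \La_{k,1}, \La_{k,1}} = (-1)^{1+1+1} = -1, \ \mbox{ since } \ 1+1+1 \in 3\ZZ, \]
while $ N_{\La_{k,0}, \La_{k,1}, \La_{k,1}} = 0 $ because $ 0+1+1 \notin 3\ZZ; $ in particular, negative integers may occur among these fusion coefficients.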
\begin{example}
\label{ex3}
Let $\fg=sl_3$, so that $h^\vee=3$, and let $u$ be a positive integer, coprime to 3. Then all (principal) admissible weights have level $k=\frac{3}{u}-3$
and are of the form \eqref{7}, where
\[\beta=-(-1)^p
(k_1\bar{\Lambda}_1+k_2\bar{\Lambda}_2),\, y=r_{\theta}^p,\, p=0\,
\mbox{or}\,1, \,k_i\in\ZZ, k_i\geq \delta_{p,1},\, k_1+k_2\leq u-\delta_{p,0}.\]
Denote this weight by $\Lambda^{(p)}_{u;k_1,k_2}=(t_\beta y).(k\Lambda_0)$. Using Remark \ref{rem2}, one computes the fusion coefficients by Verlinde's formula:
\[N_{\Lambda^{(p)}_{u;k_1,k_2}\Lambda^{(p')}_{u;k'_1,k'_2}\Lambda^{(p'')}_
{u;k''_1,k''_2}}=(-1)^{p+p'+p''} \,\mbox{if}\,\,
(-1)^{p}k_i+(-1)^{p'}k'_i+(-1)^{p''}k''_i\in u\ZZ \,\,\mbox{for}\,i=1,2,\]
and $=0$ otherwise.
\end{example}
\begin{remark}
\label{rem 4}
If $ \La $ is an arbitrary admissible weight, then $ \hat{\Delta}^\La $ decomposes in a disjoint union of several affine root systems. Then $ \La $ has \textit{boundary level} if its restriction to each of them has boundary level, and formula (3.4) from \cite{KW89} shows that $ \ch_\La $ decomposes in a product of the corresponding boundary level characters. Note that all the above holds also for twisted affine Kac-Moody algebras \cite{KW89}.
\end{remark}
\begin{remark}
\label{rem 5}
The product character formula for boundary level affine Kac-Moody superalgebras holds as well, see \cite{GK15}, formula (2).
\end{remark}
Recall that to any $ s\ell_2$-triple $ \{ f, x, e \} $ in $ \fg, $ where $ [x,f] = -f, \ [x,e] = e, $ one associates a $ W $-algebra $ W^k (g,f) $, obtained from the vacuum $ \wg $-module of level $ k $ by quantum Hamiltonian reduction, so that any $ \wg $-module $ \LLa $ of level $ k $ produces either an irreducible $ W^k (g,f) $-module $ H(\La) $ or zero. The characters of $ \LLa $ and $ H(\La) $ are related by the following simple formula (\cite{KRW03} or \cite{KW14}):
\begin{equation}
\label{9}
\left( \overset{W}{R} \ch_{H(\La)} \right) \tz = \left(\hat{R} \ch_\La \right) (\tau, -\tau x + z, \tot (x|x)).
\end{equation}
Here $ z \in \fh^f $, the centralizer of $f$ in $\fh$, and
\begin{equation}
\label{10}
\overset{W}{R} \tz = \eta (\tau)^{\frac{3}{2}l - \half \dim (\fg_0 + \fg_{1/2})} \prod_{\al \in \Delta^0_+} \vartheta_{11} (\tau, \al (z)) \left(\prod_{\al \in \Delta_{1/2}} \vartheta_{01} (\tau, \al(z)) \right)^{1/2},
\end{equation}
where $ \fg = \oplus_j \fg_j $ is the eigenspace decomposition for $ \ad x, \ \Delta_j \subset \Delta $ is the set of roots of root spaces in $ \fg_j $ and $ \Delta^0_+ = \Delta_+ \cap \Delta_0 $ (we assume that $ \Delta_j \subset \Delta_+ $ for $ j > 0 $). If $ k $ is a boundary level \eqref{6}, we obtain from Proposition \ref{prop2}(b) and formulas \eqref{9}, \eqref{10} the following character formula for $ H(\La) $ if $ \La $ is a principal admissible weight \eqref{7} ($z\in \fh^f$):
\begin{equation}
\label{11}
\begin{aligned}
\ch_{H(\La)} \tz & = (-i)^{|\Delta_+|}q^{\frac{h^\vee}{2u} |\beta - x|^2} e^{\frac{2 \pi i h^\vee}{u} (\beta | z)} \\
& \times \frac{\eta (u \tau)^{\frac{3}{2}\ell - \half \dim \fg}}{\eta (\tau)^{\frac{3}{2}\ell - \half \dim (\fg_0 + \fg_{1/2})}} \ \frac{\displaystyle \prod_{\al \in \Delta_+} \vartheta_{11} (u \tau, y (\al) (z + \tau \beta -\tau x))}{ \displaystyle \prod_{\al \in \Delta^0_+} \vartheta_{11} (\tau, \al(z)) \left( \displaystyle \prod_{\al \in \Delta_{1/2}} \vartheta_{01} (\tau, \al(z))\right)^{1/2}}.
\end{aligned}
\end{equation}
\begin{remark}
\label{rem 6}
A formula, similar to Proposition \ref{prop2}(b) and to formula \eqref{11}, holds if $ \fg $ is a basic Lie superalgebra; one has to replace the character by the supercharacter, $ \dim $ by $ \sdim, $ and the factor $ \vartheta_{ab}, $ corresponding to a root $ \al, $ by its inverse if this root is odd.
Also, the character is obtained from the supercharacter by replacing
$\vartheta_{ab}$ by $\vartheta_{a,b+1\!\! \mod 2}$ if the root $\alpha$ is odd.
\end{remark}
\begin{remark}
\label{rem 7}
An example of \eqref{11} is the minimal series representations of the Virasoro algebra with
central charge $c=1-\frac{3(u-2)^2}{u}$, obtained by the quantum Hamiltonian reduction from the boundary admissible $\hat{sl}_2$-modules from Example \ref{ex1}.
For $j=u-1$ one gets 0, for $u=3$ and $j=0,1$ one gets the trivial representation, but for all other $j$ and $u\geq 5$ the characters are the product sides of the Gordon generalizations of the Rogers-Ramanujan identities
(the latter correspond to $u=5$). Another example is
the minimal series representations of the $N=2$ superconformal algebras,
see \cite{KRW03}, Section 7.
\end{remark}
After graduating from Loyola University Chicago in 2004, Dr. Lawlor completed a residency in Anatomic Pathology and a fellowship in Neuropathology at Massachusetts General Hospital/Harvard Medical School, and has since become board-certified in both of these fields. Dr. Lawlor pursued postdoctoral research training in the laboratory of Dr. Alan Beggs at Children's Hospital Boston, where he focused on the pathological analysis of animal models of muscle disease and the development of treatments for X-linked myotubular myopathy.
Since moving to the Medical College of Wisconsin in September of 2011, Dr. Lawlor has continued to work closely with the Beggs laboratory while establishing clinical and research neuromuscular pathology laboratories. His research laboratory at MCW has performed the pathological analyses for a number of preclinical trial studies in animal models of X-linked myotubular myopathy that are currently being performed worldwide, including anti-myostatin therapy, gene therapy, and protein replacement therapy. He has also recently begun evaluating myostatin inhibition in murine models of nemaline myopathy.
In the spring of 2013, Dr. Lawlor's laboratory became the site of the Congenital Muscle Disease Tissue Repository, which is meant to provide a central place for the donation and distribution of patient tissues, thanks to a generous donation by a collection of non-profit organizations. It is our hope that such a central resource for tissue storage and distribution will improve the pace of research in our field.
\section{Introduction}
The spin dynamics of charge carriers in semiconductor structures has been attracting a great deal of attention~\cite{Dyakonov08:SPS}. Much effort is focused on experimental and theoretical studies of electron spin dephasing in quantum wells (QWs) and on achieving controllable spin lifetimes (for a recent review see Refs.~\onlinecite{Wu10,Korn10,Glazov10,Muller10}). It is established that, in a wide range of temperature, carrier density and mobility, the spin lifetime of a two-dimensional electron gas is limited by the D'yakonov-Perel' (DP) spin dephasing mechanism~\cite{Dyakonov71,Dyakonov86}. The mechanism is based on precession of individual electron spins in the Rashba and/or Dresselhaus effective magnetic field
and is highly sensitive to the QW crystallographic orientation~\cite{Dyakonov86,Averkiev99,Cartoixa05,Tarasenko09} as well as
details of electron scattering by structural defects and phonons. Depending on the ratio between the period of spin precession in the effective field and the momentum relaxation time of carriers, the spin polarization monotonically decays or exhibits damping oscillations~\cite{Gridnev01,Brand02,Leyland07,Griesbeck09}.
So far, the DP mechanism has been theoretically analyzed for central electron scattering, neglecting possible anisotropy of scattering potential. However, such a model does not always describe electron scattering in QWs adequately. Transport measurements reveal that electron mobility and scattering rate can be anisotropic in the QW plane even for (001)-grown structures~\cite{Papadakis02,Ercolani08,Akabori10}.
Strong in-plane anisotropy of electric properties has been also demonstrated recently for QW structures with embedded semidisk-shaped or elongated dots~\cite{Sassine08,Li11}.
In the present paper, we study the electron spin dephasing in QW structures with anisotropic scattering potential and derive equations for the spin-relaxation-rate tensor.
We show that anisotropic scattering qualitatively modifies the spin dephasing both in the collision-dominated and oscillatory regimes. The paper is organized as follows. In Sec.~\ref{Sec_gen}, we present a general formalism for describing the electron spin dynamics in quantum wells in the presence of anisotropic elastic scattering. Collision-dominated regime of the DP spin dephasing is considered in Sec.~\ref{Sec_collisions}. We show that
the spin-relaxation-rate tensor can be expressed in terms of the constants of spin-orbit splitting and the electric conductivity tensor. In (001)-grown QWs with anisotropic scatterers,
the longest lifetime of electron spin along the growth direction is achieved in QWs with structure inversion asymmetry where the Rashba effective field is nonzero.
The oscillatory regime of spin dephasing, which is realized in high-mobility QWs, is considered in Sec.~\ref{Sec_oscillations}. It is shown that anisotropic scattering leads to a coupling between the in-plane and out-of-plane spin components even in the case of isotropic Rashba or Dresselhaus spin-orbit splitting. In particular, the spin dephasing of carriers initially polarized along the QW normal leads to the emergence of a net in-plane spin component which then also vanishes. We also analyze the effect of an external magnetic field on spin dephasing both in the collision-dominated and oscillatory regimes and show that the field dependence of electron spin can be very intricate. The main results of the paper are summarized in Sec.~\ref{Summary}.
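For orientation, we recall the standard order-of-magnitude estimates behind these two regimes (textbook results of the DP theory, not specific to anisotropic scattering). In the collision-dominated regime $ \Omega \tau \ll 1 $, where $ \tau $ is the momentum relaxation time, motional narrowing yields the spin relaxation rate
\[ \tau_s^{-1} \sim \langle \Omega_{\bm{k}}^2 \rangle \, \tau \:, \]
so that more frequent scattering slows down the spin dephasing, whereas for $ \Omega \tau \gtrsim 1 $ the spins precess through a sizable angle between collisions and the polarization exhibits damping oscillations at the frequency $ \sim \Omega $ with a decay time of order $ \tau $.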
\section{General equations}\label{Sec_gen}
The time evolution of the spin distribution function $\bm{s_k}$ in the wave vector $\bm{k}$ space is described by the kinetic equation~\cite{Dyakonov71,Meier84:OO,Gridnev01}
\begin{equation}\label{eq:kinetic}
\frac{\partial \bm{s_k}}{\partial t} + \bm{s_k} \times \bm{\Omega_k} = \bm{g} + {\rm St} \, \bm{s_k} \:,
\end{equation}
where $\bm{\Omega_k}$ is the Larmor frequency corresponding to the effective magnetic field, $\bm{g}$ is the spin generation rate,
e.g., due to optical excitation with circularly polarized light, and ${\rm St} \, \bm{s_k}$ is the collision integral. We consider an $n$-doped QW structure with a degenerate two-dimensional electron gas
and assume that
optical excitation creates spin polarized electrons directly at the Fermi level, i.e., $\bm{g} \propto \delta(\varepsilon_{\bm{k}} - \varepsilon_F)$, where $\varepsilon_{\bm{k}} = \hbar^2 \bm{k}^2/(2m^*)$ is the electron kinetic energy, $m^*$ is the effective mass, and $\varepsilon_F$ is the Fermi energy. Such resonant excitation is commonly used in experiments to minimize electron gas heating~\cite{Brand02,Leyland07,Griesbeck09,Belkov08,Volkl11}. Under these conditions, the spin dephasing is determined by the effective magnetic field and details of electron scattering at the Fermi level, and energy relaxation processes are negligible.
For elastic spin-conserving scattering, the collision integral has the form~\cite{SturmanFridkin92}
\begin{equation}\label{eq:collision_int}
{\rm St} \, \bm{s_k} = \sum_{\bm{k}'} (W_{\bm{k} \bm{k}'} \bm{s}_{\bm{k}'} - W_{\bm{k}' \bm{k}}\bm{s_k}) \:,
\end{equation}
where $W_{\bm{k} \bm{k}'}$ is the rate of electron scattering from the state $\bm{k}'$ into the state $\bm{k}$ and it is assumed that the spin-orbit splitting $\hbar \Omega_{\bm{k}}$ is much smaller than the Fermi energy~\cite{Ivchenko90}. Below, we take the scattering rate in the form $W_{\bm{k} \bm{k}'} = 2\pi \hbar^2 /(m^* L^2) \, w_{\bm{k} \bm{k}'} \, \delta(\varepsilon_{\bm{k}} - \varepsilon_{\bm{k}'})$ with
$L^2$ being the normalization area.
To solve Eq.~(\ref{eq:kinetic}) we decompose the distribution function $\bm{s_k}$, the frequency $\bm{\Omega_k}$, and the scattering rate $w_{\bm{k} \bm{k}'}$ into angular harmonics~\cite{Meier84:OO}
\begin{eqnarray}
\bm{s_k} &=& \sum_n \bm{s}_n {\rm e}^{i n \varphi} \:, \nonumber \\
\bm{\Omega_k}&=& \sum_{n=\pm 1} \bm{\Omega}_{n} {\rm e}^{ i n \varphi} \:, \nonumber\\
w_{\bm{k} \bm{k}'} &=& \sum_{n, m} w_{n, m} \, {\rm e}^{ i n \varphi + i m \varphi'} \:, \label{eq:harmonic_decomp}
\end{eqnarray}
where $\varphi = \arctan (k_y / k_x)$ and $\varphi' = \arctan (k'_y / k'_x)$ are the polar angles of $\bm{k}$ and $\bm{k}'$, respectively. The dominant contribution to the effective magnetic field in quantum wells is linear in the wave vector~\cite{Dyakonov86,Cartoixa06}. Therefore, we assume that the frequency $\bm{\Omega_k}$ contains only terms with $n = \pm1$; the coefficients $\bm{\Omega}_{\pm 1}$ are related by $\bm{\Omega}_{1}=\bm{\Omega}_{-1}^*$.
The coefficients $w_{n, m}$ satisfy the relations $w_{n, m}=w_{-n, -m}^*$, $w_{n, m}=(-1)^{n+m} w_{m, n}$, and $w_{n, 0}=w_{0, n}=0$ ($n\neq 0$) which follow from reality of the scattering rate, time inversion symmetry, and the optical theorem, respectively.
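These symmetry relations can be verified numerically for any model scattering rate. The following Python sketch is an added illustration, not part of the original text: it builds an arbitrary anisotropic rate consisting of a central part and a small noncentrosymmetric admixture constructed to respect time-reversal symmetry, extracts the harmonics $w_{n,m}$ by a two-dimensional FFT, and checks the relations above. The model rate and the coefficients $a$, $b$ are assumptions.

```python
import numpy as np

# Model rate on a uniform angular grid: central part 1 + a*cos(phi - phi')
# plus a noncentrosymmetric admixture b*[cos(2*phi - phi') - cos(2*phi' - phi)],
# which satisfies the time-reversal condition w(phi, phi') = w(phi'+pi, phi+pi).
# The values of a and b are arbitrary illustrative choices.
M = 64
phi = 2 * np.pi * np.arange(M) / M
P, Pp = np.meshgrid(phi, phi, indexing="ij")   # phi (rows), phi' (columns)
a, b = 0.5, 0.2
W = 1 + a * np.cos(P - Pp) + b * (np.cos(2*P - Pp) - np.cos(2*Pp - P))

# harmonics w_{n,m} = (2 pi)^{-2} * integral of w * exp(-i n phi - i m phi')
wnm = np.fft.fft2(W) / M**2

def h(n, m):
    """Harmonic w_{n,m}; negative indices wrap around in the DFT."""
    return wnm[n % M, m % M]

for n in range(-3, 4):
    for m in range(-3, 4):
        assert abs(h(n, m) - np.conj(h(-n, -m))) < 1e-12        # reality
        assert abs(h(n, m) - (-1)**(n + m) * h(m, n)) < 1e-12   # time reversal
for n in range(1, 4):
    assert abs(h(n, 0)) < 1e-12 and abs(h(0, n)) < 1e-12        # optical theorem
```

For this model, $\delta w_{2,-1}=b/2$ is real, which in the notation of Sec.~\ref{Sec_oscillations} corresponds to $\bm{\nu} \parallel x$.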
Substituting the series~(\ref{eq:harmonic_decomp}) for $\bm{s_k}$, $\bm{\Omega_k}$, and $w_{\bm{k} \bm{k}'}$ in Eq.~(\ref{eq:kinetic}) we obtain the system of linear differential equations for the angular harmonics $\bm{s}_n$
\begin{equation}\label{eq:kinetic_decomp}
\frac{d\bm{s}_n}{dt} + \sum_{m=\pm 1} \bm{s}_{n-m} \times \bm\Omega_m =\bm{g} \, \delta_{n,0} - w_{0,0} \bm{s}_n + \sum_m w_{n,-m} \bm{s}_m \:.
\end{equation}
Here, it is assumed that $\bm{g}$ is independent of the direction of $\bm{k}$ and, therefore, contains only the zeroth angular harmonic.
By solving Eqs.~(\ref{eq:kinetic_decomp}) numerically or analytically one can find the time dependence of $\bm{s}_0$ and thereby the evolution of the total spin density
$\bm{S}=(1/L^2) \sum_{\bm k} \bm{s_k} = m^*/(2\pi\hbar^2) \int_{0}^{\infty} \bm{s}_0 d\varepsilon$.
\section{Collision-dominated regime}\label{Sec_collisions}
In this section, we consider the case of frequent electron collisions, when the spin rotation angle between scattering events is small. In this regime, the anisotropic part of the spin distribution function $\bm{s_k}$ is much smaller than $\bm{s}_0$ and Eqs.~(\ref{eq:kinetic_decomp}) can be solved iteratively~\cite{Dyakonov71,Dyakonov86}.
Such a procedure gives the following equation for the spin density
\begin{equation}\label{eq:zero_s}
\frac{d\bm{S}}{dt} = \bm{G} - \bm{\Gamma} \bm{S} \:,
\end{equation}
where $\bm{G}=(1/L^2) \sum_{\bm{k}} \bm{g}$ is the total spin generation rate per unit area and $\bm{\Gamma}$ is the spin-relaxation-rate tensor. The latter is defined by
\begin{equation}\label{eq:gamma_sk}
\bm{\Gamma} \bm{S} = (1/L^2) \sum_{\bm{k}} (\bm{s}_{-1} \times \bm{\Omega}_{1} + \bm{s}_{1} \times \bm{\Omega}_{-1} ) \:,
\end{equation}
where, to first order in the effective magnetic field, the harmonics $\bm{s}_{\pm 1}$ are to be found from the equation
\begin{equation}\label{eq:delta_s}
\bm{s}_0 \times \bm{\Omega_k} = {\rm St\,} \bm{s_k} \:.
\end{equation}
The calculation of Eqs.~(\ref{eq:gamma_sk}) and~(\ref{eq:delta_s}) is similar to the calculation of the electric current density $\bm{j}$ induced by a static electric field $\bm{E}$. Indeed, the current density is expressed via the electron distribution function $f_{\bm{k}}$ by $\bm{j}=(e / L^2) \sum_{\bm{k}} (\bm{v}_1 f_{-1} + \bm{v}_{-1} f_{1})$, where $e$ is the electron charge, and $\bm{v}_{\pm 1}$ and $f_{\pm 1}$ are the angular harmonics of the electron velocity $\bm{v_k}= \hbar \bm{k} / m^*$ and the distribution function, respectively. To linear order in $\bm{E}$, the harmonics $f_{\pm 1}$ are found from the equation $e (df_0 / d\varepsilon_{\bm{k}}) \, \bm{v_k} \cdot \bm{E} = {\rm St} f_{\bm{k}}$, which is similar to Eq.~(\ref{eq:delta_s}). Such an analogy allows us to express the spin-relaxation-rate tensor in terms of the $2 \times 2$ tensor of in-plane electric conductivity $\bm{\sigma}$ as
\begin{equation}\label{eq:gamma}
\tG = \frac{\pi m^*}{e^2} \left[\bm{I}_3 \Tr(\tL \ts\trans{\tL}) - \tL\ts\trans{\tL}\right] \:,
\end{equation}
where $\bm{I}_3$ is the $3 \times 3$ unit matrix, $\tL$ is the $3\times 2$ matrix relating the components of the frequency $\bm{\Omega_k}$ and the wave vector $\bm{k}$,
$\bm{\Omega_k}=\tL \bm{k}$, and we have used that $\ts = \ts^T$. Equations~(\ref{eq:zero_s}) and~(\ref{eq:gamma}) describe the spin dynamics of a degenerate two-dimensional electron gas for arbitrary elastic scattering
and generalize previous results. If the scattering potential is central, then the conductivity tensor is diagonal and can be expressed via the momentum relaxation time $\tau_1$ at the Fermi energy as $\ts = \sigma \bm{I}_2$, where $\sigma = \tau_1 e^2 k_F^2 /(2 \pi m^*)$, $k_F$ is the Fermi wave vector, and $\bm{I}_2$ is the $2 \times 2$ unit matrix. In this particular case, Eq.~(\ref{eq:gamma}) takes the form $\tG = (\tau_1 k_F^2 /2) \left[\bm{I}_3 \Tr(\tL \trans{\tL}) - \tL\trans{\tL}\right]$, in agreement with the result of D'yakonov and Kachorovskii~\cite{Dyakonov86}.
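As a minimal numerical cross-check (an added illustration, not part of the original derivation), the Python sketch below evaluates Eq.~(\ref{eq:gamma}) for a central scattering potential and verifies that it reduces to the D'yakonov--Kachorovskii form quoted above. The parameter values and the units $\hbar = m^* = e = 1$ are arbitrary assumptions.

```python
import numpy as np

def gamma_tensor(L, sigma, m_star=1.0, e=1.0):
    """Spin-relaxation-rate tensor Gamma = (pi m*/e^2) [I3 Tr(M) - M],
    M = L sigma L^T, where L is the 3x2 matrix in Omega_k = L k and
    sigma is the 2x2 in-plane conductivity tensor."""
    M = L @ sigma @ L.T
    return (np.pi * m_star / e**2) * (np.eye(3) * np.trace(M) - M)

# (001)-grown QW: Lambda_xy = alpha + beta, Lambda_yx = beta - alpha
alpha, beta, tau1, kF = 0.3, 0.7, 2.0, 1.5   # illustrative values, hbar = m* = e = 1
L = np.array([[0.0, alpha + beta],
              [beta - alpha, 0.0],
              [0.0, 0.0]])

# central scattering: sigma = sigma0 * I2, sigma0 = tau1 e^2 kF^2 / (2 pi m*)
sigma0 = tau1 * kF**2 / (2 * np.pi)
G = gamma_tensor(L, sigma0 * np.eye(2))

# D'yakonov-Kachorovskii limit: Gamma = (tau1 kF^2 / 2) [I3 Tr(L L^T) - L L^T]
G_dk = 0.5 * tau1 * kF**2 * (np.eye(3) * np.trace(L @ L.T) - L @ L.T)
```

The same function also reproduces the component expressions of the next paragraph, e.g. $\Gamma_{xx} \propto (\alpha-\beta)^2\sigma_{xx}$ and $\Gamma_{zz}=\Gamma_{xx}+\Gamma_{yy}$.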
To analyze Eq.~(\ref{eq:gamma}) in more detail, we consider a QW grown along the $z \parallel [001]$ crystallographic direction. In such structures, the matrix $\tL$ has the nonzero components
\begin{equation}\label{eq:lambda}
\Lambda_{xy}=\alpha+\beta \:, \;\; \Lambda_{yx}=\beta-\alpha \:,
\end{equation}
where $\alpha$ and $\beta$ are the constants of the Rashba and Dresselhaus spin-orbit splitting, respectively, $x\parallel [1\bar{1} 0]$ and $y\parallel [110]$ are the in-plane axes~\cite{Averkiev99,Averkiev08}. Then, components of the tensor $\tG$ take the form
\begin{eqnarray}\label{eq:gamma_components}
\Gamma_{xx}= \frac{\pi m^*}{e^2} (\alpha - \beta)^2 \sigma_{xx} \:, \;\; \Gamma_{yy} = \frac{\pi m^*}{e^2} (\alpha + \beta)^2 \sigma_{yy} \:, \nonumber \\
\Gamma_{xy}=\Gamma_{yx}=\frac{\pi m^*}{e^2} (\alpha^2 - \beta^2) \sigma_{xy} \:, \;\; \Gamma_{zz}=\Gamma_{xx}+\Gamma_{yy} \:. \;\;\;
\end{eqnarray}
Dependences of the tensor components $\Gamma_{xx}$, $\Gamma_{yy}$, and $\Gamma_{xy}$ on the ratio $\alpha / \beta$ for QWs with strong scattering anisotropy are plotted in Fig.~\ref{figure1} by dashed curves.
The in-plane eigenvalues $\gamma_1$ and $\gamma_2$ of the tensor $\tG$ and the out-of-plane value $\gamma_z$, which coincides with $\Gamma_{zz}$, are found from the equation ${\rm det} (\gamma \bm{I}_3 - \tG)=0$ and presented in Fig.~\ref{figure1} by solid curves. One can see that the minimum of $\gamma_z$, which corresponds to the longest lifetime of the spin component $S_z$, is achieved in asymmetric QWs where the Rashba constant $\alpha \neq 0$. This is in contrast to QW structures with central electron scattering, where the Rashba splitting is known to decrease the spin lifetime. The analysis of Eqs.~(\ref{eq:gamma_components}) shows that, at fixed $\beta$, the rate $\gamma_z$ reaches its minimum at $\alpha / \beta = (\sigma_{xx}-\sigma_{yy})/ \Tr \ts$. We also note that $\gamma_1 \neq \gamma_2$ no matter how small the ratio $\alpha/ \beta$ is if the principal axes of the conductivity tensor do not coincide with $x$ and $y$, i.e., $\sigma_{xy} \neq 0$.
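The position of the $\gamma_z$ minimum can be checked numerically. The following Python sketch (an added illustration with an arbitrarily chosen anisotropic conductivity tensor) scans $\gamma_z=\Gamma_{zz}$ over $\alpha/\beta$ using Eqs.~(\ref{eq:gamma_components}) and confirms the minimum at $\alpha/\beta = (\sigma_{xx}-\sigma_{yy})/\Tr\ts$.

```python
import numpy as np

def gamma(alpha, beta, sigma):
    """Gamma tensor of the main text in units where pi*m*/e^2 = pi (m* = e = 1)."""
    L = np.array([[0.0, alpha + beta], [beta - alpha, 0.0], [0.0, 0.0]])
    M = np.pi * (L @ sigma @ L.T)
    return np.eye(3) * np.trace(M) - M

sigma = np.array([[1.0, 0.5],     # strongly anisotropic conductivity with
                  [0.5, 3.0]])    # sigma_xy != 0 (illustrative values)
beta = 1.0
ratios = np.linspace(-2.0, 2.0, 4001)                       # alpha/beta scan
gz = np.array([gamma(r * beta, beta, sigma)[2, 2] for r in ratios])
r_min = ratios[np.argmin(gz)]     # location of the gamma_z minimum
```

For these values, $(\sigma_{xx}-\sigma_{yy})/\Tr\ts = -0.5$, i.e. the longest $S_z$ lifetime indeed requires a finite Rashba constant.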
\begin{figure}[b]
\includegraphics[width=0.9\linewidth]{figure1.eps}
\caption{(Color online) Dependences of the spin-relaxation-rate tensor components $\Gamma_{xx}$, $\Gamma_{yy}$, and $\Gamma_{xy}$ (dashed curves) and the tensor eigenvalues $\gamma_1$, $\gamma_2$, and $\gamma_z=\Gamma_{zz}$ (solid curves) on the ratio $\alpha / \beta$ calculated for $\sigma_{xx} / \Tr \ts = \sigma_{xy} / \Tr \ts =1/4$. The curves are normalized by $\Gamma_0 = (\pi m^* /e^2) \, \beta^2 \Tr \ts$.}
\label{figure1}
\end{figure}
Now we consider the effect of an external magnetic field $\bm{B}$ on spin dephasing. The magnetic field causes the Larmor precession of electron spins and cyclotron motion of electrons in the QW plane with the frequencies $\bm{\Omega}_L = g \mu_B \bm{B} / \hbar$ and $\omega_c = e B_z / (m^* c)$, respectively~\cite{Ivchenko73,Wilamowski04}. Here, $g$ is the effective electron $g$-factor, $\mu_B$ is the Bohr magneton, $e$ is the electron charge, and $c$ is the speed of light. Both effects are described theoretically in the framework of the kinetic approach, with the kinetic equation having the form~\cite{Ivchenko73,Glazov04,Tarasenko09}
\begin{equation}\label{kinetic_field}
\frac{\partial \bm{s_k}}{\partial t} + \bm{s_k} \times (\bm{\Omega_k} + \bm{\Omega}_L) - \omega_c \left[ \bm{k} \times \frac{\partial}{\partial\bm{k}} \right]_z \bm{s_k} = \bm{g} + {\rm St} \, \bm{s_k} \:.
\end{equation}
Solution of Eq.~(\ref{kinetic_field}) shows that, in the collision-dominated regime, the time evolution of the spin density $\bm{S}$ is described by Eqs.~(\ref{eq:zero_s}) and~(\ref{eq:gamma}) where (i) the additional term $\bm{S} \times \bm{\Omega}_L$ is added to the left-hand side of Eq.~(\ref{eq:zero_s}) and (ii) $\ts$ in Eq.~(\ref{eq:gamma}) is replaced by the transposed tensor of electric conductivity in the magnetic field $\ts^T(\omega_c)$.
We note that in zero magnetic field the conductivity tensor is symmetric, i.e., $\ts^T(0)=\ts(0)$. In the presence of a magnetic field, the tensor $\ts(\omega_c)$ contains both symmetric $\smt{\ts}(\omega_c)=[{\ts}(\omega_c) + \trans{\ts}(\omega_c)] / 2$ and antisymmetric $\asmt{\ts}=[{\ts}(\omega_c) - \trans{\ts}(\omega_c)] / 2$ parts. Accordingly, the right-hand side of Eq.~(\ref{eq:gamma}) can also be reduced to the sum of a symmetric tensor $\tG^{(s)}(\omega_c)$ and an antisymmetric tensor $\tG^{(a)}(\omega_c)$. The symmetric tensor $\tG^{(s)}(\omega_c)$ describes spin relaxation. The antisymmetric tensor $\tG^{(a)}(\omega_c)$ is equivalent
to a pseudovector $\delta\bm{\Omega}_L$ and represents, in fact, a correction to the Larmor frequency~\cite{Ivchenko73,Edelstein06,Tarasenko09}. Therefore, the equation describing the time evolution of the spin density in the magnetic field has the final form
\begin{equation}\label{eq:spineq_b}
\frac{d\bm{S}}{dt} + \bm{S} \times (\bm{\Omega}_L + \delta \bm{\Omega}_L ) = \bm{G} - \tG(\omega_c) \bm{S} \:,
\end{equation}
where
\begin{equation}\label{eq:gamma_field}
\tG(\omega_c) = \frac{\pi m^*}{e^2} \left\{ \bm{I}_3 \Tr\left[\tL\smt{\ts}(\omega_c)\trans{\tL}\right] - \tL\smt{\ts}(\omega_c)\trans{\tL}\right\}
\end{equation}
is the spin-relaxation-rate tensor,
\begin{equation}\label{eq:omega_b}
{\left( \delta\bm{\Omega}_L \right)}_\alpha = \frac{\pi m^*}{2 e^2} \sum_{\beta\gamma} {\epsilon}_{\alpha \beta \gamma} \left[\tL\asmt{\ts}(\omega_c)\trans{\tL}\right]_{\beta \gamma}
\end{equation}
is the Larmor frequency correction caused by cyclotron motion, and ${\epsilon}_{\alpha \beta \gamma}$ is the antisymmetric third-rank tensor (Levi-Civita symbol).
As follows from Eq.~(\ref{eq:spineq_b}), the precession of the total electron spin is determined by the frequency $\bm{\Omega}_L + \delta \bm{\Omega}_L$. The frequency correction $\delta \bm{\Omega}_L$ depends on the magnetic field nonmonotonically: It is proportional to the magnetic field at small fields, reaches a maximum at $\omega_c \tau_1 \sim 1$, and decreases as the field increases further. It may happen that $\bm{\Omega}_L$ and $\delta \bm{\Omega}_L$ have opposite signs and compensate each other at a certain magnetic field. Such a compensation results in a peculiarity in the magnetic field dependence of the electron spin. As an example, we consider the simple case of continuous spin generation, central electron scattering, and the magnetic field $\bm{B}$ pointed along the QW normal $[001]$. Then, the conductivity-tensor components have the form $\sigma_{xx}(\omega_c) = \sigma_{yy}(\omega_c) = \sigma/[1+(\omega_c\tau_1)^2]$, $\sigma_{xy}(\omega_c) = - \sigma_{yx}(\omega_c)= \sigma \omega_c\tau_1/[1+(\omega_c\tau_1)^2]$, and the spin-relaxation-rate tensor (\ref{eq:gamma_field}) is diagonal in the chosen coordinate frame $(x,y,z)$. A straightforward calculation shows that the components of the steady-state spin density $\bm{S}$ have the form
\begin{eqnarray}\label{eq:static}
S_x &=& \frac{\Gamma_{yy}(\omega_c) \, G_x - (\Omega_L + \delta \Omega_L) G_y}{\Gamma_{xx}(\omega_c)\, \Gamma_{yy}(\omega_c) + (\Omega_L + \delta \Omega_L)^2} \:, \nonumber \\
S_y &=& \frac{\Gamma_{xx}(\omega_c) \, G_y + (\Omega_L + \delta \Omega_L) G_x}{\Gamma_{xx}(\omega_c)\, \Gamma_{yy}(\omega_c) + (\Omega_L + \delta \Omega_L)^2} \:, \nonumber \\
S_z &=& \frac{G_z}{\Gamma_{zz}(\omega_c)} \:,
\end{eqnarray}
where $\tG(\omega_c)= \tG(0) / [1+(\omega_c \tau_1)^2]$, $\Gamma_{xx} = \tau_1 k_F^2 (\alpha-\beta)^2/2$, $\Gamma_{yy} = \tau_1 k_F^2 (\alpha+\beta)^2/2$, $\Gamma_{zz} = \Gamma_{xx} + \Gamma_{yy}$~\cite{Averkiev99}, and
\begin{equation}\label{eq:delta_simple}
\delta \Omega_L = \frac{ \tau_1 k_F^2(\alpha^2 - \beta^2)}{2} \frac{\omega_c \tau_1}{1+(\omega_c \tau_1)^2} \:.
\end{equation}
Shown in Fig.~\ref{figure2} are the magnetic field dependences of the in-plane components $S_x$ and $S_y$ calculated for the spin generation $\bm{G} \parallel x$, the Rashba spin-orbit splitting, and $\Omega_L / \omega_c = \pm 0.01$. Such ratios of the Larmor to cyclotron frequency can be realized, e.g., in GaAs/AlGaAs QW structures~\cite{Kiselev98}. The dependences plotted for $\Omega_L / \omega_c = 0.01$ (dashed curves) are rather simple:
$S_x$ is maximal at $B=0$ and decays monotonically as the field increases; $S_y \propto B$ at small fields, reaches a maximum, and then decays. In contrast, the magnetic field dependences of $S_x$ and $S_y$ calculated for $\Omega_L / \omega_c = - 0.01$ (solid curves) are completely different. The component $S_x$ reaches a maximum at a finite magnetic field corresponding to $\omega_c\tau_1 \approx 2$. $S_y$ has two extrema for a fixed direction of $\bm{B}$ and changes sign approximately at the magnetic field where $S_x$ is maximal. Such a behavior is caused by the interference of $\Omega_L$ and $\delta\Omega_L$, which compensate each other at $\omega_c \tau_1 \approx 2$ for the chosen parameters. The vanishing of the total Larmor frequency leads to Hanle-like curves in the vicinity of this magnetic field, see Fig.~\ref{figure2}. The fact that $S_x$ at the compensation point is much larger than $S_x(0)$ is caused by a slowdown of the DP spin dephasing by cyclotron motion~\cite{Ivchenko73}, see Eq.~(\ref{eq:static}).
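The compensation field can be reproduced directly from Eq.~(\ref{eq:delta_simple}). The Python sketch below is an added numerical illustration with the Rashba-only parameters of Fig.~\ref{figure2}; it checks that $\delta\Omega_L$ peaks at $\omega_c\tau_1 = 1$ and that $\Omega_L + \delta\Omega_L$ vanishes at $\omega_c\tau_1 = \sqrt{(\alpha k_F \tau_1)^2/(2|\Omega_L/\omega_c|) - 1} \approx 1.87$.

```python
import numpy as np

# Rashba-only case (beta = 0): delta_Omega_L * tau1 = (alpha kF tau1)^2/2 * x/(1+x^2)
# with x = omega_c * tau1; parameter values follow Fig. 2 of the text.
a = 0.3                              # alpha * kF * tau1
r = -0.01                            # Omega_L / omega_c
x = np.linspace(1e-4, 10.0, 10**6)   # omega_c * tau1 grid
dOL = 0.5 * a**2 * x / (1 + x**2)    # delta_Omega_L * tau1, nonmonotonic in x

# total precession frequency (Omega_L + delta_Omega_L) * tau1
total = r * x + dOL
x0 = x[np.argmin(np.abs(total))]     # compensation point
```

The nonmonotonicity of $\delta\Omega_L$ (maximum at $\omega_c\tau_1=1$) is what makes the compensation possible only for opposite signs of $\Omega_L$ and $\delta\Omega_L$.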
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{figure2.eps}
\caption{(Color online) Magnetic field dependences of the spin components $S_x$ and $S_y$ calculated for $\bm{G} \parallel x$, $\bm{B} \parallel z$, the Rashba spin-orbit splitting, $\alpha k_F \tau_1 = 0.3$, and $\Omega_L / \omega_c = - 0.01$ (solid curves) or $\Omega_L / \omega_c = + 0.01$ (dashed curves).}
\label{figure2}
\end{figure}
\section{Oscillatory regime}\label{Sec_oscillations}
Now we consider the oscillatory regime of spin dephasing, which occurs if the relaxation time $\tau_1$ is longer than $1/\Omega_{\bm{k}}$~\cite{Gridnev01,Brand02,Leyland07,Griesbeck09}. For arbitrary $\bm{\Omega}_{\bm{k}}$ and scattering rate $w(\varphi,\varphi')$, Eqs.~(\ref{eq:kinetic_decomp}) can be solved only numerically. Therefore, we focus on the effect of scattering anisotropy and assume isotropic spin-orbit splitting of the Rashba type.
The scattering rate can be conveniently presented as the sum of two terms~\cite{SturmanFridkin92}
\begin{equation}
w(\varphi, \varphi') = \cen{w}(|\varphi - \varphi'|) + \delta w(\varphi, \varphi') \:,
\end{equation}
where $\cen{w}(\phi) = \int_0^{2\pi} w(\varphi'+\phi,\varphi')\, d\varphi'/(2\pi)$, $\delta w(\varphi, \varphi')=w(\varphi, \varphi')-\cen{w}(|\varphi - \varphi'|)$, and we assume that $\delta w \ll \cen{w}$. The term $\cen{w}(\phi)$ describes central scattering. The corresponding collision integral is expressed in terms of the relaxation times $\tau_n$ of angular harmonics of the distribution function,
$1/\tau_n = \int_0^{2\pi} \cen{w}(\phi) (1-\cos n\phi) \: d\phi/(2\pi)$. Then, Eqs.~(\ref{eq:kinetic_decomp}) take the form
\begin{eqnarray}
\frac{ds_{z,n}}{dt} - \Omega_R \frac{s_{-,n-1} + s_{+,n+1}}{2} = g_z \delta_{n,0} - \frac{s_{z,n}}{\tau_n} \hspace{1cm} \nonumber \\
+ \sum_m \delta w_{n,-m} \, s_{z,m} \:,\;\; \label{eq:osc_decomp_z} \\
\frac{ds_{\pm,n}}{dt}+\Omega_R \, s_{z,n \mp 1} = g_{\pm} \delta_{n,0} - \frac{s_{\pm,n}}{\tau_n} \hspace{2.3cm} \nonumber \\
+ \sum_m \delta w_{n,-m} \, s_{\pm,m} \,,\;\; \label{eq:osc_decomp_pm}
\end{eqnarray}
where $s_{\pm,n} = s_{x,n} \pm i s_{y,n}$, $g_{\pm} = g_{x} \pm i g_{y}$, and $\Omega_R = \alpha k_F$ is the precession frequency corresponding to the Rashba field at the Fermi level.
In the regime of continuous spin generation, when $\bm{g}$ is independent of time, Eqs.~(\ref{eq:osc_decomp_z}) and~(\ref{eq:osc_decomp_pm}) can be solved iteratively.
To first order in $\delta w$, the spin components depend on the harmonics $\delta w_{1,1}=\delta w_{-1,-1}^*$ and $\delta w_{2,-1}=\delta w_{-2,1}^*=-\delta w_{-1,2}=-\delta w_{1,-2}^*$. Such angular harmonics in the scattering rate reduce the spatial symmetry of the system and, in fact, correspond to a symmetric tensor and an in-plane vector, respectively. Accordingly, we define the $2 \times 2$ tensor $\bm{Q}$ by $Q_{xx} = -Q_{yy} = \Re \delta w_{1,1}$ and $Q_{xy} = Q_{yx} = -\Im \delta w_{1,1}$ and the vector $\bm{\nu}$ by $\nu_x = \Re \delta w_{2,-1}$ and $\nu_y = - \Im \delta w_{2,-1}$.
In these notations, the in-plane $\bm{S}_\parallel=(S_x,S_y)$ and out-of-plane $S_z$ components of the steady-state spin density are given by
\begin{eqnarray}
\bm{S}_\parallel &=& \left(\tau_2 + \frac{2}{\Omega_R^2 \tau_1}\right) \bm{G}_\parallel - \frac{2}{\Omega_R^2} \bm{Q} \bm{G}_\parallel + \frac{\tau_2 \, \bm{\nu}}{\Omega_R} G_z \:,\;\; \label{eq:osc_spin_p} \\
S_z &=& \frac{G_z}{\Omega_R^2 \tau_1} + \frac{\tau_2 \, \bm{\nu} \cdot \bm{G}_\parallel}{\Omega_R} \:, \label{eq:osc_spin}
\end{eqnarray}
where $\bm{G}_\parallel=(G_x,G_y)$ is the projection of the generation rate onto the QW plane. Equations~(\ref{eq:osc_spin_p}) and~(\ref{eq:osc_spin}) show that the in-plane and out-of-plane spin components are coupled in structures with anisotropic scattering; the coupling strength is proportional to $\bm{\nu}$. In particular, the generation of electron spin along the QW normal, i.e.,
$\bm{G} \parallel z$,
leads not only to $S_z$ but also to $\bm{S}_\parallel \propto \bm{\nu} G_z$. Moreover, even in the case of small scattering anisotropy, the in-plane and out-of-plane spin components can be comparable to each other provided $\Omega_R \tau_1$ is large enough. The second term on the right-hand side of Eq.~(\ref{eq:osc_spin_p}) describes the in-plane anisotropy of spin dephasing due to anisotropic conductivity which, to first order in $\delta w$, has the form $\bm{\sigma}=\tau_1 e^2 k_F^2 /(2\pi m^*) (\bm{I}_2 + \tau_1 \bm{Q})$.
The effect of conductivity anisotropy on spin relaxation was considered in Sec.~\ref{Sec_collisions}. Below we focus on the coupling between $\bm{S}_\parallel$ and $S_z$ and assume, for simplicity, that $\bm{Q}=0$.
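For orientation, the magnitude of the coupling can be read off directly from Eqs.~(\ref{eq:osc_spin_p}) and~(\ref{eq:osc_spin}). The short Python sketch below (an added illustration with the assumed values $\nu\tau_1=0.1$, $\tau_2=\tau_1$, and $\bm{\nu}\parallel x$) evaluates the steady-state components for $\bm{G}\parallel z$ and $\bm{Q}=0$ and shows that the ratio $S_x/S_z = \nu\tau_1\tau_2\Omega_R$ grows linearly with the spin-orbit splitting.

```python
# Steady-state spin for G || z and Q = 0 (illustrative parameter values):
# S_z = G_z / (Omega_R^2 tau_1), S_parallel = tau_2 nu G_z / Omega_R.
tau1 = tau2 = 1.0
nu, Gz = 0.1, 1.0                  # nu * tau1 = 0.1, generation along z only
for OmR in (1.0, 3.0, 10.0):       # Omega_R * tau1
    Sz = Gz / (OmR**2 * tau1)      # out-of-plane component
    Sx = tau2 * nu * Gz / OmR      # in-plane component, parallel to nu
    ratio = Sx / Sz                # = nu * tau1 * tau2 * OmR
    assert abs(ratio - nu * tau1 * tau2 * OmR) < 1e-12
```

Even for weak anisotropy, the in-plane component becomes comparable to $S_z$ once $\Omega_R\tau_1 \sim 1/(\nu\tau_1)$.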
The coupling between the in-plane and out-of-plane components of the spin density
can also be studied in experiments with high time resolution. Shown in Fig.~\ref{figure3} are the time dependences $S_z(t)$ and $S_x(t)$ after a short circularly polarized optical pulse which orients electron spins along $z$ at $t=0$. The curves are obtained by solving Eqs.~(\ref{eq:osc_decomp_z}) and~(\ref{eq:osc_decomp_pm}) numerically for a noncentrosymmetric scattering potential with $\bm{\nu} \parallel x$. One can see that $S_z(t)$ exhibits damped oscillations
as is expected for the oscillatory regime of spin dephasing~\cite{Gridnev01}. The oscillations are caused by precession of individual electron spins in the effective magnetic field.
The in-plane spin component $S_x$ is zero right after the pulse, emerges on the time scale of momentum relaxation, and then also decays. For the parameters given in the caption of Fig.~\ref{figure3}, $S_x(t)$ reaches a few percent of $S_z(0)$.
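This behavior can be reproduced by integrating the truncated harmonic system, Eqs.~(\ref{eq:osc_decomp_z}) and~(\ref{eq:osc_decomp_pm}), directly in time. The Python sketch below is an added illustration: the truncation order, the time step, and the parameters ($\Omega_R\tau_1=3$, $\nu\tau_1=0.1$, $\tau_n=\tau_1$, $\bm{\nu}\parallel x$) are numerical assumptions rather than values taken from the figure.

```python
import numpy as np

N = 4                          # keep harmonics |n| <= N (numerical choice)
tau1, OmR, nu = 1.0, 3.0, 0.1  # Omega_R*tau1 = 3, nu || x, nu*tau1 = 0.1
idx = lambda n: n + N          # array index of harmonic n

def rhs(sz, sp):
    """Time derivatives of s_{z,n} and s_{+,n}; s_{-,n} = conj(s_{+,-n})."""
    dsz = np.zeros(2*N + 1, dtype=complex)
    dsp = np.zeros(2*N + 1, dtype=complex)
    for n in range(-N, N + 1):
        sm = np.conj(sp[idx(1 - n)]) if abs(n - 1) <= N else 0.0  # s_{-,n-1}
        spp = sp[idx(n + 1)] if abs(n + 1) <= N else 0.0          # s_{+,n+1}
        szm = sz[idx(n - 1)] if abs(n - 1) <= N else 0.0          # s_{z,n-1}
        relax = 0.0 if n == 0 else 1.0 / tau1                     # tau_n = tau_1
        dsz[idx(n)] = 0.5 * OmR * (sm + spp) - relax * sz[idx(n)]
        dsp[idx(n)] = -OmR * szm - relax * sp[idx(n)]
    # anisotropic part: dw_{2,-1} = dw_{-2,1} = nu, dw_{1,-2} = dw_{-1,2} = -nu
    for arr, src in ((dsz, sz), (dsp, sp)):
        arr[idx(2)] += nu * src[idx(1)]
        arr[idx(-2)] += nu * src[idx(-1)]
        arr[idx(1)] -= nu * src[idx(2)]
        arr[idx(-1)] -= nu * src[idx(-2)]
    return dsz, dsp

# short pulse at t = 0: spins along z, isotropic distribution
sz = np.zeros(2*N + 1, dtype=complex); sz[idx(0)] = 1.0
sp = np.zeros(2*N + 1, dtype=complex)
dt, steps = 2e-3, 5000
Sz_t, Sx_t = [], []
for _ in range(steps):                 # classical RK4 integration
    k1z, k1p = rhs(sz, sp)
    k2z, k2p = rhs(sz + 0.5*dt*k1z, sp + 0.5*dt*k1p)
    k3z, k3p = rhs(sz + 0.5*dt*k2z, sp + 0.5*dt*k2p)
    k4z, k4p = rhs(sz + dt*k3z, sp + dt*k3p)
    sz = sz + dt/6 * (k1z + 2*k2z + 2*k3z + k4z)
    sp = sp + dt/6 * (k1p + 2*k2p + 2*k3p + k4p)
    Sz_t.append(sz[idx(0)].real)       # S_z(t)
    Sx_t.append(sp[idx(0)].real)       # S_x(t)
```

The recorded $S_z(t)$ shows damped oscillations with sign changes, while $S_x(t)$, absent at $t=0$, builds up to the percent level, in line with the discussion above.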
\begin{figure}[t]
\includegraphics[width=0.85\linewidth]{figure3.eps}
\caption{(Color online) Time dependences $S_z(t)$ and $S_x(t)$ after a short optical pulse orienting electron spins along $z$. The curves are calculated for $\bm\nu \parallel x$, $\nu\tau_1=0.1$, $\tau_n =\tau_1$, and two different $\Omega_R \tau_1$.}
\label{figure3}
\end{figure}
The microscopic mechanism of the generation of the in-plane component $S_x$ is a three-stage process illustrated in Fig.~\ref{figure4}. At the first stage [Fig.~\ref{figure4}(a)], electron spins initially oriented along $z$ precess in the Rashba field with the frequency $\bm{\Omega_k}$. The precession forms a spin distribution function $\bm{s}_{\bm{k}}$ containing the first angular harmonic. The electron scattering by noncentrosymmetric defects modifies $\bm{s}_{\bm{k}}$ and, due to the terms $\propto \delta w_{2,-1}$ and $\propto \delta w_{-2,1}$ in the collision integral, partially transforms the first angular harmonic into the second harmonic. The spin distribution function described by the second angular harmonic is shown in Fig.~\ref{figure4}(b). Finally [Fig.~\ref{figure4}(c)], the subsequent rotation of electron spins in the Rashba field results in a net spin polarization of carriers along the $x$ axis.
\begin{figure}[t]
\includegraphics[width=0.99\linewidth]{figure4.eps}
\caption{(Color online) Microscopic mechanism of the generation of in-plane spin polarization when electron spins are initially oriented along the QW normal. Precession of electron spins in the effective magnetic field followed by anisotropic electron scattering and subsequent spin precession in the effective field results in a spin polarization along $x$.}
\label{figure4}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{figure5.eps}
\caption{(Color online) (a) Example of a noncentrosymmetric scatterer. Disk with different edges which diffusively and specularly scatter electrons. (b) Electron trajectories in QW plane. Trajectories shown by solid and dashed lines are connected by space inversion.}
\label{figure5}
\end{figure}
The coupling between $\bm{S}_{\parallel}$ and $S_z$ can also be understood by analyzing electron trajectories in QW structures where the angular dependence of the scattering rate contains
the harmonics $\delta w_{2,-1}$ and $\delta w_{-2,1}$.
An example of a scatterer providing such harmonics is a disk, one edge of which reflects electrons specularly while the other scatters electrons diffusively, see Fig.~\ref{figure5}(a). Obviously, such scatterers with a preferred orientation in the QW plane
break the in-plane space inversion~\footnote{Together with inversion asymmetry along the $z$ axis causing the Rashba splitting, the scatterers reduce the overall point group of the structure to $C_s$.}.
The scattering anisotropy modifies electron trajectories, which affects the spin dynamics.
Indeed, in QWs with centrosymmetric scatterers, each electron trajectory has on average its counterpart connected by space inversion, see solid and dashed lines in Fig.~\ref{figure5}(b). Electrons with spins $\bm{s}_0$ initially oriented along $z$ move in the QW plane and gain in-plane spin components $\bm{s}_{\parallel}$ due to rotation in the Rashba field. However, the particles propagating along paths interconnected by space inversion gain opposite projections $\bm{s}_{\parallel}$, leading to a vanishing average in-plane spin polarization. In QWs with noncentrosymmetric scatterers, the space-inversion symmetry of electron trajectories is broken, which results in a nonzero in-plane spin polarization. To first order in the scattering anisotropy, $\bm{S}_{\parallel}$ is determined by the angular harmonics $\delta w_{2,-1}$ and $\delta w_{-2,1}$. Other harmonics $\delta w_{n,m}$ describing noncentrosymmetric scattering
can also couple the in-plane and out-of-plane spin components in higher orders in $\delta w$.
Equations~(\ref{eq:osc_decomp_z})-(\ref{eq:osc_spin}) are obtained for the Rashba spin-orbit splitting from the general Eq.~(\ref{eq:kinetic_decomp}). Similar calculations can be carried out for the case of Dresselhaus splitting. One can see that Eq.~(\ref{eq:kinetic_decomp}) is invariant under the replacement of the Rashba field with the Dresselhaus one, which differs in the sign of $\Omega_y$, together with the simultaneous inversion of the signs of $s_x$ and $g_x$. Thus, Eqs.~(\ref{eq:osc_spin_p}) and~(\ref{eq:osc_spin}) with $\Omega_R$, $S_x$, and $G_x$ replaced by $\Omega_D = \beta k_F$, $-S_x$, and $-G_x$, respectively, describe the steady-state spin density in (001)-grown QWs with the Dresselhaus splitting. In the general case, when both the Rashba and Dresselhaus contributions to the effective field are present, a prerequisite for the coupling between $\bm{S}_{\parallel}$ and $S_z$ remains the lack of inversion symmetry in the scattering potential. We also note that the coupling is absent irrespective of the form of $w(\varphi,\varphi')$ if $|\alpha|=|\beta|$ because, in this particular case, the frequency $\bm{\Omega}_{\bm{k}}$ depends on only one component of the wave vector.
Finally, we discuss the effect of an external magnetic field on spin dynamics in the oscillatory regime. Equations~(\ref{eq:osc_decomp_z}) and~(\ref{eq:osc_decomp_pm}) with the Larmor precession and cyclotron motion being taken into account have the form
\begin{eqnarray}
\frac{ds_{z,n}}{dt} - \Omega_R \frac{s_{+,n+1} + s_{-,n-1}}{2} + i \frac{\Omega_{L,-} s_{+,n} - \Omega_{L,+} s_{-,n}}{2} \nonumber \\ = g_z \delta_{n,0} - \left( \frac{1}{\tau_n} - i n \omega_c \right) s_{z,n} + \sum_m \delta w_{n,-m} \, s_{z,m} \:,\;\;\; \label{eq:osc_B1} \\
\frac{ds_{\pm,n}}{dt}+\Omega_R \, s_{z,n \mp 1} \pm i \Omega_{L,\pm} \, s_{z,n} = g_{\pm} \delta_{n,0} \hspace{2cm} \nonumber \\ - \left( \frac{1}{\tau_n} - i n \omega_c \mp i \Omega_{L,z} \right) s_{\pm,n} + \sum_m \delta w_{n,-m} \, s_{\pm,m} \,,\;\;\; \label{eq:osc_B2}
\end{eqnarray}
where $\Omega_{L,\pm} = \Omega_{L,x} \pm i \Omega_{L,y}$. Equations~(\ref{eq:osc_B1}) and~(\ref{eq:osc_B2}) are valid for arbitrary strength of spin-orbit splitting $\Omega_R \tau_n$ and angular dependence of the scattering rate.
\begin{figure}[b]
\includegraphics[width=0.99\linewidth]{figure6.eps}
\caption{(Color online) Dependences $S_z$ and $S_x$ on the in-plane magnetic field $\bm{B} \parallel y$ measured in units of $\Omega_L/\Omega_R$. The curves are calculated for $\bm{G} \parallel z$, $\bm\nu \parallel x$, $\tau_n =\tau_1$, $\nu\tau_1=0.1$, and two different $\Omega_R \tau_1$.}
\label{figure6}
\end{figure}
Dependences of the steady-state components $S_z$ and $S_x$ on the in-plane magnetic field $\bm{B}\parallel y$ for continuous spin generation along the $z$ axis are shown in Fig.~\ref{figure6}. The curves are obtained by solving Eqs.~(\ref{eq:osc_B1}) and~(\ref{eq:osc_B2}) numerically for $\bm{\nu} \parallel x$ and different $\Omega_R \tau_1$.
One can see that the curves depend drastically on the parameter $\Omega_R \tau_1$. At $\Omega_R \tau_1 =1$, the dependences $S_z(B)$ and $S_x(B)$ are similar to conventional Hanle curves. The only difference is that $S_x(0) \neq 0$ due to scattering anisotropy, see Eq.~(\ref{eq:osc_spin_p}).
The dependences $S_z(B)$ and $S_x(B)$ calculated for large $\Omega_R \tau_1$ ($\Omega_R \tau_1 =3$ in Fig.~\ref{figure6}) look completely different. Instead of decreasing monotonically with the magnetic field, $S_z(B)$ first increases with the field, reaches a maximum at $\Omega_L \approx \Omega_R$, and then decreases. $S_x(B)$ is nearly independent of $B$ at small magnetic fields and exhibits a sharp rise at $\Omega_L \approx \Omega_R$.
Such a behavior is caused by a partial suppression of the DP spin dephasing mechanism, which occurs in high-mobility structures when the external in-plane magnetic field matches the effective field~\cite{Poshakinskiy11}.
We also note that the dependence $S_z(B)$ is always even despite the fact that the vector $\bm{S}(0)$ is not aligned along the $z$ axis. The evenness of $S_z(B)$ follows from Eqs.~(\ref{eq:osc_B1}) and~(\ref{eq:osc_B2}).
For the magnetic field pointed along the QW normal ($\bm{\Omega}_L \parallel z$) and continuous spin generation, the steady-state solution of Eqs.~(\ref{eq:osc_B1}) and~(\ref{eq:osc_B2}) can be found analytically. To first order in the scattering asymmetry $\delta w$ and for $\bm{G}\parallel z$, the solution has the form
\begin{eqnarray}
S_x &=& {\rm Re} \left[ \frac{(\nu_x - i\nu_y) \,\Omega_R \, \tilde{\tau}_{1} \tilde{\tau}_{2} }{1 + i \Omega_L \tilde{\tau}_2 + 2i\Omega_L (1/\tau_1 - i\omega_c)/\Omega_R^2} \right] S_z \:, \nonumber \\
S_y &=& {\rm Re} \left[ \frac{(\nu_y + i\nu_x) \,\Omega_R \, \tilde{\tau}_{1} \tilde{\tau}_{2} }{1 + i \Omega_L \tilde{\tau}_2 + 2i\Omega_L (1/\tau_1 - i\omega_c)/\Omega_R^2} \right] S_z \:, \nonumber \\
S_z &=& \frac{1+(\omega_c + \Omega_L)^2\tau_1^2}{\Omega_R^2 \tau_1} G_z \:, \label{eq:osc_normal}
\end{eqnarray}
where $1/\tilde{\tau}_n = 1/\tau_n - in\omega_c - i\Omega_L$. The $z$ component of the spin density increases quadratically with the magnetic field, which is caused by a slowdown of the D'yakonov-Perel' spin dephasing mechanism in the perpendicular magnetic field~\cite{Wilamowski04,Glazov04}. The dependences of $S_x$ and $S_y$ on the magnetic field $\bm{B} \parallel z$ are more complicated and depend drastically on the parameters. Examples of such dependences are shown in Fig.~\ref{figure7}. First, we note that $S_x \neq 0$ in zero magnetic field due to scattering anisotropy. As the field $B$ increases, $S_x$ decreases and changes sign. The component $S_y$ depends linearly on $B$ at small magnetic fields, reaches an extremum, and then decreases. The curves calculated for $\Omega_L/\omega_c = - 0.05$ (solid curves) have additional Hanle-like peculiarities at $\omega_c\tau_1 \approx 3.2$. The analysis of Eqs.~(\ref{eq:osc_normal}) shows that the peculiarities occur at $\omega_c \approx \Omega_R \sqrt{-\omega_c/(2\Omega_L)}$ and have the same origin as those in Fig.~\ref{figure2}, caused by the vanishing of the total frequency $\Omega_L + \delta\Omega_L$.
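Equations~(\ref{eq:osc_normal}) are straightforward to evaluate numerically. The Python sketch below (an added illustration using the parameters of Fig.~\ref{figure7}) checks that $S_x(0)\neq 0$ due to scattering anisotropy while $S_y(0)=0$, and that $S_z$ grows with the perpendicular field.

```python
import numpy as np

# Parameters of Fig. 7: nu || x, nu*tau1 = 0.1, Omega_R*tau1 = 1,
# Omega_L/omega_c = -0.05; units with tau1 = 1 are illustrative choices.
tau1, OmR = 1.0, 1.0
nu_x, nu_y = 0.1, 0.0
r = -0.05                                      # Omega_L / omega_c

def steady_spin(wc, Gz=1.0):
    """Steady-state (S_x, S_y, S_z) for B || z as a function of omega_c*tau1."""
    OmL = r * wc
    tt1 = 1.0 / (1.0/tau1 - 1j*wc - 1j*OmL)    # tilde tau_1
    tt2 = 1.0 / (1.0/tau1 - 2j*wc - 1j*OmL)    # tilde tau_2
    Sz = (1 + (wc + OmL)**2 * tau1**2) / (OmR**2 * tau1) * Gz
    den = 1 + 1j*OmL*tt2 + 2j*OmL*(1.0/tau1 - 1j*wc)/OmR**2
    pref = OmR * tt1 * tt2 / den
    Sx = ((nu_x - 1j*nu_y) * pref).real * Sz
    Sy = ((nu_y + 1j*nu_x) * pref).real * Sz
    return Sx, Sy, Sz
```

At zero field the expressions reduce to the anisotropy-induced in-plane spin $S_x = \nu_x \tau_1\tau_2\Omega_R S_z$ discussed in the main text.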
\begin{figure}[b]
\includegraphics[width=0.85\linewidth]{figure7.eps}
\caption{(Color online) Dependences of $S_x$ and $S_y$ on the magnetic field $\bm{B} \parallel z$ measured in units of $\omega_c \tau_1$. The curves are calculated after Eqs.~(\ref{eq:osc_normal}) for $\bm\nu \parallel x$, $\tau_n =\tau_1$, $\nu\tau_1=0.1$, $\Omega_R \tau_1 =1$, and
$\Omega_L/\omega_c = - 0.05$ (solid curves) or $\Omega_L/\omega_c = + 0.05$ (dashed curves).}
\label{figure7}
\end{figure}
\section{Summary}\label{Summary}
We have developed the microscopic theory of electron spin dephasing in QW structures with anisotropic scatterers. It is shown that, in the collision-dominated regime of spin dephasing, the spin-relaxation-rate tensor is determined by constants of spin-orbit splitting and the electric conductivity tensor. In (001)-grown structures with anisotropic in-plane conductivity, the longest spin lifetime of electrons polarized along the QW normal is achieved in asymmetric QWs with a finite Rashba splitting. We have demonstrated that, in structures with
noncentrosymmetric scattering potentials, the in-plane and out-of-plane spin components are coupled to each other. The coupling is caused by breaking the space-inversion symmetry of electron trajectories in the QW plane and is more pronounced in structures with strong spin-orbit splitting. The engineering of electron trajectories provides an additional approach to manipulate electron spins in low-dimensional semiconductors.
\paragraph*{Acknowledgments.} This work was supported by the RFBR, Russian Ministry for Education and Science (contract 14.740.11.0892), EU programs ``Spinoptronics'' and ``POLAPHEN'', and the Foundation ``Dynasty''-ICFPM.
Alexander Georgiev was a Bulgarian artist and educator.
Biography
He was born in 1940 in Sofia. In 1968 he graduated in painting from the Academy of Arts in Berlin-Weissensee. In 1987 he completed a qualification in higher-education pedagogy at Leipzig University. In 1983 he became an associate professor of design at the Hochschule für Technik und Wirtschaft in Berlin. He retired in 1997 with the title of professor.
In 2005 he was awarded an honorary diploma and plaque in Berlin for his contribution to the development and popularisation of Bulgarian culture.
His wife was Regine Georgiev, a fashion designer.
He died on 12 July 2012 at his home in Hohenschönhausen.
Sources
Ingeborg Ruthe, "Alexander Georgiew: Bedrängende Tagträume", Berliner Zeitung, 18 July 2012
Bulgarian artists
Bulgarians in Berlin
People born in Sofia
b. c.1687, 1st s. of Christopher Harris of Hayne by his w. Jane; bro. of John Harris. m. bef. 1714, Mary Ann, da. of John Buller of Morval, Cornw., 1s. 1da.
Descended from an old Devonshire family, Christopher Harris was classed as a Tory in 1715. Returned for Okehampton on the Mohun and his family's interests, he was absent from the division on the septennial bill in 1716. He died 1718 (buried 4 July) v.p., aged 31.
Russian aircraft lands in Vietnam with 40 tons of storm Damrey relief
Russia has offered US$5 million in aid to Vietnam to support regions hit by Typhoon Damrey
A Russian Il-76 plane landed in south-central Vietnam's Cam Ranh Airport just after midnight on Thursday, carrying 40 metric tons of state-funded storm relief to the devastated region.
The Russian Il-76 freighter at Cam Ranh Airport. Photo: Tuoi Tre
Russian media on Tuesday reported that President Vladimir Putin had agreed to offer US$5 million in aid to Vietnam to support the Southeast Asian country in mitigating the impact of Typhoon Damrey.
In addition to monetary support, an Il-76 aircraft from Russia's Ministry of Emergency Situations was dispatched to Vietnam loaded with 40 tons of goods, including milk, sugar, canned food, and tents, the Russian TASS news agency reported.
The freighter landed at Cam Ranh Airport in the storm-struck province of Khanh Hoa at around 12:30 am on Thursday, after departing the previous morning from Ramenskoye Airport on the outskirts of Moscow.
Nguyen Duy Bac, deputy chairman of Khanh Hoa Province, was present at Cam Ranh Airport to receive the aid.
An Il-76 freighter aircraft lands at Cam Ranh Airport in Khanh Hoa Province to deliver storm relief provided by the Russian government, November 9, 2017. Photo: Tuoi Tre
"We would like to thank President Putin for helping the Vietnamese citizens who suffered from this natural disaster, especially the people of Khanh Hoa," Bac said.
"These goods are an extreme necessity for us right now. We will do our best to quickly deliver them to storm-hit areas and rebuild from the catastrophic aftermath of Typhoon Damrey."
Damrey made landfall in Khanh Hoa and neighboring provinces on Friday last week with winds reaching up to 90 kilometers per hour. It is now considered the fiercest storm to have hit the central coast in twenty years.
As of Wednesday, the death toll had reached a staggering 106, with 30 people still missing along the coast.
Nguyen Van Cuong, a chief customs officer at Cam Ranh Airport, told Tuoi Tre (Youth) newspaper that all paperwork for the Russian goods had been completed beforehand so that the plane could be quickly unloaded after its arrival.
The storm relief is expected to be delivered to areas in need as early as Thursday morning, Cuong said.
Nguyen Duy Bac (C), deputy chairman of Khanh Hoa Province, arrives at Cam Ranh Airport to receive the Russian storm relief. Photo: Tuoi Tre
Goods are unloaded through the back of the aircraft. Photo: Tuoi Tre
VNF/TTO
Emmy: seven years of updatable consensus

in-depth
30 July 2021

Tezos is a blockchain,[1] and a blockchain is a distributed database that gets updated by adding blocks (small bunches of database operations). Tezos' basic design goes back to a 2014 whitepaper, which was genuinely groundbreaking[2] — not least in its insight of making the protocol of the database itself be a mutable database entry; this is the so-called Tezos protocol upgrades. From this idea has grown a broad blockchain ecosystem.

A key design choice is the consensus algorithm: how our system decides which blocks to add, and which to discard. Thus far, Tezos has used and refined a Nakamoto-style consensus algorithm called Emmy. Consensus algorithms are genuinely subtle,[3] so it is high time that we explain what Emmy is and how it works:

A survey (and explanation) of Emmy

In this blog post we will give an explanation and survey of the Emmy family of consensus algorithms, which has powered the Tezos blockchain since its inception. We will start with

We mentioned there are several flavours of Emmy:

So 'Emmy' is a family of consensus algorithms — technically, a collection of proof-of-stake Nakamoto-style consensus algorithms — parameterised over some dials and switches which can be tweaked to optimise safety and performance — and ideally, both at the same time.

These parameters include (explanations will follow below):

The evolution from Emmy to Emmy+ to Emmy* consists of tweaks to these parameters, each of which had to be carefully considered and tested.
These tweaks are practically motivated: each iteration offers worthwhile real-world improvements in either speed or security or (preferably) both.[5]

The evolution of these algorithms has been a part of the evolution of Tezos itself, and it illustrates just how far it is possible to optimise Nakamoto-style consensus in a live, functioning, industrial blockchain. So …

How does the Emmy family work?

A blockchain is a sequence of blocks. We call the position of a block in this sequence its level: thus, the "block at level $$l$$" is the $$(l{+}1)$$-th block in the sequence. We start counting at zero, so the first block (also called the genesis block) is at level $$0$$. (You can view details of the genesis block of the Tezos blockchain.)

Blocks are timestamped. So the block at level $$l$$ will also have some timestamp $$t_l$$.

For simplicity suppose also that the blockchain is unforked, meaning that the network agrees on the blockchain up to and including block $$B$$ at level $$l$$ (we consider the more general case of a forked chain below).

Now it's time to decide on what block to add at level $$l{+}1$$ to our unforked chain. The consensus algorithm gets to work, according to the following rules:

The rules

High-level view …

At a high level, Emmy acts as follows:

1. sort participants into random order,
2. query each participant in order for a block, and

But this is a blockchain system, so we need an algorithm that can achieve the effect above and

- the algorithm is scalable, efficient, and resilient to real-world faults such as network interruptions, and
- does not assume a central authority, and
- does not assume all participants are well-behaved.

… in more detail

We will need two inputs to the algorithm:

1. We need a random seed.

This is a number used to seed a (pseudo)random generating function.
This is obtained as a hash of the state of the blockchain approximately $$4096\times 5$$ blocks in the past — at one block per minute that means $$20480$$ minutes ago (about two weeks).

2. We need a mathematical function called the delay function.

$$\mathit{delay}$$ has type $$\mathbb N\times\mathbb N\to\mathbb N_+$$, meaning that it inputs two nonnegative numbers and returns a strictly positive number. $$\mathit{delay}$$ is a parameter of the Emmy algorithm, meaning that it varies between Emmy, Emmy+, and Emmy*; for now, it just suffices that this exists. More on this shortly.

We now proceed as follows:

1. Using the random seed described above, each active blockchain participant $$a$$ with at least one roll ( = 8,000 ꜩ) is randomly assigned two pieces of data:

- a unique earliest priority $$p(a)$$, which is a non-negative number.[7] Smaller numbers are better and each participant gets a different number (think: positions in a queue).
- an endorsement power $$w(a)$$, which is a number between 0 and 32 (think: magic talismans; see below).

Note that the earliest priority and the endorsement power are abstract data quantities. We discuss below how these quantities get mapped to a time in seconds.

The distribution of earliest priority $$p(a)$$ is proportional to participants' stake: if $$a$$ has more stake then there is a correspondingly increased chance that e.g. $$p(a)=0$$ or $$p(a)=1$$.

Equivalently: priorities are dispensed by a uniform random distribution on a per-roll basis, so a participant who controls more rolls will get proportionally more priorities.

The distribution of endorsement power $$w(a)$$ is also proportional to participants' stake, and furthermore the total endorsement power dispensed must sum to 32.
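To make the per-roll distributions concrete, here is a toy sketch. The stakes, the seed string, and all names are invented for illustration; the real protocol derives its randomness from the chain state as described above, and this is not the protocol implementation:

```python
import random

# Toy sketch of roll-weighted selection: priorities and the 32
# endorsement slots are drawn per roll from a seeded PRNG, so a
# participant's chances scale with how many rolls (8,000 tez each)
# they control.  Stakes below are hypothetical.
rolls = {"alice": 60, "bob": 30, "carol": 10}   # rolls per participant
roll_owners = [p for p, n in rolls.items() for _ in range(n)]

rng = random.Random("seed-derived-from-the-chain")  # stand-in for the real seed

# Earliest priority: each priority slot is won by a uniformly chosen
# roll; a participant's earliest priority p(a) is the first slot at
# which one of their rolls wins.
priority_list = [rng.choice(roll_owners) for _ in range(1000)]
earliest = {p: priority_list.index(p) for p in rolls}

# Endorsement power: the 32 endorsement slots are likewise distributed
# per roll; w(a) is the number of slots won by a's rolls, and the
# dispensed powers always sum to 32.
slots = [rng.choice(roll_owners) for _ in range(32)]
power = {p: slots.count(p) for p in rolls}
assert sum(power.values()) == 32
```

With these stakes, `alice` controls 60% of the rolls and so tends to receive both the earliest priorities and the bulk of the endorsement power, which is the proportionality the text describes.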
Equivalently: the 32 endorsement slots are distributed per roll, so a participant who controls more rolls is proportionally more likely to get one or more endorsement slots.

So intuitively:

imagine that Tezos is a fantasy magic kingdom which dispenses queue positions and a strictly limited total of 32 magic talismans to its landowners — and, because this is a proof-of-stake system, larger landowners tend to get better queue numbers and more talismans. We will continue this example below.

2. Recall that we assumed for simplicity that all the participants agree on the state of the blockchain so far, and in particular that it ends at block $$B$$ at level $$l$$.

Now, any participant $$a$$ can endorse (vote for) $$B$$ by transmitting an endorsement operation $$d(a,B)$$ across the network.

So suppose at some particular moment that block $$B$$ has received a set $$e_B=\{d(a_1,B), d(a_2,B), \dots, d(a_n,B)\}$$ of endorsements from distinct participants $$(a_1,a_2,\dots,a_n)$$ with endorsement powers $$(w(a_1),w(a_2),\dots,w(a_n))$$ respectively.[8] Write $$w(e_B)$$ for the sum of the endorsement powers of $$a_1$$ to $$a_n$$:

$$w(e_B) = w(a_1) + w(a_2) + \dots + w(a_n) .$$

We see that each endorsement is weighted by the endorsement power $$w(a_i)$$ of $$a_i$$ — so in effect, at most 32 participants can usefully endorse $$B$$, since there are at most 32 endorsement slots to go around.

Then a participant with earliest priority $$p(a)$$ has the right to bake a valid block $$B'$$ and attach it to a chain after a delay of

$$delay(p(a),w(e_B))$$

seconds.
We discuss in detail below what function $$delay$$ is; for now the function is just a parameter of the algorithm.

Every block contains enough information (timestamp, priority, public key of creating participant, endorsements, and so on) that every participant can check — just by examining it and without reference to any external oracle — whether it validly follows on from the last block.[6]

Then $$B'$$ is transmitted to the network and, assuming everything is working efficiently, $$B'$$ gets appended to the blockchain at level $$l{+}1$$.

3. If everything is not working efficiently — perhaps the network is slow or participant $$a$$ is rebooting — then time passes and eventually a delay of $$delay(p(a'),w(e_B'))$$ may pass, where $$a'$$ is the next participant in line and $$e_B'$$ is the number of endorsements that $$a'$$ sees for $$B$$. From this moment, both $$a$$ and $$a'$$ can bake blocks. In particular, if $$a$$ remains hors combat then $$a'$$ may be the one to bake the next valid block. And so it goes on until somebody bakes a valid block and the blockchain is extended.

It might be helpful to continue our analogy of Tezos as a magic kingdom:

Suppose the Tezos blockchain is a magic kingdom and suppose block $$B$$ at level $$l$$ is the most recent Law of the magic kingdom to be passed. Landowners line up in a random order and 32 magic talismans are randomly dispensed, where both queue order and talisman distribution are weighted according to how much land each landowner owns. Landowners use their talismans to broadcast a magical blessing of the most recent Law, and — depending on the delay function, which depends on a landowner's queue position, and on the talisman-weighted number of blessings of the most recent Law $$B$$ that the landowner has received — the first landowner to bake a block gets to determine the next $$B'$$, i.e. to write the next law of the magic kingdom.
Of course, this means that large landowners connected to large sections of the magical blessing broadcast network tend to get early queue positions and receive lots of the magical blessings, and so tend to set the laws; whereas those with smaller parcels of land or more restricted access to the network tend to miss out.

Why we need a consensus algorithm

If this seems complicated, note what it buys us:

- there is no central authority, and
- the system self-organises to reward having a large stake and following the rules, and is resilient against attack, non-participation, and network delays.

Large fast networks beat small slow ones

The algorithm dispenses endorsements amongst all active participants. Thus if a small group of nodes becomes isolated for a while — or just decides to go its own way — then it is not the case that they can just give one another early priorities, vigorously endorse one another's blocks, and so build a long chain.[9]

An isolated group may fork, but

- its members can garner relatively few priorities (because statistically speaking, most priorities go to participants not in the group) and
- relatively few endorsement slots (ditto),

so that the values of the delay function within the group will be relatively large, and it will make relatively slow progress. Thus, if and when the group tries to rejoin the main body of the network, the group's branch will be short compared to the branch of the rest of the blockchain, and it will die off.

Endorsements

At first sight it might seem strange to endorse the last block $$B$$. After all, $$B$$ has already been attached to the blockchain, and we assumed for the sake of argument that there are no forks.
So why does it even matter if we now endorse it?

Endorsements tend to unify the network and kill off small, slow, isolated chains: a small fragment of the network will find it difficult to ignore the rest of the community and go its own way,

- not just because it will struggle to obtain small priorities, but also
- because even once it gets a priority it may also struggle to gain endorsements from the wider network for the block that it bakes with that priority.

In principle, (withholding) endorsements could also be used to penalise $$B$$ if we disapprove of its contents. In practice this is not done: participants just endorse $$B$$ as soon as they can (for which they are rewarded with endorsing rewards). See the algorithm from the point of view of a participant.

Forks

Recall that we simplified and assumed that all participants agree on the state of the blockchain so far. But what if they don't? In practice the chain can fork, meaning that different participants have different views of the blockchain so far.[10]

So what if the blockchain is forked?

Recall that the consensus algorithm depends on these three parameters:

- the random seed, which was seeded from the state of the chain at least two weeks ago — we presume that no fork can last that long, so every participant now agrees on this quantity — and
- the delay function, which is a fixed parameter of the current economic protocol, so again every participant agrees on this — and
- the fitness function, which is also a fixed parameter of the economic protocol.

So we can assume that these parameters are shared and everybody is working from a shared notion of "what is a valid chain". Furthermore, checking validity of a proposed chain is precise and unambiguous: participants should just work from the fittest blockchain branch that they can see.
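As a toy illustration of this shared rule (the function and chain values below are invented; for Emmy+ and Emmy*, chain fitness is simply chain length, as discussed below):

```python
# Toy fork-choice sketch: every participant applies the same rule --
# keep working from the valid candidate chain of greatest fitness
# (ties keep the current chain).  `fitness` is a protocol parameter;
# for Emmy+ / Emmy* it is simply the chain length.
def choose_chain(current, candidates, fitness=len):
    best = current
    for chain in candidates:
        if fitness(chain) > fitness(best):
            best = chain
    return best

main = ["genesis", "b1", "b2", "b3"]
fork = ["genesis", "b1", "x2"]
assert choose_chain(main, [fork]) == main   # a shorter fork is ignored
assert choose_chain(fork, [main]) == main   # honest nodes switch to the longer chain
```

Because the fitness function is fixed by the economic protocol, two participants seeing the same set of branches always converge on the same choice.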
Note that the use of a fitness function to choose the canonical branch is common to Nakamoto-style consensus algorithms. This is known in the literature as the fork choice rule.

If the network becomes partitioned then the blockchain may fork for a while, and the partitions could evolve independently for a while, but it will snap back once connectivity is restored, provided that most (= at least half) of the participants work following the rules above. This is called a Nakamoto-style system because this is how Bitcoin works.[11]

The rules: the Emmy family, from the point of view of a participant

1. Continually observe blocks and endorsements on the network.
2. Work from the 'best' valid chain $$\mathcal B$$ that you observe. If the system is unforked then picking the best chain is easy because there is only one. If the system is forked, only switch to another chain if it is 'better'. What 'better' means for a chain is determined by a metric called chain fitness, discussed below.
3. Endorse the first block you observe that can validly attach to $$\mathcal B$$.
4. If you do not observe such a block, then produce one as soon as the delay function says that you validly can.

The need for a chain fitness function

Suppose we are on the blockchain and it is forked. How is an honest participant to decide which chain is better?[13] Here are two possible answers:

1. In Emmy:

chain fitness is equal to the sum of the length of a chain and the number of endorsements which it contains.

Thus, an honest participant who bakes on the fittest branch of a fork will prefer the branch with the most blocks-plus-endorsements.

2.
In both Emmy+ and Emmy*:

chain fitness is equal to the length of the chain.

Thus, an honest participant who bakes on the fittest branch of a fork will prefer the longest branch.[14]

Reasons for the change in fitness function

This is discussed in the post on Emmy+: search for "simplifies the optimal baking strategy".

The issue with the Emmy fitness function (item 1 in the list above) is that it gives equal weight to an endorsement and a block — and each block can hold up to 32 endorsements. Thus, in terms of chain fitness, gathering endorsements for the last block is more important than producing the next block and thus extending the chain.

This incentivises bakers to hold off baking a block until they can pack it with as close as possible to a full complement of 32 endorsements, for fear that if they do not then their chain might be overtaken by a fork which is shorter (by as much as ¹⁄₃₂ of the length) but which carries more endorsements and so has greater fitness.

This incentive structure could slow down the blockchain overall, and in particular it could slow down the transaction rate (the overall rate at which transactions get included in blocks and baked onto the chain), which is not good for users, who are likely to prefer high transaction rates and rapid transaction settlement times.

Emmy+ and Emmy* address this by setting chain fitness to equal chain length. Endorsements still play a role, but the effect is exerted via the delay function — discussed below; essentially, more endorsements means you can bake earlier. Honest participants just prefer the longest chain and there is no reason not to bake a block, once the delay function permits it.

The three flavours of Emmy

List of flavours

Emmy exists in three flavours:

1. The original version Emmy. This was used from 2014 until 18 October 2019 (this is the first block to use the Babylon protocol).
Emmy is no longer current.

2. The new version Emmy+. This is current.

These are essentially the same algorithm but they differ in tweaks to their parameters. Specifically:

- Emmy and Emmy+ have 32 endorsement slots per level. Emmy* has 256, speeding up confirmation time and increasing user participation.[15]
- The chain fitness function of Emmy is "blocks + endorsements"; that of Emmy+ and Emmy* is just "blocks", as discussed above.
- The delay functions of the three algorithms differ, as discussed below.

The Delay function of Emmy

In Emmy, $$\edelay$$ is a function just of the priority $$p$$:

$$\tag{1}\label{eq:delay} \begin{array}{r@{\ }l} \edelay(p) =& \db + \dpp \cdot p \\ =& 60 + 75 \cdot p \quad \text{(seconds)}. \end{array}$$

- $$\db=60$$ is the base delay. Thus this is a constant minimal offset from one block to the next.
- $$\dpp=75$$ is the priority delay. Thus this establishes the time between priorities, as discussed above.

Worked examples:

- A participant with earliest priority $$0$$ can start baking after $$60+75\cdot 0 = 60$$ seconds.
- A participant with earliest priority $$1$$ can start baking after $$60+75\cdot 1 = 135$$ seconds.

The Delay function of Emmy+

Emmy+ adds a dependency on $$w$$, the endorsement power of the endorsements in the block to be baked:

$$\tag{2}\label{eq:epdelay} \begin{array}{r@{\ }l} \edelayp(p, w) =& \db + \dpp \cdot p + \de\cdot \max(0, \frac{3}{4}\cdot\te - w) \\ =& 60 + 40 \cdot p + 8\cdot \max(0, 24 - w) \quad\text{(seconds)}. \end{array}$$

It might help to view $$\mathit{delay}$$ abstractly as a function that tends to increase in its first argument (the priority slot) and to decrease linearly in its second (the endorsement power). So:

- later priority slot = more delay;
- more endorsements = less delay.

In words, Equation \eqref{eq:epdelay} says that the delay is:

- a base delay of $$\db=60$$ seconds, plus
- a priority delay of $$\dpp=40$$ seconds, plus
- a delay per missed endorsement of $$\de=8$$ seconds for every endorsement slot that the block to be baked falls short of a threshold of $$24=\frac{3}{4}\cdot \te$$ out of the $$\te=32$$ available endorsement slots.

From the Babylon protocol upgrade (18 October 2019) to time of writing, these parameters have been fixed at

$$\db = 60,\quad \dpp=40,\quad \de=8,\quad \te=32.$$

Worked example:

- A participant with earliest priority $$0$$, baking a block with $$16$$ endorsements for the previous block — thus, with half of the full complement of 32 possible endorsement slots — can start baking after $$60+40\cdot 0+8\cdot 8=124$$ seconds.
- If the participant had gathered $$24$$ instead of $$16$$ endorsements, then this would drop to $$60$$ seconds.
- A participant with earliest priority $$1$$, baking a block with $$16$$ endorsements for the previous block — thus, with half of the full complement of 32 possible endorsement slots — can start baking after $$60+40\cdot 1+8\cdot 8=164$$ seconds.
- If the participant had gathered $$24$$ instead of $$16$$ endorsements, then this would drop to $$100$$ seconds.

The Delay function of Emmy*

Emmy* builds on the delay function of Emmy+ while observing that during normal operation most blocks are baked at priority 0 and with plenty of endorsements.
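Before moving on to Emmy*, the Emmy and Emmy+ delay functions can be checked mechanically against the worked examples above. This is a sketch with the stated constants hard-coded, not the protocol implementation:

```python
# Sketch of the Emmy and Emmy+ delay functions (Eqs. (1) and (2)),
# using the constants given in the text.
def emmy_delay(p):
    """Eq. (1): base delay 60 s plus 75 s per priority slot."""
    return 60 + 75 * p

def emmy_plus_delay(p, w, base=60, per_priority=40, per_missed=8, threshold=24):
    """Eq. (2): Emmy+ additionally charges 8 s per endorsement slot
    by which the block falls short of the threshold 24 = 3/4 * 32."""
    return base + per_priority * p + per_missed * max(0, threshold - w)

# Reproduce the worked examples from the text:
assert emmy_delay(0) == 60 and emmy_delay(1) == 135
assert emmy_plus_delay(0, 16) == 124   # priority 0, half of 32 slots
assert emmy_plus_delay(0, 24) == 60    # threshold reached: no penalty
assert emmy_plus_delay(1, 16) == 164
assert emmy_plus_delay(1, 24) == 100
```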
So let's fast-track this optimal case:

$$\tag{3}\label{eq:esdelay} \edelays(p, w) = \begin{cases} \md & \text{ if } p = 0 \wedge w \geq \frac{3}{5}\te \\ \edelayp(p, w) & \text{ otherwise} \end{cases}$$

Above, $$\md=30$$ is the minimal delay and $$\te=256$$ is the number of endorsement slots. Furthermore, the constant $$\de$$ used by $$\edelayp(p, w)$$ has been changed to 4.

Worked example:

- A participant with earliest priority $$0$$, baking a block with $$153$$ endorsements for the previous block — thus, with less than 60% of the full complement of 256 possible endorsement slots — can start baking after $$60$$ seconds.
- A participant with earliest priority $$0$$, baking a block with $$154$$ endorsements for the previous block — thus, with at least 60% of the full complement of 256 possible endorsement slots — can start baking after $$30$$ seconds.
- If the participant had gathered $$192$$ instead of $$154$$ endorsements, then this would make no difference and they can still start baking after $$30$$ seconds.
- A participant with earliest priority $$1$$, baking a block with $$128$$ endorsements for the previous block — thus, with half of the full complement of 256 possible endorsement slots — can start baking after $$60+40\cdot 1+4\cdot (192 - 128)=356$$ seconds.
- If the participant had gathered $$192$$ instead of $$128$$ endorsements, then this would drop to $$60+40\cdot 1=100$$ seconds.

So much for our overview of the delay function. How does this help the blockchain to withstand attacks? What does an attack look like, anyway? Read on …

The mathematics of withstanding attacks

An attack scenario (a Valuable Car)

Let the time be $$t=0$$ and consider the following scenario:

1. You have a Valuable Car to sell.
2. I agree to purchase it and we agree on a sum of Quite A Lot for the sale.
3. I bake a block $$B$$ at time $$t=0$$ recording a transfer of Quite A Lot from my account to yours.
4. I also secretly bake another block $$B'$$ (this is called double baking). $$B'$$ is identical to $$B$$ except that it includes a transaction for Far Less from my account to yours.
5. The main Tezos chain $$\mathcal M$$ continues to evolve from $$B$$, and (still in secret) I bake an alternative chain $$\mathcal S$$ evolving from $$B'$$ instead of from $$B$$.
6. Meanwhile, I take possession of the Valuable Car, and drive it home.
7. When I get home, I reveal my chain to the network. In this chain I bought your car, but for Far Less rather than for Quite A Lot.

If you complain, I can say "What's the problem? I paid you and you gave me the Valuable Car".

- Say that my attack succeeds when there exists some time $$t>0$$ at which $$\mathcal S$$ is longer than $$\mathcal M$$. Then, I can reveal $$\mathcal S$$ to the world and as per the rules honest participants will bake from $$\mathcal S$$.
- Say that my attack fails when for every time $$t>0$$, $$\mathcal S$$ is shorter than $$\mathcal M$$. I can still reveal $$\mathcal S$$ to the world, but as per the rules it will be ignored.

Whether this attack succeeds or fails depends on the relative rates of growth of $$\mathcal M$$ and $$\mathcal S$$. This is dictated by the delay function as discussed above. In the long term my chain $$\mathcal S$$ will be slower than $$\mathcal M$$ (so long as I have less than half the total stake on the chain) because I will gain relatively fewer priorities and endorsements.
Still: like a gambler in a casino, I might get lucky.

- For a given attacker stake, what are the chances of an attacker undoing a transaction as outlined above?
- Turning the question around: after how many blocks on the main chain after I paid you Quite A Lot can you feel safe about handing over your Valuable Car?

Calculating the confirmation number

Setting the scene, and some simplifying assumptions

We can amalgamate multiple dishonest bakers $$\cb_i$$ into a single 'composite' dishonest baker $$\cb$$ having as stake fraction the sum of the stake fractions of $$\cb_i$$, and similarly for the honest bakers. We can also think of a fork as two competing chains — an honest main chain $$\mathcal M$$ and a dishonest hostile chain $$\mathcal S$$ — both extending some genesis block at level $$\ell = 0$$.[16]

Thus we reason henceforth using

- a single honest baker $$\hb$$ with stake fraction $$1-f$$ and
- a single dishonest adversary baker $$\cb$$ with stake fraction $$f$$,
- both competing to add a longer chain starting from a common block at level $$\ell = 0$$.

We're playing for the adversary $$\cb$$ in the mathematics below, so we write

- $$f$$ for the stake fraction of $$\cb$$,
- $$p$$ for the earliest priority of $$\cb$$, and
- $$e$$ for the number of endorsement slots of $$\cb$$.

To win, the dishonest chain $$\mathcal S$$ of $$\cb$$ must accumulate more blocks than the honest main chain $$\mathcal M$$. So we need to compute the probability that the timestamp of the $$l$$th and final block in $$\mathcal S$$ is less than the timestamp of the $$l$$th and final block in $$\mathcal M$$.
If this happens, then $$\cb$$ can reveal the dishonest chain $$\mathcal S$$ to the network and participants will switch to it.

Assume that everyone favours themselves, and the network favours the attacker

We assume that:

- Everybody bakes as early as they can (thus handing minimal advantage to the adversary).
- The honest baker's messages take time $$\Delta$$ to arrive, which intuitively corresponds to some slowest possible path in the network.
- The attacker's messages arrive instantly.

So this is a worst-case scenario for the honest chain, in which honest network communications are as slow as they possibly could be, and dishonest attacker communications are very fast. In symbols, we can calculate[17] that for a block baked by $$\hb$$, the minimum delay function $$\edelaydelta(\ph, e)$$ is:

$$\edelaydelta(\ph, e) = \min\bigl(\max(2\Delta, \edelay(\ph, e)),\ \max(\Delta, \edelay(\ph, 0))\bigr)$$

Some probabilities

We want to calculate the probability that the hostile chain $$\mathcal S$$ overtakes the main chain $$\mathcal M$$, under the assumptions above.
This can be expressed in terms of differences between minimal block delays, using a large summation over the distribution of priorities and endorsements.

• Write $$\pprio{f,p}$$ for the probability that the earliest priority of a player with stake fraction $$f$$ is $$p$$:
$$\pprio{f,p} = \pr[best\_prio = p] = (1-f)^p f$$
(recall: the first priority is $$0$$).
• Write $$\pendo{f,e}$$ for the probability that $$\cb$$ gets $$e$$ many endorsement slots:
$$\pendo{f,e}=\pr[num\_endo = e] = \textstyle\binom{32}{e}f^e(1-f)^{32-e}.$$

We distinguish two cases, depending on whether we are baking a block to follow the shared genesis block, or subsequent blocks:

\begin{align} \difff{\pc, \ph, e} & := \edelay(\pc,32) - \edelaydelta(\ph, 32 - e) \tag{4}\label{eq:diff1}\\ \diff{\pc, \ph, e} & := \edelay(\pc,e) - \edelaydelta(\ph, 32 - e) \tag{5}\label{eq:diff} \end{align}

Above:

• $$\ph$$ and $$\pc$$ are parameters representing the best priorities of $$\hb$$ and $$\cb$$ respectively.
• The equations just subtract the minimal delay function for $$\hb$$ from that of $$\cb$$. A negative value here is good for the dishonest chain $$\mathcal S$$, and a positive value is good for the honest chain $$\mathcal M$$.
• $$\difff{}$$ corresponds to the case when we compute the delay difference for the first block $$\cb$$ bakes, which is on top of genesis: $$\cb$$ has access to all endorsements for genesis, while $$\hb$$ only has its own endorsements, not those of $$\cb$$. In contrast, $$\diff{}$$ corresponds to the case of subsequent blocks $$\cb$$ bakes; for these, $$\cb$$ can only use its own endorsements, since $$\hb$$’s endorsements are for blocks on $$\mathcal M$$ that are not on $$\mathcal S$$, and $$\cb$$ therefore cannot use them to extend $$\mathcal S$$.

We write sequences of priorities and endorsements of length $$\ell\geq 1$$ as

$$\barpc = (\pc_1,\dots, \pc_\ell), \qquad \barph = (\ph_1,\dots, \ph_\ell), \quad\text{and}\quad \bar{e} = (e_1,\dots, e_\ell)$$

and, parameterised over these sequences, we define an accumulated difference $$\diffl{\barpc,\barph,\bar{e}}$$ by

$$\diffl{\barpc, \barph, \bar{e}} := \difff{\pc_1, \ph_1, e_1} + {\sum_{2\leq i\leq \ell}}\diff{\pc_i, \ph_i, e_i} .$$

Forks starting now

We can now calculate the probability that

for chain length $$\ell\geq 1$$, the timestamp of the head of $$\cb$$’s chain, minus the timestamp of the head of $$\hb$$’s chain, is equal to $$\delta$$ seconds

as follows:

\begin{align} \pr_\ell[\var{chains\_diff}=\delta] := \sum_{\substack{(\barpc, \barph)\in P_2^{\ell},\bar{e}\in [32]^{\ell}\\\diffl{\barpc,\barph,\bar{e}} = \delta}}\;\prod_{1\leq i\leq\ell}\pendo{f,e_i}\cdot \pprio{f,\pc_i,\ph_i} \label{eq:pr0} \end{align}

where above, $$P_2 = \{(p,p')\mid \text{either } p = 0 \text{ or } p'=0\}$$ and

$$\pprio{f,\pc,\ph} := \left\{ \begin{array}{ll} \pprio{f, \pc} & \text{if } \ph = 0\\ \pprio{1-f, \ph} & \text{if } \pc = 0 \end{array} \right.$$

to distinguish between the case when either $$\cb$$ or $$\hb$$ has priority 0.

Next, we give an inductive characterisation of $$\pr_\ell[\var{chains\_diff}=\delta]$$ which is amenable to computation.

We first consider the probabilities corresponding to the differences in Equations \eqref{eq:diff1} and \eqref{eq:diff}.

\begin{align} \pr[\var{first\_diff}=\delta] := \sum_{\substack{(\pc, \ph)\in P_2,e\in [32]\\\difff{\pc,\ph,e} = \delta}}\;\pendo{f,e}\cdot \pprio{f,\pc,\ph} \tag{7} \label{eq:pri00} \\ \pr[\var{subseq\_diff}=\delta] := \sum_{\substack{(\pc, \ph)\in P_2,e\in [32]\\\diff{\pc,\ph,e} = \delta}}\;\pendo{f,e}\cdot \pprio{f,\pc,\ph} \tag{8} \label{eq:pri01} \end{align}

Above:

• $$\pr[\var{first\_diff}=\delta]$$ is the probability that the delay difference between the first block $$\cb$$ bakes on its secret chain $$\mathcal{S}$$ and the first block $$\hb$$ bakes on $$\mathcal{M}$$ is $$\delta$$.
• $$\pr[\var{subseq\_diff}=\delta]$$ is the probability that the delay difference between a block (other than the first one) $$\cb$$ bakes on its secret chain $$\mathcal{S}$$ and the block $$\hb$$ bakes on $$\mathcal{M}$$ at the same level is $$\delta$$.

For a given difference $$\delta$$, we define the probability of forks of length $$\ell$$ inductively by:

\begin{align} &\pr^{\mathit{now}}_1[\var{chains\_diff}=\delta] := \pr[\var{first\_diff}=\delta] \tag{9} \label{eq:prn1} \\ &\pr^{\mathit{now}}_{\ell{+}1}[\var{chains\_diff}=\delta] := \displaystyle\sum_{\substack{(\delta_{\ell}, \delta_1) \in \mathbb{Z}^2\\\delta_\ell+\delta_1=\delta}}\pr^{\mathit{now}}_{\ell}[\var{chains\_diff}=\delta_\ell]\cdot\pr[\var{subseq\_diff}=\delta_1] \tag{10} \label{eq:prnl} \end{align}

We note that $$\delta_\ell$$ can be negative as long as $$\delta_1$$ is big enough to compensate.

Putting this all together, the probability that $$\cb$$ bakes a fork of length $$\ell$$ that is faster than that of $$\hb$$ is:

\begin{align} \pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)} = \ell)] = \displaystyle\sum_{\delta \leq 0}\pr^{\f{now}}_\ell[\var{chains\_diff}=\delta] \tag{12} \label{eq:lf} \end{align}

We use $$\pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}= \ell)]$$ to answer the question:

Given some transaction in some block $$B$$, how many confirmations — blocks after $$B$$ — must we observe to be Reasonably Sure that the transaction will remain in the chain?

We call this number of blocks the confirmation number. By Reasonably Sure, we mean

with probability smaller than some reasonable security threshold $$\secu$$

whose value can be fixed to the reader’s taste. In practice, we fix a value for our security threshold $$\secu$$ such that we expect to be wrong about a block being final roughly once every two centuries; conversely, an attacker would have no reasonable confidence of seeing an attack succeed in their lifetime. For example, when there is one block per minute, we set $$\secu=10^{-8}$$, as done in our previous analysis of Emmy+.

So to return to our motivating example above of the Valuable Car, and assuming one block per minute, all you need to do is wait confirmation number minutes before handing over the keys, and you should be Reasonably Sure that the payment of Quite A Lot is final.
If you’re inclined to be paranoid, just wait a few minutes longer (meaning in effect that you use a lower and so more stringent value of $$\secu$$).

To compute the confirmation number concretely, we simply need to find the smallest $$\ell$$ such that

$$\pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)} = \ell)] < \secu.$$

We built a tool for computing this, discussed below. For reference, the maths above also underlies the results presented in “forks starting now”.

In practice — for Tezos as it currently runs Emmy+ and assuming an attacker controlling at most one fifth of the network, a message delay $$\Delta$$ less than 60 seconds, and a security threshold $$\secu$$ of $$10^{-8}$$ — the confirmation number is about seven blocks.

Forks started in the past

The analysis above can be refined using conditional probabilities: as time passes we can observe the behaviour of the current blockchain $$\mathcal M$$ and gather information about its health, and this might inform our estimates of the remaining confirmation time.

My transaction $$tx$$ was included $$n$$ blocks ago into $$\mathcal M$$ (i.e. there are $$n{-}1$$ blocks on top of the block with my transaction $$tx$$) and $$\delta^o$$ time ago. Can I be Reasonably Sure that my $$tx$$ is final?

To answer this, we consider the accumulated delay of the current chain $$\mathcal M$$, relative to an ideal chain $$\mathcal I$$ operating in ideal conditions in which all blocks are produced at priority 0 (i.e. by the first available baker) with a full complement of 32 endorsements and without any network delays.

The accumulated delay $$\delta^a$$ of $$\mathcal M$$ is easy to calculate: in ideal conditions the ideal chain $$\mathcal I$$ bakes one block every $$\mathit{time\_between\_blocks}$$ seconds,18 so the accumulated delay of an $$n$$-block blockchain-fragment is just

• $$\delta^o$$ (the timestamp of the current block minus the timestamp of the block containing my transaction $$tx$$),
• minus $$n \cdot \mathit{time\_between\_blocks}$$,19

or in symbols:

$$\delta^a = \delta^o - n\cdot \mathit{time\_between\_blocks}$$

The accumulated delay $$\delta^a$$ is a simple measure of the ‘health’ of $$\mathcal M$$: the smaller $$\delta^a$$ is, the healthier $$\mathcal M$$ is. Intuitively an unhealthy chain is easier for a hostile baker $$\cb$$ to attack with a hostile chain $$\mathcal S$$, so our task is now to quantify this.

We compute the probability that our hostile baker $$\cb$$ has a hostile chain $$\mathcal S$$ that $$\cb$$ has forked from the main chain just before the block with our transaction $$tx$$ and that has the same length as $$\mathcal M$$ but is faster. We conceptually split $$\mathcal S$$ into two parts: a “past” one consisting of the first $$n$$ blocks, and a “future” one consisting of the subsequent blocks.

We will use our ideal chain $$\mathcal I$$ as an intermediate reference step. Recall that we assume that blocks on $$\mathcal I$$ are baked with no delay, that is, every $$\bd$$ seconds. Then we proceed in three steps:

1. For the first $$n$$ blocks, we compare $$\cb$$’s chain $$\mathcal S$$ with the ideal chain $$\mathcal I$$.
2. We then shift from $$\mathcal I$$ to $$\mathcal M$$ to account for $$\delta^a$$.
3. At this point, we are in a similar situation as with forks starting now, and so we compare $$\mathcal S$$ with $$\mathcal M$$.

For Item 1, the basic element we need is the difference between the minimal block delays on $$\mathcal S$$ and on $$\mathcal I$$:

\begin{align} \difffp{\pc, \ph, e} & := \edelay(\pc,32) - \mathit{time\_between\_blocks} \tag{13} \label{eq:pdiff1}\\ \diffp{\pc, \ph, e} & := \edelay(\pc,e) - \mathit{time\_between\_blocks} \tag{14} \label{eq:pdiff} \end{align}

We note that the above equations are similar to Equations \eqref{eq:diff1} and \eqref{eq:diff}, with the difference that we subtract the fixed block delay that defines the ideal chain.

We can write the probabilities corresponding to Equations \eqref{eq:pdiff1} and \eqref{eq:pdiff} as follows:

\begin{align} &\pr^{\pcst}[\f{first\_diff}=\delta] = \displaystyle\sum^{32}_{e=0}\pendo{f,e}\cdot \sum_{\substack{\pc\geq 0\\{\difffp{\pc,0,e}=\delta}}} \pprio{f,\pc} \tag{15} \label{eq:pasteq1} \\ &\pr^{\pcst}[\f{subseq\_diff}=\delta] = \displaystyle\sum^{32}_{e=0}\pendo{f,e}\cdot \sum_{\substack{\pc\geq 0\\{\diffp{\pc,0,e}=\delta}}} \pprio{f,\pc} \tag{16} \label{eq:pastge1} \end{align}

Similarly to Equations \eqref{eq:prn1} and \eqref{eq:prnl}, for a given difference $$\delta$$, the probability that the difference between $$\mathcal S$$ and $$\mathcal I$$ is $$\delta$$ is defined inductively as:

\begin{align} &\pr^{\pcst}_1[\f{chains\_diff} = \delta] = \pr^{\pcst}[\f{first\_diff} = \delta] \tag{17} \label{eq:past1} \\ &\pr^{\pcst}_n[\f{chains\_diff} = \delta] = \displaystyle\sum_{\substack{(\delta_{n{-}1}, \delta_1) \in \mathbb{Z}^2\\\delta_{n{-}1}+\delta_1=\delta}}\pr^\pcst_{n{-}1}[\f{chains\_diff} = \delta_{n{-}1}]\cdot\pr^\pcst[\f{subseq\_diff} = \delta_1] \end{align}

For Item 2, $$\pr_n^\pcst[\f{chains\_diff} = \delta - \delta^a]$$ captures the shift with respect to $$\delta^a$$, and this probability is the base case in the inductive definition of the probability that the difference between $$\mathcal S$$ and $$\mathcal M$$ is $$\delta$$, which is the probability mentioned in Item 3:

\begin{align} &\pr^\pcstn_1[\f{chains\_diff} = \delta \mid \delta^a,n] = \pr_n^\pcst[\f{chains\_diff} = \delta - \delta^a]\tag{18}\label{eq:shift}\\ &\pr^\pcstn_{\ell{+}1}[\f{chains\_diff} = \delta \mid \delta^a,n] = \displaystyle\sum_{\substack{(\delta_{\ell}, \delta_1) \in \mathbb{Z}^2\\\delta_\ell+\delta_1=\delta\\\delta_{\ell} > 0}}\pr^\pcstn_{\ell}[\f{chains\_diff} = \delta_\ell\,\mid \delta^a,n]\cdot\pr_1^\pcstn[\f{chains\_diff} = \delta_1] \tag{19} \label{eq:ppl} \end{align}

For brevity we write

• $$\pr^\pcst$$ for the probability that $$\cb$$ baked $$n$$ blocks until now, and
• $$\pr^\pcstn$$ for the conditional probability that $$\cb$$ bakes $$\ell$$ blocks from now, given $$\pr^\pcst$$ and the accumulated delay.

Note the condition $$\delta_{\ell} > 0$$ in Equation \eqref{eq:ppl}: since $$\delta$$ represents the difference between the timestamps of $$\hb$$ and $$\cb$$, by means of this condition we do not take into account the probabilities of forks of length $$\ell$$ that the attacker could already have won. Thus the probability in Equation \eqref{eq:ppl} represents the probability that the difference between the timestamps of $$\mathcal S$$ and $$\mathcal M$$ is $$\delta$$ and that no prefix of $$\mathcal S$$ is faster than the corresponding prefix of $$\mathcal M$$.
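Computationally, inductive definitions of this shape are repeated convolutions of integer-valued probability distributions. The sketch below shows the mechanics in Python, representing a distribution as a dictionary from delay difference (in seconds) to probability; the distributions passed in here are toy stand-ins, not the real first-block and subsequent-block distributions (which would be computed from the priority and endorsement probabilities above). Note also that the past-fork recursion additionally restricts the accumulated difference to stay positive at each step, which this generic sketch omits.

```python
from collections import defaultdict

def convolve(dist_a: dict, dist_b: dict) -> dict:
    """Distribution of the sum of two independent integer-valued
    random variables, each given as {value: probability}."""
    out = defaultdict(float)
    for da, pa in dist_a.items():
        for db, pb in dist_b.items():
            out[da + db] += pa * pb
    return dict(out)

def chains_diff(first_diff: dict, subseq_diff: dict, length: int) -> dict:
    """Distribution of the accumulated timestamp difference for a fork
    of the given length: one 'first block' step followed by
    length - 1 'subsequent block' steps, as in the inductive definition."""
    dist = dict(first_diff)
    for _ in range(length - 1):
        dist = convolve(dist, subseq_diff)
    return dist

def p_faster_fork(dist: dict) -> float:
    """Probability mass on differences <= 0: the attacker's chain
    is at least as fast as the honest chain."""
    return sum(p for d, p in dist.items() if d <= 0)
```

Summing the mass at non-positive differences via `p_faster_fork(chains_diff(first, subseq, length))` then corresponds to the probability of a faster fork of that length.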
In other words, $$\ell{+}1$$ is the first level at which the attacker might have a faster chain.

We define $$\pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell) \mid \delta^a,n]$$ to be the probability that the attacker has a faster fork of any length smaller than or equal to $$\ell$$:

\begin{align} \pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell) \mid \delta^a,n] = \displaystyle\sum_{i=1}^{\ell}\sum_{\delta \leq 0}\pr^{\pcstn}_i[\f{chains\_diff} = \delta \mid \delta^a,n]. \end{align}

For a block to be considered final, given an accumulated delay $$\delta^a$$ we need to find the smallest $$n$$ such that the probability of a fork of any length in the future is smaller than our security threshold $$\secu$$ (e.g. $$\secu=10^{-8}$$); namely, we need to find the smallest $$n$$ such that $$\forall \ell.\ \pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell) \mid \delta^a,n] < \secu$$. To effectively capture the universal quantifier, we define $$\pr[\exists\mathit{faster\_fork} \mid \delta^a,n]$$ as the following limit:

\begin{align} \tag{20} \label{eq:pfl} \pr[\exists \mathit{faster\_fork} \mid \delta^a,n] = \displaystyle{\lim_{\ell \rightarrow \infty}} \pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell) \mid \delta^a,n]. \end{align}

To estimate Equation \eqref{eq:pfl}, it suffices to compute the least $$\ell$$ such that:

\begin{align} \frac{\pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell) \mid \delta^a,n]}{\pr[\exists\mathit{faster\_fork}.(\mathit{len(faster\_fork)}\leq \ell{-}1) \mid \delta^a,n]} \leq 1 + \epsilon \tag{21} \label{eq:pastf} \end{align}

where we introduce $$\epsilon$$ to denote some desired computational precision, just to limit the computation.
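The stopping criterion of Equation (21) can be turned into a short loop. This is an illustrative sketch only: `prob_of_length` is a hypothetical callback standing in for the per-length faster-fork probability computed from the distributions above, not an API of the actual tool.

```python
def estimate_fork_probability(prob_of_length, epsilon=1e-4, max_len=1000):
    """Estimate the limit of Equation (20) by accumulating the
    probabilities of faster forks of length 1, 2, ..., stopping once one
    more level grows the total by a factor of at most 1 + epsilon
    (the criterion of Equation (21)).

    prob_of_length(l) is assumed to return the probability of a
    faster fork of length exactly l (a hypothetical stand-in)."""
    total = prob_of_length(1)
    for length in range(2, max_len + 1):
        prev = total
        total += prob_of_length(length)
        if prev > 0 and total / prev <= 1 + epsilon:
            break  # longer forks no longer add meaningful mass
    return total
```

A block would then be treated as final once this estimate drops below the security threshold $$\secu$$.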
That is, the larger $$\ell$$ is, the smaller the extra probability mass, and at a certain point the probability of a fork of length at most $$\ell$$ is within a factor $$1+\epsilon$$ of the probability of a fork of length at most $$\ell{-}1$$; we then decide that we are precise enough and can stop the computation.

So now, to compute (or rather: to estimate, using precision $$\epsilon$$) the confirmation number for a given accumulated delay $$\delta^a$$, we just need to find the smallest $$n$$ such that

$$\pr[\exists \mathit{faster\_fork} \mid \delta^a,n] < \secu.$$

Calculating this is best done by computer, so we have built …

A tool, and the attack model

A concrete tool

The methodology above underlies the results presented in a previous blogpost on forks started in the past. Since then, we have created a standalone tool to perform the relevant calculations, which we have made accessible online as a web demo for forks started in the past.

We can use our tool to compute and then compare concrete confirmation numbers for both the forks starting now and the forks started in the past scenarios considered above. Note that forks started in the past is simply a conditional probability: forks starting now makes no assumptions about future chain health, whereas forks started in the past asks “given a particular value for chain health since the transaction concerned, what is the expected confirmation number?”. So recall our previous assumptions of

• an attacker controlling at most one fifth of the network, and
• a security threshold $$\secu$$ of $$10^{-8}$$.

Under a forks starting now scenario — that is, at the time that we generated our block — we expected to wait seven blocks before considering our transaction final.

Suppose now that time has passed and we see that since our transaction, three blocks have been baked in 190 seconds — i.e. with a 10 second delay with respect to an ideal chain in Emmy+, which bakes one block every 60 seconds. Then our tool tells us that, assuming $$\Delta$$ is less than 60 seconds, the confirmation number is in fact three, so we can already consider our transaction final — four blocks earlier than our original worst-case estimate. And this is just because we have observed that the chain has been reasonably healthy since it included our transaction.

Thus by taking account of chain health since our transaction was included, we may — if the chain remains healthy — need a smaller number of confirmations than in the case of forks starting now.

Notes on the attack model

As always, our security guarantees depend on our attack model: i.e. what we assume an attacker wants to accomplish, and what powers we assume the attacker has when trying to do so. So it is as well to be clear about the limitations, as follows:

1. In the case of forks starting now, we have only considered that the attacker’s goal is to undo a transaction, so the attack stops when the attacker succeeds. In an alternative attack model, the attacker could keep switching branches and start again with the purpose of maintaining a fork for as long as possible. (Don’t worry! We have analysed this scenario in our blog post on mixed forks in Emmy.) The attacker could also play on multiple branches, while we consider only two branches.
2. In the case of forks started in the past, we have only considered the case when the attacker starts a fork at the block with the transaction to be undone.
In an alternative attack model, the attacker could have started a fork before the block with the transaction to be undone — though one can always bring this within the scope of our analysis by assuming the attacker is trying to undo an arbitrary transaction in the earlier block.

The future

Tezos plans to switch to the BFT-style consensus algorithm Tenderbake, because this offers faster finality. Does that mean the Emmy family of algorithms is obsolete? Yes, if you only care about Tezos; but absolutely not, if you care about Nakamoto-style consensus in general.

Emmy* is a perfectly fine consensus algorithm which arguably represents an evolutionary peak in Nakamoto-style consensus: the fact that it and Tezos stand to part ways reflects more on what Tezos requires for its future than on Emmy itself.

Furthermore, all Nakamoto-style consensus algorithms are quite similar, so the lessons learned from optimising Nakamoto-style consensus to run the Tezos blockchain for seven years with increasing sophistication and real-world reliability may be valid, or at least indicative, for your favourite Nakamoto-style consensus algorithm too. Mathematics doesn’t rust, and if the maths in this blog post might help inform the design of future blockchain protocols, then it will have done its job.

Acknowledgements: We thank Bruno Blanchet (INRIA) for providing the framework for the analyses in this post. Arthur Breitman made most of the design choices in the Emmy family described above.

1. Actually, it’s The One True Blockchain, like strawberry jam is The One True Jam [not quince -ed] and Breaking Bad and Better Call Saul are The Joint One True TV Series. But, we digress.

2. This is not hyperbole. It was.

3. Like, really subtle. Please don’t expect to read this blog post in a hurry: if you’re not familiar with the material then it will force you to think.

4. The consensus algorithm was unnamed in the whitepaper referenced. The name ‘Emmy’ was chosen later, in the blog post introducing Emmy+.

5. The upgrade from Emmy to Emmy+ improved security and did not degrade performance. Similarly for the forthcoming upgrade to Emmy*.

6. It is a whole other issue to determine whether timestamp clocks are in sync, which is outside the scope of this blog post. We will just note that the system is engineered to permit a discrepancy of up to a certain number of seconds, beyond which two timestamps are deemed not equal in some strong sense (15 seconds in Emmy+; 5 seconds in Emmy*).

7. Enumerate active participants as $$(a_1,a_2,\dots,a_n)$$ and to every nonnegative number $$j\geq 0$$ assign a participant $$f(j)\in\{a_1,a_2,\dots,a_n\}$$ with probability proportional to participants’ stake — meaning that if $$a_1$$ has twice the stake of $$a_2$$, then the probability that $$f(j)=a_1$$ is twice the probability that $$f(j)=a_2$$. Then the earliest priority $$p(a_i)$$ is the least $$j$$ such that $$f(j)=a_i$$.
In practice, all we care about is the first two participants — that is, the $$a_i$$ such that $$p(a_i)=0$$ (the head position in the queue) and the $$a_j$$ such that $$p(a_j)$$ is least nonzero (the next position in the queue), since these are the priorities which are most relevant in practice.

8. This is a distributed network, so different participants may see different sets of endorsements; but let’s look at this from the point of view of a particular participant.

9. Any resemblance to multiple self-citing research cliques in academia, competing to do the same research but with different terminology and while assiduously ignoring each other, is coincidental, of course. But in truth, the selfish clique is a universal problem, as is how to design systems that efficiently and robustly reward cooperation above whatever clique or echo-chamber can most stridently self-cite.

10. The planned upgrade to Tenderbake would make forks impossible. See the blog post.

11. The difference is that Bitcoin is proof-of-work (PoW): it is computationally extremely expensive to generate chains, so (in theory) hostile forks won’t happen because beyond a certain number of blocks they would simply cost too much to create. Emmy is proof-of-stake (PoS): it is computationally cheap to generate chains, but participants with a large stake in the blockchain get more baking slots, and they have no incentive to undermine their own (larger) stake.
One might say that attacking a PoW system is hard, whereas attacking a PoS system is silly.12

12. Though as always, it depends on the attack model. In principle, a large actor with effectively unlimited resources and not motivated by financial gain could acquire a large stake in an unpermissioned (i.e. open to participation by all) proof-of-stake or a proof-of-work system in order to break it.

13. An economist might ask: “What is a suitable utility function?”.

14. To be precise, participants bake on a given chain until they observe a strictly longer one. In particular, if more than one chain exists with the same length, then participants will not switch — but then they may all converge to bake on whichever of the chains subsequently bakes a block and so becomes longest.

15. At time of writing, the Tezos live chain has about 80,000 rolls (where 1 roll = 8,000 ꜩ). Statistics are on TZStats; divide “Active Staking” by 8,000. Endorsement slots are distributed to participants in proportion to their stake, so (simplifying slightly) each roll has an equal chance of getting an endorsement slot at each level, and assuming all is running smoothly, the Tezos blockchain produces one level per minute. So working with these numbers, 32 of the 80,000 rolls get an endorsement slot every minute; that’s 0.04%. Emmy* will increase this to 256 out of 80,000, making 0.32%. Very roughly speaking, this means that if you hold 1 roll = 8,000 ꜩ then

• under Emmy and Emmy+ you have roughly a one in two chance each day of being invited to endorse a block ($$(1-0.0004)^{1732}\approx 0.5$$);
• under Emmy*, the chance of not being invited on a given day falls to $$(1-0.0032)^{1732}\approx 0.004$$, so being invited becomes a virtual certainty.

Aside from more user participation, this is also expected to reduce confirmation times, making the blockchain as a whole more stable.

16. In the more general case, we would just offset $$\ell$$ by some amount to reflect the length of any existing blockchain. It makes no difference to the analysis.

17. This equation can be read as follows:

• Once a block is baked, it takes $$\Delta$$ time units for the other honest bakers to receive it,
• and another $$\Delta$$ time units for their endorsements of that block to get back to $$\hb$$.
• Therefore, the earliest baking time for the next block is $$\max(2\Delta, \edelay(\ph, e))$$.
• If $$\hb$$ prefers to bake its block without endorsements, then the earliest baking time is $$\max(\Delta, \edelay(\ph, 0))$$.

18. In Emmy and Emmy+, $$\mathit{time\_between\_blocks}$$ is one minute ($$\bd$$). In Emmy*, $$\mathit{time\_between\_blocks}$$ is 30 seconds ($$\md$$).

19. That’s (ts(current_blk) - ts(blk_with_tx)) - n*tbb, for those who think in code, setting tbb to either bd or md depending on whether the analysis is for Emmy, Emmy+, or Emmy*.
What is the labyrinth and what does it do?
The labyrinth is in the inner ear. The inner ear includes the cochlea, vestibule and semicircular canals. These are small shell-like structures in which there is a system of narrow fluid-filled channels called the labyrinth. The semicircular canals sense movement of your head and help to control balance and posture. The cochlea is concerned with hearing. There are three semicircular canals (anterior, lateral and posterior). These are roughly at right angles to each other and sense movement in different directions - left-right, forward-back, and up-down head movements. The semicircular canals are connected to a larger fluid-filled chamber called the vestibule, which in turn is connected to the fluid-filled canal in the cochlea. Head movements are sensed because when you move your head, the fluid in the labyrinth within the semicircular canals moves too. The movement of the fluid moves tiny hairs on the inside lining of the labyrinth. When the hairs move, this triggers nerve messages to be sent to the brain via a nerve called the vestibular nerve. This gives the brain information about the movement and position of your head, even when your eyes are closed. Looking with your eyes, and nerve messages from the joints and muscles of the body, also help to tell your brain about your position and posture. However, a properly working labyrinth in each ear is needed for a good sense of posture and balance.
What is vestibular neuritis and labyrinthitis? These names used to be used interchangeably, but are now used more specifically.
Vestibular neuritis (sometimes called vestibular neuronitis) means inflammation of the vestibular nerve. This is the nerve that comes from the inner ear and takes messages from the semicircular canals to the brain. Labyrinthitis means inflammation of the labyrinth itself; because the labyrinth includes the cochlea, labyrinthitis can affect hearing as well as balance.
What are the causes of labyrinthitis and vestibular neuritis?
Viral infection The common cause of labyrinthitis and vestibular neuritis is a viral infection. This is called viral labyrinthitis and viral vestibular neuritis. There are various viruses that can cause these problems. The infection may occur at the same time as, or just after, you have a common viral illness such as a sore throat, glandular fever, flu, or a cold. The cold sore virus may also be a cause. Sometimes you may not be aware of any other viral infection and just develop symptoms of labyrinthitis or vestibular neuritis.
Bacterial infection in the middle ear. Most middle ear infections do not spread into the inner ear, but labyrinthitis or vestibular neuritis is an uncommon complication. Meningitis. The infection may spread from the brain to the inner ear. A blockage of the blood circulation to part of the brain.
An uncommon side-effect of some drugs.
What are the symptoms of labyrinthitis and vestibular neuritis?
Vertigo The main symptom is vertigo. Vertigo is dizziness with a spinning sensation. If you have vertigo you feel as if the world is spinning around you, and you feel very unsteady. Often you will also feel sick or vomit. Typically, if a viral infection is the cause (the common situation), you develop vertigo quite quickly. Vertigo occurs because the inflamed or damaged labyrinth or vestibular nerve sends conflicting signals to the brain compared with the normal ear. The brain becomes very confused about your head posture and reacts to cause vertigo. The vertigo can become intense and constant for the first few days and you simply have to lie down until the symptoms ease. The vertigo may be less intense if you lie down, and is often made worse by sitting up, moving your head, or moving around. In milder cases the vertigo is less intense, but you feel unsteady when moving or walking around.
You may have some mild hearing loss on the affected side if you have labyrinthitis.
There may also be other symptoms of a viral infection, such as a sore throat, flu symptoms or a cold.
Pain in an ear is not normally a feature of a viral labyrinthitis or viral vestibular neuritis. If you have ear pain it may indicate that you have a bacterial middle ear infection that has spread to the inner ear.
Symptoms of a viral labyrinthitis or viral vestibular neuritis can last anything from a few days to several weeks. A typical case is for symptoms to be bad for 2-3 weeks, and then gradually to settle down over several days. There may be some slight unsteadiness for 2-3 months before symptoms clear completely. However, in a small number of cases, symptoms can persist for months or years. In these cases, the viral infection will have gone but the inflammation and damage caused by the infection may cause persisting symptoms.
Do I need any tests? If you have a typical episode of labyrinthitis or vestibular neuritis due to a viral infection then your doctor will usually be able to diagnose this on the basis of your symptoms and the examination. Tests are not usually needed or helpful. However, you may be referred for tests such as a scan, hearing tests, balance tests, etc, if you have symptoms that suggest anything other than a viral infection, or if symptoms are not settling within 3-4 weeks.
What is the treatment for labyrinthitis and vestibular neuritis?
If you get a sudden attack of vertigo accompanied by deafness in one ear you should seek urgent medical help, as this could be a sign of blockage of the blood vessels to part of the brain, and you may need urgent treatment.
Treatment if a viral infection is the cause. No treatment will completely take away the symptoms - especially the main symptom of vertigo. You may simply have to accept that you will be dizzy, and may need to stay in bed until the viral infection runs its course and the worst of the symptoms subside. A doctor may prescribe anti-sickness medication if you are troubled with vomiting. Some drugs, such as prochlorperazine, also help to quieten the nerve messages from the inner ear and may ease vertigo. Occasionally, some people become so dehydrated due to the vomiting that goes with vertigo that they need to be admitted to hospital for a 'drip' until the vomiting stops. Some doctors prescribe a short course of steroid tablets; the theory is that this may reduce the inflammation more quickly than it would do naturally, but it is not clear if this is of value. If symptoms do not clear within a few weeks then you may be referred to an ear specialist, who may recommend treatment called vestibular rehabilitation therapy (VRT). This therapy uses physical and occupational therapy techniques to treat vertigo and balance disorders.
Treatment of other causes. Treatment of other less common causes depends on the cause. Your doctor will advise. For example, if you have a middle ear bacterial infection you may be prescribed antibiotics.
What is the prognosis (outcome)? A bout of labyrinthitis or vestibular neuritis can make you feel very unwell, and put you to bed. However, in most cases, the cause is a viral infection and this usually clears away. Therefore, symptoms in most cases clear completely but this may take several weeks. Some cases are milder and you just feel a bit unsteady on your feet for a short time. In a small number of cases, symptoms following a viral labyrinthitis or viral vestibular neuritis can persist for months or years. Also, there are more serious causes of labyrinthitis and vestibular neuritis, but these are much less common. Therefore, tell your doctor if you do not improve, or if you develop other symptoms.
Further help and information Labyrinthitis.org.uk Web: www.labyrinthitis.org.uk A quote from this site: "The norm for labyrinthitis is to last between 3 and 8 weeks and then disappear without so much of a trace. For a fair number however this does not happen ... This website is for those people who have been suffering for months and years rather than weeks."
Pteromalus luzonensis is a hymenopteran insect in the family Pteromalidae. The scientific name was first validly published in 1925 by Gahan.
\section{Introduction}
Among the many results of information theory, the ability to use the noise in
a wiretap channel for the purpose of private communication stands out as one
of the great conceptual insights \cite{W75}. A classical wiretap channel is
modeled as a conditional probability distribution $p_{Y,Z|X}$, in which the
sender Alice has access to the input $X$ of the channel, the legitimate
receiver Bob has access to the output $Y$, and the eavesdropper Eve has access
to the output $Z$. The goal of private communication is for Alice and Bob to
use the wiretap channel in such a way that Alice communicates a message
reliably to Bob, while at the same time, Eve should not be able to determine
which message was transmitted. The author of \cite{W75} proved that the mutual
information difference%
\begin{equation}
\max_{p_{X}}\left[ I(X;Y)-I(X;Z)\right] \label{eq:wiretap-MI-diff}%
\end{equation}
is an achievable rate for private communication over the wiretap channel, when
Alice and Bob are allowed to use it many independent times. Since then, the
interest in the wiretap channel has not waned, and there have been many
increasingly refined statements about achievable rates for private
communication over wiretap channels \cite{CK78,H06,T12,H13,YAG13,YSP16,TB16}.
Many years after the contribution of \cite{W75}, the protocol of quantum key
distribution was developed as a proposal for private communication over a
quantum channel \cite{bb84}. Quantum information theory started becoming a
field in its own right, during which many researchers revisited several of the
known results of Shannon's information theory under a quantum lens. This was
not merely an academic exercise: doing so revealed that remarkable
improvements in communication rates could be attained for physical channels of
practical interest if quantum-mechanical strategies are exploited
\cite{GGLMSY04}.
One important setting which was revisited is the wiretap channel, and in the
quantum case, the simplest extension of the classical model is given by the
classical-input quantum-output wiretap channel (abbreviated as~\textit{cq
wiretap channel}) \cite{ieee2005dev,1050633}. It is described as the following
map:%
\begin{equation}
x\rightarrow\rho_{BE}^{x}, \label{eq:cq-wiretap}%
\end{equation}
where $x$ is a classical symbol that Alice can input to the channel and
$\rho_{BE}^{x}$ is the joint output quantum state of Bob and Eve's system,
represented as a density operator acting on the tensor-product Hilbert space
of Bob and Eve's quantum systems. The goal of private communication over the
cq wiretap channel is similar to that for the classical wiretap channel.
However, in this case, Bob is allowed to perform a collective quantum
measurement over all of his output quantum systems in order to determine
Alice's message, while at the same time, we would like it to be difficult for
Eve to figure out anything about the transmitted message, even if she has
access to a quantum memory that can store all of the quantum systems
that she receives from the channel output. The authors of
\cite{ieee2005dev,1050633} independently proved that a quantum generalization
of the formula in \eqref{eq:wiretap-MI-diff} is an achievable rate for private
communication over a cq wiretap channel, if Alice and Bob are allowed
to use it many independent times. Namely, they proved that the following
Holevo information difference is an achievable rate:%
\begin{equation}
\max_{p_{X}}\left[ I(X;B)-I(X;E)\right] , \label{eq:cq-wiretap-Holevo-infos}%
\end{equation}
where the information quantities in the above formula are the Holevo
information to Bob and Eve, respectively, and will be formally defined later
in the present paper.
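When the output states $\rho_{B}^{x}$ and $\rho_{E}^{x}$ all commute, the Holevo informations in \eqref{eq:cq-wiretap-Holevo-infos} reduce to classical mutual informations. The following Python sketch evaluates the resulting rate, for a fixed uniform input, on a hypothetical degraded binary symmetric wiretap channel (the crossover probabilities are illustrative assumptions, not parameters from this paper):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info(px, pyx):
    """Classical mutual information I(X;Y) in bits for input distribution
    px[x] and channel transition probabilities pyx[x][y]."""
    py = [sum(px[x] * pyx[x][y] for x in range(len(px))) for y in range(len(pyx[0]))]
    return sum(px[x] * pyx[x][y] * math.log2(pyx[x][y] / py[y])
               for x in range(len(px)) for y in range(len(py))
               if px[x] * pyx[x][y] > 0)

# Bob sees a binary symmetric channel with crossover 0.1, Eve a noisier one
px = [0.5, 0.5]
bob = [[0.9, 0.1], [0.1, 0.9]]
eve = [[0.8, 0.2], [0.2, 0.8]]
rate = mutual_info(px, bob) - mutual_info(px, eve)
# For a uniform input over this BSC pair, rate = h2(0.2) - h2(0.1)
```

Since Eve's channel is noisier than Bob's, the rate is strictly positive, consistent with the intuition that Bob's advantage over Eve is what enables private communication.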
Since the developments of \cite{ieee2005dev,1050633}, there has been an
increasing interest in the quantum information community to determine refined
characterizations of communication tasks
\cite{TH12,li12,TT13,DTW14,DHO16,DL15,BDL15,TBR15,WTB16}, strongly motivated
by the fact that it is experimentally difficult to control a large number of
quantum systems, and in practice, one has access only to a finite number of
quantum systems anyway. One such scenario of interest, as discussed above, is
the quantum wiretap channel. Hitherto, the only work offering achievable
one-shot rates for private communication over cq wiretap channels is
\cite{RR11}. However, that work did not consider bounding the second-order
coding rate for private communication over the cq wiretap channel.
The main contribution of the present paper is a lower bound on the one-shot
private capacity of a cq wiretap channel. Namely, I prove that%
\begin{equation}
\log_{2}M_{\operatorname{priv}}^{\ast}(\varepsilon_{1}+\sqrt{\varepsilon_{2}%
})\geq I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)-\widetilde{I}_{\max}%
^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)-\log_{2}(4\varepsilon_{1}/\eta_{1}%
^{2})-2\log_{2}( 1/\eta_{2}) . \label{eq:1-shot-private-bnd}%
\end{equation}
In the above, $\log_{2}M_{\operatorname{priv}}^{\ast}(\varepsilon_{1}%
+\sqrt{\varepsilon_{2}})$ represents the maximum number of bits that can be
sent from Alice to Bob, using a cq wiretap channel once, such that the privacy
error (to be defined formally later)\ does not exceed $\varepsilon_{1}%
+\sqrt{\varepsilon_{2}}\in(0,1)$, with $\varepsilon_{1},\varepsilon_{2}%
\in(0,1)$. The quantities on the right-hand side of the above inequality are
particular one-shot generalizations of the Holevo information to Bob and Eve,
which will be defined later. It is worthwhile to note that the one-shot
information quantities in \eqref{eq:1-shot-private-bnd} can be computed using
semi-definite programming, and the computational runtime is polynomial in the
dimension of the channel. Thus, for channels of reasonable dimension, the
quantities can be efficiently estimated numerically. The constants $\eta_{1}$
and $\eta_{2}$ are chosen so that $\eta_{1}\in(0,\varepsilon_{1})$ and
$\eta_{2}\in(0,\sqrt{\varepsilon_{2}})$. By substituting an independent and
identically distributed (i.i.d.) cq wiretap channel into the right-hand side
of the above inequality, using second-order expansions for the one-shot Holevo
informations \cite{TH12,li12}, and picking $\eta_{1},\eta_{2}=1/\sqrt{n}$, we
find the following lower bound on the second-order coding rate for private
classical communication:%
\begin{multline}
\log_{2}M_{\operatorname{priv}}^{\ast}(n,\varepsilon_{1}+\sqrt{\varepsilon
_{2}})\geq n\left[ I(X;B)-I(X;E)\right] \\
+\sqrt{nV(X;B)}\Phi^{-1}(\varepsilon_{1})+\sqrt{nV(X;E)}\Phi^{-1}%
(\varepsilon_{2})+O(\log n).
\end{multline}
In the above, $\log_{2}M_{\operatorname{priv}}^{\ast}(n,\varepsilon_{1}%
+\sqrt{\varepsilon_{2}})$ represents the maximum number of bits that can be
sent from Alice to Bob, using a cq wiretap channel $n$ times, such that the
privacy error does not exceed $\varepsilon_{1}+\sqrt{\varepsilon_{2}}\in
(0,1)$. The Holevo informations from \eqref{eq:cq-wiretap-Holevo-infos} make
an appearance in the first-order term (proportional to the number $n$ of
channel uses) on the right-hand side above, while the second order term
(proportional to $\sqrt{n}$) consists of the quantum channel dispersion
quantities $V(X;B)$ and $V(X;E)$ \cite{TT13}, which will be defined later.
They additionally feature the inverse $\Phi^{-1}$ of the cumulative Gaussian
distribution function $\Phi$. Thus, the one-shot bound in
\eqref{eq:1-shot-private-bnd} leads to a lower bound on the second-order
coding rate, which is comparable to bounds that have appeared in the classical
information theory literature \cite{T12,YAG13,YSP16,TB16}.
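The second-order lower bound is easy to evaluate numerically once the first- and second-order channel quantities are known. The Python sketch below drops the $O(\log n)$ term and uses placeholder values for the Holevo informations and dispersions (they are illustrative assumptions, not quantities computed for any particular channel):

```python
import math
from statistics import NormalDist

def second_order_private_rate(n, I_B, V_B, I_E, V_E, eps1, eps2):
    """Evaluate n[I(X;B) - I(X;E)] + sqrt(n V(X;B)) Phi^{-1}(eps1)
    + sqrt(n V(X;E)) Phi^{-1}(eps2), with the O(log n) term dropped."""
    phi_inv = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return (n * (I_B - I_E)
            + math.sqrt(n * V_B) * phi_inv(eps1)
            + math.sqrt(n * V_E) * phi_inv(eps2))

# Placeholder channel parameters, for illustration only
bits = second_order_private_rate(n=10**6, I_B=0.6, V_B=0.5,
                                 I_E=0.2, V_E=0.4, eps1=1e-3, eps2=1e-3)
# For small eps1, eps2 the correction is negative, since Phi^{-1}(eps) < 0
```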
To prove the one-shot bound in \eqref{eq:1-shot-private-bnd}, I use two recent
and remarkable techniques:\ position-based coding \cite{AJW17}\ and convex
splitting \cite{ADJ17}. The main idea of position-based coding \cite{AJW17}%
\ is conceptually simple. To communicate a classical message from Alice to
Bob, we allow them to share a quantum state $\rho_{RA}^{\otimes M}$ before
communication begins, where $M$ is the number of messages, Bob possesses the
$R$ systems, and Alice the $A$ systems. If Alice wishes to communicate message
$m$, then she sends the $m$th $A$ system through the channel. The reduced
state of Bob's systems is then%
\begin{equation}
\rho_{R_{1}}\otimes\cdots\otimes\rho_{R_{m-1}}\otimes\rho_{R_{m}B}\otimes
\rho_{R_{m+1}}\otimes\cdots\otimes\rho_{R_{M}},
\label{eq:position-based-decoding}%
\end{equation}
where $\rho_{R_{m}B}=\mathcal{N}_{A_{m}\rightarrow B}(\rho_{R_{m}A_{m}})$ and
$\mathcal{N}_{A_{m}\rightarrow B}$ is the quantum channel. For all $m^{\prime
}\neq m$, the reduced state for systems $R_{m^{\prime}}$ and $B$ is the
product state $\rho_{R_{m^{\prime}}}\otimes\rho_{B}$. However, the reduced
state of systems $R_{m}B$ is the (generally)\ correlated state $\rho_{R_{m}B}%
$. So if Bob has a binary measurement which can distinguish the joint state
$\rho_{RB}$ from the product state $\rho_{R}\otimes\rho_{B}$ sufficiently
well, he can base a decoding strategy off of this, and the scheme will be
reliable as long as the number of bits $\log_{2}M$ to be communicated is
chosen to be roughly equal to a one-shot mutual information known as
hypothesis testing mutual information (cf., \cite{WR12}). This is exactly what
is used in position-based coding, and the authors of \cite{AJW17} thus forged
a transparent and intuitive link between quantum hypothesis testing and
communication for the case of entanglement-assisted communication.
Convex splitting \cite{ADJ17} is rather intuitive as well and can be thought
of as dual to the coding scenario mentioned above. Suppose instead that Alice
and Bob have a means of generating the state in
\eqref{eq:position-based-decoding}, perhaps by the strategy mentioned above.
But now suppose that Alice chooses the variable $m$ uniformly at random, so
that the state, from the perspective of someone ignorant of the choice of $m$,
is the following mixture:%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\rho_{R_{1}}\otimes\cdots\otimes\rho_{R_{m-1}}%
\otimes\rho_{R_{m}B}\otimes\rho_{R_{m+1}}\otimes\cdots\otimes\rho_{R_{M}}.
\end{equation}
The convex-split lemma guarantees that as long as $\log_{2}M$ is roughly equal
to a one-shot mutual information known as the alternate smooth max-mutual
information, then the state above is nearly indistinguishable from the
product state $\rho_{R}^{\otimes M}\otimes\rho_{B}$.
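This effect is easy to observe numerically in the commuting (classical) special case, where all states are diagonal and the trace distance is half the $\ell_{1}$ distance between the corresponding probability distributions. The Python sketch below uses a perfectly correlated pair of bits as a stand-in for $\rho_{AB}$:

```python
import itertools
import math

# Perfectly correlated bit pair as a classical stand-in for rho_AB
pA = {0: 0.5, 1: 0.5}
pB = {0: 0.5, 1: 0.5}
pAB = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

def trace_distance_to_product(K):
    """Trace distance between the convex-split mixture over K A-systems
    and the fully product distribution (all states commute here)."""
    dist = 0.0
    for outcome in itertools.product([0, 1], repeat=K + 1):
        *a, b = outcome
        tau = sum(pAB[(a[k], b)] * math.prod(pA[a[j]] for j in range(K) if j != k)
                  for k in range(K)) / K
        product = pB[b] * math.prod(pA[a[j]] for j in range(K))
        dist += abs(tau - product) / 2
    return dist

d2, d12 = trace_distance_to_product(2), trace_distance_to_product(12)
```

For this example the distance decays like $1/\sqrt{K}$, illustrating why taking $\log_{2}K$ large enough makes the mixture nearly indistinguishable from a product state.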
Both position-based coding and convex splitting have been used recently and
effectively to establish a variety of results in one-shot quantum information
theory \cite{AJW17,ADJ17}. In the present paper, I use the approaches in
conjunction to construct codes for the cq wiretap channel. The main underlying
idea follows the original approach of \cite{W75}, by allowing for a message
variable $m\in\{1,\ldots,M\}$ and a local key variable $k\in\{1,\ldots,K\}$
(local randomness), the latter of which is selected uniformly at random and
used to confuse the eavesdropper Eve. Before communication begins, Alice, Bob,
and Eve are allowed to share $MK$ copies of the common randomness state
$\theta_{X_{A}X_{B}X_{E}}\equiv\sum_{x}p_{X}(x)|xxx\rangle\langle
xxx|_{X_{A}X_{B}X_{E}}$. We can think of the $MK$ copies of $\theta
_{X_{A}X_{B}X_{E}}$ as being partitioned into $M$ blocks, each of which
contain $K$ copies of the state $\theta_{X_{A}X_{B}X_{E}}$. If Alice wishes to
send message $m$, then she picks $k$ uniformly at random and sends the $(m,k)$
$X_{A}$ system through the cq wiretap channel in \eqref{eq:cq-wiretap}. As
long as $\log_{2}MK$ is roughly equal to the hypothesis testing mutual
information $I_{H}^{\varepsilon}(X;B)$, then Bob can use a position-based
decoder to figure out both $m$ and $k$. As long as $\log_{2}K$ is roughly
equal to the alternate smooth max-mutual information $\widetilde{I}_{\max
}^{\varepsilon}(E;X)$, then the convex-split lemma guarantees that the overall
state of Eve's systems, regardless of which message $m$ was chosen, is nearly
indistinguishable from the product state $\rho_{X_{E}}^{\otimes MK}\otimes
\rho_{E}$. Thus, in such a scheme, Bob can figure out $m$ while Eve cannot
figure out anything about $m$. This is the intuition behind the coding scheme
and gives a sense of why $\log_{2}M=\log_{2}MK-\log_{2}K\approx I_{H}%
^{\varepsilon}(X;B)-\widetilde{I}_{\max}^{\varepsilon}(E;X)$ is an achievable
number of bits that can be sent privately from Alice to Bob. The main purpose
of the present paper is to develop the details of this argument and
furthermore show how the scheme can be derandomized, so that the $MK$ copies
of the common randomness state $\theta_{X_{A}X_{B}X_{E}}$ are in fact not necessary.
The rest of the paper proceeds as follows. In Section~\ref{sec:prelim}, I
review some preliminary material, which includes several metrics for quantum
states and pertinent information measures. Section~\ref{sec:public-comm}
develops the position-based coding approach for classical-input quantum-output
communication channels. Position-based coding was developed in \cite{AJW17}%
\ to highlight a different approach to entanglement-assisted communication,
but I show in Section~\ref{sec:public-comm} how the approach can be used for
shared randomness-assisted communication; I also show therein how to
derandomize codes in this case (i.e., the shared randomness is not actually
necessary for classical communication over cq channels).
Section~\ref{sec:priv-comm} represents the main contribution of the present
paper, which is a lower bound on the $\varepsilon$-one-shot private classical
capacity of a cq wiretap channel. The last development in
Section~\ref{sec:priv-comm} is to show how the one-shot lower bound leads to a
lower bound on the second-order coding rate for private classical
communication over a memoryless cq wiretap channel.
Therein, I also show how these lower bounds simplify for pure-state cq wiretap channels and when using binary phase-shift keying as a coding strategy for private communication over a pure-loss bosonic channel.
Section~\ref{sec:concl}%
\ concludes with a summary and some open questions for future work.
\section{Preliminaries\label{sec:prelim}}
I use notation and concepts that are standard in quantum information theory
and point the reader to \cite{W15book} for background. In the rest of this
section, I review concepts that are less standard and set some notation that
will be used later in the paper.
\bigskip
\textbf{Trace distance, fidelity, and purified distance.} Let $\mathcal{D}%
(\mathcal{H})$ denote the set of density operators acting on a Hilbert space
$\mathcal{H}$, let $\mathcal{D}_{\leq}(\mathcal{H})$ denote the set of
subnormalized density operators (with trace not exceeding one) acting on
$\mathcal{H}$, and let $\mathcal{L}_{+}(\mathcal{H})$ denote the set of
positive semi-definite operators acting on $\mathcal{H}$. The trace distance
between two quantum states $\rho,\sigma\in\mathcal{D}(\mathcal{H})$\ is equal
to $\left\Vert \rho-\sigma\right\Vert _{1}$, where $\left\Vert C\right\Vert
_{1}\equiv\operatorname{Tr}\{\sqrt{C^{\dag}C}\}$ for any operator $C$. It has
a direct operational interpretation in terms of the distinguishability of
these states. That is, if $\rho$ or $\sigma$ are prepared with equal
probability and the task is to distinguish them via some quantum measurement,
then the optimal success probability in doing so is equal to $\left(
1+\left\Vert \rho-\sigma\right\Vert _{1}/2\right) /2$. The fidelity is
defined as $F(\rho,\sigma)\equiv\left\Vert \sqrt{\rho}\sqrt{\sigma}\right\Vert
_{1}^{2}$ \cite{U76}, and more generally we can use the same formula to define
$F(P,Q)$ if $P,Q\in\mathcal{L}_{+}(\mathcal{H})$. Uhlmann's theorem states
that \cite{U76}%
\begin{equation}
F(\rho_{A},\sigma_{A})=\max_{U}\left\vert \langle\phi^{\sigma}|_{RA}%
U_{R}\otimes I_{A}|\phi^{\rho}\rangle_{RA}\right\vert ^{2},
\label{eq:uhlmann-thm}%
\end{equation}
where $|\phi^{\rho}\rangle_{RA}$ and $|\phi^{\sigma}\rangle_{RA}$ are fixed
purifications of $\rho_{A}$ and $\sigma_{A}$, respectively, and the
optimization is with respect to all unitaries $U_{R}$. The same statement
holds more generally for $P,Q\in\mathcal{L}_{+}(\mathcal{H})$. The fidelity is
invariant with respect to isometries and monotone non-decreasing with respect
to channels. The sine distance or $C$-distance between two quantum states
$\rho,\sigma\in\mathcal{D}(\mathcal{H})$ was defined as
\begin{equation}
C(\rho,\sigma)\equiv\sqrt{1-F(\rho,\sigma)}%
\end{equation}
and proven to be a metric in \cite{R02,R03,GLN04,R06}. It was
later~\cite{TCR09} (under the name \textquotedblleft purified
distance\textquotedblright) shown to be a metric on subnormalized states
$\rho,\sigma\in\mathcal{D}_{\leq}(\mathcal{H})$ via the embedding
\begin{equation}
P(\rho,\sigma)\equiv C(\rho\oplus\left[ 1-\operatorname{Tr}\{\rho\}\right]
,\sigma\oplus\left[ 1-\operatorname{Tr}\{\sigma\}\right] )\,.
\label{eq:purified-distance}%
\end{equation}
The following inequality relates trace distance and purified distance:%
\begin{equation}
\frac{1}{2}\left\Vert \rho-\sigma\right\Vert _{1}\leq P(\rho,\sigma).
\label{eq:TD-to-PD}%
\end{equation}
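These metrics are straightforward to compute for small density operators. A minimal Python/NumPy sketch (for normalized states, so that $P=C$):

```python
import numpy as np

def psd_sqrt(rho):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def trace_distance(rho, sigma):
    """||rho - sigma||_1 as the sum of absolute eigenvalues."""
    return np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    """F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2."""
    return np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False).sum() ** 2

def purified_distance(rho, sigma):
    """For normalized states, P = C = sqrt(1 - F)."""
    return np.sqrt(max(0.0, 1.0 - fidelity(rho, sigma)))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|
# Check (1/2)||rho - sigma||_1 <= P(rho, sigma); equality holds for pure states
```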
\bigskip
\textbf{Relative entropies and variances.} The quantum relative entropy of two
states $\omega$ and $\tau$ is defined as \cite{U62}%
\begin{equation}
D(\omega\Vert\tau)\equiv\operatorname{Tr}\{\omega\lbrack\log_{2}\omega
-\log_{2}\tau]\}
\end{equation}
whenever $\operatorname{supp}(\omega)\subseteq\operatorname{supp}(\tau)$ and
it is equal to $+\infty$ otherwise. The quantum relative entropy variance is
defined as \cite{TH12,li12}%
\begin{equation}
V(\omega\Vert\tau)\equiv\operatorname{Tr}\{\omega\lbrack\log_{2}\omega
-\log_{2}\tau-D(\omega\Vert\tau)]^{2}\},
\end{equation}
whenever $\operatorname{supp}(\omega)\subseteq\operatorname{supp}(\tau)$. The
hypothesis testing relative entropy \cite{BD10,WR12}\ of states $\omega$ and
$\tau$ is defined as%
\begin{equation}
D_{H}^{\varepsilon}(\omega\Vert\tau)\equiv-\log_{2}\inf_{\Lambda}\left\{
\operatorname{Tr}\{\Lambda\tau\}:0\leq\Lambda\leq I\wedge\operatorname{Tr}%
\{\Lambda\omega\}\geq1-\varepsilon\right\} .
\end{equation}
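In general this optimization is a semi-definite program, but in the commuting (classical) special case the optimal test is a Neyman--Pearson test and can be found greedily. A Python sketch for probability vectors $p$ and $q$:

```python
import math

def d_hyp_commuting(p, q, eps):
    """D_H^eps(p||q) in bits for commuting states given as probability
    vectors: minimize sum_i q_i t_i over tests 0 <= t_i <= 1 subject to
    sum_i p_i t_i >= 1 - eps. The optimal (Neyman-Pearson) test accepts
    outcomes in decreasing order of the likelihood ratio p_i / q_i."""
    idx = [i for i in range(len(p)) if p[i] > 0]
    idx.sort(key=lambda i: p[i] / q[i] if q[i] > 0 else math.inf, reverse=True)
    need, beta = 1.0 - eps, 0.0
    for i in idx:
        t = min(1.0, need / p[i])
        beta += q[i] * t
        need -= p[i] * t
        if need <= 1e-15:
            break
    return -math.log2(beta) if beta > 0 else math.inf

# Example: p supported on half the outcomes, q uniform on four outcomes
p = [0.5, 0.5, 0.0, 0.0]
q = [0.25, 0.25, 0.25, 0.25]
# At eps = 0 this recovers -log2 of the q-probability of p's support
```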
The max-relative entropy for states $\omega$ and $\tau$ is defined as
\cite{D09}%
\begin{equation}
D_{\max}(\omega\Vert\tau)\equiv\inf\left\{ \lambda\in\mathbb{R}:\omega
\leq2^{\lambda}\tau\right\} .
\end{equation}
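When $\operatorname{supp}(\omega)\subseteq\operatorname{supp}(\tau)$, the max-relative entropy equals $\log_{2}$ of the largest eigenvalue of $\tau^{-1/2}\omega\tau^{-1/2}$. A NumPy sketch (assuming, for simplicity, that $\tau$ has full rank):

```python
import numpy as np

def d_max(omega, tau):
    """D_max(omega||tau): the smallest lambda with omega <= 2^lambda tau,
    i.e. log2 of the largest eigenvalue of tau^{-1/2} omega tau^{-1/2}.
    Assumes tau is full rank so that its inverse square root exists."""
    vals, vecs = np.linalg.eigh(tau)
    inv_sqrt = (vecs * (1.0 / np.sqrt(vals))) @ vecs.conj().T
    return np.log2(np.linalg.eigvalsh(inv_sqrt @ omega @ inv_sqrt).max())

omega = np.diag([0.5, 0.5])
tau = np.diag([0.75, 0.25])
# omega <= 2^lambda tau first holds at lambda = log2(0.5 / 0.25) = 1
```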
The smooth max-relative entropy for states $\omega$ and $\tau$ and a parameter
$\varepsilon\in(0,1)$ is defined as \cite{D09}%
\begin{equation}
D_{\max}^{\varepsilon}(\omega\Vert\tau)\equiv\inf\left\{ \lambda\in
\mathbb{R}:\widetilde{\omega}\leq2^{\lambda}\tau\wedge P(\omega,\widetilde
{\omega})\leq\varepsilon\right\} .
\end{equation}
The following second-order expansions hold for $D_{H}^{\varepsilon}$ and
$D_{\max}^{\varepsilon}$ when evaluated for tensor-power states
\cite{TH12,li12}:%
\begin{align}
D_{H}^{\varepsilon}(\omega^{\otimes n}\Vert\tau^{\otimes n}) &
=nD(\omega\Vert\tau)+\sqrt{nV(\omega\Vert\tau)}\Phi^{-1}(\varepsilon)+O(\log
n),\label{eq:expand-1}\\
D_{\max}^{\sqrt{\varepsilon}}(\omega^{\otimes n}\Vert\tau^{\otimes n}) &
=nD(\omega\Vert\tau)-\sqrt{nV(\omega\Vert\tau)}\Phi^{-1}(\varepsilon)+O(\log
n). \label{eq:expand-2}%
\end{align}
The above expansion features the cumulative distribution function for a
standard normal random variable:%
\begin{equation}
\Phi(a)\equiv\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{a}dx\,\exp\left(
-x^{2}/2\right) , \label{eq:cumul-gauss}%
\end{equation}
and its inverse, defined as $\Phi^{-1}(\varepsilon)\equiv\sup\left\{
a\in\mathbb{R}\,|\,\Phi(a)\leq\varepsilon\right\} $.
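Both $\Phi$ and its inverse are available in standard numerical libraries, but the sup-definition above can also be implemented directly by bisection, since $\Phi$ is continuous and strictly increasing. A Python sketch:

```python
import math

def Phi(a):
    """Cumulative distribution function of a standard normal random variable."""
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

def Phi_inv(eps, lo=-10.0, hi=10.0):
    """Inverse via bisection, following Phi_inv(eps) = sup{a : Phi(a) <= eps}."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if Phi(mid) <= eps:
            lo = mid
        else:
            hi = mid
    return lo

# Phi(0) = 1/2, and Phi_inv(eps) < 0 for eps < 1/2
```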
\bigskip
\textbf{Mutual informations and variances.} The quantum mutual information
$I(X;B)_{\rho}$ and information variance $V(X;B)_{\rho}$ of a bipartite state
$\rho_{XB}$ are defined as%
\begin{align}
I(X;B)_{\rho} & \equiv D(\rho_{XB}\Vert\rho_{X}\otimes\rho_{B}),\\
V(X;B)_{\rho} & \equiv V(\rho_{XB}\Vert\rho_{X}\otimes\rho_{B}).
\end{align}
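When the conditional states additionally commute, both quantities reduce to the mutual information and relative entropy variance of a classical joint distribution. A Python sketch of this special case:

```python
import math

def mi_and_variance(pxb):
    """I(X;B) and V(X;B) in bits for a classical joint distribution pxb[x][b]
    (the commuting special case of the quantum definitions)."""
    px = [sum(row) for row in pxb]
    pb = [sum(pxb[x][b] for x in range(len(pxb))) for b in range(len(pxb[0]))]
    I = sum(pxb[x][b] * math.log2(pxb[x][b] / (px[x] * pb[b]))
            for x in range(len(px)) for b in range(len(pb)) if pxb[x][b] > 0)
    V = sum(pxb[x][b] * (math.log2(pxb[x][b] / (px[x] * pb[b])) - I) ** 2
            for x in range(len(px)) for b in range(len(pb)) if pxb[x][b] > 0)
    return I, V

# A perfectly correlated bit: I = 1 bit and the variance vanishes
I, V = mi_and_variance([[0.5, 0.0], [0.0, 0.5]])
```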
In this paper, we are exclusively interested in the case in which system $X$
of $\rho_{XB}$ is classical, so that $\rho_{XB}$ can be written as%
\begin{equation}
\rho_{XB}=\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes\rho_{B}^{x},
\end{equation}
where $p_{X}$ is a probability distribution, $\{|x\rangle_{X}\}_{x}$ is an
orthonormal basis, and $\{\rho_{B}^{x}\}_{x}$ is a set of quantum states. The
hypothesis testing mutual information is defined as follows for a bipartite
state $\rho_{XB}$ and a parameter $\varepsilon\in(0,1)$:%
\begin{equation}
I_{H}^{\varepsilon}(X;B)_{\rho}\equiv D_{H}^{\varepsilon}(\rho_{XB}\Vert
\rho_{X}\otimes\rho_{B}).
\end{equation}
From the smooth max-relative entropy, one can define a mutual information-like
quantity for a state $\theta_{AB}$ as follows:%
\begin{equation}
D_{\max}^{\varepsilon}(\theta_{AB}\Vert\theta_{A}\otimes\theta_{B}).
\label{eq:D_max-MI}%
\end{equation}
Note that we have the following expansions, as a direct consequence of
\eqref{eq:expand-1}--\eqref{eq:expand-2}\ and definitions:%
\begin{align}
I_{H}^{\varepsilon}(X^{n};B^{n})_{\rho^{\otimes n}} & =nI(X;B)_{\rho}%
+\sqrt{nV(X;B)_{\rho}}\Phi^{-1}(\varepsilon)+O(\log n),\label{eq:MI-expand}\\
D_{\max}^{\sqrt{\varepsilon}}(\rho_{XB}^{\otimes n}\Vert\rho_{X}^{\otimes
n}\otimes\rho_{B}^{\otimes n}) & =nI(X;B)_{\rho}-\sqrt{nV(X;B)_{\rho}}%
\Phi^{-1}(\varepsilon)+O(\log n). \label{eq:MI-expand-2}%
\end{align}
Another quantity, related to that in \eqref{eq:D_max-MI}, is as follows
\cite{AJW17}:%
\begin{equation}
\widetilde{I}_{\max}^{\varepsilon}(B;A)_{\theta}\equiv\inf_{\theta
_{AB}^{\prime}\ :\ P(\theta_{AB}^{\prime},\theta_{AB})\leq\varepsilon}D_{\max
}(\theta_{AB}^{\prime}\Vert\theta_{A}\otimes\theta_{B}^{\prime}).
\label{eq:tilde-I-max}%
\end{equation}
We recall a relation \cite[Lemma~1]{AJW17} between the quantities in
\eqref{eq:D_max-MI}\ and \eqref{eq:tilde-I-max}, giving a very slight
modification of it which will be useful for our purposes here:
\begin{lemma}
\label{lem:alt-dmax-to-dmax-smooth}For a state $\theta_{AB}$, $\varepsilon
\in(0,1)$, and $\gamma\in(0,\varepsilon)$, the following inequality holds%
\begin{equation}
\widetilde{I}_{\max}^{\varepsilon}(B;A)_{\theta}\leq D_{\max}^{\varepsilon
-\gamma}(\theta_{AB}\Vert\theta_{A}\otimes\theta_{B})+\log_{2}\!\left(
\frac{3}{\gamma^{2}}\right) . \label{eq:alt-dmax-to-dmax}%
\end{equation}
\end{lemma}
\begin{proof}
To see this, recall \cite[Claim~2]{AJW17}: For states $\omega_{AB}$, $\xi_{A}%
$, $\kappa_{B}$, there exists a state $\overline{\omega}_{AB}$ such that
$P(\omega_{AB},\overline{\omega}_{AB})\leq\delta$ and%
\begin{equation}
D_{\max}(\overline{\omega}_{AB}\Vert\xi_{A}\otimes\overline{\omega}_{B})\leq
D_{\max}(\omega_{AB}\Vert\xi_{A}\otimes\kappa_{B})+\log_{2}\!\left( \frac
{3}{\delta^{2}}\right) . \label{eq:claim-2}%
\end{equation}
Let $\theta_{AB}^{\ast}$ denote the optimizer for $D_{\max}^{\varepsilon
-\gamma}(\theta_{AB}\Vert\theta_{A}\otimes\theta_{B})$. Then, in
\eqref{eq:claim-2}, taking $\omega_{AB}=\theta_{AB}^{\ast}$, $\xi_{A}%
=\theta_{A}$, $\kappa_{B}=\theta_{B}$, we find that there exists a state
$\overline{\theta}_{AB}$ such that $P(\theta_{AB}^{\ast},\overline{\theta
}_{AB})\leq\gamma$ and%
\begin{equation}
D_{\max}(\overline{\theta}_{AB}\Vert\theta_{A}\otimes\overline{\theta}%
_{B})\leq D_{\max}^{\varepsilon-\gamma}(\theta_{AB}\Vert\theta_{A}%
\otimes\theta_{B})+\log_{2}\!\left( \frac{3}{\gamma^{2}}\right) .
\end{equation}
By the triangle inequality for the purified distance, we conclude that
$P(\theta_{AB},\overline{\theta}_{AB})\leq P(\theta_{AB},\theta_{AB}^{\ast
})+P(\theta_{AB}^{\ast},\overline{\theta}_{AB})\leq\left( \varepsilon
-\gamma\right) +\gamma=\varepsilon$. Since the quantity on the left-hand side
includes an optimization over all states $\theta_{AB}^{\prime}$ satisfying
$P(\theta_{AB}^{\prime},\theta_{AB})\leq\varepsilon$, we conclude the
inequality in \eqref{eq:alt-dmax-to-dmax}.
\end{proof}
\bigskip
\textbf{Hayashi--Nagaoka operator inequality.} A key tool in analyzing error
probabilities in communication protocols is the Hayashi--Nagaoka operator
inequality \cite{HN03}:\ given operators $S$ and $T$ such that $0\leq S\leq I$
and $T\geq0$, the following inequality holds for all $c>0$%
\begin{equation}
I-(S+T)^{-1/2}S(S+T)^{-1/2}\leq(1+c)(I-S)+(2+c+c^{-1})T. \label{eq:HN-ineq}%
\end{equation}
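The inequality can be spot-checked numerically for randomly drawn operators. A NumPy sketch that verifies the right-hand side dominates the left-hand side in the positive semi-definite order:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_psd(d):
    """A random positive semidefinite matrix (full rank almost surely)."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return X @ X.conj().T

def check_hayashi_nagaoka(d=4, c=1.0):
    """Minimum eigenvalue of (1+c)(I-S) + (2+c+1/c)T
    - [I - (S+T)^{-1/2} S (S+T)^{-1/2}] for random 0 <= S <= I and T >= 0;
    the inequality predicts this is nonnegative."""
    S = rand_psd(d)
    S = S / (np.linalg.eigvalsh(S).max() + 1e-9)  # force 0 <= S <= I
    T = rand_psd(d)
    vals, vecs = np.linalg.eigh(S + T)
    inv_sqrt = (vecs * (1.0 / np.sqrt(vals))) @ vecs.conj().T
    lhs = np.eye(d) - inv_sqrt @ S @ inv_sqrt
    rhs = (1 + c) * (np.eye(d) - S) + (2 + c + 1 / c) * T
    return np.linalg.eigvalsh(rhs - lhs).min()
```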
\bigskip
\textbf{Convex-split lemma.} The convex-split lemma from \cite{ADJ17} has been
a key tool used in recent developments in quantum information theory
\cite{AJW17,ADJ17}. We now state a variant of the convex-split lemma, which is
helpful for obtaining one-shot bounds for privacy and an ensuing lower bound
on the second-order coding rate. Its proof closely follows proofs available in
\cite{AJW17,ADJ17} but has some slight differences. For completeness,
Appendix~\ref{app:convex-split} contains a proof of
Lemma~\ref{thm:convex-split}.
\begin{lemma}
[Convex split]\label{thm:convex-split}Let $\rho_{AB}$ be a state, and let
$\tau_{A_{1}\cdots A_{K}B}$ be the following state:%
\begin{equation}
\tau_{A_{1}\cdots A_{K}B}\equiv\frac{1}{K}\sum_{k=1}^{K}\rho_{A_{1}}%
\otimes\cdots\otimes\rho_{A_{k-1}}\otimes\rho_{A_{k}B}\otimes\rho_{A_{k+1}%
}\otimes\cdots\otimes\rho_{A_{K}}.
\end{equation}
Let $\varepsilon\in(0,1)$ and $\eta\in(0,\sqrt{\varepsilon})$. If%
\begin{equation}
\log_{2}K=\widetilde{I}_{\max}^{\sqrt{\varepsilon}-\eta}(B;A)_{\rho}+2\log
_{2}\!\left( \frac{1}{\eta}\right) ,
\end{equation}
then%
\begin{equation}
P(\tau_{A_{1}\cdots A_{K}B},\rho_{A_{1}}\otimes\cdots\otimes\rho_{A_{K}%
}\otimes\widetilde{\rho}_{B})\leq\sqrt{\varepsilon},
\label{eq:perf-error-convex-split}%
\end{equation}
for some state $\widetilde{\rho}_{B}$ such that $P(\rho_{B},\widetilde{\rho
}_{B})\leq\sqrt{\varepsilon}-\eta$.
\end{lemma}
\section{Public classical communication\label{sec:public-comm}}
\subsection{Definition of the one-shot classical capacity}
We begin by defining the $\varepsilon$-one-shot classical capacity of a cq
channel%
\begin{equation}
x\rightarrow\rho_{B}^{x}. \label{eq:cq-channel}%
\end{equation}
We can write the classical--quantum channel in fully quantum form as the
following quantum channel:%
\begin{equation}
\mathcal{N}_{X^{\prime}\rightarrow B}(\sigma_{X^{\prime}})=\sum_{x}\langle
x|_{X^{\prime}}\sigma_{X^{\prime}}|x\rangle_{X^{\prime}}\rho_{B}^{x},
\end{equation}
where $\{|x\rangle_{X^{\prime}}\}_{x}$ is some orthonormal basis. Let
$M\in\mathbb{N}$ and $\varepsilon\in(0,1)$. An $(M,\varepsilon)$ classical
communication code consists of a collection of probability distributions
$\{p_{X|M}(x|m)\}_{m=1}^{M}$ (one for each message $m$) and a decoding
positive operator-valued measure (POVM) $\{\Lambda_{B}^{m}\}_{m=1}^{M}%
$,\footnote{We could allow for a decoding POVM\ to be $\{\Lambda_{B}%
^{m}\}_{m=0}^{M}$, consisting of an extra operator $\Lambda_{B}^{0}=I_{B}%
-\sum_{m=1}^{M}\Lambda_{B}^{m}$, if needed.} such that%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{B}-\Lambda_{B}^{m})\rho
_{B}^{m}\}=\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}%
_{B\rightarrow\hat{M}}(\rho_{B}^{m})-|m\rangle\langle m|_{\hat{M}}\right\Vert
_{1}\leq\varepsilon. \label{eq:decoding-error}%
\end{equation}
We refer to the left-hand side of the above inequality as the \textit{decoding
error}. In the above, $\{|m\rangle_{\hat{M}}\}_{m=1}^{M}$ is an orthonormal
basis, we define the state $\rho_{B}^{m}$ as%
\begin{equation}
\rho_{B}^{m}=\sum_{x}p_{X|M}(x|m)\rho_{B}^{x},
\end{equation}
and the measurement channel $\mathcal{M}_{B\rightarrow\hat{M}}$\ as%
\begin{equation}
\mathcal{M}_{B\rightarrow\hat{M}}(\omega_{B})\equiv\sum_{m}\operatorname{Tr}%
\{\Lambda_{B}^{m}\omega_{B}\}|m\rangle\langle m|_{\hat{M}}.
\end{equation}
The equality in \eqref{eq:decoding-error} follows by direct calculation:%
\begin{align}
& \left\Vert \mathcal{M}_{B\rightarrow\hat{M}}(\rho_{B}^{m})-|m\rangle\langle
m|_{\hat{M}}\right\Vert _{1}\nonumber\\
& =\left\Vert \sum_{m^{\prime}}\operatorname{Tr}\{\Lambda_{B}^{m^{\prime}%
}\rho_{B}^{m}\}|m^{\prime}\rangle\langle m^{\prime}|_{\hat{M}}-|m\rangle
\langle m|_{\hat{M}}\right\Vert _{1}\\
& =\left\Vert \sum_{m^{\prime}\neq m}\operatorname{Tr}\{\Lambda
_{B}^{m^{\prime}}\rho_{B}^{m}\}|m^{\prime}\rangle\langle m^{\prime}|_{\hat{M}%
}-(1-\operatorname{Tr}\{\Lambda_{B}^{m}\rho_{B}^{m}\})|m\rangle\langle
m|_{\hat{M}}\right\Vert _{1}\\
& =\sum_{m^{\prime}\neq m}\operatorname{Tr}\{\Lambda_{B}^{m^{\prime}}\rho
_{B}^{m}\}+(1-\operatorname{Tr}\{\Lambda_{B}^{m}\rho_{B}^{m}\})\\
& =2\operatorname{Tr}\{(I_{B}-\Lambda_{B}^{m})\rho_{B}^{m}\}.
\end{align}
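This direct calculation is easy to confirm numerically. The following sketch (all dimensions, operators, and the random seed are hypothetical choices, not from the source) builds a random square-root POVM on a small system and checks that the trace distance equals twice the complementary acceptance probability:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 4, 3  # hypothetical small dimensions

def rand_psd(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return a @ a.conj().T

rho = rand_psd(d)
rho /= np.trace(rho).real                      # a state rho_B^m

G = [rand_psd(d) for _ in range(M)]            # positive operators
w, V = np.linalg.eigh(sum(G))
inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
povm = [inv_sqrt @ g @ inv_sqrt for g in G]    # POVM: sums to the identity

m = 0
probs = np.array([np.trace(L @ rho).real for L in povm])
lhs = np.abs(probs - np.eye(M)[m]).sum()       # || M(rho) - |m><m| ||_1
rhs = 2 * np.trace((np.eye(d) - povm[m]) @ rho).real
assert np.isclose(lhs, rhs)
```

The equality holds because the measured output is diagonal in the basis $\{|m\rangle\}$, so the trace norm is just the sum of absolute differences of outcome probabilities.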
For a given channel $\mathcal{N}_{X^{\prime}\rightarrow B}$ and $\varepsilon$,
the one-shot classical capacity is equal to $\log_{2}M_{\operatorname{pub}%
}^{\ast}(\varepsilon)$, where $M_{\operatorname{pub}}^{\ast}(\varepsilon)$ is
the largest $M$ such that \eqref{eq:decoding-error} can be satisfied for a
fixed $\varepsilon$.
One can allow for shared randomness between Alice and Bob before communication
begins, in which case one obtains the one-shot shared-randomness-assisted
capacity of a cq channel.
\subsection{Lower bound on the one-shot randomness-assisted classical
capacity}
\label{sec:random-assisted-public-classical}We first consider a one-shot
protocol for randomness-assisted public classical communication, in which the
goal is for Alice to use the classical-input quantum-output (cq) channel in
\eqref{eq:cq-channel}\ once to send one of $M$ messages with error probability
no larger than $\varepsilon\in(0,1)$. The next section shows how to
derandomize such a code, so that the shared randomness is not needed.
\textit{The main result of this section is that}%
\begin{equation}
I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon/\eta^{2}),
\end{equation}
\textit{for all }$\eta\in(0,\varepsilon)$\textit{, is a lower bound on the
}$\varepsilon$\textit{-one-shot randomness-assisted classical capacity of the
cq channel in \eqref{eq:cq-channel}.} Although this result is already known
from \cite{WR12}, the development in this section is an important building
block for the wiretap channel result in Section~\ref{sec:priv-cap-lower}, and
so we go through it in full detail for the sake of completeness. Also, the
approach given here uses position-based decoding for the cq channel.
Fix a probability distribution $p_{X}$ over the channel input alphabet.
Consider the following classical--classical state:%
\begin{equation}
\rho_{XX^{\prime}}\equiv\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes
|x\rangle\langle x|_{X^{\prime}},
\end{equation}
which we can think of as representing shared randomness. Let $\rho_{XB}$
denote the following state, which results from sending the $X^{\prime}$ system
of $\rho_{XX^{\prime}}$ through the channel $\mathcal{N}_{X^{\prime
}\rightarrow B}$:%
\begin{equation}
\rho_{XB}\equiv\mathcal{N}_{X^{\prime}\rightarrow B}(\rho_{XX^{\prime}}%
)=\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes\rho_{B}^{x}.
\end{equation}
The coding scheme works as follows. Let Alice and Bob share $M$ copies of the
state $\rho_{XX^{\prime}}$, so that their shared state is%
\begin{equation}
\rho_{X^{M}X^{\prime M}}\equiv\rho_{X_{1}X_{1}^{\prime}}\otimes\cdots
\otimes\rho_{X_{M}X_{M}^{\prime}}=\rho_{XX^{\prime}}^{\otimes M}.
\end{equation}
Alice has the systems labeled by $X^{\prime}$, and Bob has the systems labeled
by $X$. If Alice would like to communicate message $m$ to Bob, then she simply
sends system $X_{m}^{\prime}$ over the classical--quantum channel. In such a
case, the reduced state for Bob is as follows:%
\begin{equation}
\rho_{X^{M}B}^{m}\equiv\rho_{X_{1}}\otimes\cdots\otimes\rho_{X_{m-1}}%
\otimes\rho_{X_{m+1}}\otimes\cdots\otimes\rho_{X_{M}}\otimes\rho_{X_{m}B}.
\end{equation}
Observe that each state $\rho_{X^{M}B}^{m}$ is related to the first one
$\rho_{X^{M}B}^{1}$ by a permutation $\pi(m)$ of the $X^{M}$ systems:%
\begin{equation}
W_{X^{M}}^{\pi(m)}\rho_{X^{M}B}^{1}W_{X^{M}}^{\pi(m)\dag}=\rho_{X^{M}B}^{m},
\label{eq:perm-inv-state}%
\end{equation}
where $W_{X^{M}}^{\pi(m)}$ is a unitary representation of the permutation
$\pi(m)$.
If Bob has a way of distinguishing the joint state $\rho_{XB}$ from the
product state $\rho_{X}\otimes\rho_{B}$, then with high probability, he will
be able to figure out which message $m$ was communicated. Let $T_{XB}$ denote
a test (measurement operator) satisfying $0\leq T_{XB}\leq I_{XB}$, which we
think of as identifying $\rho_{XB}$ with high probability ($\geq1-\varepsilon
$) and for which the complementary operator $I_{XB}-T_{XB}$\ identifies
$\rho_{X}\otimes\rho_{B}$ with the highest probability subject to the
constraint $\operatorname{Tr}\{T_{XB}\rho_{XB}\}\geq1-\varepsilon$. From such
a test, we form the following measurement operator:%
\begin{equation}
\Gamma_{X^{M}B}^{m}\equiv T_{X_{m}B}\otimes I_{X_{1}}\otimes\cdots\otimes
I_{X_{m-1}}\otimes I_{X_{m+1}}\otimes\cdots\otimes I_{X_{M}},
\end{equation}
which we think of as a test to figure out whether the reduced state on systems
$X_{m}B$ is $\rho_{XB}$ or $\rho_{X}\otimes\rho_{B}$. Observe that each
message operator $\Gamma_{X^{M}B}^{m}$ is related to the first one
$\Gamma_{X^{M}B}^{1}$ by a permutation $\pi(m)$ of the $X^{M}$ systems:%
\begin{equation}
W_{X^{M}}^{\pi(m)}\Gamma_{X^{M}B}^{1}W_{X^{M}}^{\pi(m)\dag}=\Gamma_{X^{M}%
B}^{m}.
\end{equation}
If message $m$ is transmitted and the measurement operator $\Gamma_{X^{M}%
B}^{m}$ acts, then the probability of it accepting is%
\begin{equation}
\operatorname{Tr}\{\Gamma_{X^{M}B}^{m}\rho_{X^{M}B}^{m}\}=\operatorname{Tr}%
\{T_{XB}\rho_{XB}\}.
\end{equation}
If however the measurement operator $\Gamma_{X^{M}B}^{m^{\prime}}$ acts, where
$m^{\prime}\neq m$, then the probability of it accepting is%
\begin{equation}
\operatorname{Tr}\{\Gamma_{X^{M}B}^{m^{\prime}}\rho_{X^{M}B}^{m}%
\}=\operatorname{Tr}\{T_{XB}[\rho_{X}\otimes\rho_{B}]\}.
\end{equation}
From these measurement operators, we then form a square-root measurement as
follows:%
\begin{equation}
\Lambda_{X^{M}B}^{m}\equiv\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}%
B}^{m^{\prime}}\right) ^{-1/2}\Gamma_{X^{M}B}^{m}\left( \sum_{m^{\prime}%
=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right) ^{-1/2}.
\end{equation}
Again, each message operator $\Lambda_{X^{M}B}^{m}$ is related to the first
one $\Lambda_{X^{M}B}^{1}$ by a permutation of the $X^{M}$ systems:%
\begin{align}
& W_{X^{M}}^{\pi(m)}\Lambda_{X^{M}B}^{1}W_{X^{M}}^{\pi(m)\dag}\nonumber\\
& =W_{X^{M}}^{\pi(m)}\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}%
B}^{m^{\prime}}\right) ^{-1/2}\Gamma_{X^{M}B}^{1}\left( \sum_{m^{\prime}%
=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right) ^{-1/2}W_{X^{M}}^{\pi(m)\dag
}\label{eq:perm-inv-meas-1}\\
& =W_{X^{M}}^{\pi(m)}\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}%
B}^{m^{\prime}}\right) ^{-1/2}W_{X^{M}}^{\pi(m)\dag}W_{X^{M}}^{\pi(m)}%
\Gamma_{X^{M}B}^{1}W_{X^{M}}^{\pi(m)\dag}W_{X^{M}}^{\pi(m)}\left(
\sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right) ^{-1/2}W_{X^{M}%
}^{\pi(m)\dag}\\
& =\left( \sum_{m^{\prime}=1}^{M}W_{X^{M}}^{\pi(m)}\Gamma_{X^{M}%
B}^{m^{\prime}}W_{X^{M}}^{\pi(m)\dag}\right) ^{-1/2}\Gamma_{X^{M}B}%
^{m}\left( \sum_{m^{\prime}=1}^{M}W_{X^{M}}^{\pi(m)}\Gamma_{X^{M}%
B}^{m^{\prime}}W_{X^{M}}^{\pi(m)\dag}\right) ^{-1/2}\\
& =\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right)
^{-1/2}\Gamma_{X^{M}B}^{m}\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}%
B}^{m^{\prime}}\right) ^{-1/2}. \label{eq:perm-inv-meas-4}%
\end{align}
This is called the position-based decoder and was analyzed in \cite{AJW17}%
\ for the case of entanglement-assisted communication. The error probability
under this coding scheme is as follows for each message $m$:%
\begin{equation}
\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{m})\rho_{X^{M}B}^{m}\}.
\end{equation}
The error probability is in fact the same for each message, due to the
observations in \eqref{eq:perm-inv-state}\ and
\eqref{eq:perm-inv-meas-1}--\eqref{eq:perm-inv-meas-4}:%
\begin{align}
\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{1})\rho_{X^{M}B}^{1}\} &
=\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{1})W_{X^{M}}^{\pi(m)\dag
}W_{X^{M}}^{\pi(m)}\rho_{X^{M}B}^{1}W_{X^{M}}^{\pi(m)\dag}W_{X^{M}}^{\pi
(m)}\}\\
& =\operatorname{Tr}\{(I_{X^{M}B}-W_{X^{M}}^{\pi(m)}\Lambda_{X^{M}B}%
^{1}W_{X^{M}}^{\pi(m)\dag})W_{X^{M}}^{\pi(m)}\rho_{X^{M}B}^{1}W_{X^{M}}%
^{\pi(m)\dag}\}\\
& =\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{m})\rho_{X^{M}B}^{m}\}.
\end{align}
So let us analyze the error probability for the first message $m=1$. Applying
the Hayashi--Nagaoka operator inequality in \eqref{eq:HN-ineq}, with
$S=\Gamma_{X^{M}B}^{1}$, $T=\sum_{m^{\prime}\neq1}\Gamma_{X^{M}B}^{m^{\prime}%
}$, $c_{\operatorname{I}}\equiv1+c$, and $c_{\operatorname{II}}\equiv
2+c+c^{-1}$ for $c>0$, we find that this error probability can be bounded from
above as%
\begin{align}
& \operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{1})\rho_{X^{M}B}%
^{1}\}\nonumber\\
& \leq c_{\operatorname{I}}\operatorname{Tr}\{(I_{X^{M}B}-\Gamma_{X^{M}B}%
^{1})\rho_{X^{M}B}^{1}\}+c_{\operatorname{II}}\sum_{m^{\prime}\neq
1}\operatorname{Tr}\{\Gamma_{X^{M}B}^{m^{\prime}}\rho_{X^{M}B}^{1}\}\\
& =c_{\operatorname{I}}\operatorname{Tr}\{(I_{XB}-T_{XB})\rho_{XB}%
\}+c_{\operatorname{II}}\sum_{m^{\prime}\neq1}\operatorname{Tr}\{T_{XB}\left[
\rho_{X}\otimes\rho_{B}\right] \}\\
& =c_{\operatorname{I}}\operatorname{Tr}\{(I_{XB}-T_{XB})\rho_{XB}%
\}+c_{\operatorname{II}}(M-1)\operatorname{Tr}\{T_{XB}\left[ \rho_{X}%
\otimes\rho_{B}\right] \}. \label{eq:EA-err-prob}%
\end{align}
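The Hayashi--Nagaoka operator inequality invoked above can be spot-checked numerically. The sketch below assumes its standard form, $I-(S+T)^{-1/2}S(S+T)^{-1/2}\leq c_{\operatorname{I}}(I-S)+c_{\operatorname{II}}T$ for $0\leq S\leq I$ and $T\geq0$, and uses hypothetical random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
d, c = 5, 0.7  # hypothetical dimension and constant c > 0

def rand_psd(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return a @ a.conj().T

S = rand_psd(d)
S /= np.linalg.eigvalsh(S).max()     # normalize so that 0 <= S <= I
T = rand_psd(d)                      # T >= 0

w, V = np.linalg.eigh(S + T)
inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
I = np.eye(d)
lhs = I - inv_sqrt @ S @ inv_sqrt
rhs = (1 + c) * (I - S) + (2 + c + 1 / c) * T
# The difference rhs - lhs should be positive semidefinite.
assert np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9
```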
Consider the hypothesis testing mutual information:%
\begin{equation}
I_{H}^{\varepsilon}(X;B)_{\rho}\equiv D_{H}^{\varepsilon}(\rho_{XB}\Vert
\rho_{X}\otimes\rho_{B}),
\end{equation}
where%
\begin{equation}
D_{H}^{\varepsilon}(\rho\Vert\sigma)\equiv-\log_{2}\inf_{\Lambda}\left\{
\operatorname{Tr}\{\Lambda\sigma\}:0\leq\Lambda\leq I\wedge\operatorname{Tr}%
\{\Lambda\rho\}\geq1-\varepsilon\right\} .
\end{equation}
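For intuition, $D_{H}^{\varepsilon}$ can be computed directly when $\rho$ and $\sigma$ commute: the optimization over tests becomes a linear program over the diagonal, solved by a Neyman--Pearson (fractional-knapsack) test. A minimal sketch, assuming diagonal inputs given as eigenvalue vectors (this restriction, and the helper name, are not from the source):

```python
import numpy as np

def dh_eps_commuting(p, q, eps):
    """D_H^eps(rho||sigma) for commuting (diagonal) rho, sigma with
    eigenvalue vectors p, q: greedily fill the test on indices of largest
    likelihood ratio p_i/q_i, which solves the linear program exactly."""
    order = np.argsort(-p / np.maximum(q, 1e-300))  # descending p_i/q_i
    need, beta = 1.0 - eps, 0.0                     # remaining Tr{Lam rho}, Tr{Lam sigma}
    for i in order:
        if need <= 1e-15:
            break
        if p[i] == 0:
            continue                                # only adds cost, no progress
        t = min(1.0, need / p[i])
        beta += t * q[i]
        need -= t * p[i]
    return -np.log2(beta)

# Sanity check: D_H^eps(rho || rho) = -log2(1 - eps).
p = np.ones(4) / 4
print(dh_eps_commuting(p, p, 0.25))  # prints approx 0.41504 = log2(4/3)
```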
Take the test $T_{XB}$ in Bob's decoder to be $\Upsilon_{XB}^{\ast}$, where
$\Upsilon_{XB}^{\ast}$ is the optimal measurement operator\ for $I_{H}%
^{\varepsilon-\eta}(X;B)_{\rho}$ for $\eta\in(0,\varepsilon)$. Then the error
probability is bounded as%
\begin{align}
& \operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{1})\rho_{X^{M}B}%
^{1}\}\nonumber\\
& \leq c_{\operatorname{I}}\operatorname{Tr}\{(I_{XB}-\Upsilon_{XB}^{\ast
})\rho_{XB}\}+c_{\operatorname{II}}M\operatorname{Tr}\{\Upsilon_{XB}^{\ast
}\left[ \rho_{X}\otimes\rho_{B}\right] \}\\
& \leq c_{\operatorname{I}}\left( \varepsilon-\eta\right)
+c_{\operatorname{II}}M2^{-I_{H}^{\varepsilon-\eta}(X;B)_{\rho}}.
\end{align}
Now pick $c=\eta/(2\varepsilon-\eta)$; then the last line above equals
$\varepsilon$ when%
\begin{equation}
\log_{2}M=I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon/\eta
^{2}).
\end{equation}
Indeed, consider that we would like to have $c$ such that%
\begin{equation}
\varepsilon=c_{\operatorname{I}}\left( \varepsilon-\eta\right)
+c_{\operatorname{II}}M2^{-I_{H}^{\varepsilon-\eta}(X;B)_{\rho}}.
\end{equation}
Rewriting this, we find that $M$ should satisfy%
\begin{equation}
\log_{2}M=I_{H}^{\varepsilon-\eta}(X;B)_{\rho}+\log_{2}\!\left(
\frac{\varepsilon-c_{\operatorname{I}}\left( \varepsilon-\eta\right)
}{c_{\operatorname{II}}}\right) .
\end{equation}
Picking $c=\eta/(2\varepsilon-\eta)$ then implies (after some algebra)\ that%
\begin{equation}
\frac{\varepsilon-c_{\operatorname{I}}\left( \varepsilon-\eta\right)
}{c_{\operatorname{II}}}=\frac{\eta^{2}}{4\varepsilon}.
\end{equation}
So the quantity $I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon
/\eta^{2})$\ represents a lower bound on the $\varepsilon$-one-shot
randomness-assisted classical capacity of the cq channel in
\eqref{eq:cq-channel}. The bound holds for both average and maximal error
probability; this coincidence is due to the protocol having the assistance of
shared randomness.
\subsection{Lower bound on the one-shot classical capacity}
\label{sec:derandomize-public-classical-code}We now show how to derandomize
the above randomness-assisted code. The main result of this section is the
following lower bound on the $\varepsilon$-one-shot classical capacity of the
cq channel in \eqref{eq:cq-channel}, holding for\ all\ $\eta\in(0,\varepsilon
)$:%
\begin{equation}
\log_{2}M_{\operatorname{pub}}^{\ast}(\varepsilon)\geq I_{H}^{\varepsilon
-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon/\eta^{2}).
\end{equation}
Again, note that although this result is already known from \cite{WR12}, the
development in this section is an important building block for the wiretap
channel result in Section~\ref{sec:priv-cap-lower}. As stated previously, the
approach given here uses position-based decoding for the cq channel.
By the reasoning from the previous section, we have the following bound on the
average error probability for a randomness-assisted code:%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}%
^{m})\rho_{X^{M}B}^{m}\}\leq\varepsilon, \label{eq:avg-error-bound}%
\end{equation}
if%
\begin{equation}
\log_{2}M=I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon/\eta
^{2}).
\end{equation}
So let us analyze the expression $\operatorname{Tr}\{(I_{X^{M}B}%
-\Lambda_{X^{M}B}^{m})\rho_{X^{M}B}^{m}\}$. By definition, it follows that%
\begin{equation}
\rho_{X^{M}B}^{m}=\sum_{x_{1},\ldots,x_{M}}p_{X}(x_{1})\cdots p_{X}%
(x_{M})|x_{1},\ldots,x_{M}\rangle\langle x_{1},\ldots,x_{M}|_{X_{1}\cdots
X_{M}}\otimes\rho_{B}^{x_{m}}. \label{eq:unraveled-state-m}%
\end{equation}
Also, recall that $\Upsilon_{XB}^{\ast}$ is optimal for $I_{H}^{\varepsilon
-\eta}(X;B)_{\rho}$, which implies that%
\begin{align}
\operatorname{Tr}\{\Upsilon_{XB}^{\ast}\rho_{XB}\} & \geq1-\left(
\varepsilon-\eta\right) ,\\
\operatorname{Tr}\{\Upsilon_{XB}^{\ast}\left[ \rho_{X}\otimes\rho_{B}\right]
\} & =2^{-I_{H}^{\varepsilon-\eta}(X;B)_{\rho}}.
\end{align}
But consider that%
\begin{align}
\operatorname{Tr}\{\Upsilon_{XB}^{\ast}\rho_{XB}\} & =\operatorname{Tr}%
\left\{ \Upsilon_{XB}^{\ast}\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}%
\otimes\rho_{B}^{x}\right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ \langle x|_{X}\Upsilon
_{XB}^{\ast}|x\rangle_{X}\rho_{B}^{x}\right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ Q_{B}^{x}\rho_{B}^{x}\right\} ,
\end{align}
where we define%
\begin{equation}
Q_{B}^{x}\equiv\langle x|_{X}\Upsilon_{XB}^{\ast}|x\rangle_{X}.
\label{eq:optimal-Q-x}%
\end{equation}
Similarly, we have that%
\begin{align}
\operatorname{Tr}\{\Upsilon_{XB}^{\ast}\left[ \rho_{X}\otimes\rho_{B}\right]
\} & =\operatorname{Tr}\left\{ \Upsilon_{XB}^{\ast}\sum_{x}p_{X}%
(x)|x\rangle\langle x|_{X}\otimes\rho_{B}\right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ \langle x|_{X}\Upsilon
_{XB}^{\ast}|x\rangle_{X}\rho_{B}\right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ Q_{B}^{x}\rho_{B}\right\} .
\end{align}
This demonstrates that it suffices to take the measurement operator to be of
the form $\sum_{x}|x\rangle\langle x|_{X}\otimes Q_{B}%
^{x}$, with $Q_{B}^{x}$ defined as in \eqref{eq:optimal-Q-x}: this choice
achieves the same optimal value as the original $\Upsilon_{XB}^{\ast}$ does.
Taking $\Upsilon_{XB}^{\ast}$ as such, now consider that%
\begin{align}
\Gamma_{X^{M}B}^{m} & =\Upsilon_{X_{m}B}^{\ast}\otimes I_{X_{1}}%
\otimes\cdots\otimes I_{X_{m-1}}\otimes I_{X_{m+1}}\otimes\cdots\otimes
I_{X_{M}}\\
& =\sum_{x_{m}}|x_{m}\rangle\langle x_{m}|_{X_{m}}\otimes Q_{B}^{x_{m}%
}\otimes I_{X_{1}}\otimes\cdots\otimes I_{X_{m-1}}\otimes I_{X_{m+1}}%
\otimes\cdots\otimes I_{X_{M}}\\
& =\sum_{x_{1},\ldots,x_{M}}|x_{1},\ldots,x_{M}\rangle\langle x_{1}%
,\ldots,x_{M}|_{X^{M}}\otimes Q_{B}^{x_{m}}.
\end{align}
Then this implies that%
\begin{align}
\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right) ^{-1/2}
& =\left( \sum_{m^{\prime}=1}^{M}\sum_{x_{1},\ldots,x_{M}}|x_{1}%
,\ldots,x_{M}\rangle\langle x_{1},\ldots,x_{M}|_{X^{M}}\otimes Q_{B}%
^{x_{m^{\prime}}}\right) ^{-1/2}\\
& =\left( \sum_{x_{1},\ldots,x_{M}}|x_{1},\ldots,x_{M}\rangle\langle
x_{1},\ldots,x_{M}|_{X^{M}}\otimes\sum_{m^{\prime}=1}^{M}Q_{B}^{x_{m^{\prime}%
}}\right) ^{-1/2}\\
& =\sum_{x_{1},\ldots,x_{M}}|x_{1},\ldots,x_{M}\rangle\langle x_{1}%
,\ldots,x_{M}|_{X^{M}}\otimes\left( \sum_{m^{\prime}=1}^{M}Q_{B}%
^{x_{m^{\prime}}}\right) ^{-1/2},
\end{align}
so that%
\begin{align}
\Lambda_{X^{M}B}^{m} & =\left( \sum_{m^{\prime}=1}^{M}\Gamma_{X^{M}%
B}^{m^{\prime}}\right) ^{-1/2}\Gamma_{X^{M}B}^{m}\left( \sum_{m^{\prime}%
=1}^{M}\Gamma_{X^{M}B}^{m^{\prime}}\right) ^{-1/2}\\
& =\sum_{x_{1},\ldots,x_{M}}|x_{1},\ldots,x_{M}\rangle\langle x_{1}%
,\ldots,x_{M}|_{X^{M}}\otimes\Omega_{B}^{x_{m}}, \label{eq:unraveled-meas-m}%
\end{align}
where%
\begin{equation}
\Omega_{B}^{x_{m}}\equiv\left( \sum_{m^{\prime}=1}^{M}Q_{B}^{x_{m^{\prime}}%
}\right) ^{-1/2}Q_{B}^{x_{m}}\left( \sum_{m^{\prime}=1}^{M}Q_{B}%
^{x_{m^{\prime}}}\right) ^{-1/2}.
\end{equation}
Observe that $\{\Omega_{B}^{x_{m}}\}_{m=1}^{M}$ is a POVM on the support of
$\sum_{m^{\prime}=1}^{M}Q_{B}^{x_{m^{\prime}}}$ and can be completed to a
POVM\ on the full space by adding $\Omega_{B}^{x_{0}}\equiv I_{B}%
-\sum_{m^{\prime}=1}^{M}\Omega_{B}^{x_{m^{\prime}}}$. By employing
\eqref{eq:unraveled-state-m} and \eqref{eq:unraveled-meas-m}, we find that%
\begin{equation}
\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}B}^{m})\rho_{X^{M}B}^{m}%
\}=\sum_{x_{1},\ldots,x_{M}}p_{X}(x_{1})\cdots p_{X}(x_{M})\operatorname{Tr}%
\{(I_{B}-\Omega_{B}^{x_{m}})\rho_{B}^{x_{m}}\},
\end{equation}
so that the average error probability is as follows:%
\begin{align}
& \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{X^{M}B}-\Lambda_{X^{M}%
B}^{m})\rho_{X^{M}B}^{m}\}\nonumber\\
& =\frac{1}{M}\sum_{m=1}^{M}\sum_{x_{1},\ldots,x_{M}}p_{X}(x_{1})\cdots
p_{X}(x_{M})\operatorname{Tr}\{(I_{B}-\Omega_{B}^{x_{m}})\rho_{B}^{x_{m}}\}\\
& =\sum_{x_{1},\ldots,x_{M}}p_{X}(x_{1})\cdots p_{X}(x_{M})\left[ \frac
{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{B}-\Omega_{B}^{x_{m}})\rho
_{B}^{x_{m}}\}\right] .
\end{align}
The last line above is the same as the usual \textquotedblleft Shannon
trick\textquotedblright\ of exchanging the average over the messages with the
expectation over a random choice of code. By employing the bound in
\eqref{eq:avg-error-bound}, we find that%
\begin{equation}
\sum_{x_{1},\ldots,x_{M}}p_{X}(x_{1})\cdots p_{X}(x_{M})\left[ \frac{1}%
{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{B}-\Omega_{B}^{x_{m}})\rho_{B}^{x_{m}%
}\}\right] \leq\varepsilon.
\end{equation}
Then there exists a particular set of values of $x_{1}$, \ldots, $x_{M}$ such
that%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{(I_{B}-\Omega_{B}^{x_{m}})\rho
_{B}^{x_{m}}\}\leq\varepsilon.
\end{equation}
This sequence $x_{1}$, \ldots, $x_{M}$ constitutes the codewords and
$\{\Omega_{B}^{x_{m}}\}_{m=1}^{M}$ is a corresponding POVM that can be used as
a decoder. The number of bits that the code can transmit is equal to $\log
_{2}M=I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(4\varepsilon/\eta^{2})$.
No shared randomness is required for this code (it is now derandomized).
\begin{remark}
To achieve maximal error probability $2\varepsilon$, one can remove the worst
half of the codewords, and then a lower bound on the achievable number of bits
is%
\begin{equation}
I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}2-\log_{2}(4\varepsilon/\eta
^{2})=I_{H}^{\varepsilon-\eta}(X;B)_{\rho}-\log_{2}(8\varepsilon/\eta^{2}).
\end{equation}
\end{remark}
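The expurgation step in the remark is Markov's inequality in disguise: fewer than half the codewords can have error exceeding twice the average, so keeping the best half caps the maximal error at $2\varepsilon$ (at the cost of one bit of rate). A toy sketch with hypothetical per-message error probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64
errs = rng.uniform(0.0, 0.3, size=M)  # hypothetical per-message errors
eps = errs.mean()                     # average error of the full code
kept = np.sort(errs)[: M // 2]        # drop the worst half of the codewords
# Markov: at most M/2 - 1 codewords can exceed 2*eps, so the kept
# half has maximal error at most twice the original average.
assert kept.max() <= 2 * eps
```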
\section{Private classical communication\label{sec:priv-comm}}
\subsection{Definition of the one-shot private classical capacity}
Now suppose that Alice, Bob, and Eve are connected by a classical-input
quantum-quantum-output (cqq) channel of the following form:%
\begin{equation}
x\rightarrow\rho_{BE}^{x},
\end{equation}
where Bob has system $B$ and Eve system $E$. The fully quantum version of this
channel is as follows:%
\begin{equation}
\mathcal{N}_{X^{\prime}\rightarrow BE}(\sigma_{X^{\prime}})=\sum_{x}\langle
x|_{X^{\prime}}\sigma_{X^{\prime}}|x\rangle_{X^{\prime}}\rho_{BE}^{x},
\label{eq:wiretap-fully-quantum}%
\end{equation}
where $\{|x\rangle_{X^{\prime}}\}_{x}$ is some orthonormal basis.
We define the one-shot private classical capacity in the following way. Let
$M\in\mathbb{N}$ and $\varepsilon\in(0,1)$. An $(M,\varepsilon)$ private
communication code consists of a collection of probability distributions
$\{p_{X|M}(x|m)\}_{m=1}^{M}$ (one for each message $m$) and a decoding POVM
$\{\Lambda_{B}^{m}\}_{m=1}^{M}$, such that%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow
\hat{M}}(\rho_{BE}^{m})-|m\rangle\langle m|_{\hat{M}}\otimes\sigma
_{E}\right\Vert _{1}\leq\varepsilon. \label{eq:avg-rel-sec}%
\end{equation}
We refer to the left-hand side of the above inequality as the \textit{privacy
error}. In the above, $\{|m\rangle_{\hat{M}}\}_{m=1}^{M}$ is an orthonormal
basis, the state $\sigma_{E}$ can be any state, we define the state $\rho
_{BE}^{m}$ as%
\begin{equation}
\rho_{BE}^{m}=\sum_{x}p_{X|M}(x|m)\rho_{BE}^{x},
\end{equation}
and the measurement channel $\mathcal{M}_{B\rightarrow\hat{M}}$\ as%
\begin{equation}
\mathcal{M}_{B\rightarrow\hat{M}}(\omega_{B})\equiv\sum_{m}\operatorname{Tr}%
\{\Lambda_{B}^{m}\omega_{B}\}|m\rangle\langle m|_{\hat{M}}.
\end{equation}
For a given channel $\mathcal{N}_{X^{\prime}\rightarrow BE}$ and $\varepsilon
$, the one-shot private classical capacity is equal to $\log_{2}%
M_{\operatorname{priv}}^{\ast}(\varepsilon)$, where $M_{\operatorname{priv}%
}^{\ast}(\varepsilon)$ is the largest $M$ such that \eqref{eq:avg-rel-sec} can
be satisfied for a fixed $\varepsilon$.
The condition in \eqref{eq:avg-rel-sec} combines the reliable decoding and
security conditions into a single average error criterion. We can see how it
represents a generalization of the error criterion in
\eqref{eq:decoding-error}, which was for public classical communication over a
cq channel. One could have a different definition of one-shot private
capacity, in which there are two separate criteria, but the approach above
will be beneficial for our purposes. In any case, a code satisfying
\eqref{eq:avg-rel-sec}~satisfies the two separate criteria as well, as is
easily seen by invoking the monotonicity of trace distance.\footnote{Indeed,
starting with \eqref{eq:avg-rel-sec}\ and applying monotonicity of trace
distance under partial trace of the $E$ system, we get that $\frac{1}{M}%
\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow\hat{M}}(\rho
_{B}^{m})-|m\rangle\langle m|_{\hat{M}}\right\Vert _{1}\leq\varepsilon$.
Recalling \eqref{eq:decoding-error}, we can interpret this as asserting that
the decoding error probability does not exceed $\varepsilon$. Doing the same
but considering a partial trace over the $B$ system implies that $\frac{1}%
{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \rho_{E}^{m}-\sigma_{E}\right\Vert
_{1}\leq\varepsilon$, which is a security criterion. So we get that the
conventional two separate criteria are satisfied if a code satisfies the
single privacy error criterion in \eqref{eq:avg-rel-sec}.} Having a single
error criterion for private capacity is the same as the approach taken in
\cite{HHHO09} and \cite{WTB16}, and in the latter paper, it was shown that
notions of asymptotic private capacity are equivalent when using either a
single error criterion or two separate error criteria.
\subsection{Lower bound on the one-shot private classical
capacity\label{sec:priv-cap-lower}}
The main result of this section is the following lower bound on the
$\varepsilon$-one-shot private capacity of a cq wiretap channel, holding for
all $\varepsilon_{1},\varepsilon_{2}\in(0,1)$, such that $\varepsilon
_{1}+\sqrt{\varepsilon_{2}}\in(0,1)$, and $\eta_{1}\in(0,\varepsilon_{1})$ and
$\eta_{2}\in(0,\sqrt{\varepsilon_{2}})$:%
\begin{equation}
\log_{2}M_{\operatorname{priv}}^{\ast}(\varepsilon_{1}+\sqrt{\varepsilon_{2}%
})\geq I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\widetilde{I}_{\max
}^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho}-\log_{2}(4\varepsilon_{1}%
/\eta_{1}^{2})-2\log_{2}\!\left( 1/\eta_{2}\right) .
\end{equation}
To begin with, we allow Alice, Bob, and Eve shared randomness of the following
form:%
\begin{equation}
\rho_{XX^{\prime}X^{\prime\prime}}\equiv\sum_{x}p_{X}(x)|x\rangle\langle
x|_{X}\otimes|x\rangle\langle x|_{X^{\prime}}\otimes|x\rangle\langle
x|_{X^{\prime\prime}}, \label{eq:randomness-state-ABE}%
\end{equation}
where Bob has the $X$ system, Alice the $X^{\prime}$ system, and Eve the
$X^{\prime\prime}$ system. It is natural here to let Eve share the randomness
as well, and this amounts to giving her knowledge of the code to be used. Let
$\rho_{XX^{\prime\prime}BE}$ denote the state resulting from sending the
$X^{\prime}$ system through the channel $\mathcal{N}_{X^{\prime}\rightarrow
BE}$ in\ \eqref{eq:wiretap-fully-quantum}:%
\begin{equation}
\rho_{XX^{\prime\prime}BE}\equiv\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}%
\otimes\rho_{BE}^{x}\otimes|x\rangle\langle x|_{X^{\prime\prime}}.
\end{equation}
The coding scheme that Alice and Bob use is as follows. There is the message
$m\in\{1,\ldots,M\}$ and a local key $k\in\{1,\ldots,K\}$. The local key $k$
represents local, uniform randomness that Alice has, but which is not
accessible to Bob or Eve. We assume that Alice, Bob, and Eve share $MK$ copies
of the state in \eqref{eq:randomness-state-ABE} before communication begins,
and we denote this state as%
\begin{equation}
\rho_{X^{MK}X^{\prime MK}X^{\prime\prime MK}}=\rho_{X_{1,1}X_{1,1}^{\prime
}X_{1,1}^{\prime\prime}}\otimes\cdots\otimes\rho_{X_{M,K}X_{M,K}^{\prime
}X_{M,K}^{\prime\prime}}=\rho_{XX^{\prime}X^{\prime\prime}}^{\otimes MK}.
\end{equation}
To send the message $m$, Alice picks $k$ uniformly at random from the set
$\{1,\ldots,K\}$. She then sends the $(m,k)$th $X^{\prime}$ system through the
channel $\mathcal{N}_{X^{\prime}\rightarrow BE}$. Thus, when $m$ and $k$ are
chosen, the reduced state on Bob and Eve's systems is%
\begin{equation}
\rho_{X^{MK}X^{\prime\prime MK}BE}^{m,k}=\rho_{X_{1,1}X_{1,1}^{\prime\prime}%
}\otimes\cdots\otimes\rho_{X_{m,k-1}X_{m,k-1}^{\prime\prime}}\otimes
\rho_{X_{m,k}X_{m,k}^{\prime\prime}BE}\otimes\rho_{X_{m,k+1}X_{m,k+1}%
^{\prime\prime}}\otimes\cdots\otimes\rho_{X_{M,K}X_{M,K}^{\prime\prime}},
\end{equation}
and the state of Bob's systems is%
\begin{equation}
\rho_{X^{MK}B}^{m,k}=\rho_{X_{1,1}}\otimes\cdots\otimes\rho_{X_{m,k-1}}%
\otimes\rho_{X_{m,k}B}\otimes\rho_{X_{m,k+1}}\otimes\cdots\otimes\rho
_{X_{M,K}}.
\end{equation}
For Bob to decode, he uses the position-based decoder to decode both the
message $m$ and the local key $k$. Let $\{\Lambda_{X^{MK}B}^{m,k}\}_{m,k}$
denote his decoding POVM. By the reasoning from
Section~\ref{sec:random-assisted-public-classical}, as long as%
\begin{equation}
\log_{2}MK=I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\log_{2}%
(4\varepsilon_{1}/\eta_{1}^{2}), \label{eq:message-key-size}%
\end{equation}
where $\varepsilon_{1}\in(0,1)$ and $\eta_{1}\in(0,\varepsilon_{1})$, we
have the following bound holding for all $m,k$:%
\begin{equation}
\operatorname{Tr}\{(I_{X^{MK}B}-\Lambda_{X^{MK}B}^{m,k})\rho_{X^{MK}B}%
^{m,k}\}\leq\varepsilon_{1}, \label{eq:avg-exp-dec-err-private-code}%
\end{equation}
where $\Lambda_{X^{MK}B}^{m,k}$ is defined as in
Sections~\ref{sec:random-assisted-public-classical} and
\ref{sec:derandomize-public-classical-code}. By the reasoning from
Section~\ref{sec:derandomize-public-classical-code}, we can also write
\eqref{eq:avg-exp-dec-err-private-code} as%
\begin{equation}
\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\operatorname{Tr}\{(I_{B}-\Omega
_{B}^{x_{m,k}})\rho_{B}^{x_{m,k}}\}\right] \leq\varepsilon_{1},
\label{eq:avg-exp-dec-err-private-code-rewrite}%
\end{equation}
with $\Omega_{B}^{x_{m,k}}$ defined as in
Section~\ref{sec:derandomize-public-classical-code}. Define the following
measurement channels:%
\begin{align}
\mathcal{M}_{B\rightarrow\hat{M}}(\omega_{B}) & \equiv\sum_{m,k}%
\operatorname{Tr}\{\Omega_{B}^{x_{m,k}}\omega_{B}\}|m\rangle\langle
m|_{\hat{M}},\\
\mathcal{M}_{B\rightarrow\hat{M}\hat{K}}^{\prime}(\omega_{B}) & \equiv
\sum_{m,k}\operatorname{Tr}\{\Omega_{B}^{x_{m,k}}\omega_{B}\}|m\rangle\langle
m|_{\hat{M}}\otimes|k\rangle\langle k|_{\hat{K}},
\end{align}
with it being clear that $\operatorname{Tr}_{\hat{K}}\circ\mathcal{M}%
_{B\rightarrow\hat{M}\hat{K}}^{\prime}=\mathcal{M}_{B\rightarrow\hat{M}}$.
Consider that%
\begin{align}
& \frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow\hat{M}\hat{K}}^{\prime
}\left( \rho_{B}^{x_{m,k}}\right) -|m\rangle\langle m|_{\hat{M}}%
\otimes|k\rangle\langle k|_{\hat{K}}\right\Vert _{1}\nonumber\\
& =\frac{1}{2}\left\Vert \sum_{m^{\prime},k^{\prime}}\operatorname{Tr}%
\{\Omega_{B}^{x_{m^{\prime},k^{\prime}}}\rho_{B}^{x_{m,k}}\}|m^{\prime}%
\rangle\langle m^{\prime}|_{\hat{M}}\otimes|k^{\prime}\rangle\langle
k^{\prime}|_{\hat{K}}-|m\rangle\langle m|_{\hat{M}}\otimes|k\rangle\langle
k|_{\hat{K}}\right\Vert _{1}\\
& =\frac{1}{2}\left\Vert \sum_{(m^{\prime},k^{\prime})\neq(m,k)}%
\operatorname{Tr}\{\Omega_{B}^{x_{m^{\prime},k^{\prime}}}\rho_{B}^{x_{m,k}%
}\}|m^{\prime}\rangle\langle m^{\prime}|_{\hat{M}}\otimes|k^{\prime}%
\rangle\langle k^{\prime}|_{\hat{K}}-(1-\operatorname{Tr}\{\Omega_{B}%
^{x_{m,k}}\rho_{B}^{x_{m,k}}\})|m\rangle\langle m|_{\hat{M}}\otimes
|k\rangle\langle k|_{\hat{K}}\right\Vert _{1}\\
& =1-\operatorname{Tr}\{\Omega_{B}^{x_{m,k}}\rho_{B}^{x_{m,k}}\}\\
& =\operatorname{Tr}\{(I_{B}-\Omega_{B}^{x_{m,k}})\rho_{B}^{x_{m,k}}\}.
\end{align}
Now averaging the above quantity over $m$, $k$, and $x_{1,1}$, \ldots,
$x_{M,K}$, and applying the condition in
\eqref{eq:avg-exp-dec-err-private-code-rewrite}, we get that%
\begin{equation}
\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{MK}\sum_{m,k}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow\hat
{M}\hat{K}}^{\prime}\left( \rho_{B}^{x_{m,k}}\right) -|m\rangle\langle
m|_{\hat{M}}\otimes|k\rangle\langle k|_{\hat{K}}\right\Vert _{1}\right]
\leq\varepsilon_{1}. \label{eq:decod-priv-almost-there}%
\end{equation}
Applying convexity of the trace distance to bring the average over $k$ inside
and monotonicity with respect to partial trace over system $\hat{K}$\ to the
left-hand side of \eqref{eq:decod-priv-almost-there}, we find that%
\begin{multline}
\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert (\operatorname{Tr}_{\hat{K}%
}\circ\mathcal{M}_{B\rightarrow\hat{M}\hat{K}}^{\prime})\left( \frac{1}%
{K}\sum_{k=1}^{K}\rho_{B}^{x_{m,k}}\right) -|m\rangle\langle m|_{\hat{M}%
}\right\Vert _{1}\right] \\
=\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow
\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{B}^{x_{m,k}}\right)
-|m\rangle\langle m|_{\hat{M}}\right\Vert _{1}\right] \leq\varepsilon_{1}.
\label{eq:rel-bound-priv-comm}%
\end{multline}
Let us define the state%
\begin{align}
\omega_{E}^{x_{m^{\prime}},x_{m}} & \equiv\frac{\frac{1}{K}\sum
_{k,k^{\prime}=1}^{K}\operatorname{Tr}_{B}\{\Omega_{B}^{x_{m^{\prime
},k^{\prime}}}\rho_{BE}^{x_{m,k}}\}}{q(x_{m^{\prime}}|x_{m})},\\
q(x_{m^{\prime}}|x_{m}) & \equiv\frac{1}{K}\sum_{k,k^{\prime}=1}%
^{K}\operatorname{Tr}\{\Omega_{B}^{x_{m^{\prime},k^{\prime}}}\rho
_{BE}^{x_{m,k}}\}.
\end{align}
Consider that%
\begin{equation}
\sum_{m^{\prime}}q(x_{m^{\prime}}|x_{m})\omega_{E}^{x_{m^{\prime}},x_{m}%
}=\frac{1}{K}\sum_{k=1}^{K}\rho_{E}^{x_{m,k}}.
\end{equation}
Then we can write%
\begin{equation}
\mathcal{M}_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho
_{BE}^{x_{m,k}}\right) =\sum_{m^{\prime}}q(x_{m^{\prime}}|x_{m})|m^{\prime
}\rangle\langle m^{\prime}|_{\hat{M}}\otimes\omega_{E}^{x_{m^{\prime}},x_{m}},
\end{equation}
so that%
\begin{equation}
\mathcal{M}_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho
_{B}^{x_{m,k}}\right) =\sum_{m^{\prime}}q(x_{m^{\prime}}|x_{m})|m^{\prime
}\rangle\langle m^{\prime}|_{\hat{M}}.
\end{equation}
Using these observations, we can finally write%
\begin{align}
& \frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow
\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}}\right)
-|m\rangle\langle m|_{\hat{M}}\otimes\frac{1}{K}\sum_{k=1}^{K}\rho
_{E}^{x_{m,k}}\right\Vert _{1}\nonumber\\
& =\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \sum_{m^{\prime}%
}q(x_{m^{\prime}}|x_{m})|m^{\prime}\rangle\langle m^{\prime}|_{\hat{M}}%
\otimes\omega_{E}^{x_{m^{\prime}},x_{m}}-|m\rangle\langle m|_{\hat{M}}%
\otimes\sum_{m^{\prime}}q(x_{m^{\prime}}|x_{m})\omega_{E}^{x_{m^{\prime}%
},x_{m}}\right\Vert _{1}\\
& \leq\frac{1}{M}\sum_{m=1}^{M}\sum_{m^{\prime}}q(x_{m^{\prime}}%
|x_{m})\left[ \frac{1}{2}\left\Vert |m^{\prime}\rangle\langle m^{\prime
}|_{\hat{M}}\otimes\omega_{E}^{x_{m^{\prime}},x_{m}}-|m\rangle\langle
m|_{\hat{M}}\otimes\omega_{E}^{x_{m^{\prime}},x_{m}}\right\Vert _{1}\right] \\
& =\frac{1}{M}\sum_{m=1}^{M}\sum_{m^{\prime}}q(x_{m^{\prime}}|x_{m})\left[
\frac{1}{2}\left\Vert |m^{\prime}\rangle\langle m^{\prime}|_{\hat{M}%
}-|m\rangle\langle m|_{\hat{M}}\right\Vert _{1}\right] \\
& =\frac{1}{M}\sum_{m=1}^{M}\sum_{m^{\prime}\neq m}q(x_{m^{\prime}}|x_{m})\\
& =\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow
\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{B}^{x_{m,k}}\right)
-|m\rangle\langle m|_{\hat{M}}\right\Vert _{1}.
\end{align}
Combining with \eqref{eq:rel-bound-priv-comm}, the above development implies
that%
\begin{multline}
\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K}%
)\label{eq:final-rel-decode-condition}\\
\times\left[ \frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{M}%
_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}%
}\right) -|m\rangle\langle m|_{\hat{M}}\otimes\frac{1}{K}\sum_{k=1}^{K}%
\rho_{E}^{x_{m,k}}\right\Vert _{1}\right] \leq\varepsilon_{1}.
\end{multline}
Now we consider the state on Eve's systems and the analysis of privacy. If $m$
and $k$ are fixed, then her state is%
\begin{equation}
\rho_{X^{MK}E}^{m,k}=\rho_{X_{1,1}}\otimes\cdots\otimes\rho_{X_{m,k-1}}%
\otimes\rho_{X_{m,k}E}\otimes\rho_{X_{m,k+1}}\otimes\cdots\otimes\rho
_{X_{M,K}}.
\end{equation}
(For simplicity of notation, in the above and what follows we are labeling her
systems $X^{\prime\prime}$ as $X$.) However, $k$ is chosen uniformly at
random, and so conditioned on the message $m$ being fixed, the state of Eve's
systems is as follows:%
\begin{align}
\rho_{X^{MK}E}^{m} & \equiv\frac{1}{K}\sum_{k=1}^{K}\rho_{X^{MK}E}^{m,k}\\
& =\rho_{X_{1,1}}\otimes\cdots\otimes\rho_{X_{m-1,K}}\nonumber\\
& \qquad\otimes\left[ \frac{1}{K}\sum_{k=1}^{K}\rho_{X_{m,1}}\otimes
\cdots\otimes\rho_{X_{m,k-1}}\otimes\rho_{X_{m,k}E}\otimes\rho_{X_{m,k+1}%
}\otimes\cdots\otimes\rho_{X_{m,K}}\right] \nonumber\\
& \qquad\otimes\rho_{X_{m+1,1}}\otimes\cdots\otimes\rho_{X_{M,K}}.
\end{align}
We would like to show for $\varepsilon_{2}\in(0,1)$ that%
\begin{equation}
\frac{1}{2}\left\Vert \rho_{X^{MK}E}^{m}-\rho_{X^{MK}}\otimes\widetilde{\rho
}_{E}\right\Vert _{1}\leq\varepsilon_{2},
\end{equation}
for some state $\widetilde{\rho}_{E}$. By the invariance of the trace distance
under tensoring with a common state, i.e.,%
\begin{equation}
\left\Vert \sigma\otimes\tau-\omega\otimes\tau\right\Vert _{1}=\left\Vert
\sigma-\omega\right\Vert _{1}, \label{eq:trace-dist-prop}%
\end{equation}
we find that%
\begin{align}
& \frac{1}{2}\left\Vert \rho_{X^{MK}E}^{m}-\rho_{X^{MK}}\otimes
\widetilde{\rho}_{E}\right\Vert _{1}\\
& =\frac{1}{2}\left\Vert \rho_{X_{m,1}\cdots X_{m,K}E}^{m}-\rho
_{X_{m,1}\cdots X_{m,K}}\otimes\widetilde{\rho}_{E}\right\Vert _{1}\\
& =\frac{1}{2}\left\Vert \frac{1}{K}\sum_{k=1}^{K}\rho_{X_{m,1}}\otimes
\cdots\otimes\rho_{X_{m,k-1}}\otimes\left( \rho_{X_{m,k}E}-\rho_{X_{m,k}%
}\otimes\widetilde{\rho}_{E}\right) \otimes\rho_{X_{m,k+1}}\otimes
\cdots\otimes\rho_{X_{m,K}}\right\Vert _{1}.
\end{align}
From Lemma~\ref{thm:convex-split} and the relation in
\eqref{eq:TD-to-PD}\ between trace distance and purified distance, we find
that if we pick $K$ such that%
\begin{equation}
\log_{2}K=\widetilde{I}_{\max}^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho
}+2\log_{2}( 1/\eta_{2}) , \label{eq:key-size}%
\end{equation}
then we are guaranteed that%
\begin{equation}
\frac{1}{2}\left\Vert \rho_{X^{MK}E}^{m}-\rho_{X^{MK}}\otimes\widetilde{\rho
}_{E}\right\Vert _{1}\leq\sqrt{\varepsilon_{2}},
\end{equation}
where $\widetilde{\rho}_{E}$ is some state such that $P(\widetilde{\rho}%
_{E},\rho_{E})\leq\sqrt{\varepsilon_{2}}-\eta_{2}$.
Consider that we can rewrite%
\begin{align}
& \frac{1}{2}\left\Vert \rho_{X^{MK}E}^{m}-\rho_{X^{MK}}\otimes
\widetilde{\rho}_{E}\right\Vert _{1}\\
& =\frac{1}{2}\left\Vert \frac{1}{K}\sum_{k=1}^{K}\sum_{x_{1,1}\cdots
x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})|x_{1,1}\cdots x_{M,K}%
\rangle\langle x_{1,1}\cdots x_{M,K}|_{X^{M,K}}\otimes\left( \rho
_{E}^{x_{m,k}}-\widetilde{\rho}_{E}\right) \right\Vert _{1}\\
& =\frac{1}{2}\left\Vert \sum_{x_{1,1}\cdots x_{M,K}}p_{X}(x_{1,1})\cdots
p_{X}(x_{M,K})|x_{1,1}\cdots x_{M,K}\rangle\langle x_{1,1}\cdots
x_{M,K}|_{X^{M,K}}\otimes\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{E}^{x_{m,k}%
}-\widetilde{\rho}_{E}\right) \right\Vert _{1}\\
& =\sum_{x_{1,1}\cdots x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{2}\left\Vert \frac{1}{K}\sum_{k=1}^{K}\rho_{E}^{x_{m,k}}%
-\widetilde{\rho}_{E}\right\Vert _{1}\right] \leq\sqrt{\varepsilon_{2}}.
\label{eq:privacy-condition-proof}%
\end{align}
Applying \eqref{eq:trace-dist-prop} to \eqref{eq:privacy-condition-proof}, we
find that%
\begin{equation}
\sum_{x_{1,1}\cdots x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{2}\left\Vert |m\rangle\langle m|_{\hat{M}}\otimes\frac{1}{K}%
\sum_{k=1}^{K}\rho_{E}^{x_{m,k}}-|m\rangle\langle m|_{\hat{M}}\otimes
\widetilde{\rho}_{E}\right\Vert _{1}\right] \leq\sqrt{\varepsilon_{2}}.
\label{eq:final-privacy-condition}%
\end{equation}
Putting together \eqref{eq:message-key-size}, \eqref{eq:key-size},
\eqref{eq:final-rel-decode-condition}, and \eqref{eq:final-privacy-condition},
we find that if
\begin{align}
\log_{2}M & =I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\log
_{2}(4\varepsilon_{1}/\eta_{1}^{2})-\left[ \widetilde{I}_{\max}%
^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho}+2\log_{2}( 1/\eta_{2}) \right]
\\
& =I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\widetilde{I}_{\max}%
^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho}-\log_{2}(4\varepsilon_{1}%
/\eta_{1}^{2})-2\log_{2}( 1/\eta_{2}) ,
\end{align}
then we have by the triangle inequality that%
\begin{equation}
\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left[
\frac{1}{2}\left\Vert \mathcal{M}_{B\rightarrow\hat{M}}\left( \frac{1}{K}%
\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}}\right) -|m\rangle\langle m|_{\hat{M}%
}\otimes\widetilde{\rho}_{E}\right\Vert _{1}\right] \leq\varepsilon_{1}%
+\sqrt{\varepsilon_{2}}.
\end{equation}
So this gives what is achievable with shared randomness (again, no difference
between average and maximal error if shared randomness is allowed).
We now show how to derandomize the code. We take the above and average over
all messages $m$. We find that%
\begin{align}
& \varepsilon_{1}+\sqrt{\varepsilon_{2}}\nonumber\\
& \geq\frac{1}{M}\sum_{m=1}^{M}\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}%
(x_{1,1})\cdots p_{X}(x_{M,K})\left[ \frac{1}{2}\left\Vert \mathcal{M}%
_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}%
}\right) -|m\rangle\langle m|_{\hat{M}}\otimes\widetilde{\rho}_{E}\right\Vert
_{1}\right] \\
& =\sum_{x_{1,1},\ldots,x_{M,K}}p_{X}(x_{1,1})\cdots p_{X}(x_{M,K})\left(
\frac{1}{M}\sum_{m=1}^{M}\left[ \frac{1}{2}\left\Vert \mathcal{M}%
_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}%
}\right) -|m\rangle\langle m|_{\hat{M}}\otimes\widetilde{\rho}_{E}\right\Vert
_{1}\right] \right) .
\end{align}
So we can conclude that there exist particular values $x_{1,1}$, \ldots,
$x_{M,K}$ such that%
\begin{equation}
\frac{1}{M}\sum_{m=1}^{M}\left[ \frac{1}{2}\left\Vert \mathcal{M}%
_{B\rightarrow\hat{M}}\left( \frac{1}{K}\sum_{k=1}^{K}\rho_{BE}^{x_{m,k}%
}\right) -|m\rangle\langle m|_{\hat{M}}\otimes\widetilde{\rho}_{E}\right\Vert
_{1}\right] \leq\varepsilon_{1}+\sqrt{\varepsilon_{2}}.
\label{eq:final-wiretap-perf}%
\end{equation}
Thus, our final conclusion is that the number of achievable bits that can be
sent such that the privacy error is no larger than $\varepsilon_{1}%
+\sqrt{\varepsilon_{2}}$ is equal to%
\begin{equation}
\log_{2}M=I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\widetilde{I}_{\max
}^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho}-\log_{2}(4\varepsilon_{1}%
/\eta_{1}^{2})-2\log_{2}( 1/\eta_{2}) . \label{eq:rate-bound-wiretap}%
\end{equation}
\subsection{Second-order asymptotics for private classical communication}
In this section, I show how the lower bound on one-shot private capacity
leads to a non-trivial lower bound on the second-order coding rate of private
communication over an i.i.d.~cq wiretap channel. I also show how the bounds simplify for pure-state cq wiretap channels and when using binary phase-shift keying as a coding strategy for private communication over a pure-loss bosonic channel.
Applying
Lemma~\ref{lem:alt-dmax-to-dmax-smooth} to \eqref{eq:rate-bound-wiretap} with
$\gamma\in(0,\sqrt{\varepsilon_{2}}-\eta_{2})$, we can take%
\begin{align}
\log_{2}M & =I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-\widetilde
{I}_{\max}^{\sqrt{\varepsilon_{2}}-\eta_{2}}(E;X)_{\rho}-\log_{2}%
(4\varepsilon_{1}/\eta_{1}^{2})-2\log_{2}( 1/\eta_{2})\\
& \geq I_{H}^{\varepsilon_{1}-\eta_{1}}(X;B)_{\rho}-D_{\max}^{\sqrt
{\varepsilon_{2} }-\eta_{2}-\gamma}(\rho_{XE}\Vert\rho_{X}\otimes\rho
_{E})-\log_{2}(4\varepsilon_{1}/\eta_{1}^{2})-2\log_{2}( 1/\eta_{2}) -\log
_{2}( 3/\gamma^{2}) ,
\end{align}
while still achieving the performance in \eqref{eq:final-wiretap-perf}.
Substituting an i.i.d.~cq wiretap channel into the one-shot bounds, evaluating
them for this case using the expansions for $I_{H}^{\varepsilon}$ in
\eqref{eq:MI-expand} and $D_{\max}^{\varepsilon}$ in \eqref{eq:MI-expand-2},
and taking $\eta_{1}=\eta_{2}=\gamma=1/\sqrt{n}$, we find for sufficiently
large $n$ that%
\begin{multline}
\log_{2}M_{\operatorname{priv}}^{\ast}(n,\varepsilon_{1}+\sqrt{\varepsilon
_{2}})\geq n\left[ I(X;B)_{\rho}-I(X;E)_{\rho}\right]
\label{eq:second-order-bound}\\
+\sqrt{nV(X;B)_{\rho}}\Phi^{-1}(\varepsilon_{1})+\sqrt{nV(X;E)_{\rho}}%
\Phi^{-1}(\varepsilon_{2})+O(\log n).
\end{multline}
\subsubsection{Example:\ Pure-state cq wiretap channel}
Let us consider applying the inequality in \eqref{eq:second-order-bound} to a
cq pure-state wiretap channel of the following form:%
\begin{equation}
x\rightarrow|\psi^{x}\rangle\langle\psi^{x}|_{B}\otimes|\varphi^{x}%
\rangle\langle\varphi^{x}|_{E},\label{eq:pure-state-cq-channel}%
\end{equation}
in which the classical input $x$ leads to a pure quantum state $|\psi
^{x}\rangle\langle\psi^{x}|_{B}$ for Bob and a pure quantum state
$|\varphi^{x}\rangle\langle\varphi^{x}|_{E}$ for Eve. This channel may seem a
bit particular, but we discuss in the next section how one can induce such a
channel from a practically relevant channel, known as the pure-loss bosonic
channel. In order to apply the inequality in \eqref{eq:second-order-bound} to
the channel in \eqref{eq:pure-state-cq-channel}, we fix a distribution
$p_{X}(x)$ over the input symbols, leading to the following classical--quantum
state:%
\begin{equation}
\rho_{XBE}\equiv\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes|\psi^{x}%
\rangle\langle\psi^{x}|_{B}\otimes|\varphi^{x}\rangle\langle\varphi^{x}|_{E}.
\end{equation}
It is well known and straightforward to calculate that the following
simplifications occur%
\begin{align}
I(X;B)_{\rho} & =H(B)_{\rho}=H(\rho_{B}),\\
I(X;E)_{\rho} & =H(E)_{\rho}=H(\rho_{E}),
\end{align}
where $H(\sigma) \equiv - \operatorname{Tr} \{ \sigma \log_2 \sigma\}$ denotes the quantum entropy of a state $\sigma$ and %
\begin{align}
\rho_{B} & =\sum_{x}p_{X}(x)|\psi^{x}\rangle\langle\psi^{x}|_{B},\\
\rho_{E} & =\sum_{x}p_{X}(x)|\varphi^{x}\rangle\langle\varphi^{x}|_{E}.
\end{align}
Proposition~\ref{prop:variance-simplify-pure-state-cq} below demonstrates that a
similar simplification occurs for the information variance quantities in
\eqref{eq:second-order-bound}, in the special case of a pure-state cq wiretap
channel. By employing it, we find the following lower bound on the
second-order coding rate for a pure-state cq wiretap channel:%
\begin{multline}
\log_{2}M_{\operatorname{priv}}^{\ast}(n,\varepsilon_{1}+\sqrt{\varepsilon
_{2}})\geq n\left[ H(\rho_{B})-H(\rho_{E})\right]
\label{eq:pure-state-cq-2nd-order-bnd}\\
+\sqrt{nV(\rho_{B})}\Phi^{-1}(\varepsilon_{1})+\sqrt{nV(\rho_{E})}\Phi
^{-1}(\varepsilon_{2})+O(\log n),
\end{multline}
where $V(\rho_{B})$ and $V(\rho_{E})$ are defined in
\eqref{eq:entropy-var-def} below.
\begin{proposition}
\label{prop:variance-simplify-pure-state-cq}Let%
\begin{equation}
\rho_{XB}=\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes|\psi^{x}%
\rangle\langle\psi^{x}|_{B}\label{eq:special-cq-state}%
\end{equation}
be a classical--quantum state corresponding to a pure-state ensemble
$\{p_{X}(x),|\psi^{x}\rangle_{B}\}_{x}$. Then the Holevo information variance
$V(X;B)_{\rho}=V(\rho_{XB}\Vert\rho_{X}\otimes\rho_{B})$\ is equal to the
entropy variance $V(\rho_{B})$ of the expected state $\rho_{B}=\sum_{x}%
p_{X}(x)|\psi^{x}\rangle\langle\psi^{x}|_{B}$, where%
\begin{equation}
V(\sigma)=\operatorname{Tr}\{\sigma\left[ -\log_{2}\sigma-H(\sigma)\right] ^{2}\}.\label{eq:entropy-var-def}%
\end{equation}
That is, when $\rho_{XB}$ takes the special form in
\eqref{eq:special-cq-state}, the following equality holds%
\begin{equation}
V(X;B)_{\rho}=V(\rho_{B}).\label{eq:prop-statement-pure-state-cq}%
\end{equation}
\end{proposition}
\begin{proof}
For the cq state in \eqref{eq:special-cq-state}, consider that $I(X;B)_{\rho
}=H(B)_{\rho}=H(\rho_{B})$. Furthermore, we have that%
\begin{align}
\log_{2}\rho_{XB} & =\log_{2}\left[ \sum_{x}p_{X}(x)|x\rangle\langle
x|_{X}\otimes|\psi^{x}\rangle\langle\psi^{x}|_{B}\right] \\
& =\sum_{x}\log_{2}\left( p_{X}(x)\right) |x\rangle\langle x|_{X}%
\otimes|\psi^{x}\rangle\langle\psi^{x}|_{B},
\end{align}
which holds because the eigenvectors of $\rho_{XB}$ are $\left\{
|x\rangle_{X}\otimes|\psi^{x}\rangle_{B}\right\} _{x}$. Then%
\begin{align}
V(X;B) & =V(\rho_{XB}\Vert\rho_{X}\otimes\rho_{B})\\
& =\operatorname{Tr}\{\rho_{XB}\left[ \log_{2}\rho_{XB}-\log_{2}\left(
\rho_{X}\otimes\rho_{B}\right) \right] ^{2}\}-\left[ I(X;B)_{\rho}\right]
^{2}\\
& =\operatorname{Tr}\{\rho_{XB}\left[ \log_{2}\rho_{XB}-\log_{2}\rho
_{X}\otimes I_{B}-I_{X}\otimes\log_{2}\rho_{B}\right] ^{2}\}-\left[
H(B)_{\rho}\right] ^{2}.\label{eq:1st-step-ent-var}%
\end{align}
By direct calculation, we have that%
\begin{align}
& \log_{2}\rho_{XB}-\log_{2}\rho_{X}\otimes I_{B}-I_{X}\otimes\log_{2}\rho
_{B}\nonumber\\
& =\sum_{x}\log_{2}\left( p_{X}(x)\right) |x\rangle\langle x|_{X}%
\otimes|\psi^{x}\rangle\langle\psi^{x}|_{B}-\sum_{x}\log_{2}\left[
p_{X}(x)\right] |x\rangle\langle x|_{X}\otimes I_{B}-\sum_{x}|x\rangle\langle
x|_{X}\otimes\log_{2}\rho_{B}\\
& =-\sum_{x}|x\rangle\langle x|_{X}\otimes\left[ \log_{2}\left(
p_{X}(x)\right) \left( I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right)
+\log_{2}\rho_{B}\right] .
\end{align}
Observe that $I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}$ is the projection
onto the space orthogonal to $|\psi^{x}\rangle_{B}$. Then we find that%
\begin{align}
& \left[ \log_{2}\rho_{XB}-\log_{2}\rho_{X}\otimes I_{B}-I_{X}\otimes\log
_{2}\rho_{B}\right] ^{2}\nonumber\\
& =\left[ -\sum_{x}|x\rangle\langle x|_{X}\otimes\left[ \log_{2}\left(
p_{X}(x)\right) \left( I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right)
+\log_{2}\rho_{B}\right] \right] ^{2}\\
& =\sum_{x}|x\rangle\langle x|_{X}\otimes\left[ \log_{2}\left(
p_{X}(x)\right) \left( I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right)
+\log_{2}\rho_{B}\right] ^{2}.
\end{align}
Furthermore, we have that%
\begin{multline}
\left[ \log_{2}\left( p_{X}(x)\right) \left( I_{B}-|\psi^{x}\rangle
\langle\psi^{x}|_{B}\right) +\log_{2}\rho_{B}\right] ^{2}%
\label{eq:entropy-var-expand}\\
=\left[ \log_{2}\left( p_{X}(x)\right) \right] ^{2}\left( I_{B}-|\psi
^{x}\rangle\langle\psi^{x}|_{B}\right) +\log_{2}\left( p_{X}(x)\right)
\left( I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right) \left( \log
_{2}\rho_{B}\right) \\
+\log_{2}\left( p_{X}(x)\right) \left( \log_{2}\rho_{B}\right) \left(
I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right) +\left[ \log_{2}\rho
_{B}\right] ^{2}.%
\end{multline}
So then, by direct calculation,%
\begin{align}
& \operatorname{Tr}\{\rho_{XB}\left[ \log_{2}\rho_{XB}-\log_{2}\rho
_{X}\otimes I_{B}-I_{X}\otimes\log_{2}\rho_{B}\right] ^{2}\}\\
& =\operatorname{Tr}\left\{ \left[ \sum_{x^{\prime}}p_{X}(x^{\prime
})|x^{\prime}\rangle\langle x^{\prime}|_{X}\otimes|\psi^{x^{\prime}}%
\rangle\langle\psi^{x^{\prime}}|_{B}\right] \left[ \sum_{x}|x\rangle\langle
x|_{X}\otimes\left[ \log_{2}\left( p_{X}(x)\right) \left( I_{B}-|\psi
^{x}\rangle\langle\psi^{x}|_{B}\right) +\log_{2}\rho_{B}\right] ^{2}\right]
\right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ |\psi^{x}\rangle\langle\psi
^{x}|_{B}\left[ \left[ \log_{2}\left( p_{X}(x)\right) \left( I_{B}%
-|\psi^{x}\rangle\langle\psi^{x}|_{B}\right) +\log_{2}\rho_{B}\right]
^{2}\right] \right\} \\
& =\sum_{x}p_{X}(x)\operatorname{Tr}\left\{ |\psi^{x}\rangle\langle\psi
^{x}|_{B}\left[ \log_{2}\rho_{B}\right] ^{2}\right\} \\
& =\operatorname{Tr}\{\rho_{B}\left[ \log_{2}\rho_{B}\right] ^{2}%
\}.\label{eq:last-step-ent-var}%
\end{align}
In the second-to-last equality, we used the expansion in
\eqref{eq:entropy-var-expand} and the fact that $|\psi^{x}\rangle\langle
\psi^{x}|_{B}$ and $I_{B}-|\psi^{x}\rangle\langle\psi^{x}|_{B}$ are
orthogonal. Finally, putting together \eqref{eq:1st-step-ent-var}\ and
\eqref{eq:last-step-ent-var}, we conclude \eqref{eq:prop-statement-pure-state-cq}.
\end{proof}
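The proposition is also easy to confirm numerically for a randomly chosen pure-state ensemble. The sketch below computes the relative entropy variance $V(\rho_{XB}\Vert\rho_{X}\otimes\rho_{B})$ directly from matrix logarithms taken on the support and compares it with the entropy variance $V(\rho_{B})$; the dimension and ensemble size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
d, nx = 3, 4                       # Bob's dimension, number of symbols

def mlog2(M, tol=1e-12):
    # log base 2 of a PSD matrix, taken on its support (0 on the kernel)
    w, V = np.linalg.eigh(M)
    lw = np.where(w > tol, np.log2(np.maximum(w, tol)), 0.0)
    return (V * lw) @ V.conj().T

p = rng.random(nx)
p /= p.sum()                       # input distribution p_X
psi = rng.normal(size=(nx, d)) + 1j * rng.normal(size=(nx, d))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

# rho_XB = sum_x p(x) |x><x| (x) |psi^x><psi^x|  (block diagonal in x)
rho_X = np.diag(p).astype(complex)
rho_B = sum(p[x] * np.outer(psi[x], psi[x].conj()) for x in range(nx))
rho_XB = np.zeros((nx * d, nx * d), dtype=complex)
for x in range(nx):
    rho_XB[x*d:(x+1)*d, x*d:(x+1)*d] = p[x] * np.outer(psi[x], psi[x].conj())

# V(X;B) = Tr{rho_XB L^2} - I(X;B)^2, with L the operator log difference
L = (mlog2(rho_XB) - np.kron(mlog2(rho_X), np.eye(d))
     - np.kron(np.eye(nx), mlog2(rho_B)))
I_XB = np.trace(rho_XB @ L).real
V_XB = np.trace(rho_XB @ L @ L).real - I_XB**2

# entropy and entropy variance of the expected state rho_B
wB = np.linalg.eigvalsh(rho_B)
wB = wB[wB > 1e-12]
H_B = -np.sum(wB * np.log2(wB))
V_B = np.sum(wB * (-np.log2(wB) - H_B)**2)

assert abs(I_XB - H_B) < 1e-8      # I(X;B) = H(rho_B)
assert abs(V_XB - V_B) < 1e-8      # V(X;B) = V(rho_B), the proposition
```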
\subsubsection{Example: Pure-loss bosonic channel}
We can induce a pure-state cq wiretap channel from a pure-loss bosonic
channel. In what follows, we consider a coding scheme called binary phase-shift keying
(BPSK). Let us recall just the basic facts needed from Gaussian quantum
information to support the argument that follows (a curious reader can consult
\cite{S17}\ for further details). The pure-loss channel of transmissivity
$\eta\in\left( 0,1\right) $ is such that if the sender inputs a coherent
state $|\alpha\rangle$ with $\alpha\in\mathbb{C}$, then the outputs for Bob
and Eve are the coherent states $|\sqrt{\eta}\alpha\rangle_{B}$ and
$|\sqrt{1-\eta}\alpha\rangle_{E}$, respectively. Note that the overlap of any
two coherent states $|\alpha\rangle$ and $|\beta\rangle$ is equal to
$\left\vert \left\langle \alpha|\beta\right\rangle \right\vert ^{2}%
=e^{-\left\vert \alpha-\beta\right\vert ^{2}}$, and this is in fact the main
quantity that we need to evaluate the information quantities in
\eqref{eq:pure-state-cq-2nd-order-bnd}. The average photon number of a
coherent state $|\alpha\rangle$ is equal to $\left\vert \alpha\right\vert
^{2}$. A BPSK-coding scheme induces the following pure-state cq wiretap
channel from the pure-loss channel:%
\begin{align}
0 & \rightarrow|\alpha\rangle_{A}\rightarrow|\sqrt{\eta}\alpha\rangle
_{B}\otimes|\sqrt{1-\eta}\alpha\rangle_{E},\\
1 & \rightarrow|-\alpha\rangle_{A}\rightarrow|-\sqrt{\eta}\alpha\rangle
_{B}\otimes|-\sqrt{1-\eta}\alpha\rangle_{E}.
\end{align}
That is, if the sender would like to transmit the symbol \textquotedblleft%
0,\textquotedblright\ then she prepares the coherent state $|\alpha\rangle
_{A}$ at the input, and the physical channel prepares the coherent state
$|\sqrt{\eta}\alpha\rangle_{B}$ for Bob and $|\sqrt{1-\eta}\alpha\rangle_{E}$
for Eve. A similar explanation holds for when the sender inputs the symbol
\textquotedblleft1.\textquotedblright\ A BPSK-coding scheme is such that
the distribution $p_{X}(x)$ is unbiased:\ there is an equal probability 1/2 to
pick \textquotedblleft0\textquotedblright\ or \textquotedblleft%
1\textquotedblright\ when selecting codewords. Thus, the expected density
operators at the output for Bob and Eve are respectively as follows:%
\begin{align}
\rho_{B} & =\frac{1}{2}\left( |\sqrt{\eta}\alpha\rangle\langle\sqrt{\eta
}\alpha|_{B}+|-\sqrt{\eta}\alpha\rangle\langle-\sqrt{\eta}\alpha|_{B}\right)
,\\
\rho_{E} & =\frac{1}{2}\left( |\sqrt{1-\eta}\alpha\rangle\langle\sqrt
{1-\eta}\alpha|_{E}+|-\sqrt{1-\eta}\alpha\rangle\langle-\sqrt{1-\eta}%
\alpha|_{E}\right) .
\end{align}
A straightforward computation reveals that the eigenvalues for $\rho_{B}$ are
a function only of the overlap $\left\vert \langle-\sqrt{\eta}\alpha
|\sqrt{\eta}\alpha\rangle\right\vert ^{2}=e^{-4\eta\left\vert \alpha
\right\vert ^{2}}\equiv e^{-4\eta\bar{n}}$ and are equal to \cite{GW12}
\begin{equation}
p^{B}(\eta,\bar{n})\equiv\frac{1}{2}\left( 1+e^{-2\eta\bar{n}}\right)
,\qquad 1-p^{B}(\eta,\bar{n})
=
\frac{1}{2}\left( 1-e^{-2\eta\bar{n}}\right) .
\end{equation}
Similarly, the eigenvalues of $\rho_{E}$ are given by%
\begin{equation}
p^{E}(\eta,\bar{n})\equiv\frac{1}{2}\left( 1+e^{-2\left( 1-\eta\right)
\bar{n}}\right) ,\qquad
1-p^{E}(\eta,\bar{n})
=
\frac{1}{2}\left( 1-e^{-2\left( 1-\eta\right)
\bar{n}}\right) .
\end{equation}
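These closed-form eigenvalues can be confirmed numerically by representing the coherent states in a truncated Fock basis; the truncation dimension below is an arbitrary choice, more than sufficient for the photon numbers involved:

```python
import numpy as np
from math import exp, factorial, sqrt

def coherent(alpha, dim=40):
    # Fock-basis amplitudes of the coherent state |alpha>, truncated at dim photons
    v = np.array([alpha**n / sqrt(factorial(n)) for n in range(dim)])
    return v * exp(-abs(alpha)**2 / 2)

eta, nbar = 0.8, 1.5
beta = sqrt(eta * nbar)                      # amplitude reaching Bob
kp, km = coherent(beta), coherent(-beta)
rho_B = 0.5 * (np.outer(kp, kp) + np.outer(km, km))   # amplitudes are real here

evals = np.sort(np.linalg.eigvalsh(rho_B))[::-1]
pB = 0.5 * (1 + exp(-2 * eta * nbar))        # predicted largest eigenvalue
assert abs(evals[0] - pB) < 1e-9
assert abs(evals[1] - (1 - pB)) < 1e-9       # the only other nonzero eigenvalue
```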
We can then immediately plug in to \eqref{eq:pure-state-cq-2nd-order-bnd} to
find a lower bound on the second-order coding rate for private communication
over the pure-loss bosonic channel:%
\begin{multline}
\log_{2}M_{\operatorname{priv}}^{\ast}(n,\varepsilon_{1}+\sqrt{\varepsilon
_{2}})\geq n\left[ h_{2}(p^{B}(\eta,\bar{n}))-h_{2}(p^{E}(\eta,\bar
{n}))\right] \\
+\sqrt{nv_{2}(p^{B}(\eta,\bar{n}))}\Phi^{-1}(\varepsilon_{1})+\sqrt
{nv_{2}(p^{E}(\eta,\bar{n}))}\Phi^{-1}(\varepsilon_{2})+O(\log n),\label{eq:pure-loss-2nd-order}
\end{multline}
where $h_{2}$ and $v_{2}$ respectively denote the binary entropy and binary
entropy variance:%
\begin{align}
h_{2}(\gamma) & \equiv-\gamma\log_{2}\gamma-(1-\gamma)\log_{2}(1-\gamma),\\
v_{2}(\gamma) & \equiv\gamma\left[ \log_{2}\gamma+h_{2}(\gamma)\right]
^{2}+(1-\gamma)\left[ \log_{2}\left( 1-\gamma\right) +h_{2}(\gamma)\right]
^{2}.
\end{align}
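These quantities are straightforward to evaluate. The sketch below computes $h_{2}$, $v_{2}$, and all terms of \eqref{eq:pure-loss-2nd-order} except the $O(\log n)$ term, for arbitrary illustrative parameter values:

```python
from math import exp, log2, sqrt
from statistics import NormalDist

def h2(g):   # binary entropy
    return -g * log2(g) - (1 - g) * log2(1 - g)

def v2(g):   # binary entropy variance
    return g * (log2(g) + h2(g))**2 + (1 - g) * (log2(1 - g) + h2(g))**2

def bpsk_bound(n, eta, nbar, eps1, eps2):
    # all terms of the second-order bound except O(log n)
    pB = 0.5 * (1 + exp(-2 * eta * nbar))
    pE = 0.5 * (1 + exp(-2 * (1 - eta) * nbar))
    return (n * (h2(pB) - h2(pE))
            + sqrt(n * v2(pB)) * NormalDist().inv_cdf(eps1)
            + sqrt(n * v2(pE)) * NormalDist().inv_cdf(eps2))

assert abs(h2(0.5) - 1.0) < 1e-12 and abs(v2(0.5)) < 1e-12
# the per-use rate approaches the asymptotic BPSK rate as n grows
eta, nbar = 0.9, 0.1
asym = (h2(0.5 * (1 + exp(-2 * eta * nbar)))
        - h2(0.5 * (1 + exp(-2 * (1 - eta) * nbar))))
assert abs(bpsk_bound(10**6, eta, nbar, 1e-3, 1e-3) / 10**6 - asym) < 0.01
```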
A benchmark against which we can compare the performance of a BPSK\ code with
$\left\vert \alpha\right\vert ^{2}=\bar{n}$ is the energy-constrained private
capacity of a pure-loss bosonic channel \cite{wilde2016energy}, given by%
\begin{equation}
g(\eta\bar{n})-g((1-\eta)\bar{n}), \label{eq:actual-p-cap}
\end{equation}
where $g(x)\equiv(x+1)\log_{2}(x+1)-x\log_{2}x$. Figure~\ref{fig:results} plots the normal approximation \cite{polyanskiy10} of the lower bound on the second-order
coding rate of BPSK\ coding for various parameter choices for $\varepsilon
_{1}$, $\varepsilon_{2}$, $\eta$, and $\bar{n}$, comparing it against the
asymptotic performance of BPSK\ and the actual energy-constrained private capacity in \eqref{eq:actual-p-cap}. The normal approximation consists of all terms in \eqref{eq:pure-loss-2nd-order} besides the $O(\log n)$ term and typically serves as a good approximation for non-asymptotic capacity even for small values of $n$ (when \eqref{eq:pure-loss-2nd-order} is not necessarily valid), as previously observed in \cite{polyanskiy10,TH12,TBR15}.
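As a quick consistency check, the asymptotic BPSK rate can never exceed the benchmark in \eqref{eq:actual-p-cap}, since BPSK with $\left\vert \alpha\right\vert ^{2}=\bar{n}$ obeys the same energy constraint; a minimal sketch with illustrative parameter values:

```python
from math import exp, log2

def g(x):
    # entropy of a thermal state with mean photon number x
    return (x + 1) * log2(x + 1) - x * log2(x) if x > 0 else 0.0

def h2(p):   # binary entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

eta, nbar = 0.9, 0.1
p_cap = g(eta * nbar) - g((1 - eta) * nbar)      # energy-constrained private capacity
bpsk = (h2(0.5 * (1 + exp(-2 * eta * nbar)))
        - h2(0.5 * (1 + exp(-2 * (1 - eta) * nbar))))
assert 0 < bpsk <= p_cap                         # BPSK cannot beat the capacity
```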
\begin{figure}[ptb]
\begin{center}
\subfloat[]{\includegraphics[width=.47\columnwidth]{crop-1.pdf}}
\qquad
\subfloat[]{\includegraphics[width=.47\columnwidth]{crop-2.pdf}}\newline%
\subfloat[]{\includegraphics[width=.47\columnwidth]{crop-3.pdf}}
\qquad
\subfloat[]{\includegraphics[width=.47\columnwidth]{crop-4.pdf}}
\end{center}
\caption{The figures plot the normal approximation for non-asymptotic BPSK private communication (using \eqref{eq:pure-loss-2nd-order}), the asymptotic limit for BPSK, and the asymptotic energy-constrained private capacity for various values of the channel transmissivity $\eta$,
the mean photon number $\bar{n}$, $\varepsilon_1$,
and $\varepsilon_2$.}%
\label{fig:results}%
\end{figure}
\section{Conclusion\label{sec:concl}}
This paper establishes a lower bound on the $\varepsilon$-one-shot private
classical capacity of a cq wiretap channel, which in turn leads to a lower
bound on the second-order coding rate for private communication over an
i.i.d.~cq wiretap channel. The main techniques used are position-based
decoding \cite{AJW17}\ in order to guarantee that Bob can decode reliably and
convex splitting \cite{ADJ17}\ to guarantee that Eve cannot determine which
message Alice transmitted. It is my opinion that these two methods represent a
powerful approach to quantum information theory, having already been used
effectively in a variety of contexts in \cite{ADJ17,AJW17}.
For future work, it would be good to improve upon the lower bounds given here.
Extensions of the methods of \cite{YSP16} and \cite{TB16}\ might be helpful in
this endeavor.\bigskip
\textit{Note}: After the completion of the results in the present paper,
Naqueeb Warsi informed the author of an unpublished result from \cite{Warsi17}%
, which establishes a lower bound on the $\varepsilon$-one-shot private
capacity of a cq wiretap channel in terms of a difference of the hypothesis
testing mutual information and a smooth max-mutual information.
\bigskip
\textbf{Acknowledgements.} I am grateful to Anurag Anshu, Saikat Guha, Rahul
Jain, Haoyu Qi, Qingle Wang, and Naqueeb Warsi for discussions related to the
topic of this paper. I acknowledge support from the Office of Naval Research
and the National Science Foundation.
\section{Introduction}
The \hanoitpb\ is a classical problem often used to teach recursion, originally proposed in 1883 by \'Edouard Lucas~\cite{1883-MISC-LaTourDHanoi-Lucas,1883-BOOK-RecreationsMathematiques-Lucas}, where one must move $n$ disks, all of distinct size, one by one, from a peg $\ensuremath A$ to a peg $\ensuremath C$ using only an intermediary peg $\ensuremath B$, while ensuring that at no time does a disk stand on a smaller one. As early as 1892, Ball~\cite{1892-BOOK-MathematicalRecreationsAndEssays-Ball} described an optimal recursive algorithm which moves the $n$ disks of a \hanoit\ in $2^n-1$ steps.
\begin{LONG}
Many generalizations have been studied, allowing more than three pegs~\cite{1941-AmericanMathematics-SolutionOfProblemNo2918-FrameStewart}, coloring disks~\cite{1985-JRM-TheTowersOfBrahmaAndHanoiRevisited-Wood}, and cyclic \hanoit s~\cite{1981-IPL-TheCyclicTowersOfHanoi-Atkison}. Some problems are still open, such as the optimality of the algorithm for the $4$-peg \hanoitpb, and the analysis of the original problem is still a source of inspiration hundreds of years after its definition: for instance, Allouche and Dress~\cite{1990-RAIRO-ToursDeHanoiEtAutomates-AlloucheDress} proved in 1990 that the movements of the \hanoitpb\ can be generated by a finite automaton, making this problem an element of $SPACE(1)$.
\end{LONG}
The solution to the \hanoitpb\ is simple enough that it can be memorized and regurgitated at will by students from all over the world: asking about it in an assignment or exam does not truly test a student's mastery of the concept of recursion, pushing instructors to consider variants with slightly more sophisticated solutions. Some variants do not make the problem more difficult (e.g. changing the \texttt{insertion} and \texttt{removal} point to the bottom: the solution is exactly the same), some make it only slightly more difficult (e.g. considering the case where the disks are not necessarily of distinct sizes\begin{DISKPILEPROBLEM}, described and analyzed in Appendix~\ref{sec:diskPileProblem})\end{DISKPILEPROBLEM}, but some small changes can make it surprisingly more difficult.
We consider the {\bouncingt\ Problem}, whose only difference from the \hanoitpb\ is the \texttt{insertion} and \texttt{removal} point in each tower, taken to be the middle instead of the top (see Figure~\ref{figureExplicativeToursARessorts} for an illustration with {\bouncing\ Tower} s of sizes $n=3$ and $n=4$, and Section~\ref{sec:definition} for the formal definition). If the disks all weigh the same, one can imagine such a tower as standing on a spring, the elasticity $k$ of the spring being tuned so that the middle of the tower is always at the same height, where disks are inserted and removed.
\begin{TODO}
Add a figure with a bouncing tower on a spring, with a red arrow going down for gravity and a green arrow with the resistance of the spring $k=g$ or $k=g/2$.
\end{TODO}
\begin {figure}[h]
\parbox{.45\textwidth}{\includegraphics[width=.45\textwidth]{hanoimidle} }
\hfill
\parbox{.45\textwidth}{ \caption {An illustration of the rules for the \texttt{insertion} and \texttt{removal} in a {\bouncing\ Tower}, depending on the parity of its size (sizes $n=3$ and $n=4$ here). In each case, the shaded disk indicates the \texttt{removal} point and the arrow indicates the \texttt{insertion} point. \label {figureExplicativeToursARessorts} } }
\end {figure}
As for the classical \hanoit, such \texttt{insertion} and \texttt{removal} rules guarantee that any move is \emph{reversible} (i.e. any disk $d$ removed from a peg $X$ can always be immediately reinserted in the same peg $X$), that the \texttt{insertion} and \texttt{removal} positions are uniquely defined, that each peg can always receive a disk, and that each tower with one disk or more can always yield one disk. The problem is very similar to the \hanoitpb: one would expect answering the following questions to be relatively easy, possibly by extending the answers to the corresponding questions on \hanoit s\footnote{For a \hanoit, the answer to those questions is that there is a single such shortest sequence, of length $2^n-1$, obtained by the recursion $h(n,A,B,C)=h(n{-}1,A,C,B)."A\rightarrow C;".h(n{-}1,B,A,C)$ if $n>0$ and $\emptyset$ otherwise.}:
\begin{quote}
\bf
Consider the problem of moving a {\bouncing\ Tower}\ of $n$ disks, all of distinct size, one by one, from a peg $\ensuremath A$ to a peg $\ensuremath C$ using only an intermediary peg $\ensuremath B$, while ensuring that at no time does a disk stand on a smaller one:
\begin{enumerate}
\item Which sequences of steps permit one to move such a tower?
\item What is the minimal length of such a sequence?
\item How many shortest such sequences are there?
\end{enumerate}
\end{quote}
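For later comparison, the classical \hanoit\ recursion translates directly into a short program; here \texttt{hanoi(n, a, b, c)} moves $n$ disks from peg \texttt{a} to peg \texttt{c} using \texttt{b} as the intermediary, and the sketch checks the $2^n-1$ step count:

```python
def hanoi(n, a, b, c):
    # move n disks from peg a to peg c, using peg b as the intermediary
    if n == 0:
        return []
    return hanoi(n - 1, a, c, b) + [(a, c)] + hanoi(n - 1, b, a, c)

moves = hanoi(3, 'A', 'B', 'C')
assert len(moves) == 2**3 - 1
assert moves == [('A', 'C'), ('A', 'B'), ('C', 'B'),
                 ('A', 'C'), ('B', 'A'), ('B', 'C'), ('A', 'C')]
```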
We show that there is a unique shortest sequence of steps which moves a {\bouncing\ Tower}\ of $n$ disks of distinct sizes, and that it is of length at most $\sqrt{3}^n$\begin{LONG} (i.e. exactly $\sqrt{3}^n=3^{\frac{n}{2}}$ if $n$ is even, and $\frac{5}{3}\sqrt{3}^{n-1}-2=3^{\frac{n-1}{2}}+2(3^{\frac{n-3}{2}}-1)<\sqrt{3}^n$ if $n$ is odd)\end{LONG}. As $\sqrt{3}\approx1.732<2$, this sequence is exponentially shorter than the corresponding one for the \hanoitpb\ (of length $2^n-1$). We define formally the problem and its basic properties in Section~\ref{sec:form-defin-basic}\begin{LONG}: its formal definition in Section~\ref{sec:definition}, some examples where such towers can be moved faster in Section~\ref{sec:moving-small-towers}, and some useful concepts on the \texttt{insertion} and \texttt{removal} order of a tower in Section~\ref{sec:struct-facts-single}\end{LONG}. We describe a recursive solution in Section~\ref{sec:solution}\begin{LONG}, via its algorithm in Section~\ref{sec:algorithm}, the proof of its correctness in Section~\ref{sec:corr-algor} and the analysis of its complexity in Section~\ref{sec:complexity-algorithm}\end{LONG}. The optimality of the solution is proved in Section~\ref{sec:optimality}, via an analysis of the graph of all possible states and transitions (defined and illustrated in Section~\ref{sec:configuration-graph}) and a proof of optimality for each function composing the solution (Section~\ref{sec:proof-optimality}). We conclude with a discussion (Section~\ref{sec:discussion}) of various other variants of similar or increased complexity\begin{DISKPILEPROBLEM}, and share in Appendix~\ref{sec:diskPileProblem} the text and the solution of a simpler variant successfully used in undergraduate assignments and exams\end{DISKPILEPROBLEM}.
\begin{SHORT}
The proofs of correctness (Theorem~\ref{res:correctness}) and optimality (Theorem~\ref{res:proof-optimality}) are simple case studies, so they are described in Appendix~\ref{sec:omitted-proofs} for lack of space.
\end{SHORT}
\section{Formal Definition and Basic Facts}\label{sec:form-defin-basic}
In this section we define more formally the {\bouncing\ Tower}\ (Section~\ref{sec:definition}), show on small examples that moving such towers requires fewer steps than moving a \hanoit\ (Section~\ref{sec:moving-small-towers}), and describe some properties of the order in which disks are inserted or removed on a peg to build or destroy a tower (Section~\ref{sec:struct-facts-single}).
\subsection{Formal Definition}
\label{sec:definition}
The ``middle'' disk of a tower of even size is not well defined, nor is the ``middle'' \texttt{insertion} point in a tower of odd size: we define both more formally in such a way that if $n$ is odd, the \texttt{removal} position is the center one, and the \texttt{insertion} point is below it; while if $n$ is even, the \texttt{insertion} point is in the middle of the tower, while the \texttt{removal} position is below the middle of the tower (see Figure~\ref{figureExplicativeToursARessorts} for an illustration with sizes $n=3$ and $n=4$).
\begin{LONG}
More formally, on a peg containing $n$ disks ranked by increasing sizes,
the \texttt{removal} point is the disk of rank $\btextractingpoint{n}$; and
the \texttt{insertion} point is position $\btinsertingpoint{n}$.
\end{LONG}
The \texttt{insertion} of disk $d$ on peg $X$ is {\em legal} if inserting $d$ at the \texttt{insertion} point of $X$ yields a legal configuration, where no disk is above a smaller one. A move from peg $X$ to peg $Y$ is \emph{legal} if there is a disk $d$ to remove from $X$, and if the \texttt{insertion} of $d$ on $Y$ is legal.
\subsection{Moving small towers - differences with \hanoi}\label{sec:moving-small-towers}
For size one or two, there is no difference in the moving cost between a \hanoit\ and a {\bouncing\ Tower}. The first difference appears for size three, when only five steps are necessary to move a {\bouncing\ Tower}\ (see the sequence of five steps to move a {\bouncing\ Tower}\ of size $n=3$ in Figure~\ref{3diskBouncingTowerInMove}) as opposed to the seven steps required for moving a classical \hanoit\ (see the sequence of seven steps to move a \hanoit\ of size $n=3$ in Figure~\ref{3diskHanoiTowerInMove}).
\newcommand{\hanoiStateWithTransition}[4]{%
\begin{tabular}{c}
\hspace{-3cm} #4 \\
\framebox[66pt]{\vbox{%
\hbox{%
\makebox[22pt]{#1}%
\makebox[22pt]{#2}%
\makebox[22pt]{#3}%
}%
\hbox{%
\makebox[22pt]{$\ensuremath A$}%
\makebox[22pt]{$\ensuremath B$}%
\makebox[22pt]{$\ensuremath C$}%
}%
}
}
\end{tabular}
}%
\begin{figure}[h]
\centering
\scalebox{.6}{
\hanoiStateWithTransition{\peg{\trois}{\deux}{\un}{}} {} {} {\ } %
\hanoiStateWithTransition{\peg{\trois}{\un}{}{}} {\peg{\deux}{}{}{}} {}{$\ensuremath A \rightarrow \ensuremath B$} %
\hanoiStateWithTransition{\peg{\un}{}{}{}} {\peg{\trois}{\deux}{}{}} {}{$\ensuremath A \rightarrow \ensuremath B$} %
\hanoiStateWithTransition{\peg{}{}{}{}} {\peg{\trois}{\deux}{}{}} {\peg{\un}{}{}{}}{$\ensuremath A \rightarrow \ensuremath C$} %
\hanoiStateWithTransition{}{\peg{\deux}{}{}{}}{\peg{\trois}{\un}{}{}}{$\ensuremath B \rightarrow \ensuremath C$}%
\hanoiStateWithTransition{}{}{\peg{\trois}{\deux}{\un}{}}{$\ensuremath B \rightarrow \ensuremath C$}%
}
\caption{A {\bouncing\ Tower}\ of three disks can be moved in just five steps.\label{3diskBouncingTowerInMove}}
\end{figure}
\begin{figure}[h]
\centering
\scalebox{.6}{
\hanoiStateWithTransition{\peg{\trois}{\deux}{\un}{}} {} {} {\ } %
\hanoiStateWithTransition{\peg{\trois}{\deux}{}{}} {\peg{}{}{}{}} {\peg{\un}{}{}{}} {$\ensuremath A \rightarrow \ensuremath C$}%
\hanoiStateWithTransition{\peg{\trois}{}{}{}} {\peg{\deux}{}{}{}} {\peg{\un}{}{}{}} {$\ensuremath A \rightarrow \ensuremath B$}%
\hanoiStateWithTransition{\peg{\trois}{}{}{}} {\peg{\deux}{\un}{}{}} {} {$\ensuremath C \rightarrow \ensuremath B$}%
\hanoiStateWithTransition{} {\peg{\deux}{\un}{}{}} {\peg{\trois}{}{}{}} {$\ensuremath A \rightarrow \ensuremath C$}%
\hanoiStateWithTransition{\peg{\un}{}{}{}} {\peg{\deux}{}{}{}} {\peg{\trois}{}{}{}} {$\ensuremath B \rightarrow \ensuremath A$}%
\hanoiStateWithTransition{\peg{\un}{}{}{}} {} {\peg{\trois}{\deux}{}{}} {$\ensuremath B \rightarrow \ensuremath C$}%
\hanoiStateWithTransition{}{}{\peg{\trois}{\deux}{\un}{}} {$\ensuremath A \rightarrow \ensuremath C$}%
}
\caption{A \hanoit\ of three disks requires seven steps to be moved between two pegs.\label{3diskHanoiTowerInMove}}
\end{figure}
When an odd number of disks is present on peg $\ensuremath A$, and an even number is present on pegs $\ensuremath B$ and $\ensuremath C$, a sub-tower of height $2$ can be moved from $A$ in $2$ steps, whereas in a \hanoit\ moving any sub-tower of the same height requires $3$ steps. In the {\bouncingt\ Problem}, having a third disk ``fixed'' on $\ensuremath A$ thus yields a reduced number of steps. We formalize this notion of ``fixed'' disk in the next section.
\subsection{Structural facts on a single Peg}\label{sec:struct-facts-single}
Before considering the complete problem over three pegs, we describe some concepts about single pegs, in particular the order in which the disks are inserted and removed on a specific peg.
\begin{definition}
We define the {\em removal order} as the order in which disks (identified by their rank in the final tower) can be removed from a {\bouncing\ Tower}. Symmetrically, we define the {\em insertion order} as the order in which the disks are inserted in the tower.
\end{definition}
The symmetry of the rules concerning the \texttt{insertion} and \texttt{removal} location of {\bouncing\ Tower} s yields that the {\em insertion} order is the exact reverse of the {\em removal} order (the \texttt{insertion} point of a tower is the \texttt{removal} point of a tower with one more disk), and each disk removed from a peg can be immediately replaced exactly where it was.
In particular, a key argument to both the description of the solution in Section~\ref{sec:solution} and to the proof of its optimality in Section~\ref{sec:optimality} is the fact that, when some (more extreme) disks are considered as ``fixed'' (i.e. the call to the current function has to terminate before such disks are moved), the order in which a subset of the disks is removed from a peg depends on the number of those ``fixed'' disks.
\begin{TODO}
Add a figure with fixed disks
\end{TODO}
\begin{definition}
When moving recursively $n$ disks from a peg $\ensuremath X$ with $x>n$ disks, the $x-n$ last disks in the \texttt{removal} order of $\ensuremath X$ are said to be \emph{fixed}. The {\em parity} of peg $\ensuremath X$ is the parity of the number $x$ of disks {\em fixed} on this peg.
\end{definition}
\begin{LONG}
{\bouncing\ Tower} s cannot be moved much faster than \hanoit s:
\begin{lemma}\label{lemmaNoMoveWithSameParity}
It is impossible to move more than one disk between two pegs of same parity without a third peg.
\end{lemma}
\begin{lproof}
Between two pegs of the same parity, the \texttt{removal} order is the same, so the first disk needed on the final peg is the last one removed from the starting peg. With more than one disk, we need the third peg to temporarily store the other disks.
\end{lproof}
\begin{lemma}\label{lemmaNoMoveWithDistinctParity}
It is impossible to move more than two disks between two pegs of opposite parities without a third peg.
\end{lemma}
\begin{lproof}
Between two pegs of opposite parities, the \texttt{removal} orders are different, but the position of the middle is unchanged when the number of disks changes by $2$. So after moving two disks, the third cannot be inserted in the right place.
\end{lproof}
\end{LONG}
The \texttt{removal} and \texttt{insertion} orders change with the parity of the {\bouncing\ Tower}. Consider a peg with $n$ disks on it:
\begin{itemize}
\item if $n=2m+1$ is odd, then the disks are removed in the following order:
$$(m+1,\; m+2,\; m,\; m+3,\; m-1,\; m+4,\; \ldots,\; 3,\; 2m,\; 2,\; 2m+1,\; 1)$$
\item if $n=2m$ is even, then the removal order is:
$$(m+1,\; m,\; m+2,\; m-1,\; m+3,\; m-2,\; \ldots,\; 2m-1,\; 2,\; 2m,\; 1)$$
\end{itemize}
The relative order of $m$ and $m+2$, of $m-1$ and $m+3$, and more generally of any pair of disks $m+1-i$ and $m+1+i$ for $i\in[1..\lfloor n/2\rfloor]$, differs between the two orders. More specifically, disks are alternately extracted below and above the \texttt{insertion} point. This implies the two following connexity lemmas:
\begin{lemma}\label{connexitylemma}
The first $k$ disks removed from the tower are contiguous in the original tower, and they are either all smaller or all larger than the $(k+1)$-th disk removed.
\end{lemma}
\begin{TODO}
PROVE This lemma!
\end{TODO}
\begin{lemma}\label{reciprocofconnexitylemma}
If $k$ disks are all smaller than the disk below the \texttt{insertion} point, and all larger than the disk above the \texttt{insertion} point, then there exists an order in which to add those $k$ disks to the tower.
\end{lemma}
\begin{lproof}
By induction:
for one disk it is true;
for $k$ disks, if the \texttt{insertion} point after the \texttt{insertion} of disk $d$ is
above $d$, then add the larger disk and then the $k-1$ remaining disks;
else add the smaller disk and then the $k-1$ remaining disks.
\end{lproof}
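Both removal orders, and the connexity property of Lemma~\ref{connexitylemma}, can be checked programmatically on small towers; in the following Python sketch, the helper name \verb+removal_order+ is ours (it does not appear in the paper's listings):

```python
def removal_order(n):
    """Order in which the disks of a Bouncing Tower of n disks are removed,
    each disk identified by its rank (1..n) by increasing size."""
    m = n // 2
    if n % 2 == 1:  # odd n = 2m+1: (m+1, m+2, m, m+3, ..., 2, 2m+1, 1)
        first = list(range(m + 1, 0, -1))    # m+1, m, ..., 1
        second = list(range(m + 2, n + 1))   # m+2, ..., 2m+1
    else:           # even n = 2m: (m+1, m, m+2, m-1, ..., 2m, 1)
        first = list(range(m + 1, n + 1))    # m+1, ..., 2m
        second = list(range(m, 0, -1))       # m, ..., 1
    order = []
    for i in range(len(first)):              # interleave the two halves
        order.append(first[i])
        if i < len(second):
            order.append(second[i])
    return order
```

For instance, \verb+removal_order(5)+ returns \verb+[3, 4, 2, 5, 1]+; the \texttt{insertion} order is its reverse, and every prefix of the removal order is a contiguous range of ranks, as stated by Lemma~\ref{connexitylemma}.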
We present in the next section a solution to the {\bouncingt\ Problem}\ which takes advantage of the cases where two disks can be moved between the same two pegs in two consecutive steps.
\section{Solution}\label{sec:solution}
One important difference between \hanoit s and {\bouncing\ Tower} s is that we do not always need to remove $n-1$ disks of a tower of $n$ disks to place the $n$-th disk on another peg (e.g. in the sequence of steps shown in Figure~\ref{3diskBouncingTowerInMove}, disk $3$ was removed from $A$ while a disk was still sitting on top of it). But we always need to remove at least ${n-2}$ disks in order to release the $n$-th disk, as it is the last or last-but-one disk removed. This yields a slightly more complex recursion than in the traditional case. We describe an algorithmic solution in Section~\ref{sec:algorithm}, prove its correctness in Section~\ref{sec:corr-algor}, and analyze the length of its output in Section~\ref{sec:complexity-algorithm}. We prove the optimality of the solution produced separately, in Section~\ref{sec:optimality}.
\subsection{Algorithm} \label{sec:algorithm}
\providecommand{\move}[5]{\ensuremath{\mbox{\tt move#1}(#2,#3,#4,#5)}}
\providecommand{\unitarymove}[2]{\ensuremath{\mbox{\tt move}(#1,#2)}}
Write $|\ensuremath A|$ for the number of disks on peg $\ensuremath A$, $|\ensuremath B|$ on $\ensuremath B$ and $|\ensuremath C|$ on $\ensuremath C$. For each triplet $(x,y,z)\in\{0,1\}^3$, we define the function $\move{xyz}{n}{A}{B}{C}$ moving $n$ disks from peg $\ensuremath A$ to peg $\ensuremath C$ using peg $\ensuremath B$ when $|A|\geq n$, $|A|-n \equiv x \mod 2$, $|B| \equiv y \mod 2$, $|C| \equiv z \mod 2$, and the first $n$ disks extracted from $A$ can be legally inserted on $B$ and $C$. Less formally, the numbers of disks fixed on pegs $A$, $B$ and $C$ have parities $x$, $y$ and $z$ respectively.
\begin{TODO}
DEFINE $(p)_{xyz}$ from a previous version.
\end{TODO}
We only need to study three of those $2^3=8$ functions.
First, the functions are pairwise symmetric: for instance, $\move{000}{n}{A}{B}{C}$ behaves as $\move{111}{n}{A}{B}{C}$ would if the \texttt{insertion} point in a tower of odd size was above the middle disk, and the \texttt{removal} point in a tower of even size was above the middle of the tower: in particular, they have exactly the same complexity.
Second, the reversibility and symmetry of the functions yields a similar reduction: $\move{001}{n}{A}{B}{C}$ has the same structure as the function $\move{100}{n}{A}{B}{C}$ and the two have the same complexity.
We give the Python code implementing those functions in Figures~\ref{fig:move000} to~\ref{fig:move010}: the initial call is made through \verb+move000(n,"a","b","c")+, while recursive calls refer only to the functions $\move{000}{n}{A}{B}{C}$ (Figure~\ref{fig:move000}), $\move{100}{n}{A}{B}{C}$ (Figure~\ref{fig:move100}), $\move{001}{n}{A}{B}{C}$ (similar to $\move{100}{n}{A}{B}{C}$ and described \begin{SHORT}in the Appendix \end{SHORT}in Figure~\ref{fig:move001}) and $\move{010}{n}{A}{B}{C}$ (Figure~\ref{fig:move010}).
\begin{figure}
\centering
\begin{minipage}[t]{.32\linewidth}
\caption{\\$move000(n,A,B,C)$\label{fig:move000}}
\begin{lstlisting}
def move(a,b):
  print("("+a+","+b+")")
def move000(n,a,b,c):
if n>0 :
move100(n-1,a,c,b)
move(a,c)
move001(n-1,b,a,c)
\end{lstlisting}
\end{minipage} \hfill
\begin{minipage}[t]{.32\linewidth}
\caption{\\$move100(n,A,B,C)$\label{fig:move100} \begin{SHORT}($move001(n,A,B,C)$ is defined similarly in Figure~\ref{fig:move001} in the Appendix)\end{SHORT}
}
\begin{lstlisting}
def move100(n,a,b,c):
if n == 1 :
move(a,c)
elif n>1 :
move100(n-2,a,c,b)
move(a,c)
move(a,c)
move010(n-2,b,a,c)
\end{lstlisting}
\begin{LONG}
\caption{\\$move001(n,A,B,C)$\label{fig:move001}}
\begin{lstlisting}
def move001(n,a,b,c):
if n == 1 :
move(a,c)
elif n>1 :
move010(n-2,a,c,b)
move(a,c)
move(a,c)
move001(n-2,b,a,c)
\end{lstlisting}
\end{LONG}
\end{minipage}
\hfill
\begin{minipage}[t]{.32\linewidth}
\caption{\\$move010(n,A,B,C)$\label{fig:move010}}
\begin{lstlisting}
def move010(n,a,b,c):
if n == 1 :
move(a,c)
elif n == 2 :
move(a,b)
move(a,c)
move(b,c)
elif n>2 :
move010(n-2,a,b,c)
move(a,b)
move(a,b)
move010(n-2,c,b,a)
move(b,c)
move(b,c)
move010(n-2,a,b,c)
\end{lstlisting}
\end{minipage}
\label{fig:pythonCode}
\end{figure}
The algorithm for $\move{000}{n}{A}{B}{C}$ (in Figure~\ref{fig:move000}) has the same structure as the corresponding one for moving \hanoit s, the only difference being in the parity of the pegs in the recursive calls, which implies calling other functions than $\move{000}{n}{A}{B}{C}$, in this case $\move{001}{n}{A}{B}{C}$ and $\move{100}{n}{A}{B}{C}$.
The algorithms for $\move{100}{n}{A}{B}{C}$
(in Figure~\ref{fig:move100}) and $\move{001}{n}{A}{B}{C}$ (in Figure~\ref{fig:move001})
take advantage of the difference of parity between the two extreme pegs to move two consecutive disks in two steps, but still have a structure similar to the algorithm for $\move{000}{n}{A}{B}{C}$ and to the corresponding one for moving \hanoit s (just moving two disks instead of one).
The algorithm for $\move{010}{n}{A}{B}{C}$ is less intuitive.
Given that the \texttt{removal} and \texttt{insertion} orders on the origin peg $A$ and on the destination peg $C$ are the same (because the parity of those pegs is the same), $n-1$ disks must be removed from $A$ before the last disk of the \texttt{removal} order\begin{LONG}, which yields a naive algorithm such as described in Figure~\ref{fig:NonOptimalMove010}\end{LONG}.
Such a strategy would yield a correct solution but not an optimal one, as it reduces the size by only one disk at the cost of two recursive calls and one step (i.e. by two disks at the cost of four recursive calls and three steps), whereas another strategy (described in the algorithm in Figure~\ref{fig:move010}) reduces the size by two disks at the cost of three recursive calls and four steps\begin{LONG}: moving $n-2$ disks to $C$, the two last disks of the \texttt{removal} order to $B$, then the $n-2$ disks to $A$, the two last disks of the \texttt{removal} order to $C$, then finally the $n-2$ disks to $C$\end{LONG}.
The first strategy ($f(n)=2f(n-1)+2=4f(n-2)+3$) yields a complexity within $\Theta(2^n)$ while the second strategy ($f(n)=3f(n-2)+4$) yields a complexity within $\Theta(3^\frac{n}{2})$. We show in Section \ref{sec:corr-algor} that moving two disks at a time is correct in this context and in Section~\ref{sec:optimality} that the latter yields the optimal solution.
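The four functions of Figures~\ref{fig:move000} to~\ref{fig:move010} can be gathered into a single runnable Python sketch; here the list \verb+moves+ and the wrapper \verb+count_steps+ are our instrumentation (recording the steps instead of printing them), not part of the original listings:

```python
moves = []  # records the steps instead of printing them

def move(a, b):
    moves.append((a, b))

def move000(n, a, b, c):
    if n > 0:
        move100(n - 1, a, c, b)
        move(a, c)
        move001(n - 1, b, a, c)

def move100(n, a, b, c):
    if n == 1:
        move(a, c)
    elif n > 1:
        move100(n - 2, a, c, b)
        move(a, c)
        move(a, c)
        move010(n - 2, b, a, c)

def move001(n, a, b, c):
    if n == 1:
        move(a, c)
    elif n > 1:
        move010(n - 2, a, c, b)
        move(a, c)
        move(a, c)
        move001(n - 2, b, a, c)

def move010(n, a, b, c):
    if n == 1:
        move(a, c)
    elif n == 2:
        move(a, b)
        move(a, c)
        move(b, c)
    elif n > 2:
        move010(n - 2, a, b, c)
        move(a, b)
        move(a, b)
        move010(n - 2, c, b, a)
        move(b, c)
        move(b, c)
        move010(n - 2, a, b, c)

def count_steps(n):
    del moves[:]
    move000(n, "A", "B", "C")
    return len(moves)
```

For instance, \verb+count_steps(3)+ returns $5$, and the recorded steps are exactly the sequence of Figure~\ref{3diskBouncingTowerInMove}.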
\begin{LONG}
\begin{figure}
\begin{minipage}[t]{.3\linewidth}
\caption{\\Alternative (non optimal) take on $move010(n,A,B,C)$\label{fig:NonOptimalMove010}}
\begin{lstlisting}
IF n==1
move(A,C);
ELSE
move101(n-1,A,C,B);
move(A,C);
move101(n-1,B,A,C);
ENDIF
\end{lstlisting}
\end{minipage}
\hfill
\begin{minipage}[t]{.3\linewidth}
\caption{\\Alternative (non optimal) take on $move101(n,A,B,C)$\label{fig:NonOptimalMove101}}
\begin{lstlisting}
IF n==1
move(A,C);
ELSE
move010(n-1,A,C,B);
move(A,C);
move010(n-1,B,A,C);
ENDIF
\end{lstlisting}
\end{minipage}
\end{figure}
\end{LONG}
\subsection{Correctness of the algorithm}\label{sec:corr-algor}
\providecommand\IH{H}
We prove the correctness of our solution by induction on the number $n$ of disks.
\begin{theorem}\label{res:correctness}
For any positive integer value $n$,
and any triplet $(x,y,z)\in\{0,1\}^3$ of booleans,
the function $\move{xyz}{n}{A}{B}{C}$ produces
a sequence of legal steps which moves a {\bouncing\ Tower}\ from $A$ to $C$ via $B$.
\end{theorem}
The proof is based on the following invariant, satisfied by all recursive functions on entering and exiting:
\begin{definition}{\em Requirement for insertion $(i)$:}
The disks above the \texttt{insertion} point of $B$ or $C$ are all smaller than the first $n$ disks removed from $A$;
and the disks below the \texttt{insertion} point of $B$ or $C$ are all larger than the first $n$ disks removed from $A$ (see an illustration in Figure~\ref{requirementForInsertion}).
\end{definition}
\begin{figure}[hbtf]
\parbox{4cm}{
\[ \hanoiState
{\peg{\nmdeux}{\diskDots}{\quatre}{}}
{\peg{\n}{\nmun}{\deux}{\un}}
{\peg{\trois}{}{}{}}
\]
} \parbox{9cm}{
\caption{{\em Requirement for insertion $(i)$:}\label{requirementForInsertion} disks $4$ to $n-2$ can be inserted on $B$ as the \texttt{insertion} point of $B$ is between $2$ and~$n-1$; and on $C$ as the \texttt{insertion} point of $C$ is under~$3$. }}
\end{figure}
\begin{TODO}
RECOVER definition of $p(100)$ the predicates before each function from previous version of paper.
\end{TODO}
\begin{lproof}
Consider the property $\IH(n)=$ ``$\forall(x,y,z)\in\{0,1\}^3,$ $\forall i \leq n,$ $\move{xyz}{i}{A}{B}{C}$ is correct''. $\IH(0)$ is trivially true, and $\IH(1)$ can be checked for all functions at once. For all values $x,y,z$, the function $\move{xyz}{1}{A}{B}{C}$ is merely performing the step $\unitarymove{A}{C}$. The hypothesis $\IH(1)$ follows. Now, for a fixed $n> 1$, assume that $\IH(n-1)$ holds: we prove the hypothesis $\IH(n)$ separately for each function.
\begin{itemize}
\item {Analysis of $\move{000}{n}{A}{B}{C}$:}
\begin{enumerate}
\item According to $\IH(n-1)$ the call to $\move{100}{n-1}{A}{B}{C}$ is correct if $(i)$ and $(p)_{100}$ are respected. $(i)$ is implied by $(i)$ for the current call to $\move{000}{n}{A}{B}{C}$; $(p)_{100}$ is implied by $(p)_{000}$ and the remaining disk on $A$ ($a-n \equiv 0 \mod 2 \Rightarrow a-(n-1) \equiv 1 \mod 2$).
\item The step $\unitarymove{A}{C}$ is possible and legal because of the precondition $(i)$ for $\move{000}{n}{A}{B}{C}$: the disk moved was in the $n$ first removed from $A$, and so can be introduced on $C$.
\item The call to $\move{001}{n-1}{B}{A}{C}$ is symmetric to step $1$, and so correct.
\item We can check the final state by verifying that the number of disks removed from $A$ and added to $C$ is $(n-1) + 1 = n$.
\end{enumerate}
So $\move{000}{n}{A}{B}{C}$ is correct.
\item {Analysis of $\move{100}{n}{A}{B}{C}$:}
\begin{enumerate}
\item $\move{100}{n-2}{A}{B}{C}$ is correct according to $\IH(n-1)$, as its
requirements are also satisfied:
the requirement $(i)$ is given by $(i)$ for the initial call,
and the parity $(p)_{100}$ is respected because we move two disks
fewer than in the current call to $\move{100}{n}{A}{B}{C}$.
\item The two disks left (let us call them $\alpha$ and $\beta$) are in position (see Figure~\ref{TwoLastDisksRemoved}, $(i)$) such that the \texttt{removal} order on $A$ is $(\alpha,\beta)$ and the \texttt{insertion} order on $C$ is $(\beta,\alpha)$ (see Figure~\ref{TwoLastDisksRemoved}, $(ii)$). They can be inserted on $C$ because of requirement $(i)$. So the two disks are correctly moved in two steps.
\item The requirements for $\move{010}{n-2}{A}{B}{C}$ are satisfied:
\begin{itemize}
\item{$(i)$} stands as a consequence of the precondition $(i)$ for the current call, as the $n-2$ disks to be moved on $C$ were on $A$ before the original call, between $\alpha$ and $\beta$.
\item{$(p)_{010}$}: The number of disks on $C$ is still even as we added two disks. The number of disks on $A$ is still odd as we removed two disks.
\end{itemize}
So, because of $\IH(n-2)$, $\move{010}{n-2}{A}{B}{C}$ is correct.
\end{enumerate}
So $\move{100}{n}{A}{B}{C}$ is correct.
\begin {figure}[h]
\begin{center}
\parbox{5cm}{
\begin{center} \includegraphics{twoDisksOnOddPeg}\end{center}
\begin{center}$(i)$\end{center}
$n$ odd: $x$ is removed first,\\
$y$ is removed second.
}
\parbox{5cm}{
\begin{center}
\includegraphics{twoDisksOnEvenPeg}
\end{center}
\begin{center}$(ii)$\end{center}
$n$ even: $y$ is removed first,\\
$x$ is removed second.
}
\end{center}
\caption{Removal order of the last two disks.\label{TwoLastDisksRemoved}}
\end {figure}
\item {Analysis of $\move{001}{n}{A}{B}{C}$:} This function is the exact symmetric of $\move{100}{n}{A}{B}{C}$, for an exactly symmetric task, and so has a symmetric proof of correctness.
\item {Analysis of $\move{010}{2}{A}{B}{C}$:} The two disks (let us call them $\alpha$ and $\beta$) are in position (see Figure~\ref{TwoLastDisksRemoved}, $(ii)$) such that the \texttt{removal} order on $A$ is $(\beta,\alpha)$ and the \texttt{insertion} order on $C$ is $(\alpha,\beta)$, as $A$ and $C$ have the same parity. $\beta$ can be inserted on $B$ and both disks can be inserted on $C$ because of requirement $(i)$. So the two disks are correctly moved in three steps, using peg $B$ to temporarily store disk $\beta$. So $\move{010}{2}{A}{B}{C}$ is correct.
\item {Analysis of $\move{010}{n}{A}{B}{C}$ if $n>2$:} Throughout this proof of correctness we use the fact that fixing $2$ more disks on a peg does not change the parity of this peg.
\begin{enumerate}
\item $\move{010}{n-2}{A}{B}{C}$ is correct: $(i)$ for the initial call yields $(i)$ for the first recursive call, and $(p)_{010}$ is a direct consequence of $(p)_{010}$ for the initial call (because parity is conserved when fixing two disks). So $\IH(n-1)$ implies that $\move{010}{n-2}{A}{B}{C}$ is correct.
\item $A$ and $B$ having different parities, we can move two consecutive disks in two consecutive steps, as for $\move{100}{n}{A}{B}{C}$.
\item The second recursive call to $\move{010}{n-2}{A}{B}{C}$ verifies conditions $(i)$ and $(p)_{010}$ as only two extreme disks have been removed from $A$.
\item The two next steps are feasible because of the difference of parity between $B$ and $C$ (same argument as point $2$).
\item The last recursive call is symmetric to the first call, as we move back the $n-2$ disks between the two extreme disks, but this time on $C$.
\end{enumerate}
So $\move{010}{n}{A}{B}{C}$ is correct. \qedhere
\end{itemize}
\end{lproof}
\begin{SHORT}
The proof of correctness is merely a case study over the three functions $\move{000}{n}{A}{B}{C}$, $\move{100}{n}{A}{B}{C}$ and $\move{010}{n}{A}{B}{C}$: for lack of space, we defer it to the appendix.
\end{SHORT}
We analyze the complexity of this solution in the next section.
\subsection{Complexity of the algorithm}\label{sec:complexity-algorithm}
Let $f_{xyz}(n)$ be the complexity of the function $\move{xyz}{n}{A}{B}{C}$,
when $|A|\geq n$, $|A|-n \equiv x \mod 2$, $|B| \equiv y \mod 2$ and $|C| \equiv z \mod 2$.
The algorithms from Figures~\ref{fig:move000} to~\ref{fig:move010} yield a recursive system of four equations.\begin{LONG}
\[
\left\{
\begin{array}{lll}
\forall x,y,z & f_{xyz}(0) &= 0\\
\forall x,y,z & f_{xyz}(1) &= 1\\
& f_{010}(2) &= 3\\
\\
\forall n>1, &f_{000}(n) &= f_{100}(n-1) + 1 + f_{001}(n-1)\\
\forall n>1, &f_{100}(n) &= f_{100}(n-2) + 2 + f_{010}(n-2)\\
\forall n>1, &f_{001}(n) &= f_{010}(n-2) + 2 + f_{001}(n-2)\\
\forall n>2, &f_{010}(n) &= 3 f_{010}(n-2) + 4
\end{array}
\right.
\]
\end{LONG} As $f_{001}$ is defined exactly as $f_{100}$ (because of the symmetry between $\move{001}{n}{A}{B}{C}$ and $\move{100}{n}{A}{B}{C}$), we can replace each occurrence of $f_{001}$ by $f_{100}$, hence reducing the four equations to a system of three equations:
\[
\left\{
\begin{array}{lll}
\forall x,y,z & f_{xyz}(0) &= 0\\
\forall x,y,z & f_{xyz}(1) &= 1\\
& f_{010}(2) &= 3\\
\\
\forall n>1, &f_{000}(n) &= 2 f_{100}(n-1) + 1 \\
\forall n>1, &f_{100}(n) &= f_{100}(n-2) + 2 + f_{010}(n-2)\\
\forall n>2, &f_{010}(n) &= 3 f_{010}(n-2) + 4
\end{array}
\right.
\]
\begin{LONG}
\begin{figure}
\centering
$$
\begin{array}{c*{16}{|c}}
n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline
f_{010} & 0 & 1 & 3 & 7 & 13 & 25 & 43 & 79 & 133 & 241 & 403 & 727 & 1213 & 2185 & 3643 & 6559 \\ \hline
f_{100} & 0 & 1 & 2 & 4 & 7 & 13 & 22 & 40 & 67 & 121 & 202 & 364 & 607 & 1093 & 1822 & 3280 \\ \hline
f_{000} & 0 & 1 & 3 & 5 & 9 & 15 & 27 & 45 & 81 & 135 & 243 & 405 & 729 & 1215 & 2187 & 3645 \\ \hline
3^{\lceil n/2\rceil } & 1 & 3 & 3 & 9 & 9 & 27 & 27 & 81 & 81 & 243 & 243 & 729 & 729 & 2187 & 2187 & 6561 \\
\end{array}
$$
\caption{The first values of $f_{010}$, $f_{100}$ and $f_{000}$, computed automatically from the recursion. These corroborate the intuition that $f_{100}(n)<f_{000}(n)$ for values of $n$ larger than $1$.}
\label{fig:firstValues}
\end{figure}
\end{LONG}
Lemmas~\ref{res:f010} to \ref{res:f000} resolve the system function by function.
The function $f_{010}(n)$ can be solved independently from the others:
\begin{lemma}\label{res:f010}
$f_{010}(n)= \left\{ \begin{array}{ll}
0 & \mbox{if $n=0$;} \\
1 & \mbox{if $n=1$;} \\
3 & \mbox{if $n=2$;} \\
3^{\frac{n+1}{2}} - 2 & \mbox{if $n\geq 3$ is odd; and } \\
5 \times 3^{\frac{n}{2}-1} - 2 & \mbox{if $n\geq 4$ is even.}
\end{array} \right.
$
\end{lemma}
\begin{proof}
Consider the recurrence $X_{k+1}=3X_k +4$ at the core of the definition of $f_{010}$: unrolling it yields the closed form $X_k=3^k(X_0+2)-2$.
\begin{itemize}
\item When $n\geq 3$ is odd, set $k=\frac{n-1}{2}\geq 1$, $U_0=1$ and $U_{k+1}=3U_k +4$ so that $f_{010}(2k+1)=U_k=3^k(1+2)-2$. Then $f_{010}(n)=3\times 3^{k} -2=3^{k+1} -2=3^{\frac{n+1}{2}}-2$ for $n\geq 3$ and odd.
\item When $n\geq 4$ is even, set $k=\frac{n}{2}-1\geq 1$, $V_0=3$ and $V_{k+1}=3V_k +4$ so that $f_{010}(2k+2)=V_k=3^k(3+2)-2$, so that $f_{010}(n)=5\times 3^{\frac{n}{2}-1} -2$ for $n\geq 4$ and even.
\end{itemize}
Gathering all the results yields the final expression.
\end{proof}
The expression for the function $f_{010}$ yields the expression for the function $f_{100}$:
\begin{lemma} \label{res:f100}
$f_{100}(n) = \left\{
\begin{array}{ll}
0 & \mbox{ if $n=0$;} \\
1 & \mbox{ if $n=1$;} \\
2 & \mbox{ if $n=2$;} \\
4 & \mbox{ if $n=3$;} \\
\frac{5\times 3^{\frac{n}{2}-1}-1}{2} & \mbox{ where $n\geq4$ is even; and } \\
\frac{3^{\frac{n+1}{2}}-1}{2} & \mbox{ where $n\geq5$ is odd. } \\
\end{array}
\right.
$
\end{lemma}
\begin{proof}
Consider the projection of the system to just $f_{100}$:
\[ f_{100}(n) = \left\{
\begin{array}{ll}
0 & \mbox{ if $n=0$}\\
1 & \mbox{ if $n=1$}\\
f_{100}(n-2) +2 + f_{010}(n-2) & \mbox{ if $n\geq2$ }\\
\end{array}
\right.
\]
For any integer value of $k\geq0$, we combine changes of variables with the results from Lemma~\ref{res:f010} to yield two linear systems, which we solve separately:
\begin{itemize}
\item $V_k = f_{100}(2k)$ and $V_0=f_{100}(0)=0$ so that $f_{100}(n) = V_{k}$ if $n$ is even and $k=\frac{n}{2}$; and
\item $U_k = f_{100}(2k+1)$ and $U_0=f_{100}(1)=1$ so that $f_{100}(n) = U_{k}$ if $n$ is odd and $k=\frac{n-1}{2}$.
\end{itemize}
On one hand, $U_k = U_{k-1} + 2 + f_{010}(2k+1-2)$ for $k>0$ and $U_0=1$.
This yields a linear recurrence which we develop as follow:
\begin{eqnarray*}
U_k & = & U_{k-1} + 2 + f_{010}(2k-1) \mbox{ by definition;} \\
& = & U_{k-1} + 2 + 3\times 3^{\frac{(2k-1)-1}{2}} - 2 \mbox{ via Lemma~\ref{res:f010} because $2k-1$ is odd;} \\
& = & U_{k-1} + 3^k \mbox{ by mere simplification;} \\
& = & U_0 + \frac{3}{2}(3^k-1) \mbox{ by resolution of a geometric series;} \\
& = & \frac{3^{k+1}-1}{2} \mbox{ because $U_0=1$.}
\end{eqnarray*}
Since $f_{100}(n) = U_{\frac{n-1}{2}}$ when $n$ is odd, the solution above yields $f_{100}(n) = \frac{3^{\frac{n+1}{2}}-1}{2}$ if $n$ is odd.
On the other hand, $V_k=V_{k-1} + 2 + f_{010}(2k-2)$ for $k>0$ and $V_0=0$.
The initial conditions of $f_{010}$ for $n=0,1$ and $2$ yield the first three values of $V_k$:
$V_0=0$;
$V_1= V_0 + 2 + f_{010}(0) = 0+2+0 = 2$; and
$V_2= V_1 + 2 + f_{010}(2) = 2+2+3=7$.
Then we develop the recursion for $k\geq 3$ similarly to $U_k$:
\begin{eqnarray*}
V_k & = & V_{k-1} + 2 + f_{010}(2k-2) \mbox{ by definition;} \\
& = & V_{k-1} + 2 + 5\times 3^{\frac{(2k-2)}{2}-1} - 2 \mbox{ for $2k-2\geq 4$ even, or any $k\geq 3$ via Lemma~\ref{res:f010};} \\
& = & V_{k-1} + 5\times 3^{k-2} \mbox{ by mere simplification (still only for $k\geq 3$);} \\
& = & V_2 + 5 \left( 3^1+\cdots + 3^{k-2}\right) \mbox{ by propagation;} \\
& = & V_2 + 5\, \frac{ 3^{k-1}-3 }{2} \mbox{ by resolution of a geometric series;} \\
& = & 7 + \frac{5}{2} (3^{k-1}-3) \mbox{ because $V_2=7$;} \\
& = & \frac{5\times 3^{k-1}-1}{2} \mbox{ by simplification.}
\end{eqnarray*}
Since $f_{100}(n) = V_{\frac{n}{2}}$ when $n$ is even, the solution above yields $f_{100}(n) = \frac{5\times 3^{\frac{n}{2}-1}-1}{2}$ if $n$ is even.
Reporting those results in the definition of $f_{100}$ yields the final formula\begin{LONG}:
$$f_{100}(n) = \left\{
\begin{array}{ll}
0 & \mbox{ if $n=0$;} \\
1 & \mbox{ if $n=1$;} \\
2 & \mbox{ if $n=2$;} \\
4 & \mbox{ if $n=3$;} \\
\frac{5\times 3^{\frac{n}{2}-1}-1}{2} & \mbox{ where $n\geq4$ is even; and } \\
\frac{3^{\frac{n+1}{2}}-1}{2} & \mbox{ where $n\geq5$ is odd. } \\
\end{array}
\right.
$$
\end{LONG}\begin{SHORT}.\end{SHORT}
\end{proof}
Finally, the expression for the function $f_{100}$ directly yields the expression for the function $f_{000}$:
\begin{lemma} \label{res:f000}
$f_{000}(n) = \left\{
\begin{array}{ll}
1 & \mbox{ if $n=1$}\\
3 & \mbox{ if $n=2$}\\
5 & \mbox{ if $n=3$}\\
3^{\frac{n}{2}} & \mbox{ where $n\geq4$ is even; and } \\
5\times 3^{\frac{n-3}{2}} & \mbox{ where $n\geq5$ is odd.} \\
\end{array}
\right.
$
\end{lemma}
\begin{proof}
By Lemma~\ref{res:f100},
\[f_{100}(n) = \left\{
\begin{array}{ll}
1 & \mbox{ if $n=1$;} \\
2 & \mbox{ if $n=2$;} \\
4 & \mbox{ if $n=3$;} \\
\frac{5\times 3^{\frac{n}{2}-1}-1}{2} & \mbox{ where $n\geq4$ is even; and } \\
\frac{3^{\frac{n+1}{2}}-1}{2} & \mbox{ where $n\geq5$ is odd. } \\
\end{array}
\right.
\]
From these results, we deduce the value of $f_{000}(n)$ using $f_{000}(n) = 2 f_{100}(n-1) + 1$:
\[ f_{000}(n) = \left\{
\begin{array}{ll}
1 & \mbox{ if $n=1$}\\
3 & \mbox{ if $n=2$}\\
5 & \mbox{ if $n=3$}\\
5\times 3^{\frac{n-3}{2}} & \mbox{ where $n\geq5$ is odd; and } \\
3^{\frac{n}{2}} & \mbox{ where $n\geq4$ is even. } \\
\end{array}
\right.
\]
\end{proof}
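For instance, the relation $f_{000}(n) = 2 f_{100}(n-1) + 1$ gives $f_{000}(4) = 2\times 4 + 1 = 9 = 3^{2}$ and $f_{000}(5) = 2\times 7 + 1 = 15 = 5\times 3^{1}$.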
As $\sqrt{3}\approx1.73<2$, this value is smaller than the number $2^n-1$ of steps required to move a \hanoit. We prove that this is optimal in the next section.
\section{Optimality} \label{sec:optimality}
Each legal state of the {\bouncingt\ Problem}\ with three pegs and $n$ disks can be uniquely described by a word of length $n$ on the three letters alphabet $\{A,B,C\}$, where the $i$-th letter indicates on which peg the $i$-th largest disk stands. Moreover, each word of $\{A,B,C\}^n$ corresponds to a legal state of the tower, so there are $3^n$ different legal states (even though not all of them are reachable from the initial state).
To prove the optimality of our algorithm, we prove that it moves the disks along the shortest path in the {\em configuration graph} (defined in Section~\ref{sec:configuration-graph}) by a simple induction proof (in Section~\ref{sec:proof-optimality}).
\subsection{The configuration graph}\label{sec:configuration-graph}
The configuration graph of a {\bouncing\ Tower}\ has $3^n$ vertices corresponding to the $3^n$ legal states, and two states $s$ and $t$ are connected by an edge if there is a legal move from state $s$ to state $t$. The reversibility of moves (seen in Section~\ref{sec:struct-facts-single}) implies that the graph is undirected.
Consider the initial state $A\ldots A$ ($=A^n$). The smallest disk $1$ cannot be moved before the other disks are all moved to peg $B$ or all moved to peg $C$: we cannot remove disk $1$ from peg $A$ if there is a disk under it, and we cannot put it on another peg if a larger disk is already there. This partitions $G$ into three parts, each part being characterized by the position of disk $1$; these parts are connected by edges representing a move of disk $1$ (see the recursive decomposition of $G(n)$ in Figure~\ref{TotalGraphForHanoi}).
Each part is an instance of the configuration graph $G'(n-1)$ defining all legal steps of $(n-1)$ disks $\{2,\ldots,n\}$ given that disk $1$ is fixed on its peg.
\begin {figure}[h]
\centering
\includegraphics[width=.8\textwidth]{totalGraphForHanoi}
\caption{First decomposition of the configuration graph of the {\bouncingt\ Problem}. \label{TotalGraphForHanoi}}
\end{figure}
\newcommand\f{\rightarrow}
\newcommand\A{{{a}}}
\newcommand\B{{{b}}}
\newcommand\C{{{c}}}
Let us consider this subgraph $G'(n-1)$, when disk $1$ (the smallest) is fixed on one peg (say on peg $A$). Denote each state of this graph by $\A X\ldots Z$, where $\A$ stands for disk $1$ fixed on peg $A$, and $X\ldots Z$ for the positions of the other disks on the various pegs. The removal order changes from those observed in $G$ each time $|A|$ is odd.
To remove the two extreme disks $2$ and $n$ (not moving disk $1$, since it is fixed), it is necessary to move all the other disks to a single other peg (same argument as for $G(n)$), so we can divide our configuration graph into subsets of states corresponding to different positions where disks $2$ and $n$ are fixed.
This defines $9$ parts, as each of the two fixed disks can be on one of the three pegs.
Of those $9$ parts, we need to focus on only $5$:
\begin{itemize}
\item two parts of the graph cannot be accessed from the initial state $\A A\ldots A$ (see an illustration in Figure~\ref{somePartsCannotBeAccessed}); and
\item the part of the graph where disk $2$ is fixed on $B$ and disk $n$ is fixed on $C$ contains two parts, which are not connected for $n>4$ (see an illustration in Figure~\ref{somePartCanBeUnConnex}).
\end{itemize}
\begin{figure} \centering
\parbox{.2\textwidth}{
\hanoiState
{\peg{\deux}{\unIced}{}{}}
{\peg{\n}{}{}{}}
{\peg{\nmun}{\diskDots}{\trois}{}}
}
\parbox{.2\textwidth}{
\hanoiState
{\peg{\deux}{\unIced}{}{}}
{\peg{\nmun}{\diskDots}{\trois}{}}
{\peg{\n}{}{}{}}
}
\parbox{.5\textwidth}{ \caption{States where disk $2$ is on $A$ and disk $n$ is on another peg (i.e. $B$ or $C$) cannot be accessed from the initial state $A\ldots A$ for $n>4$. No move is possible from these states, as $A$ cannot receive any disk larger than $2$ (and all the others are), $B$ cannot receive any disk smaller than $n$ (and all the others are), and $C$ cannot receive disk $2$ nor disk $n$ if $n>4$.
\label{somePartsCannotBeAccessed}}}
\end{figure}
\begin{figure} \centering
\parbox{.2\textwidth}{ \hanoiState { \peg{\nmun}{\diskDots}{\trois}{\unIced} } {
\peg{\deux}{}{}{} } { \peg{\n}{}{}{} } }
\parbox{.2\textwidth}{ \hanoiState { \peg{\unIced}{}{}{} } {
\peg{\nmun}{\diskDots}{\trois}{\deux} } { \peg{\n}{}{}{} } }
\parbox{.5\textwidth}{ \caption{States $\A B A\ldots A C$ and $\A B B\ldots B C$ are not connected in the subgraph where disks $2$ and $n$ are fixed on $B$ and $C$, and disk $1$ is fixed on $A$: as no disk can be inserted under $n$, for $n>4$ it is impossible to move the $n-3>1$ unfixed disks from $A$ to $B$ (since moving more than one disk between two pegs of the same parity requires a third peg).\label{somePartCanBeUnConnex}
}}
\end{figure}
\begin{INUTILE}
\begin {figure}[h]
\parbox{5cm}{
\begin{center}
\includegraphics[width=5cm]{totalGraphDecompositionWithOneDiskIced}
\end{center}
}
\parbox{10cm}{
\caption{Decomposition of the configuration graph with one disk fixed.}
\label{TotalGraphDecompositionWithOneDiskIced}
}
\end {figure}
\end{INUTILE}
The five remaining parts are very similar. Three of them are of particular importance, as each contains one of the key states $\A A\ldots A$, $\A B\ldots B$ and $\A C\ldots C$. Consider first the graphs $G'(n)$ for $n\in\{1,2,3\}$ ($n+1$ disks in total if we count the fixed one): they are represented in Figure~\ref{smallGraphs}. When one disk is fixed on $A$, the task of moving disks from $A$ to $B$ is symmetric with moving them from $A$ to $C$, but quite distinct from the task of moving disks from $B$ to $C$.
\begin {figure}\centering
\includegraphics{smallTotalGraphsWithOneDiskIced}
\includegraphics[width=\textwidth]{totalGraph3DisksAndOneIced}
\caption{Subgraphs $G'(n)$ with one disk fixed on the peg $\ensuremath A$ for $n\in\{1,2,3\}$.\label{smallGraphs}}
\end {figure}
Now, consider the part of the graph $G'(n-1)$ where the smallest and the largest disks ($2$ and $n$) are fixed on $A$. This part contains the initial state $A\ldots A$. The only way to free the smallest disk is to move the $n-3$ other disks to another peg.
\begin{TODO}
Make another figure illustrating how ``The only way to free the smallest disk is to move the $n-3$ other disks to another peg.''
(see an illustration in Figure~\ref{totalGraphDecompositionWithOneDiskIcedA})
\end{TODO}
Once disks $2$ and $n$ are fixed on the same peg (in addition to disk $1$), the situation is similar to the entire graph, with two fewer disks. It is the case each time two extreme disks are fixed on the same peg: when $2$ and $n$ are fixed on peg $C$ or $B$, or when $1$ and $n$ are fixed on peg $A$; the process can then ignore the two fixed disks to move the $n-3$ remaining disks, as the parity of the peg is unchanged. See the definitions of the graph $G'(n)$ in Figure~\ref{smallGraphs} for $n\in\{1,2,3\}$ and in Figure~\ref{totalGraphDecompositionWithOneDiskIcedA} for $n>3$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{totalGraphEtiquettedWithOneDiskFrozen}
\end{center}
\caption{Recursive definition of $G'(n)$, the graph of all legal steps when one disk is fixed on the first peg,~for $n>3$. There is no way to connect the states $aBB....BC$, $aBA...AC$, $aCC...CB$ and $aCA...AB$ without moving some of the disks from $\{1,2,n\}$.
\label{totalGraphDecompositionWithOneDiskIcedA}}
\end{figure}
\subsection{Proof of optimality}\label{sec:proof-optimality}
To prove the optimality of the solution described in Section~\ref{sec:solution}, we prove that the algorithm takes the shortest path in the configuration graph defined in the previous section. A side result is that this is the unique shortest solution.
\begin{SHORT}
Once the definition of the configuration graph is understood, the proof of optimality is merely a case study: for lack of space, we defer it to the appendix.
\end{SHORT}
\begin{INUTILE}
\begin{lemma}
From one state in the configuration graph, we can have at most five edges.
\end{lemma}
\begin{lproof}
Three disks can be moved (at most one per peg), and each one can be moved to at most two pegs, {\em but} states where a disk $\alpha$ can be moved from $A$ to $B$ and another disk $\beta$ can be moved from $B$ to $A$ are subject to the constraint that $A$ and $B$ must not have the same parity. So we can have at most $2$ two-way steps (that is a coloring problem), which means four steps, plus $1$ one-way move. (For instance $ACBA$ is a state connected with states $AABA$, $ACBB$, $ACBC$, $ACAA$, $ACCA$.)
\end{lproof}
\end{INUTILE}
\begin{theorem} \label{res:proof-optimality}
$\forall(x,y,z)\in\{0,1\}^3,\, \forall n\geq 0,\, \move{xyz}{n}{A}{B}{C}$ moves optimally $n$ disks from $A$ to $C$.
\end{theorem}
\begin{lproof}
\begin{TODO}
CHANGE definition of Induction hypothesis to have the forall inside.
\end{TODO}
Define the induction hypothesis $\IH(n)$ as ``$\forall(x,y,z)\in\{0,1\}^3\, \move{xyz}{n}{A}{B}{C}$ moves optimally $n$ disks from $A$ to $C$''. Trivially $\IH(0)$ and $\IH(1)$ are true. Suppose that there exists an integer $N>1$ such that $\forall n<N$, the induction hypothesis $\IH(n)$ is true. We prove that $\IH(N)$ is then also true.
\begin{itemize}
\item $\move{000}{N}{A}{B}{C}$ is optimal:
$\move{000}{N}{A}{B}{C}$ for $N>0$ consists of one
call to $\move{100}{N-1}{A}{C}{B}$, one unitary step,
and one call to $\move{001}{N-1}{B}{A}{C}$.
So it moves optimally (by $\IH(N-1)$) from $\A A\ldots A$ to $\A
B\ldots B$, and then to $\C B\ldots B$, and after that to $\C C\ldots C$.
(In Figure~\ref{TotalGraphForHanoi} the right edge of
the triangle.)
A path not going through states $\A B\ldots B$ or $\C B\ldots B$ would take more steps:
\begin{itemize}
\item if we do not go through the state $\A B\ldots B$, then the state $\A C\ldots C$ is necessary, with a cost of $f_{100}(N-1)$, and also the state $\B C\ldots C$ (with a cost of $1$), and at the end of the path we have to go through the state $\C A\ldots A$, whose optimal path to the final $\C C\ldots C$ state has length $f_{100}(N-1)$: this path has length $f_{100}(N-1)+1+f_{100}(N-1)$ and is already as long as the one given by $\move{000}{N}{A}{B}{C}$.
\item if we go through $\A B\ldots B$, but not through $\C B\ldots B$, then the path is not optimal as it must go through $\A C\ldots C$ and the optimal path from $\A A\ldots A$ to $\A C\ldots C$ does not go through $\A B\ldots B$.
\end{itemize}
So $\move{000}{N}{A}{B}{C}$ is optimal.
\item $\move{100}{N}{A}{B}{C}$ is optimal:
$\move{100}{N}{A}{B}{C}$ for $N>1$ consists of one call to $\move{100}{N-2}{A}{C}{B}$, two steps, and one call to $\move{010}{N-2}{B}{A}{C}$.
As before, we shall consider these recursive calls of order smaller
than $N$ as optimal because of $\IH(N-2)$. So we know how to move
optimally
from $\A A A\dots A A$
to $\A A B\dots B A$,
to $\A C B\dots B A$,
then to $\A C B\dots B C$
and to $\A C C\dots C C$
(In Figure~\ref{totalGraphDecompositionWithOneDiskIcedA}, this
corresponds to the left edge of the triangle.)
We must now prove that other paths take more steps:
\begin{itemize}
\item We cannot avoid the state $\A C B\dots B C$, nor $\A C
B\dots B A$, as there is no other way out of $\A C C\dots C C$.
\item if we avoid the state $\A A B\dots B A$ then the optimal path to
$\A C B\dots B A$ necessarily passes through $\A BA\ldots AA$ and $\A
CA\ldots AA$, and has length
$f_{100}(N-2)+1+f_{010}(N-2)+1+f_{010}(N-2)$, which is longer than
the whole solution given by the algorithm, of length $f_{100}(N)=
f_{100}(N-2) + 2 + f_{010}(N-2)$.
\end{itemize}
So $\move{100}{N}{A}{B}{C}$ is optimal.
\item $\move{010}{N}{A}{B}{C}$ is optimal:
$\move{010}{1}{A}{B}{C}$ and $\move{010}{2}{A}{B}{C}$ are special cases:
we can see in the graphs $G'(1)$ and $G'(2)$ in Figure
\ref{smallGraphs} (page \pageref{smallGraphs})
that the optimal paths between $\A B\ldots B$ and $\A C\ldots C$
have lengths $1$ and $3$, matching the solutions produced by the algorithm.
So $\move{010}{1}{A}{B}{C}$ and $\move{010}{2}{A}{B}{C}$ are proven optimal.
$\move{010}{N}{A}{B}{C}$ for $N>2$ corresponds to a
path going through the following states (the first disk being fixed on $\B$;
refer to Figure~\ref{totalGraphDecompositionWithOneDiskIcedA},
from $\A C C\ldots CC$ to $\A B B\ldots BB$, bottom left to bottom right):
\[
\A C C\dots C C
\stackrel{f_{010}(N-2)}{\longrightarrow}
\A C B\dots B C
\stackrel{2}{\longrightarrow}
\A A B\dots B C
\]\[
\stackrel{f_{010}(N-2)}{\longrightarrow}
\A A C\dots C C
\stackrel{2}{\longrightarrow}
\A B C\dots C B
\stackrel{f_{010}(N-2)}{\longrightarrow}
\A B B\dots B B
\]
We shall demonstrate that all other paths take more steps:
\begin{itemize}
\item The states $\B A C\dots C A$ and $\B C A\dots A C$ are mandatory
for connectivity, and so are $\B A C\dots C B$ and $\B C A\dots A B$.
\item if we go through $\B B C\dots C B$,
then $\B B A\dots A B$ is also mandatory.
\item if we bypass $\B B C\dots C B$, then we must go through
$\B A B\dots B B$, $\B C B\dots B B$ and $\B C A\dots A B$:
the total path would have length $3+4 f_{010}(N-2)$,
to be compared with $4 +3 f_{010}(N-2)$ (we trade one step
for one recursive call).
As $f_{010}(N-2) \geq 1$ for $N-2\geq 1$ (i.e. $N\geq 3>2$),
$\move{010}{N}{A}{B}{C}$ is optimal for $N>2$.
\end{itemize}
So $\move{010}{N}{A}{B}{C}$ is optimal. \qedhere
\end{itemize}
\end{lproof}
We discuss further extensions of those results in the next section.
\section{Discussion}\label{sec:discussion}
All the usual research questions and extensions about the \hanoitpb\ are still valid about the {\bouncingt\ Problem}. We discuss only a selection of them, such as the space complexity in Section~\ref{sec:spaceComplexity}, and the extension to other proportional \texttt{insertion} and \texttt{removal} points in Section~\ref{sec:levitating-towers}.
\subsection{Space Complexity}
\label{sec:spaceComplexity}
Allouche and Dress~\cite{1990-RAIRO-ToursDeHanoiEtAutomates-AlloucheDress} showed that the optimal sequence of steps required to move a \hanoit\ of $n$ disks can be obtained by a simple function from the prefix of a unique infinite sequence, which itself can be produced by a finite automaton. This proves that the space complexity of the \hanoitpb\ is constant.
The same technique does not seem to yield constant space for {\bouncing\ Tower} s: whereas the sequences of steps generated by each of the functions $\move{100}{n}{A}{B}{C}$, $\move{010}{n}{A}{B}{C}$ and $\move{001}{n}{A}{B}{C}$ are prefixes of infinite sequences, extracting those prefixes and combining them into a sequence corresponding to $\move{000}{n}{A}{B}{C}$ would require a counter using space logarithmic in the length of the sequences to be extracted, i.e. $\log_2 (\sqrt{3}^n)\in \Theta(n)$, which is still linear in the number of disks.
\begin{DISKPILEPROBLEM}
\begin{TODO}
\subsection{Bouncing Disk Piles}
\label{sec:bouncingDiskPiles}
Whereas we studied the {\bouncingt\ Problem}\ in the case where all the disks were of distinct size, an extension would be to consider the case where not all sizes are distinct.
\end{TODO}
\end{DISKPILEPROBLEM}
\subsection{Levitating Towers}
\label{sec:levitating-towers}\label{alphaltIntroduced}
An extension of the {\bouncingt\ Problem}\ is to parametrize the \texttt{insertion} point, so that the \texttt{removal} point is at position $\extractingpoint{n}$ and the \texttt{insertion} point is under the disk at position $\insertingpoint{n}$ in a tower of $n$ disks, for a fixed $\alpha\in[0,\frac{1}{2}]$ (the problem is symmetrical for $\alpha\in[\frac{1}{2},1]$). By analogy with {\bouncing\ Tower} s, we call this variant an {$\alpha$-\lt}. This parametrization creates a continuous range of variants, of which the \hanoitpb\ and the {\bouncingt\ Problem}\ are the two extremes:
\begin{itemize}
\item for $\alpha=0$, the \texttt{removal}/\texttt{insertion} point is always at the top, which corresponds to a \hanoit, while
\item for $\alpha=\frac{1}{2}$ the problem corresponds to a {\bouncing\ Tower}.
\end{itemize}
The complexity of moving an {$\alpha$-\lt}\ cannot be smaller than that of a {\bouncing\ Tower}, as the key configuration permitting $2$ disks to be moved in $2$ steps between the same pegs is less often attainable in an {$\alpha$-\lt}.
Q: Publishing CNAME to Avahi over DBUS requires long lived process I am attempting to set up CNAMEs in Avahi in order to broadcast multiple hostnames. I've found a variety of examples online that all work, but they all require long-lived processes:
https://github.com/Dalee/avahi-cname-aliases/blob/master/avahi_cname_aliases/init.py#L83
# run and stay foreground
def run(self):
self.set_handlers()
self.load_aliases()
self.publish_aliases()
# keep aliases published
while self.running:
time.sleep(TTL)
https://github.com/george-hawkins/avahi-aliases-notes/blob/master/avahi-alias#L48
for name in sys.argv[1:]:
publish_cname(name)
try:
# Just loop forever
while 1: time.sleep(60)
except KeyboardInterrupt:
print("Exiting")
https://public.msli.com/lcs/jaf/publish_cnames.c
/** cnames should be a NULL-terminated array of alias hostnames for this host.
* Example invocation: const char * cnames = {"foo.local", "bar.local", NULL}; PublishAvahiCNames(cnames);
* Note that this function normally does not ever return!
*/
void PublishAvahiCNames(const char ** cnames)
I've confirmed that if the process that sets up the CNAME ends then the CNAME disappears but I can't find any documentation on avahi or dbus that indicates why these tools need to live forever. My use of dbus-monitor hasn't shown anything obvious that the scripts are doing to keep the CNAMEs alive.
Does anyone know what these scripts are doing to keep the published CNAMEs active in avahi?
A: The clients do nothing, just stay connected to the D-Bus. If they vanish the server cleans up their entries, cf. dbus-protocol.c:
} else if (dbus_message_is_signal(m, DBUS_INTERFACE_DBUS, "NameOwnerChanged")) {
char *name, *old, *new;
if (!dbus_message_get_args(m, &error, DBUS_TYPE_STRING, &name, DBUS_TYPE_STRING, &old, DBUS_TYPE_STRING, &new, DBUS_TYPE_INVALID)) {
avahi_log_warn("Error parsing NameOwnerChanged message");
goto fail;
}
if (!*new) {
Client *client;
if ((client = client_get(name, FALSE))) {
avahi_log_debug(__FILE__": client %s vanished.", name);
client_free(client);
}
}
}
How I know I've found a soul mate
I've found someone who's as enthusiastic about buses as I am. Yes, yes, I can hear you saying, 'Oh, so there are two of you in the world? What a coincidence that you should meet.'
I've decided he's a soulmate. Shame I'm already married, then. And shame he's only three-and-a-half.
He's the child of a friend of mine. I met him when I visited her in London, where we used to live, two days ago. She is a teacher at my previous school. She'd warned me by text message, 'Expect to be questioned about your bus preferences'. When she sent the message, I think she was expecting me to react with, 'Oh, damn. How boring will that be.'
She doesn't read my blog.
I got to the house and she made me coffee. Then I sat down. I hardly talked to Bus-Boy's Mum (BM). This is because I mainly conversed with Bus Boy (BB). The following is not a verbatim transcript of what went on, but I think the sentiments behind it are reproduced pretty faithfully ...
BB: Did you come on a bus?
Me: No, not today. But I often go on buses.
BM: Sorry about this. He's really into buses. I did warn you.
BB: I like buses.
Me: I like buses, too. In fact, I love buses.
BM: Don't feel you have to humour him.
BB: My mummy doesn't like buses.
Me: Really? What a silly mummy you have.
BM: Honestly, don't waste your time. He won't shut up about it. Come and have more coffee.
Me: You have lots of red buses here in London, don't you?
BB: I like red buses.
Me: My favourite bus is a gold bus. It's called the G1. 'G' for gold.
BM: It's really sweet of you, Fran, but he'll be fine playing. Come on. Have a biscuit.
BB: A gold bus!
Me: That's right. And it has blue seats. Shiny leather ones.
BB: Do you have any red buses?
Me: No, not where I live.
BB: Do you have any yellow buses?
BM: Darling, I think you should go and watch a video, don't you?
Me: Yes, some children go to school on yellow buses where I live. But no one else is allowed. Just children.
BB: Could I go on a yellow bus?
Me: When you're old enough for school, you could.
BM: So how are things in Warwickshire? Are you enjoying your new job?
BB: Do you have green buses?
BB: Orange buses?
Me: No, no orange ones.
BB: Stripey buses?
BM: Sweetheart, this is getting silly. Go and play on your tricycle.
Me: Stripey buses! Wouldn't that be funny? Buses that look like zebras!
BB: Chortle, chortle. I wish I had a stripey bus.
Me: Me, too. And wouldn't it be funny? The bus would disappear on the zebra crossing!
BB: Chortle, chortle, chortle, giggle, giggle. And what if it was a spotty one?
BM: Darling, I think Fran might want to talk to Mummy a bit now.
Me: That would be fab! We could pretend we were inside a leopard.
BB: (Rolls over laughing.) Inside a leopard! But it would really be a bus! Maybe we'd go to the zoo instead of the bus station!!
BM: How about we go and sit in the garden and let him play with his toys in here?
Me: We'd get to the zoo and they'd let us in because they thought we were a leopard and then we'd be in a cage!
BB: A bus in a cage! Chortle, chortle.
Me: And people would come and look at us and they wouldn't realise they had come to the zoo to stare at a bus! Wouldn't they feel silly when they found out?
BM: I think I'll just go and hang some washing out, if that's okay.
BB: And what if you had a bus with a trunk, like an elephant? Chortle, chortle.
Me: Yes, spraying water over all the other traffic. That would be hilarious.
BM: And then I'll pop down to the shops, if you two are happy.
BB: And everyone would think, 'Oh no, it's raining!'
BM: For a couple of hours, that is. Say goodbye to Mummy, darling. Look after Fran.
BB: But it wouldn't be raining. It would be the elephant bus!
Me: Yes, a big grey one. With flappy ears.
DOOR SLAMS.
BB: My favourite bus is the 267. What's your favourite bus number?
Sounds like you had a delightful time! We are missing out, here where I live. Yellow school buses and that's about it. We do have fire trucks, though!! The grandbaby and I run out to the front yard whenever we hear a siren. Great fun.
erm... cue giggles in the lighting box again... you've found yourself another fan in our membership secretary / sound operator...
Hello, membership secretary/sound operator. Lovely to meet you.
Anonymous 4/3/10 07:21
I now read your blog!!! I have been tipped off! hope all is well. must catch up soon. love BM
Fran Hill @ Being Miss 4/3/10 14:19
Anonymous/BM - we know each other? Let me think. Barry Manilow?
Q: QGIS Relations loss of form style I'm currently creating a project for management of highway drains in QGIS, which involves recording feature attributes and logging multiple inspections against a feature. I have a layer for the highway_drains and a separate layer for highway_inspections; utilizing the RELATIONS function under project properties works perfectly for the Parent/Child relationship of highway drains to inspections.
I used the drag and drop designer for both layers to reference lookups and style the form. However, in order to view/edit the inspections within the form for highway_drains, I have to switch the highway_drains layer from the 'drag and drop' designer to auto generate. This results in the loss of the form styling for the highway_drains layer, namely the TABS and the order of fields (lookups and field aliases are preserved); styling for highway_inspections is preserved.
My Question: How can I preserve the form style for highway drains?
I'm using QGIS version 2.8.2
A: Why do you have to switch to auto-generate? All you need to do is open the relations section in the fields tab and drag the relation over to the form that you are building with the drag and drop designer. If you can't do this, then you may need to provide more information on your relationship or other project properties.
Here is the form view with the sub form
Q: nginx deploying multiple services (ports) on one docker I've been struggling with my nginx config for three days now, so maybe someone can help...
My situation now:
nginx reverse proxy <--> one VM with one DOCKER which hosts multiple services on different ports (9000 to 9005).
If I test the docker build locally with a 127.0.0.1 url instead of the public domain, everything works fine. If I try to run with http or even https on nginx, I fail.
Failing means: I can connect to my docker service 9001 (which is the login service), I log into the app, and then there is a response again over http, and this request does not go through nginx.
My service configuration on VM/Docker
Service 9001 does have the prefix /auth
Service 9002 does have the prefix /dashboard
A request looks like: http://sub.domain/auth or http://sub.domain/dashboard
On nginx I'm matching this prefix and routing to the
correct service like so:
server {
listen 0.0.0.0:80;
server_name sub.domain;
location /auth/ {
proxy_pass http://172.18.1.25:9001;
proxy_read_timeout 300s;
# proxy header
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Content-Type-Options nosniff;
proxy_set_header X-Frame-Options SAMEORIGIN;
}
location /dashboard/ {
proxy_pass http://172.18.1.25:9002;
proxy_read_timeout 300s;
# proxy header
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Content-Type-Options nosniff;
proxy_set_header X-Frame-Options SAMEORIGIN;
}
location /device/ {
proxy_pass http://172.18.1.25:9005;
proxy_read_timeout 300s;
# proxy header
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Content-Type-Options nosniff;
proxy_set_header X-Frame-Options SAMEORIGIN;
}
}
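Side note: the three location blocks repeat the same proxy settings; they could be factored into a shared snippet (a sketch, and the snippet path /etc/nginx/snippets/proxy-common.conf is my assumption, not part of the original setup):

# /etc/nginx/snippets/proxy-common.conf (hypothetical path)
proxy_read_timeout 300s;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Content-Type-Options nosniff;
proxy_set_header X-Frame-Options SAMEORIGIN;

Each location block then reduces to:

location /auth/ {
    include snippets/proxy-common.conf;
    proxy_pass http://172.18.1.25:9001;
}

(Note that proxy_set_header only changes the request sent to the upstream; X-Content-Type-Options and X-Frame-Options are response security headers, which are normally emitted to clients with add_header instead.)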
** update **
Tested without nginx (opened the VM ports directly): same problem.
--> next test: without docker, running the services directly on my VM... pending
\section{Introduction}
Radical-ion pairs and their spin-dependent reactions \cite{steiner,steiner2} have been recently shown \cite{komPRE2009,komPRE2011,komPRE2012,katsop,komCPL,dellis1,dellis2,cidnp,JH,briegel,vedral,plenioPRA,sun,horePRL,plenioPRL,shao,kais} to be a paradigm system for the emerging field of quantum biology \cite{plenio_review}, that is, the study of quantum coherence effects, or in general the study of quantum information science in the context of biological systems. The biological significance of radical-ion-pair (RP) reactions is twofold, (i) they are understood to underlie the avian magnetic compass mechanism \cite{schulten,ritz,ww,horeNature,rodgers,mouritsen}, and (ii) they participate in the electron-transfer cascade reactions taking place in photosynthetic reactions centers \cite{boxer,matysik}. In any case, the experimentally founded science of spin-chemistry \cite{woodward} deals with such reactions in a wide range of chemical contexts. Hence the theoretical understanding of RP reactions at the fundamental level is of importance for current experimental work in spin chemistry, for further exploring quantum effects in biological systems as well as for the design of novel, and potentially quantum-limited biomimetic devices and sensors.
Theoretically, the fate of radical-ion-pair reactions and all relevant predictions are fully accounted for by the time evolution of $\rho$, the RP's spin density matrix. The time evolution of $\rho$ was until recently understood to be driven by (i) unitary Hamiltonian evolution due to all magnetic interactions within the RP, and (ii) RP population loss due to spin-dependent charge recombination. We have recently shown that the spin degrees of freedom of the RP form an open quantum system, i.e. there is a third source of time evolution: (iii) the spin decoherence inherent in the radical-pair mechanism \cite{komPRE2009,komPRE2011}. Moreover, since the RP is in general in a coherent (or partially coherent) superposition of spin states (we refer in particular to singlet-triplet coherence), the description of the RP's reaction kinetics appears not to be as straightforward as originally thought. In \cite{komPRE2011} we demonstrated that singlet-triplet (S-T) coherence of the RP is a central concept in understanding the intimately related effects (i)-(iii) and put forward a master equation satisfied by the density matrix $\rho$. While S-T decoherence was described \cite{komPRE2009} by first-principles perturbation theory (similar to most applications of the theory of Markovian open quantum systems leading to a Lindblad decoherence term), the reaction kinetics had been accounted for in a phenomenological manner open to criticism. Moreover, the introduction \cite{komPRE2011} of the coherence measure $p_{\rm coh}$ quantifying the "strength" of S-T coherence was also done intuitively.
In this work we formalize our approach along both fronts previously mentioned. In particular, (i) we show that the measure of S-T coherence introduced in \cite{komPRE2011} is not well-defined. We then introduce a new measure of S-T coherence based on recently appeared rigorous considerations by Plenio and co-workers \cite{plenio_coherence}, (ii) we formally derive the reaction terms of the master equation using quantum retrodiction, a concept borrowed from the field of quantum communications, and (iii) we introduce Monte Carlo (MC) simulation of single-RP quantum trajectories \cite{molmer,wiseman}. The MC simulation contains by design all relevant phenomena at the single-molecule level, and hence forms a unique tool to test the predictions of our master equation.
We show that the new measure of S-T coherence, properly scaling with the off-diagonal elements of the density matrix, is essential for the decomposition of $\rho$ into a mixture of maximally coherent and maximally incoherent states.
This decomposition underlies the retrodictive derivation of the new reaction terms, which lead to (a) a significantly improved agreement of the new master equation prediction with MC, and
(b) the derivation of precise and experimentally measurable decay rates for the S-T coherence.
In particular, in Section III we introduce the Monte Carlo simulation of single-RP quantum trajectories including only S-T decoherence and compare it with the master equation for non-recombining RPs where perfect agreement is expected by definition. In Section IV we elaborate on the shortcomings of our previous measure of S-T coherence and then introduce a new measure based on \cite{plenio_coherence}. The decomposition of $\rho$ into a mixture of maximally coherent and maximally incoherent states is presented in Section V. This decomposition is the basis of the rigorous theory of quantum retrodiction used to derive the reaction terms of the master equation, presented in Section VI. In Section VII we perform a Monte Carlo simulation of RP quantum trajectories including recombination, comparing the trajectory-average with the prediction of our new master equation. Finally, in Section VIII we discuss the decay of S-T coherence in a way that could be relevant to experimentally accessible observables and we compare our theory with the predictions of competing theoretical approaches. In the following Section we start with a few definitions and a brief review of previous work in order to make this work as comprehensive as possible for the general reader.
\section{Definitions and previous work}
The quantum degrees of freedom of RPs are formed by a multi-spin system embedded in a biomolecule. In particular, RPs are biomolecular ions created by a charge transfer from a photo-excited D$^*$A donor-acceptor biomolecular dyad DA, schematically described by the reaction ${\rm DA}\rightarrow {\rm D^{*}A}\rightarrow {\rm D}^{\bullet +}{\rm A}^{\bullet -}$, where the two dots represent the two unpaired electrons of the two radicals. The excited state D$^*$A is usually a spin zero state, hence the initial spin state of the two unpaired electrons is a singlet, denoted by $^{\rm S}{\rm D}^{\bullet +}{\rm A}^{\bullet -}$.
Now, both D and A contain a number of magnetic nuclei which hyperfine-couple to the donor's and acceptor's electron, respectively, effectively creating a different magnetic environment for the two unpaired electrons. This leads to S-T mixing, i.e. a coherent oscillation of the spin state of the electrons. Charge recombination terminates the reaction and leads to the formation of the neutral reaction products. Angular momentum conservation at this step empowers the molecule's spin degrees of freedom and their minuscule (relative to thermal) energy to determine the reaction's fate: singlet state RPs, $^{\rm S}{\rm D}^{\bullet +}{\rm A}^{\bullet -}$, recombine to reform the neutral spin zero DA molecules, whereas triplet RPs, $^{\rm T}{\rm D}^{\bullet +}{\rm A}^{\bullet -}$, recombine to a different (metastable) triplet neutral product $^{\rm T}$DA. For completeness we note that the reaction can, in principle, close through the so-called intersystem crossing $^{\rm T}{\rm DA}\rightarrow {\rm DA}$. The above are schematically shown in Fig. \ref{fig1}.
\begin{figure}
\includegraphics[width=5.5 cm]{simple_schematic.eps}
\caption{(Color online) Simplified energy level diagram depicting radical-ion-pair reaction dynamics. A donor-acceptor dyad is photo-excited and a subsequent charge transfer produces a singlet radical-ion pair. Magnetic interactions within the radical pair induce coherent singlet-triplet mixing, while spin-dependent charge recombination leads to singlet and triplet neutral products at the respective reaction rates $k_{\rm S}$ and $k_{\rm T}$. The reaction can in principle close through intersystem crossing from the triplet to the singlet ground state.}
\label{fig1}
\end{figure}
The straightforward part of RP dynamics are the unitary dynamics embodied in the magnetic Hamiltonian ${\cal H}$, which mainly contains (i) hyperfine couplings of the donor's (acceptor's) electron with the donor's (acceptor's) nuclear spins, (ii) Zeeman interaction of the donor's and acceptor's electrons with the externally applied magnetic field (nuclear Zeeman interaction is usually neglected), (iii) spin-exchange and dipolar interactions between the donor's and the acceptor's electron \cite{efimova,dellis1}.
Were this a closed system, its dynamics would be fully described by Liouville's equation $d\rho/dt=-i[{\cal H},\rho]$. However, it is not, hence there are more terms that make up the master equation, and they will be elaborated in the following. These terms involve two central operators, the singlet and triplet projectors ${\rm {\rm Q_S}}$ and ${\rm Q_T}$, respectively. Before defining them, we note that the density matrix $\rho$ describes the spin state of the RP's two electrons and $M$ magnetic nuclei located in D and A. The dimension of $\rho$ is $d=4\Pi_{j=1}^{M}(2I_{j}+1)$, where $I_j$ is the nuclear spin of the $j$-th nucleus, with $j=1, 2,..., M$. For our numerical work we consider the simplest possible RP, namely an RP containing just one spin-1/2 nuclear spin hyperfine coupled to e.g. the donor's electron. In this case the density matrix has dimension $d=8$. This simple model system exhibits the essential physics without the additional complication of more nuclear spins. We stress that the master equation we derive is general and equally applicable for any number of nuclear spins entering the magnetic Hamiltonian ${\cal H}$ and any sort of interactions included in ${\cal H}$.
Angular momentum conservation at the recombination process splits the RP's Hilbert space into an electron singlet and an electron triplet subspace, defined by the respective projectors ${\rm {\rm Q_S}}$ and ${\rm Q_T}$. These are $d\times d$ matrices given by ${\rm {\rm Q_S}}={1\over 4}\mathbbmtt{1}_{d}-\mathbf{s}_{D}\cdot\mathbf{s}_{A}$ and ${\rm Q_T}={3\over 4}\mathbbmtt{1}_{d}+\mathbf{s}_{D}\cdot\mathbf{s}_{A}$, where $\mathbf{s}_{D}$ and $\mathbf{s}_{A}$ are the spin operators of the donor and acceptor electrons written as $d$-dimensional operators, e.g. the $j$-th component of $\mathbf{s}_{D}$ is written as $s_{jD}=\hat{s}_{j}\otimes\mathbbmtt{1_2}\otimes\mathbbmtt{1}_{2I_1+1}\otimes\mathbbmtt{1}_{2I_2+1}...\otimes\mathbbmtt{1}_{2I_M+1}$, where the first operator in the previous Kronecker product refers to the donor's electron spin, the second to the acceptor's electron spin and the rest to the nuclear spins. By $\hat{\mathbf{s}}$ we have denoted the regular (2 dimensional) spin-1/2 operators and by $\mathbbmtt{1}_{m}$ the $m$-dimensional unit matrix. We note that the RP's singlet subspace has dimension $\Pi_{j=1}^{M}(2I_{j}+1)$ while the triplet subspace has dimension $3\Pi_{j=1}^{M}(2I_{j}+1)$. The electron multiplicity 1 in the former corresponds to the singlet state $|{\rm S}\rangle=(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle)/\sqrt{2}$, while the multiplicity of 3 in the latter stems from the three triplet states $|{\rm T}_0\rangle=(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)/\sqrt{2}$, $|{\rm T}_{+}\rangle=|\uparrow\uparrow\rangle$ and $|{\rm T}_{-}\rangle=|\downarrow\downarrow\rangle$.
The projectors ${\rm Q_S}$ and ${\rm Q_T}$ are complete and orthogonal, i.e. ${\rm Q_S+Q_T}=\mathbbmtt{1}_{d}$ and ${\rm Q_S}{\rm Q_T}={\rm Q_T}{\rm Q_S}=0$. There are also two rates to consider, the singlet and triplet recombination rates, $k_{\rm S}$ and $k_{\rm T}$, respectively. These are defined as follows: consider an RP ensemble with no magnetic interactions (${\cal H}=0$) to be in the singlet (triplet) state. Then its population would decay exponentially at the rate $k_{\rm S}$ ($k_{\rm T}$). Finally, in any given time interval $dt$, the measured singlet and triplet neutral products will be $dn_{\rm S}=k_{\rm S}dt\rm Tr\{\rho {\rm {\rm Q_S}}\}$ and $dn_{\rm T}=k_{\rm T}dt\rm Tr\{\rho {\rm Q_T}\}$. These relations are simple to understand, namely in the time interval $dt$ there would be $k_{\rm S}dt$ singlet and $k_{\rm T}dt$ triplet recombinations if all RPs were in the singlet or triplet state, respectively. If they are in the general state described by $\rho$, then $k_{\rm S}dt$ and $k_{\rm T}dt$ have to be multiplied by the respective probabilities to be in the singlet or triplet state.
The initial state most often considered when doing calculations with the density matrix is the singlet electron-unpolarized nuclear spin state written as $\rho={\rm Q_S}/\rm Tr\{\rm Q_S\}$.
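For the one-nucleus model RP used in our numerical work, the operators above are easy to assemble explicitly. The following minimal Python sketch (our own illustration; NumPy and all variable names are ours, not part of the formalism) builds ${\rm Q_S}$ and ${\rm Q_T}$ from Kronecker products and checks completeness, orthogonality, idempotence and the subspace dimensions:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1) and the 2x2 identity
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])
id2 = np.eye(2)

def kron3(a, b, c):
    # ordering: donor electron (x) acceptor electron (x) nucleus
    return np.kron(np.kron(a, b), c)

sD = [kron3(s, id2, id2) for s in (sx, sy, sz)]  # donor electron spin
sA = [kron3(id2, s, id2) for s in (sx, sy, sz)]  # acceptor electron spin

sDdotsA = sum(a @ b for a, b in zip(sD, sA))
QS = 0.25 * np.eye(8) - sDdotsA   # singlet projector
QT = 0.75 * np.eye(8) + sDdotsA   # triplet projector

assert np.allclose(QS + QT, np.eye(8))      # completeness
assert np.allclose(QS @ QT, 0)              # orthogonality
assert np.allclose(QS @ QS, QS)             # idempotence
assert np.isclose(np.trace(QS).real, 2)     # singlet subspace: 1 x (2I+1) = 2
assert np.isclose(np.trace(QT).real, 6)     # triplet subspace: 3 x (2I+1) = 6

# Usual initial state: singlet electrons, unpolarized nucleus
rho0 = QS / np.trace(QS).real
assert np.isclose(np.trace(rho0 @ QS).real, 1.0)   # <QS> = 1 initially
```

The traces 2 and 6 confirm the stated dimensions $\Pi_{j}(2I_{j}+1)=2$ and $3\Pi_{j}(2I_{j}+1)=6$ for a single spin-1/2 nucleus.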
\subsection{Singlet-Triplet decoherence}
A more detailed look at the energy level structure of Fig. 1 reveals the picture depicted in Fig. 2, where we show the vibrational excited states of the singlet and triplet ground states, which form the singlet and triplet reservoir. Radical-pair recombination proceeds as a {\it real} transition of the RP to one of the quasi-resonant and quasi-continuous reservoir states. As we have demonstrated in \cite{komPRE2012}, there cannot be any coherence between the RP state and the neutral ground states, but only population transfer from the former to the latter, due to which the RP is an open system. What we have shown in \cite{komPRE2009} is that it is "doubly-open", because the same reservoir states lead to S-T decoherence. Using 2$^{\rm nd}$-order perturbation theory we have shown that {\it virtual} transitions to these vibrational reservoir states {\it and back} interrupt the coherent S-T mixing in individual RPs and hence cause the decay of the ensemble S-T coherence. This is described with a Lindblad-type and trace-preserving master equation
\begin{equation}
{{d\rho}\over {dt}}\Big|_{\rm decoh}=-i[{\cal H},\rho]-{{k_{\rm S}+k_{\rm T}}\over 2}\big({\rm {\rm Q_S}}\rho+\rho {\rm {\rm Q_S}}-2{\rm {\rm Q_S}}\rho {\rm {\rm Q_S}}\big)\label{MEnr}
\end{equation}
In other words, this equation describes the null quantum measurement of the RP's neutral reaction products: there is a certain probability that the RP will recombine during a time interval $dt$. If this
does not happen, i.e. if no reaction product is detected, then there are three different possibilities that could be realized within $dt$, (i) a projection to the singlet state, (ii) a projection to the triplet state and (iii) Hamiltonian evolution. In the following Section we present a Monte Carlo simulation of individual quantum trajectories and elaborate in detail on these issues.
\begin{figure}
\includegraphics[width=8.5 cm]{rip_levels.eps}
\caption{(Color online) Detailed energy level structure of radical-ion pairs. The vibrational excitations of the singlet (DA) and the triplet ($^{\rm T}$DA) ground state form a reservoir that probes the electron spin state of the RP, leading to an intramolecule measurement of ${\rm Q_S}$. Virtual transitions (rates $k_{\rm S}/2$ and $k_{\rm T}/2$) to the reservoir levels and back to the RP lead to S-T decoherence, while real transitions (rates $k_{\rm S}$ and $k_{\rm T}$) to the reservoir states followed by their decay to the ground state lead to recombination.}
\label{fig2}
\end{figure}
\begin{figure}
\includegraphics[width=8.5 cm]{decoh.eps}
\caption{(Color online) The time evolution of $\langle{\rm Q_S}\rangle$ for a model RP with one nuclear spin, taking into account only S-T decoherence and S-T mixing driven by the Hamiltonian ${\cal H}=\omega(s_{1z}+s_{2z})+A\mathbf{s}_{1}\cdot\mathbf{I}$, where the Larmor frequency is taken to be $\omega=A/10$ and the recombination rates are $k_{\rm S}=k_{\rm T}=A/4$. These parameters represent a typical RP at earth's field with a hyperfine coupling on the order of 1 mT and recombination times on the order of 20 ns. (a) single-RP quantum trajectory, depicting singlet and triplet projections at random instants in time. The initial RP state for this trajectory is $|S\rangle\otimes|\uparrow\rangle$. (b) average of 20,000 such trajectories (red solid line), half of which have initial state $|S\rangle\otimes|\uparrow\rangle$ while the other half have initial state $|S\rangle\otimes|\downarrow\rangle$. The time axis was split into 10,000 steps $dt$, in every one of which one out of the three possibilities outlined in Section III was realized. The prediction of the trace-preserving master equation \eqref{MEnr} is shown by the black dashed line. The initial state for the density matrix was the usually considered singlet state with unpolarized nuclear spin, $\rho={\rm Q_S}/\rm Tr\{\rm Q_S\}$.}
\label{fig3}
\end{figure}
\section{Monte Carlo simulation of S-T decoherence using single-molecule quantum trajectories}
As is well known from quantum optics, the absence of a detection event (e.g. of a photon) in a quantum measurement, called a "null" measurement, also affects the system's quantum state. What we have shown in \cite{komPRE2009} is that the quantum state evolution of a non-recombining RP (absence of detection of recombination events) is given by the Lindblad master equation \eqref{MEnr}. This trace-preserving master equation encompasses the following three possibilities a non-recombining RP faces during the time evolution of its quantum state: \newline
{\bf (i)} a quantum jump to the singlet state $\rho_{\rm S}={\rm {\rm Q_S}}\rho {\rm {\rm Q_S}}/\rm Tr\{\rho {\rm {\rm Q_S}}\}$, taking place with probability
\begin{equation}
dp_{\rm S}={{(k_{\rm S}+k_{\rm T})dt}\over 2}\rm Tr\{\rho {\rm {\rm Q_S}}\}
\end{equation}
{\bf (ii)} a quantum jump to the triplet state $\rho_{\rm T}={\rm Q_T}\rho {\rm Q_T}/\rm Tr\{\rho {\rm Q_T}\}$, taking place with probability
\begin{equation}
dp_{\rm T}={{(k_{\rm S}+k_{\rm T})dt}\over 2}\rm Tr\{\rho {\rm {\rm Q_T}}\}
\end{equation}
{\bf (iii)} unitary evolution driven by the Hamiltonian ${\cal H}$, taking place with probability $1-dp_{\rm S}-dp_{\rm T}$.
In an ensemble of RPs, these single-molecule possibilities are unobservable, so we have to average over them.
This averaging {\it exactly} reproduces the master equation \eqref{MEnr}. In other words, writing $\rho_{t+dt}=dp_{\rm S}\rho_{\rm S}+dp_{\rm T}\rho_{\rm T}+(1-dp_{\rm S}-dp_{\rm T})(\rho_t-idt[{\cal H},\rho_t])$ leads to \eqref{MEnr} for $d\rho/dt=(\rho_{t+dt}-\rho_t)/dt$.
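This averaging identity can be checked numerically. The sketch below (an illustrative check, not part of the derivation; the positive-definite test state, the Hamiltonian and the rates are arbitrary choices of ours) forms $\rho_{t+dt}$ from the three possibilities and compares $(\rho_{t+dt}-\rho_t)/dt$ with the right-hand side of \eqref{MEnr} to first order in $dt$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Random Hermitian test Hamiltonian and a random density matrix
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = R @ R.conj().T
rho /= np.trace(rho).real

# Singlet/triplet projectors for the one-nucleus RP (see Section II)
sx = np.array([[0, .5], [.5, 0]]); sy = np.array([[0, -.5j], [.5j, 0]])
sz = np.diag([.5, -.5]); id2 = np.eye(2)
k3 = lambda a, b, c: np.kron(np.kron(a, b), c)
sD = [k3(s, id2, id2) for s in (sx, sy, sz)]
sA = [k3(id2, s, id2) for s in (sx, sy, sz)]
QS = 0.25 * np.eye(d) - sum(a @ b for a, b in zip(sD, sA))
QT = np.eye(d) - QS

kS, kT, dt = 0.25, 0.25, 1e-5
k = kS + kT

# The three single-molecule possibilities, averaged with their probabilities
dpS = 0.5 * k * dt * np.trace(rho @ QS).real
dpT = 0.5 * k * dt * np.trace(rho @ QT).real
rhoS = QS @ rho @ QS / np.trace(rho @ QS).real
rhoT = QT @ rho @ QT / np.trace(rho @ QT).real
rho_next = dpS * rhoS + dpT * rhoT \
         + (1 - dpS - dpT) * (rho - 1j * dt * (H @ rho - rho @ H))

lhs = (rho_next - rho) / dt
rhs = -1j * (H @ rho - rho @ H) \
      - 0.5 * k * (QS @ rho + rho @ QS - 2 * QS @ rho @ QS)
assert np.allclose(lhs, rhs, atol=1e-4)   # agreement to O(dt)
```

The residual is of order $dt$, as expected from the neglected second-order terms.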
The physical significance of the sum $k_{\rm S}+k_{\rm T}$ appearing in the probabilities $dp_{\rm S}$ and $dp_{\rm T}$ is the fact that both singlet and triplet reservoirs continuously "measure" the same observable, namely ${\rm Q_S}$. The result of this measurement is either 1 or 0, corresponding to the singlet and triplet projections, respectively. In particular, the singlet reservoir measures the observable ${\rm Q_S}$ at the rate $k_{\rm S}/2$. The "yes" result of this measurement corresponds to ${\rm Q_S}=1$ and the singlet projection, while the no/null result corresponds to the triplet projection. Similarly, the triplet reservoir measures the observable ${\rm Q_T}=\mathbbmtt{1}-{\rm Q_S}$ at the rate $k_{\rm T}/2$. The "yes" result of this measurement corresponds to ${\rm Q_S}=0$ and a triplet projection, while the no/null result corresponds to the singlet projection. Equivalently, ${\rm Q_S}$ is measured at the total rate $(k_{\rm S}+k_{\rm T})/2$. Again, these measurements are unobservable and lead to the aforementioned S-T dephasing. What is observable is the detection of a neutral recombination product. The corresponding null detection implies the possibilities (i)-(iii).
For testing our code and providing a "baseline" for the simulations of Section VII we show in Fig.\ref{fig3} an example of an MC simulation of just the singlet-triplet decoherence described by \eqref{MEnr}.
To simulate the quantum trajectories of non-recombining RPs we start with $10^4$ RPs all being in the singlet state at $t=0$. We then evolve the state of each RP, using in each time increment $dt$ a random number $r$ uniformly distributed between 0 and 1. If $r<dp_{\rm S}$ we project the RP trajectory to the singlet state, if $dp_{\rm S}<r<dp_{\rm S}+dp_{\rm T}$ we project it to the triplet state, and if $r>dp_{\rm S}+dp_{\rm T}$ we evolve the RP state with the Hamiltonian ${\cal H}$. Due to these random quantum jumps, the S-T oscillations of the RPs suffer dephasing, hence the trajectory-averaged expectation value of ${\rm Q_S}$ exhibits S-T oscillations of decaying amplitude. The perfect agreement between MC and the master equation \eqref{MEnr} shown in Fig.\ref{fig3}b is expected {\it by definition}, i.e. the physics included in the MC simulation is exactly the physics that the master equation reproduces. This agreement conveys no information other than that our code works properly and that the 10,000 trajectories are statistically adequate for the comparison undertaken in the following.
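A minimal version of this trajectory simulation can be sketched in Python (our own illustration; we set $A=1$ as the unit of frequency and use far fewer trajectories and coarser time steps than in Fig. 3):

```python
import numpy as np

rng = np.random.default_rng(42)

# One-nucleus model RP, parameters of Fig. 3: omega = A/10, kS = kT = A/4
sx = np.array([[0, .5], [.5, 0]]); sy = np.array([[0, -.5j], [.5j, 0]])
sz = np.diag([.5, -.5]); id2 = np.eye(2)
k3 = lambda a, b, c: np.kron(np.kron(a, b), c)
s1 = [k3(s, id2, id2) for s in (sx, sy, sz)]   # donor electron
s2 = [k3(id2, s, id2) for s in (sx, sy, sz)]   # acceptor electron
I  = [k3(id2, id2, s) for s in (sx, sy, sz)]   # nuclear spin

A = 1.0; omega = A / 10; kS = kT = A / 4
H = omega * (s1[2] + s2[2]) + A * sum(a @ b for a, b in zip(s1, I))
QS = 0.25 * np.eye(8) - sum(a @ b for a, b in zip(s1, s2))
QT = np.eye(8) - QS

dt, nsteps, ntraj = 0.1, 300, 200
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T   # exp(-i H dt)

# |S> (x) |up>: index 2 is |up dn up>, index 4 is |dn up up>
psi0 = np.zeros(8, complex)
psi0[2], psi0[4] = 1 / np.sqrt(2), -1 / np.sqrt(2)

avg = np.zeros(nsteps)
for _ in range(ntraj):
    psi = psi0.copy()
    for n in range(nsteps):
        qs = np.real(psi.conj() @ QS @ psi)
        avg[n] += qs / ntraj
        r = rng.random()
        if r < 0.5 * (kS + kT) * dt * qs:     # singlet projection
            psi = QS @ psi
        elif r < 0.5 * (kS + kT) * dt:        # triplet projection
            psi = QT @ psi
        else:                                 # Hamiltonian evolution
            psi = U @ psi
        psi /= np.linalg.norm(psi)

assert np.isclose(avg[0], 1.0)                  # all start in the singlet
assert avg.min() < 0.9                          # S-T mixing is visible
assert np.all((avg > -1e-9) & (avg < 1 + 1e-9)) # <QS> stays a probability
```

The trajectory average exhibits the decaying S-T oscillations of Fig. 3b; with more trajectories and finer steps it converges to the prediction of \eqref{MEnr}.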
\section{Singlet-Triplet Coherence}
Since ${\rm Q_S}+{\rm Q_T}=\mathbbmtt{1}$ (the unit matrix is henceforth understood to have the dimension of the particular RP under consideration), any density matrix $\rho$ can be written as $\rho=({\rm Q_S}+{\rm Q_T})\rho({\rm Q_S}+{\rm Q_T})$, or
\begin{equation}
\rho=\rho_{\rm SS}+\rho_{\rm TT}+\rho_{\rm ST}+\rho_{\rm TS},\label{generalrho}
\end{equation}
where $\rho_{xy}=Q_{x}\rho Q_{y}$, with $x,y={\rm S,T}$. It is clear that $\rho_{\rm SS}+\rho_{\rm TT}$ forms the incoherent part of $\rho$, whereas the S-T coherence is represented by $\rho_{\rm ST}+\rho_{\rm TS}$. A naturally arising question is how coherent a particular RP state described by some density matrix $\rho$ is. Consider for simplicity an imaginary 4-dimensional RP. The state $|\psi\rangle=(|{\rm S}\rangle+|{\rm T}_{0}\rangle)/\sqrt{2}$, or equivalently $\rho={1\over 2}|{\rm S}\rangle\langle {\rm S}|+{1\over 2}|{\rm T}_0\rangle\langle {\rm T}_0|+{1\over 2}|{\rm S}\rangle\langle {\rm T}_{0}|+{1\over 2}|{\rm T}_{0}\rangle\langle {\rm S}|$ is clearly maximally S-T coherent, whereas the state $\rho={1\over 2}|{\rm S}\rangle\langle {\rm S}|+{1\over 2}|{\rm T}_0\rangle\langle {\rm T}_0|$ is maximally incoherent. There could also be an intermediate case of partial coherence, such as $\rho={1\over 2}|{\rm S}\rangle\langle {\rm S}|+{1\over 2}|{\rm T}_0\rangle\langle {\rm T}_0|+a|{\rm S}\rangle\langle {\rm T}_{0}|+a|{\rm T}_{0}\rangle\langle {\rm S}|$, with $a<1/2$. We thus need a measure of the "strength" of the "off-diagonal part" $\rho_{\rm ST}$ of the density matrix. In \cite{komPRE2011} we introduced the measure of coherence
\begin{equation}
p_{\rm coh}(\rho)={{\rm Tr\{\rho_{\rm ST}\rho_{\rm TS}\}}\over {\rm Tr\{\rho_{\rm SS}\}\rm Tr\{\rho_{\rm TT}\}}}\label{pcoh}
\end{equation}
However, this definition of $p_{\rm coh}$ is flawed in the following sense. S-T coherence is reflected by the value of the off-diagonal elements of the density matrix in the S-T basis. It is intuitively expected that such a measure should scale
linearly with the off-diagonal elements, however $p_{\rm coh}$ scales as the square of the off-diagonal elements of $\rho$. Hence if they decay at some rate $\Gamma$, $p_{\rm coh}$ will decay at $2\Gamma$, and this will skew the description of the relevant dynamics.
\subsection{Rigorous analysis of S-T coherence}
Although essential, a rigorous quantification of coherence in quantum systems has received little attention, at least compared to the quantification of entanglement which has advanced through the definition of several measures \cite{entmeas1,entmeas2}. Recently, Plenio and co-workers introduced a rigorous approach to quantifying quantum coherence \cite{plenio_coherence}. We will follow this approach to introduce a new well-behaved measure of S-T coherence.
The first step is to define the set of incoherent states $\mathcal{I}$. Since we are interested in S-T coherence, it is straightforward to define $\mathcal{I}$ as the set containing all density matrices $\rho$ for which $\rho=\rho_{\rm SS}+\rho_{\rm TT}$, i.e. the coherences $\rho_{\rm ST}$ and $\rho_{\rm TS}$ are absent. Plenio and coworkers then define a set of three criteria that any measure of coherence should satisfy. The first and most obvious (and the one that will be used in the following) is that $p_{\rm coh}(\rho)=0$ for $\rho\in\mathcal{I}$. In order not to overburden this discussion with technical details, this and the other two criteria are reproduced in Appendix A, where we also demonstrate in more detail the shortcomings of our previous definition \eqref{pcoh}.
In the new definition of $p_{\rm coh}$ to be shortly introduced, $p_{\rm coh}$ scales linearly with the off-diagonal elements of $\rho$, as it conforms with the $l_1$-norm measure $C_{l_1}(\rho)$ shown in \cite{plenio_coherence} to be an acceptable measure of coherence. In this measure Plenio and co-workers sum the absolute value of all off-diagonal elements of the density matrix. However, we are not interested in quantifying coherences within the triplet subspace, e.g. between $|{\rm T}_{+}\rangle$ and $|{\rm T}_{-}\rangle$. Neither are we interested in nuclear spin coherences. We are only concerned with the coherence between the electron singlet and triplet subspaces. So in our new definition we will sum the absolute value of the amplitudes appearing in the coherences $|{\rm S}\rangle\langle {\rm T}_{0}|$, $|{\rm S}\rangle\langle {\rm T}_{+}|$ and $|{\rm S}\rangle\langle {\rm T}_{-}|$. To do so we define
\begin{equation}
{\cal C}(\rho)=\sum_{j=0,\pm}\sqrt{\rm Tr\{\rho_{\rm ST}|T_j\rangle\langle T_j|\rho_{\rm TS}\}}\label{crho}
\end{equation}
This definition is visualized by a simple example in Appendix B. Before defining the new measure $p_{\rm coh}$ we note the following: (i) since $\rm Tr\{\rho\}$ is a decaying function of time due to recombination, we have to normalize ${\cal C}(\rho)$ by $\rm Tr\{\rho\}$ in order to get the genuine measure of coherence for the surviving RPs. (ii) as mentioned in \cite{plenio_coherence} the state of maximum coherence in a $d$-dimensional Hilbert space with basis $|j\rangle$ is $\sum_{j=1}^{d}{1\over \sqrt{d}}|j\rangle$. In our case, the most general pure state of an RP can be written as $|\psi\rangle=\alpha_{\rm S}|{\rm S}\rangle\otimes|\chi_{\rm S}\rangle+\sum_{j=0,\pm}\alpha_{j}|{\rm T}_{j}\rangle\otimes|\chi_{j}\rangle$, where $|\chi_{\rm S}\rangle$ and $|\chi_{j}\rangle$ are normalized nuclear spin states. Here S-T coherence is maximum when $|\alpha_{\rm S}|=|\alpha_{j}|=1/2$, and this maximum value is $\sum_{j=0,\pm}|\alpha_{\rm S}\alpha_{j}|=3/4$. However, if the Hamiltonian excites a subset of these coherences, e.g. only the S-T$_0$ coherence, the maximum value of the coherence would be smaller. Since in the following we use $p_{\rm coh}$ as a probability measure, we normalize ${\cal C}(\rho)$ with its maximum value obtained when $\rho$ evolves unitarily under the action of ${\cal H}$. So now we define
\begin{equation}
p_{\rm coh}(\rho)={1\over {\rm Tr\{\rho\}}}{{{\cal C}(\rho)}\over {\rm max\{{\cal C}(\tilde{\rho})\}}}\label{pcohnew}
\end{equation}
where $d\tilde{\rho}/dt=-i[{\cal H},\tilde{\rho}]$. We note that this new definition of $p_{\rm coh}$ is numerically very similar to the square-root of our earlier definition \eqref{pcoh}.
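The evaluation of ${\cal C}(\rho)$ for the imaginary 4-dimensional RP of Section IV can be sketched as follows (illustrative Python code of ours; the value $a=0.3$ is an arbitrary test amplitude). For the partially coherent state with S-T$_0$ amplitude $a$ one finds ${\cal C}(\rho)=a$, i.e. the measure indeed scales linearly with the off-diagonal elements:

```python
import numpy as np

# Basis |uu>, |ud>, |du>, |dd> of the 4-dimensional electronic space
S  = np.array([0, 1, -1, 0]) / np.sqrt(2)
T0 = np.array([0, 1,  1, 0]) / np.sqrt(2)
Tp = np.array([1., 0, 0, 0])
Tm = np.array([0, 0, 0, 1.])

QS = np.outer(S, S)
QT = np.eye(4) - QS

def C(rho):
    """C(rho) = sum over T_j of sqrt(Tr{rho_ST |T_j><T_j| rho_TS})."""
    rST = QS @ rho @ QT
    rTS = QT @ rho @ QS
    return sum(np.sqrt(abs(np.trace(rST @ np.outer(t, t) @ rTS)))
               for t in (T0, Tp, Tm))

a = 0.3   # arbitrary partial-coherence amplitude, a < 1/2
rho = 0.5*np.outer(S, S) + 0.5*np.outer(T0, T0) \
      + a*(np.outer(S, T0) + np.outer(T0, S))

assert np.isclose(C(rho), a)   # linear scaling with the off-diagonals
assert np.isclose(C(0.5*np.outer(S, S) + 0.5*np.outer(T0, T0)), 0.0)
# With only the S-T0 coherence excited, max C = 1/2, so p_coh = C/(1/2) = 2a
```

Had we used the old definition \eqref{pcoh}, the same state would give a value scaling as $a^2$, illustrating the problem discussed above.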
\section{Definition of $\rho_{\rm coh}$ and $\rho_{\rm incoh}$}
It is clear from \eqref{crho} that if we scale $\rho_{\rm ST}$ and $\rho_{\rm TS}$ with a positive number $\lambda$, i.e. if $\rho_{\rm ST}\rightarrow\lambda\rho_{\rm ST}$ and $\rho_{\rm TS}\rightarrow\lambda\rho_{\rm TS}$ then $p_{\rm coh}\rightarrow\lambda p_{\rm coh}$.
So going back to the general form \eqref{generalrho} of the density matrix $\rho$, if we choose $\lambda=1/p_{\rm coh}$, that is, if we define the density matrix
\begin{equation}
\rho_{\rm coh}=\rho_{\rm SS}+\rho_{\rm TT}+{1\over p_{\rm coh}}\rho_{\rm ST}+{1\over p_{\rm coh}}\rho_{\rm TS}\label{rhocoh},
\end{equation}
then $\rho_{\rm coh}$ will describe a maximally coherent state, $p_{\rm coh}(\rho_{\rm coh})=1$. The density matrix $\rho_{\rm coh}$ can be thought of as the S-T coherence distillation of $\rho$.
We can also define a maximally incoherent density matrix $\rho_{\rm incoh}$:
\begin{equation}
\rho_{\rm incoh}=\rho_{\rm SS}+\rho_{\rm TT},\label{rhoincoh}
\end{equation}
for which $p_{\rm coh}(\rho_{\rm incoh})=0$. Using Eqs. \eqref{generalrho}, \eqref{rhocoh} and \eqref{rhoincoh} it is then trivial to show that {\it any} density matrix $\rho$ can be written as:
\begin{equation}
\rho=(1-p_{\rm coh})\rho_{\rm incoh}+p_{\rm coh}\rho_{\rm coh}\label{decomp}
\end{equation}
This will be the starting point for the retrodictive derivation presented in the following Section. We note that this general decomposition of $\rho$ into $\rho_{\rm incoh}$ and $\rho_{\rm coh}$ was possible due to the particular definition of $\rho_{\rm coh}$ and its property that $p_{\rm coh}(\rho_{\rm coh})=1$, which itself relies on the linear scaling of $p_{\rm coh}$ mentioned previously. In other words, the following formal derivation based on quantum retrodiction would not be possible without the proper definition of the S-T coherence measure.
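Continuing the 4-dimensional example of Section IV (illustrative code of ours; $a=0.3$ is an arbitrary test amplitude, and for that state $p_{\rm coh}=2a$ since $\max{\cal C}=1/2$ for pure S-T$_0$ mixing), the decomposition \eqref{decomp} can be verified directly:

```python
import numpy as np

S  = np.array([0, 1, -1, 0]) / np.sqrt(2)
T0 = np.array([0, 1,  1, 0]) / np.sqrt(2)
QS = np.outer(S, S); QT = np.eye(4) - QS

a = 0.3   # partial S-T0 coherence amplitude
rho = 0.5*np.outer(S, S) + 0.5*np.outer(T0, T0) \
      + a*(np.outer(S, T0) + np.outer(T0, S))

rSS, rTT = QS @ rho @ QS, QT @ rho @ QT
rST, rTS = QS @ rho @ QT, QT @ rho @ QS

p = 2 * a                         # p_coh of this state (C = a, max C = 1/2)
rho_incoh = rSS + rTT             # maximally incoherent part
rho_coh = rSS + rTT + rST / p + rTS / p   # maximally coherent part

# Any rho splits into incoherent and maximally coherent components
assert np.allclose((1 - p) * rho_incoh + p * rho_coh, rho)
# rho_coh is maximally coherent: its S-T0 amplitude equals 1/2
assert np.isclose(S @ rho_coh @ T0, 0.5)
```

The check makes the scaling argument concrete: dividing the off-diagonal blocks by $p_{\rm coh}$ restores maximal coherence, and the convex combination recovers $\rho$ exactly.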
\section{Quantum retrodiction and radical-ion-pair recombination}
\subsection{Radical-ion-pair recombination from the single-molecule and from the ensemble perspective}
The density matrix of an ensemble of $N$ RPs is $\rho_t=\sum_{i=1}^{N}|\psi_i(t)\rangle\langle\psi_i(t)|$, where $|\psi_{i}\rangle$ is the spin state of the $i$-th RP. Each $|\psi_i\rangle$ has suffered a number of S- or T-quantum jumps until the time $t$. Due to recombination $N$ is time-dependent, since if the $i$-th RP recombines at time $t$, its quantum state $|\psi_i\rangle\langle\psi_{i}|$ at time $t$ must be subtracted from $\rho_t$ in order to update $\rho_t$ into $\rho_{t+dt}$. Although this is a simple physical picture from the perspective of quantum trajectories, it is not straightforward to translate it into a master equation. The root of the difficulty is S-T dephasing, which transforms a pure initial state into a mixture.
As is well known, there is no unique way to unravel a density matrix into its component pure states. Hence we have to make do with the following physical scenario. Given the density matrix $\rho_t$ at some time $t$, and given the {\it measured} singlet and triplet neutral products during the infinitesimal interval $dt$, $dn_{\rm S}$ and $dn_{\rm T}$, respectively, how do we update $\rho_t$ into $\rho_{t+dt}$? In general, the change $d\rho=\rho_{t+dt}-\rho_t$ is caused by (i) the change of state of RPs that did not recombine during $dt$, call it $d\rho_{\rm decoh}$, given by \eqref{MEnr} and (ii) the RPs that did recombine during $dt$, call it $d\rho_{\rm recomb}$, i.e. $d\rho=d\rho_{\rm decoh}+d\rho_{\rm recomb}$. Clearly, $\rm Tr\{d\rho\}=\rm Tr\{d\rho_{\rm recomb}\}=-dn_{\rm S}-dn_{\rm T}$, but that alone cannot lead to the form of $d\rho_{\rm recomb}$.
We will now derive $d\rho_{\rm recomb}$ using the formal tools of quantum retrodiction. We then compare the predictions of the new master equation to the Monte Carlo simulation. The latter turns out to be a very useful tool, since dealing with an ensemble of pure states allows us, by default, to subtract the particular component states $|\psi_i\rangle$ of the recombined RPs from the considered ensemble.
\subsection{Radical-ion-pair recombination and quantum retrodiction}
The predictive approach to quantum measurements, which we are most familiar with, addresses the question: given the density matrix describing a physical system, what are the probabilities of specific measurement outcomes? The so-called retrodictive approach \cite{retro1,retro2}, used less often, is about the reverse: given a specific measurement outcome, what is the probability that the system's state prior to the measurement was this or that? Quantum retrodiction is relevant to quantum communication \cite{retro3,retro4}, since Bob, the receiver of quantum information, attempts to reconstruct the quantum state delivered to him by Alice, the sender, based on specific measurement outcomes.
The idea relating RP recombination to the concept of retrodiction and S-T coherence is the following. When an RP is in a particular state $|\psi\rangle$ just before it recombines, we must subtract $|\psi\rangle\langle\psi|$ from the density matrix to account for this recombination event. But since S-T dephasing produces a mixture of pure states, given the recombination product, which is either the singlet or the triplet ground state, one cannot unambiguously retrodict the pre-recombination state $|\psi\rangle$. A singlet recombination could for example result from a singlet RP as much as from an S-T coherent RP. The theory of quantum retrodiction allows us to retrodict $|\psi\rangle$ "on average". The way this is done depends on how coherent the RP state described by the density matrix $\rho$ is, hence the necessity of defining $p_{\rm coh}$.
This is seen by examining the two extreme cases of minimum and maximum S-T coherence, for which $d\rho_{\rm recomb}$ is straightforward to derive.
Based on the general decomposition \eqref{decomp}, the theory of quantum retrodiction can then be seamlessly applied in the general case of a density matrix with partial S-T coherence.
\subsection{Recombination of maximally coherent radical-ion pairs}
Suppose that at time $t$ we have an ensemble of $N$ RPs all in some maximally S-T coherent state $|\psi\rangle$. Suppose further that the only change during the interval $dt$ is the recombination of just one RP, i.e. the detection of one neutral product. Clearly, scaling the normalization of $\rho$ from 1 to $N$ just for the sake of this discussion, we have $\rho_t=N|\psi\rangle\langle\psi |$ and $\rho_{t+dt}=(N-1)|\psi\rangle\langle\psi |$, since now we have one less RP in the state $|\psi\rangle$. This can be formalized as follows. For a maximally coherent ensemble of RPs all in the same state $|\psi\rangle$, the single-molecule density matrix will be $\rho/\rm Tr\{\rho\}$. If we define $\delta\rho_{\rm coh}^{\rm 1S}$ ($\delta\rho_{\rm coh}^{\rm 1T}$) to be the change in $\rho$ due to the measurement of {\it just one} singlet (triplet) neutral product, it will be
\begin{equation}
\delta\rho_{\rm coh}^{\rm 1S}=\delta\rho_{\rm coh}^{\rm 1T}=-{\rho\over {\rm Tr\{\rho\}}}\label{drcoh}
\end{equation}
\subsection{Recombination of maximally incoherent radical-ion pairs}
In the other extreme, suppose that $\rho_t$ is a maximally incoherent mixture of singlet and triplet RPs, i.e. $\rho_t=\rho_{\rm SS}+\rho_{\rm TT}$. Then the detection of a singlet (triplet) recombination product leads us to conclude with certainty that it resulted from a singlet (triplet) RP and hence we can reduce the population of singlet (triplet) RPs by one. If we define $\delta\rho_{\rm incoh}^{\rm 1S}$ ($\delta\rho_{\rm incoh}^{\rm 1T}$) to be the change in $\rho$ due to the recombination of {\it just one} singlet (triplet) RP, it will be
\begin{align}
\delta\rho_{\rm incoh}^{\rm 1S}&=-{{{\rm Q_S}\rho{\rm Q_S}}\over {\rm Tr\{ {\rm Q_S}\rho {\rm Q_S}\}}}=-{{{\rm Q_S}\rho{\rm Q_S}}\over {\rm Tr\{\rho {\rm Q_S}\}}}\label{drincohS}\\
\delta\rho_{\rm incoh}^{\rm 1T}&=-{{{\rm Q_T}\rho{\rm Q_T}}\over {\rm Tr\{ {\rm Q_T}\rho {\rm Q_T}\}}}=-{{{\rm Q_T}\rho{\rm Q_T}}\over {\rm Tr\{\rho {\rm Q_T}\}}}\label{drincohT}
\end{align}
The last equality in the above equations follows from the cyclic property of the trace and the fact that ${\rm Q_S}$ and ${\rm Q_T}$ are projectors, hence idempotent.
\subsection{Recombination of radical-ion pairs having partial S-T coherence}
We will now use the formalism of quantum retrodiction to derive the reaction operators for the general case of partial S-T coherence. The retrodiction formalism \cite{retro3,retro4} uses the preparation operators $\Lambda_{i}$ and the measurement operators $\Pi_{j}$. In particular, suppose that a system is prepared in a state $\rho_{i}$ with probability $P(i)$. The preparation operator is then defined as $\Lambda_{i}=P(i)\rho_{i}$. If the particular preparation is unknown then we have to average over all possible preparations and the system will be described by the density matrix $\rho=\sum_{i}\Lambda_{i}$. Suppose further that a measurement defined by the POVM set $\Pi_{i}$, where $\sum_{i}\Pi_{i}=\mathbbmtt{1}$, returns the $j$-th result. Defining $\rho_{j}^{r}=\Pi_{j}/\rm Tr\{\Pi_{j}\}$, the main result of retrodiction theory is that the {\it conditional probability} that state $\rho_{i}$ was prepared, given the measurement result $j$ is
\begin{equation}
P(i|j)={{\rm Tr\{\Lambda_{i}\rho_{j}^{r}\}}\over {\sum_{i}\rm Tr\{\Lambda_{i}\rho_{j}^{r}\}}}\label{retr}
\end{equation}
The POVM set of measurement operators of interest in our case consists of $\Pi_{1}={\rm {\rm Q_S}}$ and $\Pi_{2}={\rm Q_T}$, already mentioned to satisfy the condition ${\rm {\rm Q_S}}+{\rm Q_T}=\mathbbmtt{1}$. As shown before, the general form of the RP density matrix at time $t$ can be written as $\rho=\Lambda_{1}+\Lambda_{2}=(1-p_{\rm coh})\rho_{\rm incoh}+p_{\rm coh}\rho_{\rm coh}$, i.e. we identify $\Lambda_{1}=(1-p_{\rm coh})\rho_{\rm incoh}$ and $\Lambda_{2}=p_{\rm coh}\rho_{\rm coh}$, where $\rho_{\rm coh}$ and $\rho_{\rm incoh}$ have been defined by \eqref{rhocoh} and \eqref{rhoincoh}, respectively.
Suppose that during the interval $dt$ we have detected one $x$ neutral product, where $x={\rm S,T}$. To apply Eq. \eqref{retr}, we note that since $\rho_{x}^{r}={\rm Q}_x/\rm Tr\{{\rm Q}_x\}$, the denominator $\rm Tr\{{\rm {\rm Q}_x}\}$ of $\rho_{x}^{r}$ will drop out of Eq. \eqref{retr}. Further, since $\rho=\sum_{i}\Lambda_{i}$, the denominator in Eq. \eqref{retr} is proportional to the expectation value of ${\rm Q}_x$ at time $t$, i.e. $\sum_{i}\rm Tr\{\Lambda_{i}\rho_{x}^{r}\}\propto\rm Tr\{\rho {\rm {\rm Q}_x}\}$, hence given the detection of one $x$ neutral product, the probabilities that it originated either from $\rho_{\rm incoh}$ or from $\rho_{\rm coh}$ are
\begin{align}
P({\rm incoh}|x)&={{\rm Tr\{\Lambda_{1}{\rm Q}_x\}}\over {\rm Tr\{\rho{\rm Q}_x\}}}=(1-p_{\rm coh}){{\rm Tr\{\rho_{\rm incoh} {\rm Q}_x\}}\over {\rm Tr\{\rho{\rm Q}_x\}}}\nonumber\\
P({\rm coh}|x)&={{\rm Tr\{\Lambda_{2}{\rm Q}_x\}}\over {\rm Tr\{\rho{\rm Q}_x\}}}=p_{\rm coh}{{\rm Tr\{\rho_{\rm coh} {\rm Q}_x\}}\over {\rm Tr\{\rho{\rm Q}_x\}}}
\end{align}
Since the expectation value of ${\rm Q}_x$ in $\rho$ is the same as in $\rho_{\rm incoh}$ and $\rho_{\rm coh}$, it readily follows that
\begin{align}
P({\rm incoh}|{\rm S})&=P({\rm incoh}|{\rm T})=1-p_{\rm coh}\nonumber\\
P({\rm coh}|{\rm S})&=P({\rm coh}|{\rm T})=p_{\rm coh}\nonumber
\end{align}
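As a minimal numerical illustration of Eq. \eqref{retr} and of the probabilities just derived, the following Python sketch builds a toy two-dimensional S-T space and verifies that the retrodicted probabilities reduce to $1-p_{\rm coh}$ and $p_{\rm coh}$; all matrices and the value of $p_{\rm coh}$ are illustrative choices, not the paper's model RP.

```python
import numpy as np

# Toy 2D Hilbert space spanned by |S> and |T>; projectors onto the
# singlet and triplet subspaces (illustrative, not the paper's model RP).
QS = np.diag([1.0, 0.0])
QT = np.diag([0.0, 1.0])

p_coh = 0.3                      # illustrative degree of S-T coherence
# rho_incoh and rho_coh share the same diagonal (same <QS>, <QT>);
# rho_coh additionally carries the maximal off-diagonal S-T coherence.
rho_incoh = np.diag([0.6, 0.4])
c = np.sqrt(0.6 * 0.4)
rho_coh = np.array([[0.6, c], [c, 0.4]])

# Preparation operators and the total density matrix rho = Lambda1 + Lambda2
L1 = (1 - p_coh) * rho_incoh
L2 = p_coh * rho_coh
rho = L1 + L2

def retrodict(Lam, Q):
    """Conditional probability of Eq. (retr) for measurement operator Q."""
    return np.trace(Lam @ Q) / np.trace(rho @ Q)

for Q in (QS, QT):
    # Detecting one S (or T) product retrodicts the incoherent part
    # with probability 1 - p_coh and the coherent part with p_coh.
    assert np.isclose(retrodict(L1, Q), 1 - p_coh)
    assert np.isclose(retrodict(L2, Q), p_coh)
print("P(incoh|x) =", 1 - p_coh, " P(coh|x) =", p_coh)
```

The point reproduced here is that $\rho_{\rm coh}$ and $\rho_{\rm incoh}$ share the same populations, so the denominators cancel and the conditional probabilities are independent of which product, S or T, was detected.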
We have shown how the density matrix changes upon detecting just one product in the extreme cases of maximum/minimum coherence. In the general case when the RP ensemble is described by $\rho$, detecting
{\it just one} singlet (triplet) neutral product leads to a change in $\rho$ given by $\delta\rho^{\rm 1S}$ ($\delta\rho^{\rm 1T}$), where
\begin{align}
\delta\rho^{\rm 1S}&=P({\rm incoh}|{\rm S})\delta\rho_{\rm incoh}^{\rm 1S}+P({\rm coh}|{\rm S})\delta\rho_{\rm coh}^{\rm 1S}\nonumber\\
\delta\rho^{\rm 1T}&=P({\rm incoh}|{\rm T})\delta\rho_{\rm incoh}^{\rm 1T}+P({\rm coh}|{\rm T})\delta\rho_{\rm coh}^{\rm 1T}\nonumber
\end{align}
The generalization to the case of detecting $dn_{\rm S}=k_{\rm S}dt\rm Tr\{\rho{\rm Q_S}\}$ singlet and $dn_{\rm T}=k_{\rm T}dt\rm Tr\{\rho{\rm Q_T}\}$ triplet neutral products is now straightforward:
\begin{align}
d\rho_{\rm recomb}&=dn_{\rm S}\delta\rho^{\rm 1S}+dn_{\rm T}\delta\rho^{\rm 1T}\label{recomb}
\end{align}
Since $\rm Tr\{\delta\rho_{\rm coh}^{\rm 1S}\}=\rm Tr\{\delta\rho_{\rm coh}^{\rm 1T}\}=\rm Tr\{\delta\rho_{\rm incoh}^{\rm 1S}\}=\rm Tr\{\delta\rho_{\rm incoh}^{\rm 1T}\}=-1$, it follows that $\rm Tr\{d\rho_{\rm recomb}\}=-dn_{\rm S}-dn_{\rm T}$, as it should be.
Using \eqref{MEnr} and \eqref{recomb}, we arrive at the master equation describing RP quantum dynamics:
\begin{align}
{{d\rho}\over {dt}}=&-i[{\cal H},\rho]\label{t1}\\
&-{{k_{\rm S}+k_{\rm T}}\over 2}\big(\rho {\rm Q_S}+{\rm Q_S}\rho-2{\rm Q_S}\rho {\rm Q_S}\big)\label{t2}\\
&-(1-p_{\rm coh})\big(k_{\rm S}{\rm Q_S}\rho {\rm Q_S}+k_{\rm T} {\rm Q_T} \rho {\rm Q_T}\big)\label{t3}\\
&-p_{\rm coh}{{dn_{\rm S}+dn_{\rm T}}\over {dt}}{\rho_{\rm coh}\over {\rm Tr\{\rho\}}}\label{t4}
\end{align}
The term in \eqref{t1} is the unitary Hamiltonian evolution which generates S-T coherence, the dissipation of which is given by the term \eqref{t2}, while \eqref{t3} and \eqref{t4} are the spin-dependent reaction terms. This master equation has a form identical to the one derived in \cite{komPRE2011}, the crucial difference being the new definition of $p_{\rm coh}$ and the last term \eqref{t4}, where we now have $\rho_{\rm coh}$ instead of the $\rho$ that was used phenomenologically in \cite{komPRE2011}.
Finally, we rewrite the master equation \eqref{t1}-\eqref{t4} in a more ``user-friendly'' form involving only the matrices $\rho_{xy}={\rm Q}_{x}\rho {\rm Q}_{y}$, where $x,y={\rm S,T}$:
\begin{align}
{{d\rho}\over {dt}}=&-i[{\cal H},\rho]\nonumber\\
&-{{k_{\rm S}+k_{\rm T}}\over 2}\big(\rho_{\rm ST}+\rho_{\rm TS}\big)\nonumber\\
&-(1-p_{\rm coh})\big(k_{\rm S}\rho_{\rm SS}+k_{\rm T}\rho_{\rm TT}\big)\nonumber\\
&-{1\over {\rm Tr\{\rho\}}}\big(k_{\rm S}\rm Tr\{\rho_{\rm SS}\}+k_{\rm T}\rm Tr\{\rho_{\rm TT}\}\big)\times\nonumber\\
&~~~\big(p_{\rm coh}\rho_{\rm SS}+p_{\rm coh}\rho_{\rm TT}+\rho_{\rm ST}+\rho_{\rm TS}\big)\nonumber
\end{align}
\begin{figure}
\includegraphics[width=8.0 cm]{ham_ev.eps}
\caption{(Color online) Time evolution of ${\rm Tr\{\tilde{\rho}\rm Q_S}\}$ (red solid line) and S-T coherence ${\cal C}(\tilde{\rho})$ (black dashed line) for the same RP considered in Fig.\ref{fig3}, taking into account only S-T mixing driven by the Hamiltonian ${\cal H}$, i.e. $d\tilde{\rho}/dt=-i[{\cal H},\tilde{\rho}]$. The singlet state obviously corresponds to zero S-T coherence, while the state in-between the extrema of ${\rm Tr\{\tilde{\rho}\rm Q_S}\}$ corresponds to an S-T superposition and hence maximum S-T coherence.}
\label{fig4}
\end{figure}
\begin{figure}
\includegraphics[width=8.5 cm]{qsexp_kS_eq_kT.eps}
\caption{(Color online) Time evolution of $\langle{\rm Q_S}\rangle$ including S-T mixing, S-T decoherence and recombination for the same RP Hamiltonian used in Figs.\ref{fig3}-\ref{fig4}, with $k_{\rm S}=k_{\rm T}=A/4$. (a) example of a single-RP quantum trajectory with initial state $|S\rangle\otimes|\uparrow\rangle$. (b) Monte Carlo simulation (red solid line) using 10,000 trajectories (two initial states $|S\rangle\otimes|\uparrow\rangle$ and $|S\rangle\otimes|\downarrow\rangle$, with 5000 trajectories for each), prediction of the master equation of this work (dashed line), and the earlier theory (solid line) introduced in \cite{komPRE2011}. The corresponding measure of S-T coherence $p_{\rm coh}$ is shown with the blue dotted line. The Monte Carlo and the theoretical prediction of this work coincide.}
\label{fig5}
\end{figure}
\section{Monte Carlo simulation of S-T decoherence and recombination using single-molecule quantum trajectories}
To the simulation presented in Section III we now add two additional possibilities in each time step $dt$: singlet and triplet recombination with probability $dr_{\rm S}=k_{\rm S}dt\langle{\rm Q_S}\rangle$ and $dr_{\rm T}=k_{\rm T}dt\langle{\rm Q_T}\rangle$, respectively. In the event that the $j$-th RP recombines within $dt$ at time $t$, its state $|\psi_{j}\rangle\langle\psi_{j}|$ is subtracted at time $t$ from the sum $\rho=\sum_{i}|\psi_{i}\rangle\langle\psi_{i}|$.
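The per-step branching just described can be sketched in Python. This is a single-trajectory sketch in a toy two-dimensional S-T space with an illustrative mixing Hamiltonian; the paper's model RP also contains a nuclear spin, and the S-T dephasing step of the Section III simulation is omitted here, so only unitary mixing and the two recombination channels are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D S-T space, basis {|S>, |T>}.  Hamiltonian and rates are
# illustrative choices, not the paper's model RP.
A = 1.0
H = 0.5 * A * np.array([[0.0, 1.0], [1.0, 0.0]])   # drives S-T mixing
kS = kT = A / 4.0
dt = 0.01
# Exact one-step propagator exp(-i H dt) via eigendecomposition
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

def trajectory(n_steps=2000):
    """One single-RP quantum trajectory; returns (step, 'S' or 'T') at
    recombination, or (n_steps, None) if the RP survives the run."""
    psi = np.array([1.0 + 0j, 0.0])                # singlet initial state
    for step in range(n_steps):
        pS = abs(psi[0]) ** 2                      # <Q_S> for this RP
        r = rng.random()
        if r < kS * dt * pS:                       # singlet recombination
            return step, "S"
        if r < kS * dt * pS + kT * dt * (1 - pS):  # triplet recombination
            return step, "T"
        psi = U @ psi                              # coherent S-T mixing
    return n_steps, None

# Ensemble average: fraction of RPs that have recombined by the end
results = [trajectory() for _ in range(500)]
frac_recombined = np.mean([ch is not None for _, ch in results])
print("recombined fraction:", frac_recombined)
```

Averaging observables such as $\langle{\rm Q_S}\rangle$ over many such trajectories, with recombined trajectories removed from the sum, reproduces the kind of Monte Carlo curve shown in Fig.\ref{fig5}b.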
For a more comprehensive understanding of the considerations to follow, we first show in Fig.\ref{fig4} just the Hamiltonian evolution (no decoherence, no reaction) of $\langle{\rm Q_S}\rangle=\rm Tr\{\tilde{\rho}{\rm Q_S}\}$ and ${\cal C}(\tilde{\rho})$ for the model RP considered in our numerical examples. Clearly, when $\langle{\rm Q_S}\rangle=1$ we have ${\cal C}(\tilde{\rho})=0$, as expected, since there is no coherence between the singlet and triplet subspaces. The coherence is maximal at intermediate times, in between the extrema of $\langle {\rm Q_S}\rangle$.
In Fig.\ref{fig5}a we depict a single-RP quantum trajectory, similar to the one shown in Fig.\ref{fig3}a but now also including recombination. The recombination rates are taken equal, $k_{\rm S}=k_{\rm T}$. In Fig.\ref{fig5}b we show that the newly derived master equation \eqref{t1}-\eqref{t4} yields the perfect agreement with the MC simulation that was lacking in the earlier theory. The MC simulation is the average of $10^4$ trajectories like the one shown in Fig.\ref{fig5}a. In Fig.\ref{fig5}b we also include the time evolution of $p_{\rm coh}$.
We next move to the asymmetric regime where $k_{\rm T}\neq 0$ and $k_{\rm S}=0$. This regime is of interest as it is realized in the RPs appearing in a large number of photosynthetic reaction centers \cite{matysik}. In Fig.\ref{fig6}a and Fig.\ref{fig6}b we again plot $\langle{\rm Q_S}\rangle$, for $k_{\rm S}=0$ with $k_{\rm T}=A/4$ and $k_{\rm T}=A/2$, respectively. While for the former we get a very good agreement between the Monte Carlo simulation and the master equation, the agreement is not perfect for the latter, though still much better than that of our earlier theory.
\begin{figure}
\includegraphics[width=8.5 cm]{qsexp_kS_neq_kT.eps}
\caption{(Color online) Plots similar to Fig.\ref{fig5}, but with asymmetric recombination rates. (a) $k_{\rm S}=0$, $k_{\rm T}=A/4$. (b) $k_{\rm S}=0$, $k_{\rm T}=A/2$.}
\label{fig6}
\end{figure}
We comment on this in Section IX.
\section{Decay rate of singlet-triplet coherence}
For the sake of completeness we present a comparison between our theory, the traditional (or Haberkorn) approach \cite{haberkorn} and the theory put forward by Jones \& Hore \cite{JH}. First we reiterate \cite{komPRE2011} that the traditional theory results from our theory by forcing $p_{\rm coh}=0$. We also note that our master equation \eqref{t1}-\eqref{t4} is identical to the Jones-Hore equation in the case $k_{\rm S}=k_{\rm T}$; in this special case $p_{\rm coh}$ drops out of our master equation \eqref{t1}-\eqref{t4}. In Fig.\ref{fig7}a we plot the time evolution of $\langle{\rm Q_S}\rangle$ for all three theories, which look qualitatively quite similar. Their most obvious difference is how fast the S-T coherence is lost. By inspection, the amplitude of the S-T oscillations in Fig.\ref{fig7}a decays fastest in the Jones-Hore theory, slower in our theory, and slowest in the traditional approach. We will now rigorously quantify this observation by following a general approach equally applicable to all three theories. It is based on the general decomposition \eqref{generalrho}; in particular, we will consider the coherent part of $\rho$, which is $\rho_{c}=\rho_{\rm ST}+\rho_{\rm TS}$.
In our master equation $\rho_{c}$ appears both in the term \eqref{t2} and in the term \eqref{t4}. The latter is obvious, while the former follows from simple operator manipulations leading to the identity ${\rm Q_S}\rho+\rho{\rm Q_S}-2{\rm Q_S}\rho{\rm Q_S}=\rho_{c}$. Thus, if we multiply the master equation \eqref{t1}-\eqref{t4} from the right by ${\rm Q_S}$ and from the left by ${\rm Q_T}$, then vice-versa, and take the sum of the two results, we find that $\rho_{c}$ obeys the equation
\begin{equation}
{{d\rho_{c}}\over{dt}}=-i[{\cal H},\rho]_{c}-\Gamma_{c}\rho_{c}\label{rc},
\end{equation}
where $[{\cal H},\rho]_{c}={\rm Q_S}[{\cal H},\rho]{\rm Q_T}+{\rm Q_T}[{\cal H},\rho]{\rm Q_S}$. The decay of $\rho_{c}$ is governed by the rate
\begin{equation}
\Gamma_{c}=k_{\rm S}\Big({1\over 2}+\langle{\rm \tilde{Q}_S}\rangle\Big)+k_{\rm T}\Big({1\over 2}+\langle{\rm \tilde{Q}_T}\rangle\Big),
\end{equation}
where we defined $\langle{\rm \tilde{Q}_x}\rangle=\rm Tr\{\rho{\rm Q}_x\}/\rm Tr\{\rho\}$ with $x={\rm S,T}$. Moreover, since it will be needed in the following, by taking the trace of both sides in \eqref{t1}-\eqref{t4} we find that $\rm Tr\{\rho\}$, the normalization of $\rho$, obeys the equation
\begin{equation}
{{d\rm Tr\{\rho\}}\over{dt}}=-\kappa\rm Tr\{\rho\}\label{trr},
\end{equation}
where
\begin{equation}
\kappa=k_{\rm S}\langle{\rm \tilde{Q}_S}\rangle+k_{\rm T}\langle{\rm \tilde{Q}_T}\rangle\label{kappa}
\end{equation}
We finally define the ``genuine'' S-T decoherence rate as $\gamma_{c}=\Gamma_{c}-\kappa$. This describes the decay of S-T coherence due to all effects other than the changing normalization of $\rho$. The definition follows if we normalize $\rho_{c}$ by $\rm Tr\{\rho\}$ and then use \eqref{rc} and \eqref{trr}: we indeed find that the decay rate of $\rho_{c}/\rm Tr\{\rho\}$ is $\gamma_c$.
We now consider two cases, (a) $k_{\rm S}=k_{\rm T}=k$, and (b) $k_{\rm S}=0$ and $k_{\rm T}=2k$, so that $k_{\rm S}+k_{\rm T}$ is the same in both cases.
In case (a) we find that $\Gamma_{c}=2k$, since $\langle{\rm \tilde{Q}_S}\rangle+\langle{\rm \tilde{Q}_T}\rangle=1$. Moreover, $\kappa=k$, hence $\gamma_{c}=k$.
In case (b) we find $\Gamma_{c}=k(1+2\langle{\rm \tilde{Q}_T}\rangle)$, while $\kappa=2k\langle{\rm \tilde{Q}_T}\rangle$, hence $\gamma_{c}=k$.
We will now perform the same calculation for the traditional and the Jones-Hore theory. We first note that the equations \eqref{trr} and \eqref{kappa} are common for all three theories.
The traditional master equation is $d\rho/dt=-i[{\cal H},\rho]-k_{\rm S}({\rm Q_S}\rho+\rho{\rm Q_S})/2-k_{\rm T}({\rm Q_T}\rho+\rho{\rm Q_T})/2$. Again, multiplying from left and right with the projection operators as before we find that the decay rate of $\rho_{c}$ is $\Gamma_{c}=(k_{\rm S}+k_{\rm T})/2$. In case (a) it is found that $\gamma_{c}=0$, while in case (b) we get $\gamma_{c}=k(1-2\langle{\rm \tilde{Q}_T}\rangle)$.
\begin{figure}
\includegraphics[width=8.5 cm]{comparison.eps}
\caption{(Color online) Comparison of the three theories for the case presented in Fig.\ref{fig6}b, i.e. $k_{\rm S}=0$ and $k_{\rm T}=A/2$. (a) The S-T coherence, embodied by the amplitude of the oscillation of $\langle{\rm Q_S}\rangle$, decays fastest in the Jones-Hore theory, slower in our theory, and slowest in the traditional theory. (b) The corresponding decay rate $\gamma_{c}/k$, where $k_{\rm T}=2k$.}
\label{fig7}
\end{figure}
The Jones-Hore master equation is $d\rho/dt=-i[{\cal H},\rho]-k_{\rm S}({\rm Q_S}\rho+\rho{\rm Q_S}-{\rm Q_S}\rho{\rm Q_S})-k_{\rm T}({\rm Q_T}\rho+\rho{\rm Q_T}-{\rm Q_T}\rho{\rm Q_T})$.
We similarly find that $\Gamma_{c}=k_{\rm S}+k_{\rm T}$. Then in case (a) it follows that $\gamma_{c}=k$ and in case (b) $\gamma_{c}=2k(1-\langle{\rm \tilde{Q}_T}\rangle)$. For clarity we summarize the results in Table I.
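The case (a) entries of Table I for the traditional and Jones-Hore theories can be checked numerically. The sketch below (in a toy two-dimensional S-T space with illustrative matrices) integrates both master equations with ${\cal H}=0$, so that only the reaction terms act, starting from the maximally coherent state $(|{\rm S}\rangle+|{\rm T}\rangle)/\sqrt{2}$, and extracts the decay rate of the normalized coherence $\rho_{c}/\rm Tr\{\rho\}$.

```python
import numpy as np

# Toy 2D S-T space (illustrative); projectors onto S and T subspaces.
QS = np.diag([1.0, 0.0])
QT = np.diag([0.0, 1.0])
k = 1.0
kS = kT = k          # case (a): equal recombination rates
dt = 1e-4
n = 20000            # integrate up to t = 2/k

def traditional(r):
    """Reaction part of the traditional (Haberkorn) master equation."""
    return -kS * (QS @ r + r @ QS) / 2 - kT * (QT @ r + r @ QT) / 2

def jones_hore(r):
    """Reaction part of the Jones-Hore master equation."""
    return (-kS * (QS @ r + r @ QS - QS @ r @ QS)
            - kT * (QT @ r + r @ QT - QT @ r @ QT))

def gamma_c(rhs):
    """Decay rate of the normalized S-T coherence rho_c / Tr(rho)."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)       # (|S> + |T>)/sqrt(2)
    r = np.outer(plus, plus)
    c0 = abs(r[0, 1]) / np.trace(r).real
    for _ in range(n):                             # Euler integration, H = 0
        r = r + dt * rhs(r)
    c1 = abs(r[0, 1]) / np.trace(r).real
    return -np.log(c1 / c0) / (n * dt)

print("traditional:", gamma_c(traditional))
print("Jones-Hore :", gamma_c(jones_hore))
```

The fitted rates come out close to $0$ and $k$, respectively, in agreement with the first row of Table I.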
The asymmetric case $k_{\rm T}\gg k_{\rm S}$ together with the singlet initial state is the regime of the quantum Zeno effect \cite{komPRE2009,pascazio,pascazio_review,kurizki} (most pronounced if $k_{\rm T}\gg\Omega$, where $\Omega$ is the S-T mixing frequency). In this regime, when the RP's spin state is about to evolve away from the initial singlet state, it is strongly projected back onto it due to the high $k_{\rm T}$. Thus, $\langle {\rm \tilde{Q}_S}\rangle$ decreases slowly from its initial value of 1, and hence $\langle {\rm \tilde{Q}_T}\rangle$ can be quite small, in particular much smaller than 1/2. This observation is common to all three theories. It thus follows that $2k(1-\langle\tilde{\rm Q}_{\rm T}\rangle)>k>k(1-2\langle\tilde{\rm Q}_{\rm T}\rangle)$. Indeed, as shown in Fig.\ref{fig7}b, the Jones-Hore theory predicts the largest decay rate for the S-T coherence, ours is intermediate, and the traditional theory's is the smallest.
\begin{table}
\caption{Decay rate of S-T coherence $\gamma_{c}$}
\begin{ruledtabular}
\begin{tabular}{|c|ccc|}
$\gamma_{c}$ & This work & Trad. theory & J.-H. theory\\
\hline
$k_{\rm S}=k_{\rm T}=k$ & $k$ & $0$ & $k$ \\
$k_{\rm S}=0$, $k_{\rm T}=2k$ & $k$ & $k(1-2\langle{\rm \tilde{Q}_T}\rangle)$ & $2k(1-\langle{\rm \tilde{Q}_T}\rangle)$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Discussion}
We will finally comment on the success of the master equation \eqref{t1}-\eqref{t4} in matching the MC simulation, which has built in the fundamental physical processes of RP reactions at the single-molecule level. While for the case $k_{\rm S}=k_{\rm T}$ there is a perfect agreement between theory and MC, independent of the particular definition of $p_{\rm coh}$, for the asymmetric case $k_{\rm T}\gg k_{\rm S}$ the theory-MC deviation becomes more noticeable the higher $k_{\rm T}$ is. For most practical purposes such a small deviation should be of little concern; it is nevertheless worthwhile to discuss.
To our understanding, the problem is an underestimation of S-T coherence that can hardly be overcome, even in principle. The reason is the impossibility of unraveling a density matrix into its component pure states. S-T decoherence will produce a mixture of S-T coherent, yet dephased states, which when described by a density matrix will look equivalent to a mixture of S-T incoherent and S-T coherent states, as we have shown with the decomposition into $\Lambda_1$ and $\Lambda_2$. To exacerbate the problem for the sake of this discussion, consider for example a mixture of the coherent states $|\psi_1\rangle=(|{\rm S}\rangle+|{\rm T}_0\rangle)/\sqrt{2}$ and $|\psi_2\rangle=(|{\rm S}\rangle-|{\rm T}_0\rangle)/\sqrt{2}$ with equal weights. Then $\rho={1\over 2}|\psi_1\rangle\langle\psi_1|+{1\over 2}|\psi_2\rangle\langle\psi_2|={1\over 2}(|{\rm S}\rangle\langle{\rm S}|+|{\rm T}_0\rangle\langle{\rm T}_0|)$. This state appears maximally incoherent, yet it is formed by maximally coherent states. Having access only to the information embodied by $\rho$, it is impossible to unravel or retrodict the constituents $|\psi_1\rangle$ or $|\psi_2\rangle$.
From \eqref{t3} it is seen that in the asymmetric case where $k_{\rm S}=0$, if $p_{\rm coh}$ is underestimated, then we remove a correspondingly larger triplet character from $\rho$, and hence $\rho$ appears to be more singlet than it really is, as is evident from Fig.\ref{fig6}, i.e. the master equation overshoots the MC. Moreover, this deviation is noticeable at the minima of $p_{\rm coh}$, while it is indiscernible at the maxima of $p_{\rm coh}$. Again, this is due to the reaction term \eqref{t3} of the master equation, which is more pronounced for low values of $p_{\rm coh}$.
We finally reiterate that what we have treated is the fundamental quantum dynamics of RP reactions governed by the physical processes inherent in the radical-pair mechanism, i.e. S-T dephasing and charge recombination, stemming from virtual and real transitions to the products' vibrational reservoirs, respectively. Clearly, other sources of decoherence could be present, which are either fundamental or technical, and the manifestation of which could depend on the physical realization of the RP dynamics, e.g. whether the molecules are in solution or in the solid state as in photosynthetic reaction centers. Dephasing due to a bath of surrounding nuclear spins that have not been included in the magnetic Hamiltonian has analogues in the study of quantum dots \cite{rosen,petta1,petta2} and has been considered by several authors \cite{kavokin,briegel,walters}. To our understanding, a consensus on the physical significance and the quantitative details of this hyperfine relaxation is still lacking in the literature. Whether the S-T dephasing we consider is a dominant process or not will ultimately depend on the comparison between the particular recombination rates $k_{\rm S}$ and $k_{\rm T}$ of the RP under consideration and the hyperfine relaxation rate, or in general, the rates of other relaxation processes in the particular RP environment.
A detailed understanding of the interplay of all possible decoherence mechanisms, whether fundamental or technical, is outside the scope of this work. It is, however, a basic requirement for connecting the microscopic dynamics of RP reactions with behavioral observations of the avian compass mechanism, a non-trivial exercise recently undertaken in \cite{vedral,kaszli,gauger,kaszli2}.
\section{Conclusions}
To summarize, we have used formal considerations for quantifying the strength of singlet-triplet coherence in radical-ion pairs, which is central for understanding their quantum state evolution. We have also applied the formalism of quantum retrodiction to provide a theoretically solid basis for deriving the master equation for radical-ion-pair quantum dynamics. This represents a refinement of our previous work, which is substantiated by Monte Carlo simulations. These have their own interest as they can realistically and precisely simulate the dynamics of RP reactions including all relevant physical processes. For most practical purposes, however, the master equation we derive should be adequate.
This work is about the self-consistency of our approach and not about making the case of which among the competing theories is the correct one. In other words, if the model presented in Fig. \ref{fig2} is a physically adequate model for describing RPs, as we believe it is, our newly introduced master equation represents a first-principles result alleviating problems with our previous phenomenological treatment. Nevertheless, we have compared the predictions of our approach with the other two competing theories and discussed in detail how all three theories describe the decay of S-T coherence, which is a central observable in RP reactions.
Q: Proof of the limit of product of sequences

I am trying to prove
if $\left\{a_n\right\}$ and $\left\{b_n\right\}$ are two sequences such that $a_n \to a$ and $b_n \to b$, then $a_nb_n \to ab$
I already know the proof using boundedness, but I tried it this way.
Using the fact that
$$a_{n} b_{n}-a b=\left(a_{n}-a\right)\left(b_{n}-b\right)+a\left(b_{n}-b\right)+b\left(a_{n}-a\right)$$
We get
$$|a_nb_n-ab|\leq |a_n-a||b_n-b|+|a||b_n-b|+|b||a_n-a|\to (1)$$
Now since $a_n,b_n$ are convergent, we have for every $\epsilon_1 >0$, $\exists n_0 \in \mathbb{N}$ such that $\forall n \geq n_0$
$$|a_n-a|<\epsilon_1 \to (2)$$
Like-wise for every $\epsilon_2>0$,
$\exists n_0' \in \mathbb{N}$ such that $\forall n \geq n_0'$
$$|b_n-b|<\epsilon_2 \to (3)$$
Using $(2),(3)$ in $(1)$ together with the triangle inequality we get
$$|a_nb_n-ab|<\epsilon_1\epsilon_2+|a|\epsilon_2+|b|\epsilon_1$$
whenever $n \geq \max(n_0,n_0')$
Now how to conclude?
A: This is a good time to use a trick from analysis. You have all the epsilons; you just have to show they go to zero. This is where you left off:
$$|a_nb_n-ab|<\epsilon_1\epsilon_2+|a|\epsilon_2+|b|\epsilon_1$$
whenever $n \geq max(n_0,n_0')$
We first define an arbitrary 'master' epsilon, $\epsilon$. Then we lower $\epsilon_1$ and $\epsilon_2$ as far as we need to get the total under $\epsilon$. (We can lower them as far as we want because they are arbitrarily small.)
For any $\epsilon >0$ we can choose $\epsilon_1 < \min{\left(1, \ \frac{\epsilon}{3(|b|+1)}\right)}$ and $\epsilon_2 < \min{\left(1, \ \frac{\epsilon}{3(|a|+1)}\right)}$; the $+1$ in the denominators avoids division by zero when $a$ or $b$ is $0$, and also guarantees $\epsilon_1\epsilon_2 < \frac{\epsilon}{3}$, since $\epsilon_1 < \frac{\epsilon}{3}$ and $\epsilon_2 < 1$. Now $$|a_nb_n-ab|<\epsilon_1\epsilon_2+|a|\epsilon_2+|b|\epsilon_1 < \frac{\epsilon}{3} +\frac{\epsilon}{3}+\frac{\epsilon}{3} = \epsilon.$$
So $|a_nb_n-ab| \rightarrow 0$, and we arrive at $a_nb_n \rightarrow ab$.
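As a quick numerical sanity check (not a substitute for the proof), one can watch both the error $|a_nb_n-ab|$ and the triangle-inequality bound shrink for a concrete pair of sequences; $a_n = 2 + 1/n$ and $b_n = -3 + (-1)^n/n$ are arbitrary illustrative choices.

```python
# Numerical illustration: a_n -> 2, b_n -> -3, so a_n * b_n -> -6.
a, b = 2.0, -3.0

def a_n(n): return a + 1.0 / n
def b_n(n): return b + (-1.0) ** n / n

for n in (10, 100, 1000, 10000):
    err = abs(a_n(n) * b_n(n) - a * b)
    bound = (abs(a_n(n) - a) * abs(b_n(n) - b)
             + abs(a) * abs(b_n(n) - b) + abs(b) * abs(a_n(n) - a))
    # The error never exceeds the triangle-inequality bound, and both -> 0.
    assert err <= bound + 1e-12
    print(n, err, bound)
```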
Lore: The Vex
10/21/2014 PlanetDestiny Analysis
Unfortunately much of the game's lore is hidden within the Grimoire cards.
Thanks to Reddit user KentsFood, who compiled information & lore about the Vex (mostly from the Grimoire), which you can read below.[divider]
The Vex are architects of ancient and complex structures on Venus and Mercury. Linked by a data network unlike any on Earth, they operate in unison, directed by a single unfathomable purpose.
Vex Programming: Hezen Corrective, Hezen Prime, Virgo Prohibition, Aphix Invasive, Sol Divisive.
Bodies of animate metal with traces of living matter within, the Vex are an enigma, traveling through time and space with their gateways.
It is not clear if they have arrived here from the future, the past, or from another past or future. What is clear is that they represent a dire threat to humanity. They march with a singular purpose on the battlefield, driven by some unknown and unknowable intelligence, passionless and focused on completing their mysterious objectives.
The Vex transform the planets they occupy, terraforming inorganic landscape into massive computational structures.
Hezen Protective
Hezen Corrective
[divider]Units
To say we understand nothing about the Vex is a lie. We understand that they share a single mind, for lack of a better term, over millions of units. For all we know, there could be hundreds of different minds. The Vex employ a number of different chassis, each having a number of different functions and, from what we understand, computational abilities. Many of the Vex Axis Minds (elite super Vex that contain instructions for a particular objective, leaving other Vex to focus on their tasks and global planning to the Axis Mind) share similar structures with these units, and will be noted alongside that chassis.
Goblins are the basic units. They do not retreat, take cover, or falter. Unwavering, they will continue towards their objective until otherwise impeded. Headshots do nothing but enrage them, yet shooting the milky, protozoic substance at their core will dispatch them with ease.
Hobgoblins are specialised for sniping. Their eyes contain advanced optic sensors, their horns contain acute sensory modules that they use to fully grasp their surrounding enemies. Even stranger, Hobgoblins, when attacked, have a defence reflex that seals them in unbreakable stasis for a short amount of time.
Hobgoblins and Goblins have a milky substance at their core, which a record says is "Radiolarian." Radiolaria are protozoa found in the ocean on Earth (whether, after the Traveler came, they could be found on every other planet and moon, this cryptarch does not know), known for creating intricate mineral skeletons for themselves. Perhaps this means that the Vex use this substance for growth and repair.
Harpies (singular Harpy) are the fastest of the Vex, and capable of flight. They are often spotted in "flocks" on Vex scouting missions or on patrol. Harpies cannot move and fire at the same time, they must first stop and stabilise themselves before shooting.
Gorgons are found within the Vault of Glass, and have the shape of Harpies, though they appear to be members of the Sol Imminent (Descendants). The Gorgons possess ontological power: the ability to decide what is real and what is not. There is no countermeasure to this threat. If a Gorgon gazes upon a Guardian and determines him to have never existed, it will be so. Countless Guardians have been lost to this unbeatable weapon.
Zydron, Gate Lord
Prohibitive Mind
Sol Progeny
Minotaurs are strong, but interestingly, they're the builders of the Vex. Most of their computational power is devoted to constructing the various structures on Mars, Venus and Mercury suspected to extend through dimensions. Minotaurs are the toughest shells of the Vex, and will use their ability to teleport with aggression.
Most of the Vex Axis Minds are Minotaurs in structure. Gate Lords only appear in their physical form when summoned; otherwise they exist in a digital state. It is believed that the Gate Lords oversee travel between Vex Gates and that their minds contain codes to other dimensions, times and places. A Guardian defeated Zydron, a Gate Lord summoned on Venus, to retrieve its head for the Queen of the Reef. The Prohibitive Mind was in charge of the Vex assault on the Cabal on Mars. It coordinated Gate entrances and Vex attacks against key Cabal positions, until it was destroyed by a Guardian.
A Guardian was able to enter the Black Garden, and there they found the Sol Progeny. "Progeny" means children or descendants, and the models of these three Minotaurs coincided with past, present and future models of Vex units: Sol Primeval, Sol Divisive and Sol Imminent respectively. These Axis Minds were believed to serve as relays to whatever it was the Guardians destroyed in the Garden. The most interesting, and most lethal, of all the Minotaur-model Vex Axis Minds was Atheon, found in the Vault of Glass. Atheon's functions are hard to describe, but its believed goal is not.
Sekrion, Nexus Mind
The Gorgons
The Templar
Atheon, Time's Conflux
The Vex are suspected to be attempting to rewrite the Universe: to implant themselves so far forwards and backwards in time that they existed before all else and will exist until all else dies. Atheon controlled Vex Confluxes, believed to be extensions of the Vex Network. Atheon is essentially the centre of time, dimension and space travel for the Vex, and its destruction could prove essential in dampening the Vex's plans.
The largest mobile frame is the Hydra. Supercomputers, they can chew through data sent to them by all other Vex nearby in a matter of seconds. Not only this, but the Hydra boasts incredible firepower and a shield that absorbs all damage against it. When a Hydra is destroyed, the explosion it creates is powerful and deadly, but can be used to a Guardian's advantage.
Only two Hydra Axis Minds are known to exist. The first is Sekrion, a Nexus Mind in charge of overseeing the Vex Network's implantation in the surface of Venus. Sekrion was likely in charge of relaying plans to Minotaurs around the planet on where to construct Confluxes and Nexuses. Its destruction could halt the Vex's plans for Venus.
The other is the fabled Templar. Guarding the way to Atheon, the Templar and its Oracles had the ability to perceive any point in time and decide what should happen, in something called the Ritual of Negation. During the ritual, the Oracles told the Templar what they believed the future should be in favour of the Vex, and the Templar made it so. Not only this, but it was fully surrounded (unlike most Hydras) by an impregnable shield, meaning it was impossible to damage. Guardians found a way to destroy it, however, and its terrible reign over the Vault was ended.
Finally, the Vex have massive, immobile sentries called Cyclopses. To Guardians they are huge turrets with devastating Void fire, but they are believed to actually be sensors or beacons, their weapon systems secondary to their actual function: either relaying instructions or enabling Vex travel.
Virgo Prohibition
Sol Divisive
[divider]Vex Subtypes
Mentioned a few times above, different groups of Vex have different objectives, all eventually leading to whatever grand plan the Vex are working towards. The Hezen Corrective, for example, seek out the Fallen House of Winter and aggressively campaign against it, build Confluxes and apparently show interest in Golden Age ruins, all around the Ishtar region. Most of the Vanguard's contact with the Vex has been with the Corrective.
The Hezen Protective is the next major subtype on Venus. They are a defensive unit, focused around the Citadel, a massive Vex Gate, and the Vault of Glass. The Protective, as all Vex, are working towards a goal which is unclear to us at this moment. They could be reacting to something that has not happened yet, or to something that happened centuries ago. It is never clear with the Vex.
On Mars, the Virgo Prohibition are relentlessly aggressive Vex in constant conflict with the Cabal. However, despite the Vex's efforts, the Cabal have taken several key Vex gates and confluxes on Mars, and the Prohibition appear to be failing. Is it possible that the Vex made a mistake and misjudged an enemy? Or are they simply working towards something larger, more grand? As appears to be normal in this compilation of data, the Vex only create more questions instead of answers.
In the Black Garden, a subtype of Vex called the Sol Divisive protect and worship the Black Garden. Their function is unknown, but they must be important as the Garden clearly was instrumental to the Vex.
In the Vault of Glass, two additional Vex subtypes were uncovered: the Sol Primeval and the Sol Imminent, Precursors and Descendants respectively. The Primeval appear to be from the past, built long before the Vex we know today, while the Descendants are the opposite: advanced, future Vex. Does this mean that the Vex are unbeatable? That our future is extinction?
[divider]Other Information
During a mission into the Archive, an old research bank in the Ishtar Academy, Guardians uncovered several research logs of a team of scientists studying the Vex. Their names: Esi, Sundaresh, Shim and Duane-McNiadh. They uncovered a Vex specimen running an extremely accurate simulation of the team itself, the sim mimicking the team's actions. They began to believe that they themselves were simulations and that the Vex were in control. The original idea to create the Warminds may have been born in that research facility as a way to "call for help". It was believed the Vex would not be able to simulate the Warmind perfectly.
The researchers' bodies were found at the entrance to the Archive, indicating they sealed it to stop something getting in, or out.
The Vault of Glass was discovered by a Titan named Kabr. He descended into the Vault and fashioned armour out of the Vex he destroyed. Kabr is believed to have been responsible for the construction of the Relic, also rumoured to be hammered out of the shell of a Gorgon. The Relic has the same capability as a Gorgon (to decide what is real and what is not), so this rumour has merit. Since he was apparently unable to erase the Templar, Gorgons or Atheon, it is assumed he perished in the Vault, after relaying information of its whereabouts and danger to the Vanguard. Six Guardians ventured into the Vault and successfully halted Vex plans there, destroying the Templar and Atheon.
During the Collapse, the planet Mercury was completely overrun by the Vex. Once a beautiful, Eden-like world, Mercury now sits covered in forgotten Vex structures. There is no life there anymore, so Lord Shaxx has been training Guardians there in the Crucible, preparing them for fighting in Vex structures.
The Black Garden is perhaps one of the more confusing Vex mysteries, and for the Vex that's saying a lot. The Garden does not appear to be on Mars, yet Mars is where it is located. The Vex worship the Garden, but it is unknown whether they were the creators or the created of the Garden. Legends of the Garden have existed since the Golden Age, too complex for me to repeat here. The Sol Progeny defended it, activating one unit at a time, indicating its power was not yet full.
The Garden could be a source of Vex power; it could be a manifestation of the Darkness itself; it could simply be a form of Vex we had never encountered before. What is certain is that its destruction returned Light to the Traveler, and a hindrance to the Vex's infinite plans.
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="AndroidLayouts">
<shared>
<config />
</shared>
</component>
<component name="AndroidLogFilters">
<option name="TOOL_WINDOW_CONFIGURED_FILTER" value="Show only selected application" />
</component>
<component name="ChangeListManager">
<list default="true" id="1f4368e2-90a2-4185-816f-51c2f3cdb986" name="Default" comment="">
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/app-release.apk" afterPath="$PROJECT_DIR$/app/app-release.apk" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/cache.properties.lock" afterPath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/cache.properties.lock" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/fileHashes.bin" afterPath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/fileHashes.bin" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/fileSnapshots.bin" afterPath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/fileSnapshots.bin" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/taskArtifacts.bin" afterPath="$PROJECT_DIR$/.gradle/2.8/taskArtifacts/taskArtifacts.bin" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_recuperation.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_recuperation.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_traitement.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_traitement.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Constants.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Constants.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Information.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Information.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Jour.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Jour.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/JourEmploi.java" afterPath="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/JourEmploi.java" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/build.gradle" afterPath="$PROJECT_DIR$/app/build.gradle" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/build/intermediates/dex-cache/cache.xml" afterPath="$PROJECT_DIR$/build/intermediates/dex-cache/cache.xml" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/app/src/main/res/layout/reveil_activity.xml" afterPath="$PROJECT_DIR$/app/src/main/res/layout/reveil_activity.xml" />
<change type="MODIFICATION" beforePath="$PROJECT_DIR$/.idea/workspace.xml" afterPath="$PROJECT_DIR$/.idea/workspace.xml" />
</list>
<ignored path="empli-tude.iws" />
<ignored path=".idea/workspace.xml" />
<option name="EXCLUDED_CONVERTED_TO_IGNORED" value="true" />
<option name="TRACKING_ENABLED" value="true" />
<option name="SHOW_DIALOG" value="false" />
<option name="HIGHLIGHT_CONFLICTS" value="true" />
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="ChangesViewManager" flattened_view="true" show_ignored="false" />
<component name="CreatePatchCommitExecutor">
<option name="PATCH_PATH" value="" />
</component>
<component name="ExecutionTargetManager" SELECTED_TARGET="default_target" />
<component name="FavoritesManager">
<favorites_list name="empli-tude" />
</component>
<component name="FileEditorManager">
<leaf>
<file leaf-file-name="ADE_traitement.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_traitement.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="94" column="18" selection-start-line="94" selection-start-column="18" selection-end-line="94" selection-end-column="18" />
<folding>
<element signature="imports" expanded="true" />
</folding>
</state>
</provider>
</entry>
</file>
<file leaf-file-name="AndroidManifest.xml" pinned="false" current-in-tab="true">
<entry file="file://$PROJECT_DIR$/app/src/main/AndroidManifest.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="1.0455765">
<caret line="48" column="45" selection-start-line="48" selection-start-column="45" selection-end-line="48" selection-end-column="45" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="Information.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Information.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="37" column="13" selection-start-line="0" selection-start-column="0" selection-end-line="175" selection-end-column="0" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="MainActivity.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/MainActivity.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="117" column="35" selection-start-line="116" selection-start-column="7" selection-end-line="117" selection-end-column="35" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="Emploi.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Emploi.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="78" column="28" selection-start-line="78" selection-start-column="26" selection-end-line="78" selection-end-column="28" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="JourEmploi.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/JourEmploi.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="103" column="7" selection-start-line="103" selection-start-column="7" selection-end-line="103" selection-end-column="7" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="Introduction.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Introduction.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="24" column="13" selection-start-line="24" selection-start-column="13" selection-end-line="24" selection-end-column="13" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="Constants.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Constants.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="13" column="0" selection-start-line="13" selection-start-column="0" selection-end-line="13" selection-end-column="0" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="build.gradle" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/build.gradle">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="11" column="25" selection-start-line="11" selection-start-column="25" selection-end-line="11" selection-end-column="25" />
<folding />
</state>
</provider>
</entry>
</file>
<file leaf-file-name="ADE_recuperation.java" pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_recuperation.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="106" column="27" selection-start-line="106" selection-start-column="27" selection-end-line="106" selection-end-column="27" />
<folding />
</state>
</provider>
</entry>
</file>
</leaf>
</component>
<component name="FileTemplateManagerImpl">
<option name="RECENT_TEMPLATES">
<list>
<option value="valueResourceFile" />
<option value="layoutResourceFile_vertical" />
<option value="Class" />
</list>
</option>
</component>
<component name="GenerateSignedApkSettings">
<option name="KEY_STORE_PATH" value="$USER_HOME$/Téléchargements/keystore.jks" />
<option name="KEY_ALIAS" value="Emplitude" />
<option name="REMEMBER_PASSWORDS" value="true" />
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" />
</component>
<component name="GradleLocalSettings">
<option name="availableProjects">
<map>
<entry>
<key>
<ExternalProjectPojo>
<option name="name" value="empli-tude" />
<option name="path" value="$PROJECT_DIR$" />
</ExternalProjectPojo>
</key>
<value>
<list>
<ExternalProjectPojo>
<option name="name" value=":app" />
<option name="path" value="$PROJECT_DIR$/app" />
</ExternalProjectPojo>
<ExternalProjectPojo>
<option name="name" value="empli-tude" />
<option name="path" value="$PROJECT_DIR$" />
</ExternalProjectPojo>
</list>
</value>
</entry>
</map>
</option>
<option name="availableTasks">
<map>
<entry key="$PROJECT_DIR$">
<value>
<list>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="clean" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the components produced by root project 'empli-tude'. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="components" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays all dependencies declared in root project 'empli-tude'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="dependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the insight into a specific dependency in root project 'empli-tude'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="dependencyInsight" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays a help message." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="help" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Initializes a new Gradle build. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="init" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the configuration model of root project 'empli-tude'. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="model" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the sub-projects of root project 'empli-tude'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="projects" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the properties of root project 'empli-tude'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="properties" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the tasks runnable from root project 'empli-tude' (some of the displayed tasks may belong to subprojects)." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="tasks" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Generates Gradle wrapper files. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="wrapper" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the Android dependencies of the project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="androidDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all variants of all applications and secondary packages." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assemble" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all the Test applications." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all Debug builds." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all Release builds." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="assembleReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="build" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project and all projects that depend on it." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="buildDependents" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project and all projects it depends on." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="buildNeeded" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all checks." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="check" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="checkDebugManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="checkReleaseManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAndroidTestAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAndroidTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAndroidTestNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAndroidTestRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugAndroidTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugUnitTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileDebugUnitTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileLint" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseUnitTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="compileReleaseUnitTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs instrumentation tests for all flavors on connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="connectedAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all device checks on currently connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="connectedCheck" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs the tests for debug on connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="connectedDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs instrumentation tests using all Device Providers." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="deviceAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all device checks using Device Providers and Test Servers." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="deviceCheck" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAndroidTestAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAndroidTestBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAndroidTestResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAndroidTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateDebugSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateReleaseAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateReleaseBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateReleaseResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="generateReleaseSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="installDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs the android (on device) tests for the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="installDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="jarDebugClasses" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="jarReleaseClasses" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on all variants." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="lint" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="lintDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="lintRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on just the fatal issues in the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="lintVitalRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugAndroidTestAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugAndroidTestJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeReleaseAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeReleaseJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mergeReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Creates a version of android.jar that's suitable for unit tests." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="mockableAndroidJar" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="packageDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="packageDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="packageRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preDebugAndroidTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preDebugBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preDebugUnitTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preReleaseBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="preReleaseUnitTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:appcompat-v7:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComAndroidSupportAppcompatV72311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:design:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComAndroidSupportDesign2311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:recyclerview-v7:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComAndroidSupportRecyclerviewV72311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:support-v4:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComAndroidSupportSupportV42311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.balysv:material-ripple:1.0.2" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComBalysvMaterialRipple102Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.github.danielnilsson9:color-picker-view:1.4.0" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComGithubDanielnilsson9ColorPickerView140Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.mcxiaoke.volley:library-aar:1.0.0" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareComMcxiaokeVolleyLibraryAar100Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareDebugAndroidTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareDebugDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareDebugUnitTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareReleaseDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="prepareReleaseUnitTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugAndroidTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugAndroidTestManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processDebugUnitTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processReleaseJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processReleaseManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="processReleaseUnitTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the signing info for each variant." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="signingReport" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prints out all the source sets defined in this project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="sourceSets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for all variants." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="test" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for the debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="testDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for the release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="testReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformClassesWithDexForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformClassesWithDexForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformClassesWithDexForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformNative_libsWithMergeJniLibsForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformNative_libsWithMergeJniLibsForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformNative_libsWithMergeJniLibsForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformResourcesWithMergeJavaResForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformResourcesWithMergeJavaResForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformResourcesWithMergeJavaResForDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformResourcesWithMergeJavaResForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="transformResourcesWithMergeJavaResForReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstall all applications." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="uninstallAll" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="uninstallDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the android (on device) tests for the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="uninstallDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="uninstallRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="validateDebugSigning" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$" />
<option name="name" value="zipalignDebug" />
</ExternalTaskPojo>
</list>
</value>
</entry>
<entry key="$PROJECT_DIR$/app">
<value>
<list>
<ExternalTaskPojo>
<option name="description" value="Displays the Android dependencies of the project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="androidDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all variants of all applications and secondary packages." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assemble" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all the Test applications." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all Debug builds." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles all Release builds." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="assembleReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="build" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project and all projects that depend on it." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="buildDependents" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Assembles and tests this project and all projects it depends on." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="buildNeeded" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all checks." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="check" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="checkDebugManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="checkReleaseManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Deletes the build directory." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="clean" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAndroidTestAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAndroidTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAndroidTestNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAndroidTestRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugAndroidTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugUnitTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileDebugUnitTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileLint" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseAidl" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseNdk" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseRenderscript" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseUnitTestJavaWithJavac" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="compileReleaseUnitTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the components produced by project ':app'. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="components" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs instrumentation tests for all flavors on connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="connectedAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all device checks on currently connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="connectedCheck" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs the tests for debug on connected devices." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="connectedDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays all dependencies declared in project ':app'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="dependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the insight into a specific dependency in project ':app'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="dependencyInsight" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs and runs instrumentation tests using all Device Providers." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="deviceAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs all device checks using Device Providers and Test Servers." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="deviceCheck" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAndroidTestAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAndroidTestBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAndroidTestResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAndroidTestSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateDebugSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateReleaseAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateReleaseBuildConfig" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateReleaseResValues" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="generateReleaseSources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays a help message." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="help" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="installDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Installs the android (on device) tests for the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="installDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="jarDebugClasses" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="jarReleaseClasses" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on all variants." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="lint" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="lintDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="lintRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Runs lint on just the fatal issues in the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="lintVitalRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugAndroidTestAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugAndroidTestJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeReleaseAssets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeReleaseJniLibFolders" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mergeReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Creates a version of android.jar that's suitable for unit tests." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="mockableAndroidJar" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the configuration model of project ':app'. [incubating]" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="model" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="packageDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="packageDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="packageRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preDebugAndroidTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preDebugBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preDebugUnitTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preReleaseBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="preReleaseUnitTestBuild" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:appcompat-v7:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComAndroidSupportAppcompatV72311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:design:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComAndroidSupportDesign2311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:recyclerview-v7:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComAndroidSupportRecyclerviewV72311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.android.support:support-v4:23.1.1" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComAndroidSupportSupportV42311Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.balysv:material-ripple:1.0.2" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComBalysvMaterialRipple102Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.github.danielnilsson9:color-picker-view:1.4.0" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComGithubDanielnilsson9ColorPickerView140Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prepare com.mcxiaoke.volley:library-aar:1.0.0" />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareComMcxiaokeVolleyLibraryAar100Library" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareDebugAndroidTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareDebugDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareDebugUnitTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareReleaseDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="prepareReleaseUnitTestDependencies" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugAndroidTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugAndroidTestManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugAndroidTestResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processDebugUnitTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processReleaseJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processReleaseManifest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processReleaseResources" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="processReleaseUnitTestJavaRes" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the sub-projects of project ':app'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="projects" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the properties of project ':app'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="properties" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the signing info for each variant." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="signingReport" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Prints out all the source sets defined in this project." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="sourceSets" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Displays the tasks runnable from project ':app'." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="tasks" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for all variants." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="test" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for the debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="testDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Run unit tests for the release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="testReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformClassesWithDexForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformClassesWithDexForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformClassesWithDexForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformNative_libsWithMergeJniLibsForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformNative_libsWithMergeJniLibsForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformNative_libsWithMergeJniLibsForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformResourcesWithMergeJavaResForDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformResourcesWithMergeJavaResForDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformResourcesWithMergeJavaResForDebugUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformResourcesWithMergeJavaResForRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="transformResourcesWithMergeJavaResForReleaseUnitTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstall all applications." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="uninstallAll" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="uninstallDebug" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the android (on device) tests for the Debug build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="uninstallDebugAndroidTest" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="description" value="Uninstalls the Release build." />
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="uninstallRelease" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="validateDebugSigning" />
</ExternalTaskPojo>
<ExternalTaskPojo>
<option name="linkedExternalProjectPath" value="$PROJECT_DIR$/app" />
<option name="name" value="zipalignDebug" />
</ExternalTaskPojo>
</list>
</value>
</entry>
</map>
</option>
<option name="modificationStamps">
<map>
<entry key="$PROJECT_DIR$" value="4368435412988" />
</map>
</option>
<option name="projectBuildClasspath">
<map>
<entry key="$PROJECT_DIR$">
<value>
<ExternalProjectBuildClasspathPojo>
<option name="modulesBuildClasspath">
<map>
<entry key="$PROJECT_DIR$">
<value>
<ExternalModuleBuildClasspathPojo>
<option name="entries">
<list>
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle/1.5.0/gradle-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle/1.5.0/gradle-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle-core/1.5.0/gradle-core-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle-core/1.5.0/gradle-core-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/jacoco/org.jacoco.core/0.7.4.201502262128/org.jacoco.core-0.7.4.201502262128-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/jacoco/org.jacoco.core/0.7.4.201502262128/org.jacoco.core-0.7.4.201502262128.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/transform-api/1.5.0/transform-api-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/transform-api/1.5.0/transform-api-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/compilerCommon/1.0-rc5/compilerCommon-1.0-rc5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-gradle/5.2.1/proguard-gradle-5.2.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-gradle/5.2.1/proguard-gradle-5.2.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint/24.5.0/lint-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint/24.5.0/lint-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder/1.5.0/builder-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder/1.5.0/builder-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-debug-all/5.0.1/asm-debug-all-5.0.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-debug-all/5.0.1/asm-debug-all-5.0.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/guava/guava/17.0/guava-17.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/guava/guava/17.0/guava-17.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/annotations/24.5.0/annotations-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/annotations/24.5.0/annotations-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/googlecode/juniversalchardet/juniversalchardet/1.0.3/juniversalchardet-1.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4/4.5/antlr4-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-io/commons-io/2.4/commons-io-2.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-io/commons-io/2.4/commons-io-2.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/baseLibrary/1.0-rc5/baseLibrary-1.0-rc5-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/baseLibrary/1.0-rc5/baseLibrary-1.0-rc5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-base/5.2.1/proguard-base-5.2.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-base/5.2.1/proguard-base-5.2.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-checks/24.5.0/lint-checks-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-checks/24.5.0/lint-checks-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/eclipse/jdt/core/compiler/ecj/4.4.2/ecj-4.4.2-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/eclipse/jdt/core/compiler/ecj/4.4.2/ecj-4.4.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdk-common/24.5.0/sdk-common-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdk-common/24.5.0/sdk-common-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcprov-jdk15on/1.48/bcprov-jdk15on-1.48-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcprov-jdk15on/1.48/bcprov-jdk15on-1.48.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-model/1.5.0/builder-model-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-model/1.5.0/builder-model-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-tree/5.0.3/asm-tree-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-tree/5.0.3/asm-tree-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/common/24.5.0/common-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/common/24.5.0/common-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/squareup/javawriter/2.5.0/javawriter-2.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/squareup/javawriter/2.5.0/javawriter-2.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdklib/24.5.0/sdklib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdklib/24.5.0/sdklib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/antlr/antlr-runtime/3.5.2/antlr-runtime-3.5.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-test-api/1.5.0/builder-test-api-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-test-api/1.5.0/builder-test-api-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm/5.0.3/asm-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm/5.0.3/asm-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jack/jack-api/0.9.0/jack-api-0.9.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jack/jack-api/0.9.0/jack-api-0.9.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/ddms/ddmlib/24.5.0/ddmlib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/ddms/ddmlib/24.5.0/ddmlib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/manifest-merger/24.5.0/manifest-merger-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/manifest-merger/24.5.0/manifest-merger-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jill/jill-api/0.9.0/jill-api-0.9.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jill/jill-api/0.9.0/jill-api-0.9.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcpkix-jdk15on/1.48/bcpkix-jdk15on-1.48-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcpkix-jdk15on/1.48/bcpkix-jdk15on-1.48.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4-runtime/4.5/antlr4-runtime-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4-annotations/4.5/antlr4-annotations-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/antlr/ST4/4.0.8/ST4-4.0.8.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-analysis/5.0.3/asm-analysis-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-analysis/5.0.3/asm-analysis-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-api/24.5.0/lint-api-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-api/24.5.0/lint-api-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpclient/4.1.1/httpclient-4.1.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpclient/4.1.1/httpclient-4.1.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpmime/4.1/httpmime-4.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpmime/4.1/httpmime-4.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/layoutlib/layoutlib-api/24.5.0/layoutlib-api-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/layoutlib/layoutlib-api/24.5.0/layoutlib-api-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/code/gson/gson/2.2.4/gson-2.2.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/dvlib/24.5.0/dvlib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/dvlib/24.5.0/dvlib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/abego/treelayout/org.abego.treelayout.core/1.0.1/org.abego.treelayout.core-1.0.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/external/lombok/lombok-ast/0.2.3/lombok-ast-0.2.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/external/lombok/lombok-ast/0.2.3/lombok-ast-0.2.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpcore/4.1/httpcore-4.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpcore/4.1/httpcore-4.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-codec/commons-codec/1.4/commons-codec-1.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/intellij/annotations/12.0/annotations-12.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/intellij/annotations/12.0/annotations-12.0.jar" />
</list>
</option>
<option name="path" value="$PROJECT_DIR$" />
</ExternalModuleBuildClasspathPojo>
</value>
</entry>
<entry key="$PROJECT_DIR$/app">
<value>
<ExternalModuleBuildClasspathPojo>
<option name="entries">
<list>
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle/1.5.0/gradle-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle/1.5.0/gradle-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle-core/1.5.0/gradle-core-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/gradle-core/1.5.0/gradle-core-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/jacoco/org.jacoco.core/0.7.4.201502262128/org.jacoco.core-0.7.4.201502262128-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/jacoco/org.jacoco.core/0.7.4.201502262128/org.jacoco.core-0.7.4.201502262128.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/transform-api/1.5.0/transform-api-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/transform-api/1.5.0/transform-api-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/compilerCommon/1.0-rc5/compilerCommon-1.0-rc5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-gradle/5.2.1/proguard-gradle-5.2.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-gradle/5.2.1/proguard-gradle-5.2.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint/24.5.0/lint-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint/24.5.0/lint-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder/1.5.0/builder-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder/1.5.0/builder-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-debug-all/5.0.1/asm-debug-all-5.0.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-debug-all/5.0.1/asm-debug-all-5.0.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/guava/guava/17.0/guava-17.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/guava/guava/17.0/guava-17.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/annotations/24.5.0/annotations-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/annotations/24.5.0/annotations-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/googlecode/juniversalchardet/juniversalchardet/1.0.3/juniversalchardet-1.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4/4.5/antlr4-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-io/commons-io/2.4/commons-io-2.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-io/commons-io/2.4/commons-io-2.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/baseLibrary/1.0-rc5/baseLibrary-1.0-rc5-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/databinding/baseLibrary/1.0-rc5/baseLibrary-1.0-rc5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-base/5.2.1/proguard-base-5.2.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/proguard/proguard-base/5.2.1/proguard-base-5.2.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-checks/24.5.0/lint-checks-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-checks/24.5.0/lint-checks-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/eclipse/jdt/core/compiler/ecj/4.4.2/ecj-4.4.2-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/eclipse/jdt/core/compiler/ecj/4.4.2/ecj-4.4.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdk-common/24.5.0/sdk-common-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdk-common/24.5.0/sdk-common-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcprov-jdk15on/1.48/bcprov-jdk15on-1.48-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcprov-jdk15on/1.48/bcprov-jdk15on-1.48.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-model/1.5.0/builder-model-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-model/1.5.0/builder-model-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-tree/5.0.3/asm-tree-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-tree/5.0.3/asm-tree-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/common/24.5.0/common-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/common/24.5.0/common-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/squareup/javawriter/2.5.0/javawriter-2.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/squareup/javawriter/2.5.0/javawriter-2.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdklib/24.5.0/sdklib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/sdklib/24.5.0/sdklib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/antlr/antlr-runtime/3.5.2/antlr-runtime-3.5.2.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-test-api/1.5.0/builder-test-api-1.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/builder-test-api/1.5.0/builder-test-api-1.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm/5.0.3/asm-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm/5.0.3/asm-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jack/jack-api/0.9.0/jack-api-0.9.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jack/jack-api/0.9.0/jack-api-0.9.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/ddms/ddmlib/24.5.0/ddmlib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/ddms/ddmlib/24.5.0/ddmlib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/manifest-merger/24.5.0/manifest-merger-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/build/manifest-merger/24.5.0/manifest-merger-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jill/jill-api/0.9.0/jill-api-0.9.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/jill/jill-api/0.9.0/jill-api-0.9.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcpkix-jdk15on/1.48/bcpkix-jdk15on-1.48-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/bouncycastle/bcpkix-jdk15on/1.48/bcpkix-jdk15on-1.48.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4-runtime/4.5/antlr4-runtime-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/tunnelvisionlabs/antlr4-annotations/4.5/antlr4-annotations-4.5.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/antlr/ST4/4.0.8/ST4-4.0.8.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-analysis/5.0.3/asm-analysis-5.0.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/ow2/asm/asm-analysis/5.0.3/asm-analysis-5.0.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-api/24.5.0/lint-api-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/lint/lint-api/24.5.0/lint-api-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpclient/4.1.1/httpclient-4.1.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpclient/4.1.1/httpclient-4.1.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpmime/4.1/httpmime-4.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpmime/4.1/httpmime-4.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/layoutlib/layoutlib-api/24.5.0/layoutlib-api-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/layoutlib/layoutlib-api/24.5.0/layoutlib-api-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/code/gson/gson/2.2.4/gson-2.2.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/dvlib/24.5.0/dvlib-24.5.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/dvlib/24.5.0/dvlib-24.5.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/abego/treelayout/org.abego.treelayout.core/1.0.1/org.abego.treelayout.core-1.0.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/external/lombok/lombok-ast/0.2.3/lombok-ast-0.2.3-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/android/tools/external/lombok/lombok-ast/0.2.3/lombok-ast-0.2.3.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpcore/4.1/httpcore-4.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/org/apache/httpcomponents/httpcore/4.1/httpcore-4.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-codec/commons-codec/1.4/commons-codec-1.4-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/intellij/annotations/12.0/annotations-12.0-sources.jar" />
<option value="$APPLICATION_HOME_DIR$/gradle/m2repository/com/intellij/annotations/12.0/annotations-12.0.jar" />
<option value="$USER_HOME$/Android/Sdk/extras/android/m2repository/com/android/support/appcompat-v7/23.1.1/appcompat-v7-23.1.1.aar" />
<option value="$USER_HOME$/Android/Sdk/extras/android/m2repository/com/android/support/design/23.1.1/design-23.1.1.aar" />
<option value="$USER_HOME$/.gradle/caches/modules-2/files-2.1/com.mcxiaoke.volley/library-aar/1.0.0/cc8071e1ce2c478bc9974c1044c3f6f0b2a96bb0/library-aar-1.0.0.aar" />
<option value="$USER_HOME$/.gradle/caches/modules-2/files-2.1/com.balysv/material-ripple/1.0.2/8d6d7c69d4513f3fd647a6f397deb422daf37d6/material-ripple-1.0.2.aar" />
<option value="$USER_HOME$/.gradle/caches/modules-2/files-2.1/com.github.danielnilsson9/color-picker-view/1.4.0/3a4c9e9c1457b5b663cd84f7b475ed083926eda9/color-picker-view-1.4.0.aar" />
<option value="$USER_HOME$/Android/Sdk/extras/android/m2repository/com/android/support/support-v4/23.1.1/support-v4-23.1.1.aar" />
<option value="$USER_HOME$/Android/Sdk/extras/android/m2repository/com/android/support/recyclerview-v7/23.1.1/recyclerview-v7-23.1.1.aar" />
<option value="$USER_HOME$/Android/Sdk/extras/android/m2repository/com/android/support/support-annotations/23.1.1/support-annotations-23.1.1.jar" />
</list>
</option>
<option name="path" value="$PROJECT_DIR$/app" />
</ExternalModuleBuildClasspathPojo>
</value>
</entry>
</map>
</option>
<option name="name" value="app" />
<option name="projectBuildClasspath">
<list>
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/reporting" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/ivy" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/build-init" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/resources-sftp" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/language-jvm" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/language-native" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/open-api" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/announce" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/language-scala" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/antlr" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/ide" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/diagnostics" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/base-services-groovy" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/jetty" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/scala" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/testing-native" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/maven" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/test-kit" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/tooling-api" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/resources" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/model-core" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/plugin-use" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/plugin-development" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/core" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/platform-play" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/native" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/wrapper" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/resources-s3" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/platform-native" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/sonar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/dependency-management" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/javascript" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/ide-native" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/model-groovy" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/language-groovy" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/launcher" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/ui" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/tooling-api-builders" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/osgi" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/code-quality" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/messaging" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/language-java" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/plugins" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/platform-jvm" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/publish" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/resources-http" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/platform-base" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/internal-testing" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/ear" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/base-services" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/jacoco" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/build-comparison" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/internal-integ-testing" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/cli" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/src/signing" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-base-services-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-open-api-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-model-groovy-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-core-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-ui-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-messaging-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-native-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-tooling-api-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-wrapper-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-resources-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-docs-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/groovy-all-2.4.4.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-model-core-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/ant-1.9.3.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-launcher-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-cli-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/ant-launcher-1.9.3.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/gradle-base-services-groovy-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-language-groovy-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-plugins-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-build-init-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-code-quality-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-sonar-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-resources-sftp-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-platform-jvm-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-testing-native-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-jacoco-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-resources-s3-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-antlr-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-announce-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-resources-http-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-ide-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-maven-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-language-java-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-language-scala-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-osgi-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-tooling-api-builders-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-javascript-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-platform-base-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-plugin-use-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-platform-play-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-publish-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-platform-native-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-jetty-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-ivy-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-ear-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-dependency-management-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-scala-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-language-jvm-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-language-native-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-plugin-development-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-diagnostics-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-signing-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-build-comparison-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/ivy-2.2.0.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-test-kit-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-ide-native-2.8.jar" />
<option value="$USER_HOME$/.gradle/wrapper/dists/gradle-2.8-all/ah86jmo43de9lfa8xg9ux3c4h/gradle-2.8/lib/plugins/gradle-reporting-2.8.jar" />
<option value="$PROJECT_DIR$/buildSrc/src/main/java" />
<option value="$PROJECT_DIR$/buildSrc/src/main/groovy" />
</list>
</option>
</ExternalProjectBuildClasspathPojo>
</value>
</entry>
</map>
</option>
<option name="externalProjectsViewState">
<projects_view />
</option>
</component>
<component name="IdeDocumentHistory">
<option name="CHANGED_PATHS">
<list>
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Todo/Todo.java" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_du_temps.xml" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Propos/ExpendableView.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Parametre/Parametre.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_automatique.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_information.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_retour.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ColorPicker.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Emploi.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/EmploiAjouterTache.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/EvenementInternet.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Fichier.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Son/LancerSonReceiver.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Son/FermerSonReceiver.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Cour.java" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/todo_tache.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/todo_ajouter.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/todo.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/sons.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/choix.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/color_picker.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_action.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_ajouter_tache.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_cour.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_du_jour.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/emploi_tache.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/introduction.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/nav_header_main.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/parametre.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/propos.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/menu/activity_main_drawer.xml" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/MainActivity.java" />
<option value="$PROJECT_DIR$/app/src/main/res/values/repas_page.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/repas_toolbar.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/repas.xml" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Repas/RepasPagerAdapter.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Repas/RepasPage.java" />
<option value="$PROJECT_DIR$/app/src/main/res/values/strings.xml" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/repas_page.xml" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Repas/Repas.java" />
<option value="$PROJECT_DIR$/app/src/main/AndroidManifest.xml" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Repas/RepasJour.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_recuperation.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Jour.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Information.java" />
<option value="$PROJECT_DIR$/app/src/main/res/layout/reveil_activity.xml" />
<option value="$PROJECT_DIR$/app/build.gradle" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Constants.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/JourEmploi.java" />
<option value="$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_traitement.java" />
</list>
</option>
</component>
<component name="MavenImportPreferences">
<option name="generalSettings">
<MavenGeneralSettings>
<option name="mavenHome" value="Bundled (Maven 3)" />
</MavenGeneralSettings>
</option>
</component>
<component name="NamedScopeManager">
<order />
</component>
<component name="ProjectFrameBounds">
<option name="width" value="1366" />
<option name="height" value="735" />
</component>
<component name="ProjectInspectionProfilesVisibleTreeState">
<entry key="Project Default">
<profile-state>
<expanded-state>
<State>
<id />
</State>
<State>
<id>Android Lint</id>
</State>
</expanded-state>
<selected-state>
<State>
<id>AndroidLintAllowBackup</id>
</State>
</selected-state>
</profile-state>
</entry>
</component>
<component name="ProjectLevelVcsManager" settingsEditedManually="true">
<OptionsSetting value="true" id="Add" />
<OptionsSetting value="true" id="Remove" />
<OptionsSetting value="true" id="Checkout" />
<OptionsSetting value="true" id="Update" />
<OptionsSetting value="true" id="Status" />
<OptionsSetting value="true" id="Edit" />
<ConfirmationsSetting value="0" id="Add" />
<ConfirmationsSetting value="0" id="Remove" />
</component>
<component name="ProjectReloadState">
<option name="STATE" value="1" />
</component>
<component name="ProjectView">
<navigator currentView="AndroidView" proportions="" version="1">
<flattenPackages />
<showMembers />
<showModules />
<showLibraryContents />
<hideEmptyPackages />
<abbreviatePackageNames />
<autoscrollToSource />
<autoscrollFromSource />
<sortByType />
</navigator>
<panes>
<pane id="AndroidView">
<subPane>
<PATH>
<PATH_ELEMENT>
<option name="myItemId" value="empli-tude" />
<option name="myItemType" value="com.android.tools.idea.navigator.nodes.AndroidViewProjectNode" />
</PATH_ELEMENT>
<PATH_ELEMENT>
<option name="myItemId" value="Gradle Scripts" />
<option name="myItemType" value="com.android.tools.idea.navigator.nodes.AndroidBuildScriptsGroupNode" />
</PATH_ELEMENT>
</PATH>
<PATH>
<PATH_ELEMENT>
<option name="myItemId" value="empli-tude" />
<option name="myItemType" value="com.android.tools.idea.navigator.nodes.AndroidViewProjectNode" />
</PATH_ELEMENT>
</PATH>
</subPane>
</pane>
<pane id="PackagesPane" />
<pane id="Scope" />
<pane id="Scratches" />
<pane id="ProjectPane" />
</panes>
</component>
<component name="PropertiesComponent">
<property name="android.sdk.path" value="$USER_HOME$/Android/Sdk" />
<property name="android.project.structure.last.selected" value="SDK Location" />
<property name="android.project.structure.proportion" value="0.15" />
<property name="device.picker.selection" value="emulator-5554" />
<property name="recentsLimit" value="5" />
<property name="ANDROID_EXTENDED_DEVICE_CHOOSER_SERIALS" value="TA00300UYV" />
<property name="ANDROID_EXTENDED_DEVICE_CHOOSER_AVD" value="Nexus_4_API_16" />
<property name="last_opened_file_path" value="$PROJECT_DIR$/../AvosAvisWeb" />
<property name="ExportApk.ApkPath" value="$PROJECT_DIR$/app" />
<property name="ExportApk.Flavors" value="" />
<property name="ExportApk.BuildType" value="release" />
<property name="FullScreen" value="false" />
<property name="OverrideImplement.combined" value="true" />
<property name="OverrideImplement.overriding.sorted" value="false" />
</component>
<component name="RunManager" selected="Android Application.app">
<configuration default="true" type="AndroidRunConfigurationType" factoryName="Android Application">
<module name="" />
<option name="DEPLOY" value="true" />
<option name="ARTIFACT_NAME" value="" />
<option name="PM_INSTALL_OPTIONS" value="" />
<option name="ACTIVITY_EXTRA_FLAGS" value="" />
<option name="MODE" value="default_activity" />
<option name="TARGET_SELECTION_MODE" value="SHOW_DIALOG" />
<option name="PREFERRED_AVD" value="" />
<option name="CLEAR_LOGCAT" value="false" />
<option name="SHOW_LOGCAT_AUTOMATICALLY" value="true" />
<option name="SKIP_NOOP_APK_INSTALLATIONS" value="true" />
<option name="FORCE_STOP_RUNNING_APP" value="true" />
<option name="USE_LAST_SELECTED_DEVICE" value="false" />
<option name="PREFERRED_AVD" value="" />
<option name="SELECTED_CLOUD_MATRIX_CONFIGURATION_ID" value="-1" />
<option name="SELECTED_CLOUD_MATRIX_PROJECT_ID" value="" />
<option name="DEEP_LINK" value="" />
<option name="ACTIVITY_CLASS" value="" />
<method />
</configuration>
<configuration default="true" type="AndroidTestRunConfigurationType" factoryName="Android Tests">
<module name="" />
<option name="TESTING_TYPE" value="0" />
<option name="INSTRUMENTATION_RUNNER_CLASS" value="" />
<option name="METHOD_NAME" value="" />
<option name="CLASS_NAME" value="" />
<option name="PACKAGE_NAME" value="" />
<option name="EXTRA_OPTIONS" value="" />
<option name="TARGET_SELECTION_MODE" value="SHOW_DIALOG" />
<option name="PREFERRED_AVD" value="" />
<option name="CLEAR_LOGCAT" value="false" />
<option name="SHOW_LOGCAT_AUTOMATICALLY" value="true" />
<option name="SKIP_NOOP_APK_INSTALLATIONS" value="true" />
<option name="FORCE_STOP_RUNNING_APP" value="true" />
<option name="USE_LAST_SELECTED_DEVICE" value="false" />
<option name="PREFERRED_AVD" value="" />
<option name="SELECTED_CLOUD_MATRIX_CONFIGURATION_ID" value="-1" />
<option name="SELECTED_CLOUD_MATRIX_PROJECT_ID" value="" />
<method />
</configuration>
<configuration default="true" type="Application" factoryName="Application">
<extension name="coverage" enabled="false" merge="false" sample_coverage="true" runner="idea" />
<option name="MAIN_CLASS_NAME" />
<option name="VM_PARAMETERS" />
<option name="PROGRAM_PARAMETERS" />
<option name="WORKING_DIRECTORY" value="$PROJECT_DIR$" />
<option name="ALTERNATIVE_JRE_PATH_ENABLED" value="false" />
<option name="ALTERNATIVE_JRE_PATH" />
<option name="ENABLE_SWING_INSPECTOR" value="false" />
<option name="ENV_VARIABLES" />
<option name="PASS_PARENT_ENVS" value="true" />
<module name="" />
<envs />
<method />
</configuration>
<configuration default="true" type="JUnit" factoryName="JUnit">
<extension name="coverage" enabled="false" merge="false" sample_coverage="true" runner="idea" />
<module name="" />
<option name="ALTERNATIVE_JRE_PATH_ENABLED" value="false" />
<option name="ALTERNATIVE_JRE_PATH" />
<option name="PACKAGE_NAME" />
<option name="MAIN_CLASS_NAME" />
<option name="METHOD_NAME" />
<option name="TEST_OBJECT" value="class" />
<option name="VM_PARAMETERS" value="-ea" />
<option name="PARAMETERS" />
<option name="WORKING_DIRECTORY" value="$MODULE_DIR$" />
<option name="ENV_VARIABLES" />
<option name="PASS_PARENT_ENVS" value="true" />
<option name="TEST_SEARCH_SCOPE">
<value defaultName="singleModule" />
</option>
<envs />
<patterns />
<method>
<option name="Make" enabled="false" />
<option name="Android.Gradle.BeforeRunTask" enabled="true" />
</method>
</configuration>
<configuration default="true" type="JarApplication" factoryName="JAR Application">
<extension name="coverage" enabled="false" merge="false" sample_coverage="true" runner="idea" />
<envs />
<method />
</configuration>
<configuration default="true" type="Remote" factoryName="Remote">
<option name="USE_SOCKET_TRANSPORT" value="true" />
<option name="SERVER_MODE" value="false" />
<option name="SHMEM_ADDRESS" value="javadebug" />
<option name="HOST" value="localhost" />
<option name="PORT" value="5005" />
<method />
</configuration>
<configuration default="true" type="TestNG" factoryName="TestNG">
<extension name="coverage" enabled="false" merge="false" sample_coverage="true" runner="idea" />
<module name="" />
<option name="ALTERNATIVE_JRE_PATH_ENABLED" value="false" />
<option name="ALTERNATIVE_JRE_PATH" />
<option name="SUITE_NAME" />
<option name="PACKAGE_NAME" />
<option name="MAIN_CLASS_NAME" />
<option name="METHOD_NAME" />
<option name="GROUP_NAME" />
<option name="TEST_OBJECT" value="CLASS" />
<option name="VM_PARAMETERS" value="-ea" />
<option name="PARAMETERS" />
<option name="WORKING_DIRECTORY" value="$MODULE_DIR$" />
<option name="OUTPUT_DIRECTORY" />
<option name="ANNOTATION_TYPE" />
<option name="ENV_VARIABLES" />
<option name="PASS_PARENT_ENVS" value="true" />
<option name="TEST_SEARCH_SCOPE">
<value defaultName="singleModule" />
</option>
<option name="USE_DEFAULT_REPORTERS" value="false" />
<option name="PROPERTIES_FILE" />
<envs />
<properties />
<listeners />
<method />
</configuration>
<configuration default="false" name="app" type="AndroidRunConfigurationType" factoryName="Android Application">
<module name="app" />
<option name="DEPLOY" value="true" />
<option name="ARTIFACT_NAME" value="" />
<option name="PM_INSTALL_OPTIONS" value="" />
<option name="ACTIVITY_EXTRA_FLAGS" value="" />
<option name="MODE" value="default_activity" />
<option name="TARGET_SELECTION_MODE" value="SHOW_DIALOG" />
<option name="PREFERRED_AVD" value="" />
<option name="CLEAR_LOGCAT" value="false" />
<option name="SHOW_LOGCAT_AUTOMATICALLY" value="true" />
<option name="SKIP_NOOP_APK_INSTALLATIONS" value="true" />
<option name="FORCE_STOP_RUNNING_APP" value="true" />
<option name="USE_LAST_SELECTED_DEVICE" value="false" />
<option name="PREFERRED_AVD" value="" />
<option name="SELECTED_CLOUD_MATRIX_CONFIGURATION_ID" value="-1" />
<option name="SELECTED_CLOUD_MATRIX_PROJECT_ID" value="" />
<option name="DEEP_LINK" value="" />
<option name="ACTIVITY_CLASS" value="" />
<method />
</configuration>
<list size="1">
<item index="0" class="java.lang.String" itemvalue="Android Application.app" />
</list>
<configuration name="<template>" type="Applet" default="true" selected="false">
<option name="MAIN_CLASS_NAME" />
<option name="HTML_FILE_NAME" />
<option name="HTML_USED" value="false" />
<option name="WIDTH" value="400" />
<option name="HEIGHT" value="300" />
<option name="POLICY_FILE" value="$APPLICATION_HOME_DIR$/bin/appletviewer.policy" />
<option name="VM_PARAMETERS" />
</configuration>
<configuration name="<template>" type="#org.jetbrains.idea.devkit.run.PluginConfigurationType" default="true" selected="false">
<option name="VM_PARAMETERS" value="-Xmx512m -Xms256m -XX:MaxPermSize=250m -ea" />
</configuration>
</component>
<component name="ShelveChangesManager" show_recycled="false" />
<component name="SvnConfiguration">
<configuration />
</component>
<component name="TaskManager">
<task active="true" id="Default" summary="Default task">
<changelist id="1f4368e2-90a2-4185-816f-51c2f3cdb986" name="Default" comment="" />
<created>1454868564813</created>
<option name="number" value="Default" />
<updated>1454868564813</updated>
</task>
<servers />
</component>
<component name="ToolWindowManager">
<frame x="0" y="0" width="1366" height="735" extended-state="6" />
<editor active="false" />
<layout>
<window_info id="Palette	" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="2" side_tool="false" content_ui="tabs" />
<window_info id="Designer" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="6" side_tool="false" content_ui="tabs" />
<window_info id="Terminal" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.05704698" sideWeight="0.5" order="13" side_tool="false" content_ui="tabs" />
<window_info id="Android Model" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="5" side_tool="true" content_ui="tabs" />
<window_info id="Capture Analysis" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="4" side_tool="false" content_ui="tabs" />
<window_info id="Android Monitor" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="true" weight="0.32885906" sideWeight="0.5" order="7" side_tool="false" content_ui="tabs" />
<window_info id="Captures" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.25" sideWeight="0.5" order="6" side_tool="false" content_ui="tabs" />
<window_info id="Debug" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.4" sideWeight="0.5" order="3" side_tool="false" content_ui="tabs" />
<window_info id="Favorites" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="4" side_tool="true" content_ui="tabs" />
<window_info id="Event Log" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="9" side_tool="true" content_ui="tabs" />
<window_info id="Capture Tool" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="5" side_tool="false" content_ui="tabs" />
<window_info id="Version Control" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="10" side_tool="false" content_ui="tabs" />
<window_info id="Gradle Console" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="8" side_tool="true" content_ui="tabs" />
<window_info id="Build Variants" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="3" side_tool="true" content_ui="tabs" />
<window_info id="Gradle" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="3" side_tool="false" content_ui="tabs" />
<window_info id="TODO" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="6" side_tool="false" content_ui="tabs" />
<window_info id="Structure" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.25" sideWeight="0.5" order="1" side_tool="false" content_ui="tabs" />
<window_info id="Maven Projects" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="7" side_tool="false" content_ui="tabs" />
<window_info id="Project" active="false" anchor="left" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="true" weight="0.25" sideWeight="0.5" order="0" side_tool="false" content_ui="combo" />
<window_info id="Run" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.46464646" sideWeight="0.5" order="2" side_tool="false" content_ui="tabs" />
<window_info id="Ant Build" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.25" sideWeight="0.5" order="1" side_tool="false" content_ui="tabs" />
<window_info id="Application Servers" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="11" side_tool="false" content_ui="tabs" />
<window_info id="Hierarchy" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.25" sideWeight="0.5" order="2" side_tool="false" content_ui="combo" />
<window_info id="Cvs" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.25" sideWeight="0.5" order="4" side_tool="false" content_ui="tabs" />
<window_info id="Preview" active="false" anchor="right" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="8" side_tool="false" content_ui="tabs" />
<window_info id="Message" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="0" side_tool="false" content_ui="tabs" />
<window_info id="Find" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.33" sideWeight="0.5" order="1" side_tool="false" content_ui="tabs" />
<window_info id="Messages" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.32996634" sideWeight="0.5" order="12" side_tool="false" content_ui="tabs" />
<window_info id="Commander" active="false" anchor="right" auto_hide="false" internal_type="SLIDING" type="SLIDING" visible="false" weight="0.4" sideWeight="0.5" order="0" side_tool="false" content_ui="tabs" />
<window_info id="Inspection" active="false" anchor="bottom" auto_hide="false" internal_type="DOCKED" type="DOCKED" visible="false" weight="0.4" sideWeight="0.5" order="5" side_tool="false" content_ui="tabs" />
</layout>
</component>
<component name="Vcs.Log.UiProperties">
<option name="RECENTLY_FILTERED_USER_GROUPS">
<collection />
</option>
<option name="RECENTLY_FILTERED_BRANCH_GROUPS">
<collection />
</option>
</component>
<component name="VcsContentAnnotationSettings">
<option name="myLimit" value="2678400000" />
</component>
<component name="VcsManagerConfiguration">
<option name="LAST_COMMIT_MESSAGE" value="" />
</component>
<component name="XDebuggerManager">
<breakpoint-manager>
<option name="time" value="1" />
</breakpoint-manager>
<watches-manager />
</component>
<component name="editorHistoryManager">
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/sons.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/activity_main.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/choix.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/color_picker.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="47" column="62" selection-start-line="47" selection-start-column="62" selection-end-line="47" selection-end-column="62" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/content_main.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/dialog_repeter.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/dialog_temporisation.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_action.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_ajouter_tache.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_cour.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_du_jour.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_du_temps.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/emploi_tache.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/introduction.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/item.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/nav_header_main.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/parametre.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/propos.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/propos_child.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/propos_item.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/reveil.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/sonnerie_activity.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/menu/main.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/drawable/ic_son.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.36885247">
<caret line="9" column="0" selection-start-line="9" selection-start-column="0" selection-end-line="9" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Todo/Todo.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="27" column="13" selection-start-line="0" selection-start-column="0" selection-end-line="136" selection-end-column="1" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/menu/activity_main_drawer.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.4918033">
<caret line="12" column="70" selection-start-line="12" selection-start-column="70" selection-end-line="12" selection-end-column="70" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/gradle/wrapper/gradle-wrapper.properties">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/values/strings.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="105" column="33" selection-start-line="105" selection-start-column="33" selection-end-line="105" selection-end-column="33" />
</state>
</provider>
</entry>
<entry file="jar://$USER_HOME$/Android/Sdk/platforms/android-23/android.jar!/java/util/Date.class">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Fichier.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="17" column="13" selection-start-line="17" selection-start-column="13" selection-end-line="17" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/EmploiAjouterTache.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="38" column="22" selection-start-line="38" selection-start-column="22" selection-end-line="38" selection-end-column="22" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_retour.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="6" column="17" selection-start-line="6" selection-start-column="17" selection-end-line="6" selection-end-column="17" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_automatique.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="36" column="0" selection-start-line="36" selection-start-column="0" selection-end-line="36" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Cours.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="8" column="13" selection-start-line="8" selection-start-column="13" selection-end-line="8" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Outil/Jour.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="13" column="72" selection-start-line="13" selection-start-column="72" selection-end-line="13" selection-end-column="72" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_information.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="83" column="0" selection-start-line="83" selection-start-column="0" selection-end-line="83" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ColorPicker.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="17" column="13" selection-start-line="17" selection-start-column="13" selection-end-line="17" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Reveil/AlarmReceiver.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="21" column="13" selection-start-line="21" selection-start-column="13" selection-end-line="21" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Reveil/SonnerieActivity.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="37" column="13" selection-start-line="37" selection-start-column="13" selection-end-line="37" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Reveil/ReveilActivity.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="41" column="13" selection-start-line="41" selection-start-column="13" selection-end-line="41" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/res/layout/reveil_activity.xml">
<provider selected="true" editor-type-id="android-designer">
<state />
</provider>
<provider editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="0" column="0" selection-start-line="0" selection-start-column="0" selection-end-line="0" selection-end-column="0" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/build.gradle">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="11" column="25" selection-start-line="11" selection-start-column="25" selection-end-line="11" selection-end-column="25" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_recuperation.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="106" column="27" selection-start-line="106" selection-start-column="27" selection-end-line="106" selection-end-column="27" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Introduction.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="24" column="13" selection-start-line="24" selection-start-column="13" selection-end-line="24" selection-end-column="13" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Constants.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="13" column="0" selection-start-line="13" selection-start-column="0" selection-end-line="13" selection-end-column="0" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Emploi.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="78" column="28" selection-start-line="78" selection-start-column="26" selection-end-line="78" selection-end-column="28" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/JourEmploi.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="103" column="7" selection-start-line="103" selection-start-column="7" selection-end-line="103" selection-end-column="7" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/ADE_traitement.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="94" column="18" selection-start-line="94" selection-start-column="18" selection-end-line="94" selection-end-column="18" />
<folding>
<element signature="imports" expanded="true" />
</folding>
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/MainActivity.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="117" column="35" selection-start-line="116" selection-start-column="7" selection-end-line="117" selection-end-column="35" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/java/com/martinet/emplitude/Emploi/Information.java">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="0.0">
<caret line="37" column="13" selection-start-line="0" selection-start-column="0" selection-end-line="175" selection-end-column="0" />
<folding />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/app/src/main/AndroidManifest.xml">
<provider selected="true" editor-type-id="text-editor">
<state vertical-scroll-proportion="1.0455765">
<caret line="48" column="45" selection-start-line="48" selection-start-column="45" selection-end-line="48" selection-end-column="45" />
<folding />
</state>
</provider>
</entry>
</component>
</project>
Welcome to the live coverage of the Arabian Gulf League match between Al-Khaleej Khor Fakkan and Bani Yas FC!
The teams have taken the field, and the atmosphere is electric. Al-Khaleej Khor Fakkan, known for their assertive playing style, are facing off against Bani Yas FC, who are well known for their tactical defense.
The game is about to begin, and we will be bringing you live updates on all the action, including live scores and highlights. So stay tuned as we bring you all the energy and excitement!
The crowd is on their feet, cheering and urging their teams on. Stay tuned to see how this thrilling match will end.
Al-Khaleej Khor Fakkan - Bani Yas FC - live stream
What time does the match "Al-Khaleej Khor Fakkan" vs "Bani Yas FC" start?
The match "Al-Khaleej Khor Fakkan" vs "Bani Yas FC" starts at 13:15 on 27 January 2023 (UTC).
Where can I see the online broadcast of the match "Al-Khaleej Khor Fakkan" vs "Bani Yas FC"?
The graphic online broadcast of the match "Al-Khaleej Khor Fakkan" vs "Bani Yas FC" will be available on https://leon.bet/blog/ at 13:15 on 27 January 2023 (UTC).
What is the live match results "Al-Khaleej Khor Fakkan" and "Bani Yas FC" 27 January 2023?
The score 27 January 2023 match between "Al-Khaleej Khor Fakkan" and "Bani Yas FC" is available on match page .
Q: JPA persistence.xml flushmode with Hibernate 3.0 I understand that Hibernate uses transparent write-behind by default, deferring SQL until the transaction commits.
However, I would like my entity manager to write my changes to the database immediately once the transaction is committed. Is there any way to configure this in JPA's persistence.xml?
A: Hibernate commits to the database at the moment it commits the transaction, so your changes reach the database as soon as the commit happens. You may also find two additional options helpful:
*
*Configure the session for autocommit (set the property "hibernate.connection.autocommit" in the connection properties)
*Force Hibernate to synchronize its in-memory state with the database in the middle of the transaction (by calling session.flush())
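For reference, a minimal sketch of how the autocommit property could be declared in persistence.xml. The unit name is illustrative, and the provider class shown is the Hibernate 3.x EntityManager provider; check both against your setup:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <!-- "myUnit" is a placeholder name -->
  <persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
    <!-- Hibernate 3.x JPA provider -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
      <!-- Let the JDBC connection commit statements as they are issued -->
      <property name="hibernate.connection.autocommit" value="true"/>
    </properties>
  </persistence-unit>
</persistence>
```

Alternatively, calling entityManager.flush() (which delegates to session.flush()) inside the transaction pushes the pending SQL to the database immediately, without changing the commit behavior.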
Regards,
Johnny Stocks: Clean Energy - Last Year's Trade?
My original post was written Feb 13, 2011, shortly after Congress proposed deep cuts to clean energy funding.
At the time of writing, there were no near term catalysts that were expected to give clean energy any type of boost. However shortly after writing the article Libya blew up and WTI oil prices subsequently spiked. Triple-digit oil prices suddenly make alternative energy plays like CLNE or WPRT much more attractive.
Those who follow me on Twitter were notified when I covered my shorts in WPRT and CLNE. On Feb 17 I covered my WPRT short @ 15.81 for an 8 cent loss and on Feb 18 I covered my CLNE short @ 11.96 for a 12 cent profit. Within days both of these stocks screamed higher along with global energy prices.
I was fortunate enough to cover my short positions in each of these stocks, which saved me from substantial losses.
Never become married to a position or a trading strategy.
When the fundamentals or the technicals change - get out of the position.
Congress last week proposed budget cuts of $900 million in clean energy funding. Obama had been budgeting $2.36 billion for energy and efficiency and renewable energy programs for fiscal 2011. The proposed cuts would chop almost 40% out of the budget for clean energy - which could have a devastating effect on clean energy related companies and their stocks.
What sectors could be affected?
Major solar names like FSLR, SPWRA and JASO have been very strong recently. While they could be affected, and could be due for a pullback, I have difficulty shorting stocks with such strong chart patterns.
FSLR's chart, for example, looks fantastic (see below). If I wasn't bearish on the fundamentals I would be all over this as a long trade.
I am not too familiar with individual wind companies specifically. A wind ETF like FAN or PWND could be affected, although as Tom Lydon at Seeking Alpha points out PWND has high levels of exposure to developed Europe and Asia, so some of these ETFs may not provide the best bang-for-the-buck as a short trade vs. other strategies.
Names like CLNE and WPRT immediately jump to mind here. The charts on each of these companies look absolutely brutal right now. I am short both of these names, as each of them has recently broken out of multi-year uptrends.
The biggest risk to shorting these companies would be a sudden (and at this stage unexpected) sign that Obama is warming to the idea of passing the Natural Gas Act to provide incentives to natural gas fueled cars and trucks. Longs in these stocks waited for all of 2010 and got nothing. At this stage there are no signs that 2011 will be any different than 2010, especially if the tendency is toward cutting budgets for clean energy initiatives.
WPRT has just broken out of a 2-year-long uptrend that started at a low of 3.01 in March 2009. The breakout of the trendline also coincided with a break of its 200 day moving average. As well, all major moving averages have recently started to turn lower. Although the drop to date since breaking the 200 day moving average @ 17.94 has been severe, all technicals still point to much lower prices. The stock is currently trading at 15.10. My initial target is 12. Above 16.50 I would have to reassess the trade.
CLNE has just broken out of a descending triangle that had been forming for much of 2010 and early 2011. The stock closed at 12.17 on Friday. The measured move of the break of the descending triangle actually targets 3.26, which is just 3 cents above the stock's Nov 2008 low of 3.23. From a trading perspective, however, I am looking for an initial move to 10-11. Above 12.75 and I would need to reconsider, but it would take a move above 13.50 to reverse the bearish chart.
Obviously the companies mentioned above that have the greatest percentage of revenues derived from the US would have the most to lose under the proposed cutbacks.
But cutbacks on clean energy initiatives are not just being felt in North America. Just last month the Australian government announced it would be cutting A$500 million in funding for solar power and carbon capture and storage projects. The government also announced it would be cutting A$2.8 billion on climate control measures including the Green Car Innovation Fund and the Cleaner Car Rebate Scheme according to Bloomberg.
Blame for the Australian cuts is primarily being placed on the massive floods in Queensland, which are estimated to have caused $20 billion in damages. But while the reasons for the cuts differ, it goes to show that clean energy initiatives are among the first at risk whenever a government faces a financial deficit.
The Ejército del Norte (Army of the North) was an army formed to fight against the Peru-Bolivian Confederation. On 19 May 1837, Juan Manuel de Rosas, then in charge of the foreign relations of the Argentine Confederation, declared war on the Peru-Bolivian Confederation, both over Tarija and because Peruvian-Bolivian troops had invaded most of Jujuy, the Puna de Atacama and the north of Salta Province. Chile had also declared war on 11 November 1836, with the support of Peruvians opposed to Santa Cruz and the Confederation, forming a tacit alliance. Alejandro Heredia was appointed commander of the Army of the North (that is, the Argentine army whose troops consisted almost entirely of raw recruits from the northwest).
Such an improvised and poorly equipped army had Heredia as its supreme commander and the generals Gregorio Paz and Manuel Virto as his immediate subordinates. It received practically no logistical support from the rest of Argentina (partly because Argentine forces in other regions had to face other conflicts).
On 20 January 1839 the restoration forces under the Chilean general Manuel Bulnes and the Peruvian Ramón Castilla won the victory of Yungay against Santa Cruz, which put an end to the Peru-Bolivian Confederation. On 14 February 1839 the new president of Bolivia announced the end of the war, and on 26 April 1839 the Argentine government officially ended it.
History
In 1837 war broke out between the Argentine Confederation and the Peru-Bolivian Confederation, both over the dispute for the province of Tarija and because Peruvian-Bolivian troops had invaded most of Jujuy, the Puna de Atacama and the north of Salta Province. Rosas appointed Alejandro Heredia commander of the Army of the North (an Argentine army whose troops consisted almost entirely of raw recruits from the Argentine Northwest) against the "tyrant Santa Cruz".
His plan was to recover Tarija and Tupiza quickly and to attack Potosí in the autumn, so that the enemy would find it difficult to advance during the winter. His principal officers were Gregorio Paz and Manuel Virtu. But his improvised and poorly equipped army lacked all logistical support from the rest of Argentina (mainly because Argentine forces in other regions had to face other serious conflicts: French and British blockades and hostilities, attacks by the "colorados" and Unitarians based in Montevideo, supported by Brazil, France and England, and various mercenary troops). The struggles between the gaucho population of the provinces and the indigenous peoples not yet integrated into the incipient Argentina also kept the provinces occupied.
The military operations under Alejandro Heredia managed, with great difficulty, to liberate the areas of Jujuy and Salta that had been invaded. But they could not recover Tarija, as they had to face far more numerous and rested troops, better led by expert mercenary officers such as the German general Otto Philip Braun. Some victories were won in defensive positions, especially at Humahuaca, but advances were almost nil.
After occupying almost all of Jujuy and the north of Salta, the Peruvian-Bolivians were able to set up a strong defensive position against the counteroffensives of the Argentine provinces: the most direct routes, such as the Quebrada de Humahuaca or the Iruya road, as well as the considerably harder route across the Puna de Atacama, were blocked. Perhaps most serious, however, was the defection of the caudillo Eustaquio Méndez to the Peruvian-Bolivian side, since at that time the Peru-Bolivian Confederation appeared to offer a much richer, more prosperous and more stable state than the Argentine Confederation, which seemed unable to escape the fierce civil war then ravaging Argentina.
Facing so many obstacles, the troops under Heredia chose to bypass the areas where the enemy was strong, advancing through the then almost unknown Chaco to enter Tarija; but the rather small Argentine force found itself decimated by the diseases then prevalent in the Chaco region (malaria, trypanosomiasis, amoebiasis) and by an exhausting torrid heat. The Argentine troops of the north managed to re-enter the Tarija region but were met by far more numerous, well-fed, fresh and rested Peruvian-Bolivian troops on the Montenegro/Coyambuyo slope, a short distance from the city of Tarija. Aggravating the setbacks suffered by the raw troops of the Argentine Confederation, many Unitarian exiles of Argentine origin had taken refuge in Tarija and did not hesitate to support the Peruvian-Bolivians against the Argentine Federalists.
Although the war had no clear conclusion, the defeat at Coyambuyo (called by Bolivians and Peruvians the Battle of Montenegro, fought almost at the gates of the city of Tarija) discredited the troops led by Heredia, and Heredia himself. In practice, the war ended in a stalemate, which would later be resolved by the Chilean army at Yungay.
By then Heredia had a powerful army. In 1838 his Tucumán militias comprised one battalion and ten cavalry regiments, each formed of two, three or more squadrons (of two companies of 62 men each). These forces consumed up to 60% of provincial expenditure, but they allowed the caudillo to exercise his "Protectorate" over Jujuy, Salta and Catamarca through the 5,000 men under his command, veterans of the uninterrupted campaigns waged since the beginning of his government.
References
\section{Introduction}
``Pressure Swing Adsorption (PSA) is a technology used to separate some species from a gas under
pressure according to the molecular characteristics and affinity of the species for an adsorbent
material. Special adsorptive materials (e.g. zeolites) are used as a molecular sieve, preferentially
adsorbing the undesired gases at high pressure. The process then swings to low pressure to desorb
the adsorbent material'' (source: Wikipedia).
A typical PSA system involves a cyclic process where a number of connected vessels containing
adsorbent material undergo successive pressurization and depressurization steps in order to produce
a continuous stream of purified product gas. We focus here on a step of the cyclic process,
restricted to isothermal behavior.\\
As in general fixed bed chromatography, each of the $d$ species ($d\geq 2$) simultaneously exists
in two phases: a gaseous, mobile one with velocity $u(t,x)$ and concentration $c_i(t,x)$,
and a solid (adsorbed) one with concentration $q_i(t,x)$, $1\leq i\leq d$. We assume that mass
exchanges between the mobile and the stationary phases are infinitely fast, thus the two phases are
constantly at composition e\-qui\-li\-brium: the concentrations in the solid phase are given by
some relations $q_i=q_i^*(c_1,...,c_d)$ where the functions $q_i^* $ are the so-called
e\-qui\-li\-brium isotherms. A theoretical study of a model with finite
exchange kinetics was presented in \cite{B92} and a numerical approach was developed in \cite{B98}.
\newpage
In gas chromatography, velocity variations accompany changes in gas composition, especially in the
case of high concentration solute: it is known as the sorption effect. In the present model, the
sorption effect is taken into account through a constraint on the pressure (or on the density in
this i\-so\-ther\-mal case). See \cite {RAA70} and \cite{Ru84} for a precise description of the
process and \cite{BGJ08} for a survey on various related models.
The system for two species ($d=2$) with three unknowns $(u,c_1,c_2)$ is:
\begin{eqnarray}
\partial_t (c_1+q^*_1(c_1,c_2))+\partial_x(u\,c_1)&=&0, \label{un}\\
\partial_t (c_2+q^*_2(c_1,c_2))+\partial_x(u\,c_2)&=&0, \label{deux}\\
c_1+c_2&=& \rho(t), \label{trois}
\end{eqnarray}
with suitable initial and boundary data.
The function $\rho$ represents the \textit{given} total density of the mixture. The experimental
device is operated so that $\rho$ is a given function of time only, and in the sequel we
assume that $\rho\equiv 1$ (which is not really restrictive from a theoretical point of view).
First existence results of large solutions satisfying some entropy criterium in the case of two
chemical species were obtained in \cite{BGJ06,BGJ07}.
In the previous system, strong singularities of the velocity $u$ with respect to time can be
expected. For instance, let $c_1(t,x)\equiv \underline{c}_1$ be a constant,
$c_2(t,x)\equiv 1-\underline{c}_1$ and $u(t,x)\equiv u_b(t)$, where $u_b$ is any $L^\infty$
function; then $(c_1,c_2,u)$ is a weak solution of (\ref{un}),(\ref{deux}),(\ref{trois}).
So we can build solutions of this system with a strongly oscillating velocity. Furthermore,
high oscillations of the incoming velocity $u_b$ only slightly perturb the concentration, as we will
see. Notice that we seek positive solutions $(c_1,c_2)$; thus, in view of (\ref{trois}) with
$\rho\,\equiv 1$, $c_1$ and $c_2$ must satisfy $$0\leq c_1,\,c_2\leq 1.$$ We use the following
notations, introduced in \cite{BGJ07}: we set $c=c_1 \in [0,1]$ and
\begin{eqnarray*}
q_i(c)&=&q_i^ *(c,1-c),\quad i=1,2,\\
h(c)&=&q_1(c)+q_2(c),\\
I(c)&=&c+q_1(c).
\end{eqnarray*}
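These reduced one-variable functions are easy to tabulate. As a minimal sketch (the binary Langmuir isotherms $q_i^*=Q_iK_ic_i/(1+K_1c_1+K_2c_2)$ with parameters $Q_1=Q_2=K_1=1$, $K_2=2$ are an illustrative assumption, not data from the paper):

```python
# Illustrative binary Langmuir isotherms (assumed parameters, not from the text):
# q_i^*(c1, c2) = Qi * Ki * ci / (1 + K1*c1 + K2*c2), with Q1 = Q2 = K1 = 1, K2 = 2.
# With c1 = c and c2 = 1 - c the common denominator is 1 + c + 2*(1 - c) = 3 - c.
def q1(c): return c / (3.0 - c)
def q2(c): return 2.0 * (1.0 - c) / (3.0 - c)

def h(c): return q1(c) + q2(c)   # h = q1 + q2
def I(c): return c + q1(c)       # I = c + q1

grid = [k / 100.0 for k in range(101)]

# q1 increases and q2 decreases with c, so I = c + q1 is strictly increasing.
q1_increasing = all(q1(a) < q1(b) for a, b in zip(grid, grid[1:]))
q2_decreasing = all(q2(a) > q2(b) for a, b in zip(grid, grid[1:]))
I_increasing = all(I(a) < I(b) for a, b in zip(grid, grid[1:]))
```

The monotonicity observed here is the numerical counterpart of the isotherm monotonicity recalled in the next section.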
Adding (\ref{un}) and (\ref{deux}) we get, thanks to (\ref{trois}):
$$\partial_t (q_1(c)+q_2(c))+\partial_x u=0,$$
thus our purpose is to study the following system:
\begin{equation}\label{sysad}
\left\{ \begin{array}{ccl} \vspace{2mm}
\partial_t I(c)+ \partial_x (u\, c) & = & 0,\\
\partial_t h(c) + \partial_x u & = & 0,
\end{array}\right.
\end{equation}
supplemented by initial and boundary data:
\begin{equation} \label{sysad0}
\left\{ \begin{array}{ccl}
\vspace{2mm}c(0,x)&=&c_0(x) \in [0,1], \quad x > 0,\\
\vspace{2mm}c(t,0) &=&c_b(t) \in [0,1],\quad t>0,\\
u(t,0)&=&u_b(t)>0,\quad t>0.
\end{array}\right.
\end{equation}
Notice that in (\ref{sysad0}) we assume an incoming flux at the boundary, i.e. $u_b(t)>0$ for all
$t>0$. In the case where the first species is inert, that is $q_1=0$, the function $I$ reduces
to the identity.
System (\ref{sysad}) has a null eigenvalue, as does the system studied in \cite{BJ97}; unlike
\cite{BJ97}, however, we cannot reduce this system to a single equation for general solutions with
shocks. Another interesting $2\times 2$ system with a linearly degenerate
eigenvalue, modeling traffic flow, is studied in \cite{BR03}. As in \cite{CG99,CG01,CGM04,CGM03,M04}, the zero
eigenvalue makes possible the existence of stratified solutions and the propagation of
large-amplitude high-frequency waves. Usually, for genuinely nonlinear conservation laws, only highly
oscillating solutions with small amplitude can propagate: see for instance \cite{DM85,CJR06}.
In this paper we prove, for large data, that the velocity is a stratified solution in the following
sense: $u(t,x)=u_b(t)\, v(t,x)$, where $v$ is as regular as the concentration $c$ and more regular than the
boundary data $u_b$. This decomposition allows high oscillations of the velocity with large
amplitude to propagate without affecting the concentration. For this quasilinear
system we thus have propagation of large-amplitude high oscillations of the velocity, as in a semilinear
system (see for instance \cite{JMR93,J98-1,J08}), and a strong two-scale profile for $u$ as in
\cite{J98-2}.
This also makes it possible to pass to the weak limit for $u$ at the boundary and to the strong limit in the
interior for the concentration. In the smooth case we have no restriction on the isotherms, but
in the realistic case with shock waves we restrict ourselves to the classical setting for
hyperbolic systems: eigenvalues are linearly degenerate or genuinely nonlinear. Furthermore we
obtain better interaction estimates when the shock and rarefaction curves are monotone. This is
the case, for instance, for an inert gas and an active gas with the Langmuir isotherm. We conjecture
that our result is still valid for general isotherms with a piecewise genuinely nonlinear eigenvalue.
The paper is organized as follows. In Section \ref{sh} we recall some basic results from
\cite{BGJ07} concerning hyperbolicity, entropies and weak entropy solutions of
System (\ref{sysad}).
In Section \ref{skf} we study the case where the concentration is smooth and the velocity is only
$L^\infty$.
In the remainder of the paper we study the case with only $BV $ concentrations. In the short Section
\ref{sFTA} we briefly describe the Front Tracking Algorithm (FTA) for System
(\ref{sysad}).
Section \ref{sRP} is devoted to the study of the shock and rarefaction curves. We state
the assumptions needed to perform estimates with the Front Tracking Algorithm. These
assumptions restrict us to convex (or concave) isotherms and we give some examples from chemistry.
We obtain the fundamental interaction estimates in Section \ref{sIE} and $BV$ estimates for $v$ in
Section \ref{suBV}. Finally, we obtain in Section \ref{sks} strong stability of the concentration
with respect to weak limits of the boundary velocity.
\section{Hyperbolicity and entropies} \label{sh}
In order for the paper to be self-contained, we recall, without proof, some results from
\cite{BGJ07}.
It is well known that the system of chromatography, and thus System
(\ref{sysad}), can be analyzed as a hyperbolic system of P.D.E., provided we exchange the time and space
variables and $u >0$: see \cite{RAA86} and also \cite{RSVG88} for instance. In this framework the
vector state will be
$U=\left(
\begin{array}{l}
u \\
m
\end{array}\right)$
where $m=u\,c $ is the flow rate of the first species. In this vector state, $u$ must be understood
as $u\,\rho$, that is the total flow rate.
In the sequel, we will make use of the function $f=q_1\,c_2-q_2\,c_1$ introduced by Douglas {\em
et al.} in \cite{DCRBT88}, written here under the form
\begin{eqnarray} \label{eqf}
f(c)& = & q_1\,c_2-q_2\,c_1= q_1(c)-c\,h(c).
\end{eqnarray}
Any equilibrium isotherm related to a given species is always increasing with respect to the
corresponding concentration (see \cite{DCRBT88}) i.e. $\displaystyle\frac{\partial q^*_i}{\partial c_i}\geq
0$.
Since $c = c_1$ and $c_2 = 1 - c$, it follows:
\begin{equation}\label{qprime}
q'_1\geq 0 \geq q'_2.
\end{equation}
Let us define the function $H$ by
\begin{eqnarray} \label{H}
H(c) & = & 1+(1-c)\,q'_1(c)-c\,q'_2(c) \;=\; 1+q_1'(c)-c\,h'(c).
\end{eqnarray}
From (\ref{qprime}), $H$ satisfies $H\geq 1$ and we have the following relation
between $f$, $H$ and $h$:
\begin{eqnarray*}\label{fsec}
f''(c)&=&H'(c)-h'(c).
\end{eqnarray*}
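The identity $f''=H'-h'$ can be verified numerically by centered finite differences. A minimal sketch (the binary Langmuir pair with assumed parameters $Q_1=Q_2=K_1=1$, $K_2=2$ is illustrative only):

```python
# Finite-difference check of f'' = H' - h' for an illustrative binary
# Langmuir pair (assumed parameters Q1 = Q2 = K1 = 1, K2 = 2).
def q1(c): return c / (3.0 - c)
def q2(c): return 2.0 * (1.0 - c) / (3.0 - c)
def h(c):  return q1(c) + q2(c)
def f(c):  return q1(c) - c * h(c)              # f = q1 - c h

def d(fun, c, e=1e-4):                          # centered first derivative
    return (fun(c + e) - fun(c - e)) / (2 * e)

def d2(fun, c, e=1e-4):                         # centered second derivative
    return (fun(c + e) - 2 * fun(c) + fun(c - e)) / e ** 2

def H(c): return 1.0 + d(q1, c) - c * d(h, c)   # H = 1 + q1' - c h'

gap = max(abs(d2(f, c) - (d(H, c) - d(h, c))) for c in [0.1, 0.3, 0.5, 0.7, 0.9])
```

The residual `gap` is at the level of the finite-difference truncation error, consistent with the exact identity.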
\newpage
\subsection{Hyperbolicity}
Concerning hyperbolicity, we refer to \cite{D00,S96,Sm94}. System (\ref{sysad}) takes the form
\begin{equation}\label{sysadum}
\partial_x U +\partial_t \Phi(U)=0\hbox{ with }
U=\left(
\begin{array}{l}
u \\
m
\end{array}\right)
\hbox{ and }
\Phi(U)=\left(
\begin{array}{l}
h(m/u) \\\\
I(m/u)
\end{array}\right).
\end{equation}
The eigenvalues are:
\begin{eqnarray*}
\label{eigenvalues}
0 &\mbox{ and } &
\lambda=\displaystyle\frac{H(c)}{u},
\end{eqnarray*}
thus in view of (\ref{H}) the system is strictly hyperbolic. The zero eigenvalue is of course
linearly degenerate, moreover the right eigenvector
$r=\left(\begin{array}{c} h'(c) \\
1+q'_1(c)
\end{array} \right )$
associated to $\lambda$ satisfies $\displaystyle d\lambda \cdot
r=\displaystyle\frac{H(c)}{u^2}\,f''(c)$, thus $\lambda$
is genuinely nonlinear in each domain where $f''\neq 0$.
\begin{proposition}[\cite{BGJ07} Riemann invariants]\label{PropRi}~\\
System (\ref{sysad}) admits the two Riemann invariants:
\begin{eqnarray*}
c \quad \hbox{ and } \quad w= \ln(u)+g(c)=L+g(c),
\qquad \mbox{ where } g'(c)=\displaystyle\frac{-h'(c)}{H(c)} \quad \mbox{and }
L=\ln(u).
\end{eqnarray*}
Furthermore this system can be rewritten for smooth solutions as:
\begin{eqnarray}
\label{smooth}
\partial_x c + \frac{H(c)}{u}\, \partial_t c = 0,\quad
\quad \partial_x( \ln(u)+g(c) )= \partial_x w= 0.
\end{eqnarray}
\end{proposition}
\subsection{Entropies}\label{subsecEntropies}
Dealing with entropies, it is more convenient, as shown in \cite{BGJ07}, to
work with the functions
\begin{eqnarray*}
G(c)=\exp(g(c)), & W=\exp(w)=u\,G(c).
\end{eqnarray*}
Notice that $G$ is a positive solution of
$ H G'+h'G=0$.
\\
Denote $E(c,u)$ any smooth entropy and $Q=Q(c,u)$ any associated entropy flux.
Then, for smooth solutions,
$\partial_x E +\partial_t Q=0$. Moreover:
\begin{proposition}[\cite{BGJ07} Representation of all smooth
entropies]\label{smoothentr}~\\
The smooth entropy functions for System (\ref{sysad}) are given by
\begin{eqnarray*}
E(c,u) & =&\phi(w)+u \,\psi(c)
\end{eqnarray*}
where $\phi$ and $\psi$ are any smooth real functions.
The corresponding entropy fluxes satisfy
\begin{eqnarray*}
Q'(c) & = & h'(c)\, \psi(c) + H(c)\, \psi'(c) .
\end{eqnarray*}
\end{proposition}
Moreover, in \cite{BGJ3}, the authors looked for convex entropies for System
(\ref{sysadum}) (i.e. System (\ref{sysad}) written in the $(u,m)$ variables) in
order to get a kinetic formulation.
The next proposition gives us a family of degenerate convex
entropies, independently of the convexity of $f$ or of the isotherms.
\begin{proposition}[\cite{BGJ07} Existence of degenerate convex entropies]~\\
If $\psi$ is convex or degenerate convex, i.e. $\psi'' \geq 0$,
then $E=u\,\psi(c)$ is a degenerate convex entropy.
\end{proposition}
There are a few cases (water vapor or ammonia, for instance) where the
isotherm is convex. There is also the important case of an inert carrier gas
and an active gas with a concave or convex isotherm (see
\cite{BGJ06,BGJ07,BGJ08}). In these cases, the next proposition
ensures the existence of $\lambda$-Riemann invariants which are also strictly
convex entropies. In such cases, $w$ is monotone with respect to $x$ for any
entropy solution.
\begin{proposition}[\cite{BGJ07} When $\lambda$-Riemann invariant is a convex
entropy\label{Prop2RE}]~\\
There exist strictly convex entropies of the form $E=\phi(w)$
if and only if $G''$ does not vanish.
\\
More precisely, for $\alpha >0$, $E_\alpha(c,u)= u^\alpha\, G^\alpha(c) $
is an increasing entropy with respect to the Riemann invariant $W$. It is strictly convex for
$\alpha > 1$ if
$G'' >0$ and for $\alpha < 1$ if $G'' <0$.
\end{proposition}
Unfortunately, when $G$ has an inflection point, the system does not admit any
strictly convex entropy.
When one gas is inert, this is always the case if the sign of the second
derivative of the isotherm changes.
See for instance \cite{BGJ07} for the BET isotherm.
\begin{remark}
In general, System (\ref{sysad}) is not in the Temple class. It belongs to the Temple class if and only if $f''$
does not vanish and $\partial_xW=0$ for every entropy solution (\cite{BGJ5}). For
instance, System (\ref{sysad}) with two linear isotherms is in the Temple class.
\end{remark}
\begin{proposition}[\cite{BGJ07} Non Existence of strictly convex entropy]
\label{Propconvex}~\\
If the sign of $G''$ changes,
then System (\ref{sysad}) does not admit a strictly convex smooth entropy.
\end{proposition}
\subsection{Definition of weak entropy solution}
We have seen that there are two families of entropies:
$u\,\psi(c)$ and $\phi(u\,G(c))$.
\\
The first family is degenerate convex (in variables $(u,uc)$)
provided $\psi''\geq 0$.
So, we seek weak entropy solutions which satisfy
$ \partial_x \left(u\,\psi(c) \right) + \partial_t Q(c) \leq 0$ in
the distribution sense.
\\
The second family is not always convex. There are only two interesting cases,
namely $\pm G''(c) > 0$ for all $c \in [0,1]$.
When $G''>0$ and $\alpha > 1$, we expect to have
$\partial_x ( u\,G(c))^\alpha \leq 0 $ from Proposition
\ref{Prop2RE}.
But, the mapping $ W \mapsto W^\alpha$ is increasing on $\mathbb{R}^+$.
So, the last inequality reduces to
$\partial_x ( u\,G(c)) \leq 0 $.
\\
In the same way, if $ G''<0$, we get $ \partial_x ( u\,G(c)) \geq 0 $.
\\
Now, we can state a mathematical definition of weak entropy solutions.
\begin{definition}\label{defwes}
Let $T>0$, $X>0$, $u \in L^\infty((0,T)\times(0,X), \mathbb{R}^+)$, $ 0 \leq c(t,x) \leq \rho \equiv 1$
for almost all $(t,x) \in(0,T)\times(0,X)$. Then $(c,u)$ is a {\bf weak entropy solution}
of System (\ref{sysad})-(\ref{sysad0}) with respect to the family of entropies
$u\,\psi(c)$ if, for all convex
(or degenerate
convex) $ \psi $:
\begin{eqnarray} \label{ineqentropies}
\frac{\partial}{\partial x}\left(u\,\psi(c) \right)
+ \frac{\partial}{\partial t} Q(c) & \leq & 0,
\end{eqnarray}
in $\mathcal{D}'([0,T[\times[0,X[)$, where $Q'=H\psi'+h'\psi$, that is, for all
$\phi\in\mathcal{D}([0,T[\times[0,X[)$:
$$\int_0^X \int_0^T
\left( u\,\psi(c)\,\partial_x\phi + Q(c)\,\partial_t\phi\right) \,dt\,dx + \int_0^T
u_b(t)\,\psi(c_b(t))\,\phi(t,0)\,dt + \int_0^X Q(c_0(x))\,\phi(0,x)\,dx\geq 0.$$
\end{definition}
%
\begin{remark}
If $\pm G'' \geq 0$ then $u\,\psi = \pm u\,G(c)$ is a degenerate convex entropy,
with entropy flux $ Q\equiv 0$, contained in the family of entropies
$u\,\psi(c)$. So, if $G''$ keeps a constant sign on $[0,1]$, $(c,u)$ has to
satisfy:
\begin{equation} \label{IRdecay}
\pm \frac{\partial}{\partial x} \left(u\,G(c) \right) \leq 0,
\quad \mbox{ if } \pm G'' \geq 0 \mbox{ on $[0,1]$.}
\end{equation}
Notice that the entropies $u\,\psi(c)$ and the entropy $u\,G(c)$ are linear with
respect to the velocity $u$.
\end{remark}
\subsection{About the Riemann Problem}
The implementation of the Front Tracking Algorithm used extensively from Section
\ref{sFTA} requires some results about the solvability of the following Riemann problem:
\begin{eqnarray}
\left\{ \begin{array}{ccc} \vspace{2mm}
\partial_x u +\partial_t h(c)&=&0,\\
\partial_x (u c)+\partial_t I(c) & = & 0,
\end{array}\right.\label{sysPR}\\
c(0,x)=c^- \in [0,1], \quad x > 0, &\quad
&
\left\{
\begin{array}{ccl}
c(t,0) &=&c^+ \in [0,1],\\
u(t,0)&=&u^+ > 0,
\end{array}
\right.
t>0.\label{dataPR}
\end{eqnarray}
We classically look for a self-similar
solution, i.e. $\displaystyle c(t,x)=C(z)$, $u(t,x)=U(z)$ with $z=\displaystyle\frac{t}{x} >0$. The answer is given
by the following three results (\cite{BGJ07}).
\begin{proposition}
Assume for instance that $0\leq a<c^-<c^+<b\leq 1$ and $f''>0$ on $]a,b[$. Then the only smooth
self-similar solution of (\ref{sysPR})-(\ref{dataPR}) is such that:
\begin{equation}\label{Cz}
\left \{\begin{array}{cccr}
C(z)&=&c^- ,& 0 < z < z^-,\\
\displaystyle\frac{d C}{ dz} & = &
\displaystyle \frac{H(C)}{z\,f''(C) },&\; z^- < z < z^+, \\
C(z) & =& c^+, & z^+ < z,
\end{array} \right.
\end{equation}
where
$ z^+=\displaystyle\frac{H(c^+)}{u^+} $, $z^-=z^+\displaystyle\,e^{-\Phi(c^+)}$ with $\Phi(c)=\displaystyle\int_{c^-}^c
\frac{f''(\xi)}{H(\xi)}\,d\xi$. Moreover $u^-=\displaystyle\frac{H(c^-)}{z^-}$ and $U$ is given by:
\begin{equation}\label{Uz}
\left \{ \begin{array}{cccr}
U(z)& = & u^- ,& 0 < z < z^- ,\\
U(z)& =& \displaystyle\frac{H(C(z))}{z}, & \; z^- < z < z^+, \\
U(z)& =& u^+, & z^+ < z.
\end{array} \right.
\end{equation}
\end{proposition}
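The rarefaction ODE (\ref{Cz}) is easy to integrate numerically. In the sketch below we assume, for illustration only, an inert carrier gas ($q_1=0$) and a Langmuir isotherm $q_2(c)=(1-c)/(2-c)$ (i.e. $Q=K=1$, not parameters from the paper), for which $H(c)=1+c/(2-c)^2$ and $f''(c)=4/(2-c)^3>0$; integrating $dC/dz=H(C)/(z\,f''(C))$ from $(z^-,c^-)$ should reach $c^+$ at $z^+$:

```python
import math

# Assumed illustrative model: inert carrier gas (q1 = 0) and Langmuir
# isotherm q2(c) = (1 - c)/(2 - c), i.e. Q = K = 1 (not from the paper).
def H(c):  return 1.0 + c / (2.0 - c) ** 2      # H = 1 + q1' - c h'
def f2(c): return 4.0 / (2.0 - c) ** 3          # f''(c) > 0 for this model

cm, cp, up = 0.2, 0.6, 1.0                      # data with cm < cp: rarefaction

def Phi(c, n=20000):                            # Phi(c) = int_{cm}^c f''/H (trapezoid)
    dx = (c - cm) / n
    ys = [f2(cm + k * dx) / H(cm + k * dx) for k in range(n + 1)]
    return dx * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

zp = H(cp) / up                                 # z^+ = H(c^+)/u^+
zm = zp * math.exp(-Phi(cp))                    # z^- = z^+ exp(-Phi(c^+))
um = H(cm) / zm                                 # u^- = H(c^-)/z^-

# RK4 integration of dC/dz = H(C)/(z f''(C)) across the fan
def rhs(z, C): return H(C) / (z * f2(C))
z, C, nsteps = zm, cm, 20000
dz = (zp - zm) / nsteps
for _ in range(nsteps):
    k1 = rhs(z, C)
    k2 = rhs(z + dz / 2, C + dz * k1 / 2)
    k3 = rhs(z + dz / 2, C + dz * k2 / 2)
    k4 = rhs(z + dz, C + dz * k3)
    C += dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    z += dz
```

The computed $C(z^+)$ matches $c^+$ up to discretization error, which is consistent with the definition of $z^-$ through $\Phi$.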
\begin{proposition}
If $(c^-,c^+)$ satisfies the following admissibility condition equivalent to the Liu
entropy-condition (\cite{L76}):
$$\hbox{for all } c \hbox{ between } c^- \hbox{ and } c^+, \quad \frac{f(c^+)-f(c^-)}{c^+ - c^-}\leq
\frac{f(c)-f(c^-)}{c - c^-},$$
then the Riemann problem (\ref{sysPR})-(\ref{dataPR}) is
solved by a shock wave defined as:
\begin{equation}\label{CUshock}
C(z)=\left\{\begin{array}{ccl}
c^- &\hbox{ if } & 0 < z < s, \\
c^+ & \hbox{ if }& s<z
\end{array}\right.,
\qquad
U(z)=\left \{\begin{array}{ccl}
u^- &\hbox{ if }& 0< z < s,\\
u^+ &\hbox{ if } & s < z,
\end{array} \right.
\end{equation}
where $u^-$ and the speed $s$ of the shock are obtained through
$$\frac{[f]}{u^-\,[c]} + \frac{1+h^-}{u^-}=s = \frac{[f]}{u^+\,[c]} + \frac{1+h^+}{u^+},$$
where $[c]=c^+ - c^-$, $[f]=f^+ - f^- = f(c^+) - f(c^-)$, $h^+=h(c^+)$, $h^-=h(c^-)$.
\end{proposition}
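For the same illustrative model (inert gas and Langmuir isotherm with $Q=K=1$, an assumption for the sketch), $f(c)=-c(1-c)/(2-c)$ is convex, so the admissible shocks are those with $c^+<c^-$. The sketch below computes $s$ and $u^-$ from the jump relations and checks the Liu condition on a sample of intermediate states:

```python
# Assumed illustrative model: inert gas (q1 = 0), Langmuir isotherm with
# Q = K = 1, so f(c) = -c (1 - c)/(2 - c) is convex on [0, 1].
def q2(c): return (1.0 - c) / (2.0 - c)
def h(c):  return q2(c)                         # h = q1 + q2 with q1 = 0
def f(c):  return -c * q2(c)                    # f = q1 - c h

cm, cp, up = 0.6, 0.2, 1.0                      # cp < cm: admissible shock (f convex)

df, dc = f(cp) - f(cm), cp - cm
s = df / (up * dc) + (1.0 + h(cp)) / up         # shock speed from the right state
um = (df / dc + 1.0 + h(cm)) / s                # u^- from the left-state relation

# Liu condition: the chord slope to c^+ lies below every intermediate chord slope
samples = [cp + (cm - cp) * k / 50 for k in range(1, 50)]
liu_ok = all(df / dc <= (f(c) - f(cm)) / (c - cm) + 1e-12 for c in samples)

# consistency: the left-state expression reproduces the same speed s
rh_gap = abs(df / (um * dc) + (1.0 + h(cm)) / um - s)
```

Both jump relations give the same speed, and the Liu condition holds as expected for a convex $f$ with $c^+<c^-$.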
\begin{proposition}
Two states $U^-$ and $U^+$ are connected by a contact discontinuity if and only if $c^-=c^+$ (with
of course $u^-\neq u^+$), or $c^-\neq c^+$ and $f$ is affine between $c^-$ and
$c^+$.
\end{proposition}
It appears from these results that, for any data, we can build a weak entropy solution of the Riemann problem
(\ref{sysPR})-(\ref{dataPR}) in a very simple way (see \cite{BGJ07}), similar to the scalar case
with flux $f$. In particular, if $f''$ has a constant sign (which is the framework of
Section \ref{sFTA}), the Riemann problem is always solved by a simple wave.
\section{ Case with smooth concentration}\label{skf}
System (\ref{sysad}) has the strong property that there exist weak entropy
solutions with
{\it smooth} concentration $c$ on $(0,T)\times(0,X)$ but not necessarily smooth
velocity $u$, for
some positive constants $T$ and $X$. Furthermore, $c$ is the solution of a
scalar conservation law.
\subsection{Existence of weak entropy solutions with smooth
concentration}\label{sscsc}
For this section, we refer to \cite {CGM04}, \cite {CGM03}.
A similar result was obtained in \cite{BGJ06}, but only with smooth velocity.
Here, we obtain, by the classical method of characteristics, {\it existence and
uniqueness} of a weak
entropy solution with smooth concentration and only $L^\infty$ velocity.
\newpage
\begin{theorem}[Unique weak entropy solution
with {\it smooth} concentration]~\\ \label{thewessc}
Let $T_0 > 0$, $ X> 0$,
$ c_0 \in W^{1,\infty}([0,X],[0,1])$,
$ c_b \in W^{1,\infty}([0,T_0],[0,1])$,
$ \ln u_b \in L^{\infty}([0,T_0],\mathbb{R})$.
\\
If $c_0(0)= c_b(0)$ then there exists $T\in ]0,T_0]$ such that System
(\ref{sysad})-(\ref{sysad0})
admits a {\bf unique } weak entropy solution $(c,u)$ on $[0,T]\times[0,X]$ with
\begin{eqnarray*}
c \in W^{1,\infty}([0,T]\times[0,X],[0,1]),
& \qquad
\ln u \in L^{\infty}([0,T], W^{1,\infty}([0,X],\mathbb{R})).
\end{eqnarray*}
Furthermore, for any $\psi \in C^1([0,1],\mathbb{R})$, setting
$$F'(c) =(H(c)\,G(c))^{-1} \hbox{ and } Q'=H\,\psi' + h'\,\psi,$$
$(c,u)$ satisfies:
\begin{eqnarray}
\partial_x(u\,\psi(c)) + \partial_t Q(c) = 0,
\quad
\partial_x(u\,G(c) ) = 0,\label{eqeisc} \\ \nonumber \\
\partial_t c + u_b(t)\,G(c_b(t))\,\partial_x F(c) = 0. \label{eqRIw}
\end{eqnarray}
\end{theorem}
{\bf \textit{Proof: }}
We build a solution using the Riemann invariants and check
that it is an entropy solution. Next, we prove uniqueness.\\
Using the Riemann invariant $W=u\,G(c)$ ($\partial_xW= 0$) and the boundary
data we define $u$ by:
\begin{eqnarray*}
\label{numero}
u(t,x)= \frac{u_b(t)\, G(c_b(t))}{G(c(t,x))},
\end{eqnarray*}
so $u$ is smooth with respect to $x$.
%
Then, the first equation of (\ref{smooth}) can be rewritten as follows:
\begin{eqnarray} \label{eqtransportc}
\partial_t c+ \mu \,\partial_x c=0,
&
\quad \mbox{ with } \quad
&
\displaystyle
\mu = \lambda^{-1}= \frac{u}{H(c)}
= \frac{u_b(t)\, G(c_b(t))}{H(c)\,G(c)} = \mu(t,c).
\end{eqnarray}
We solve (\ref{eqtransportc}) supplemented by initial-boundary value data
$(c_0,c_b)$ by the standard characteristics method. Let us define, for a
given $(\tau,x)$, $X(\cdot,\tau,x)$ as the solution of:
\begin{eqnarray*}
\frac{dX(s,\tau,x)}{ds}= \mu(s,c(s,X(s,\tau,x))),
& \quad &
X(\tau,\tau,x)=x.
\end{eqnarray*}
Since $\displaystyle \frac{dc}{ds}(s,X(s,\tau,x)) =0$ from (\ref{eqtransportc}), we have
\begin{eqnarray*}
X(s,\tau,x) = x - b(s,\tau)\,F'(c(\tau,x))
&
\mbox{ with } &
b(s,\tau)=\displaystyle \int_s^\tau u_b(\sigma) \,G(c_b(\sigma))\,d\sigma.
\end{eqnarray*}
Now, for some $T\in[0,T_0]$ defined later on, we split $\Omega= [0,T]\times[0,X]$
according to the characteristic line $\Gamma$ issuing from the corner $(0,0)$,
i.e. we define the sets
$\Omega^\pm = \{(t,x) \in \Omega,\; \pm (x-X(t,0,0)) \geq 0 \}$.
\\
Since $\partial_x X(t,0,x)=1- b(t,0)\,F''(c_0(x))\,\partial_x c_0(x)$,
$b(0,0)=0$ and $ b(.,0) \in W^{1,\infty}(\Omega^+)$,
the mapping $x \mapsto X(t,0,x)$ is a Lipschitz diffeomorphism for $0\leq
t\leq T$ with $T\in]0,T_0]$ small enough. Then we define on $\Omega^+$, for each
$t\in [0,T]$, $\xi(t,x)$ such that $X(t,0,\xi(t,x))=x$.
Then we have $ c(t,x)= c_0(\xi(t,x))$ on $\Omega^+$.
Furthermore $ \partial_t \xi = -\partial_s X/\partial_x X$ and thus $c$ is
Lipschitz continuous in time and space on $\Omega^+$.
\\
We work in a similar way on $\Omega^-$ and get $c \in W^{1,\infty}(\Omega^-)$.
Since $c$ is continuous on $\Gamma$ thanks to the compatibility condition $c_0(0)=c_b(0)$,
we have $c \in W^{1,\infty}(\Omega)$.
\\
By construction $(c,u)$ satisfies (\ref{smooth}) rewritten as follows:
\begin{eqnarray*}
\partial_x \ln u = - \partial_x g(c),
& &
u\,\partial_x c+ H \,\partial_t c =0.
\end{eqnarray*}
These equations imply:
$$
\partial_x u = - u\, \partial_x g(c)
= - u\, g'(c)\,\partial_x c
= - u \,g'(c)\, \left(-\frac{H(c)}{u}\, \partial_t c \right)
= - h'(c)\, \partial_t c= - \partial_t h(c).
$$
Now we check that $(c,u)$ satisfies (\ref{eqeisc}).
Let $\psi$ be a $C^1$ function. Using the identity
$ Q'=h'\psi + H \psi'$
and the previous equations we have:
\begin{eqnarray*}
\partial_x(u\,\psi(c)) + \partial_t Q(c)
& = &
\psi\, \partial_x u + u \,\psi' \,\partial_x c + Q' \,\partial_t c
= \psi \,\left( \partial_x u + h'\,\partial_t c \right)
+\psi'\, \left( u\partial_x c + H\,\partial_t c \right)
\\
&= &\psi \times 0 + \psi' \times 0 = 0.
\end{eqnarray*}
Moreover, (\ref{eqeisc}) implies (\ref{ineqentropies}), so
$(c,u)$ is an entropy solution of System (\ref{sysad}).
\\
We now prove the uniqueness of such a weak entropy solution.\\
Precisely, if $ c \in W^{1,\infty}([0,T]\times[0,X],[0,1])$ and
$\ln u \in L^\infty ((0,T), W^{1,\infty}(0,X))$
satisfy (\ref{ineqentropies}) in $\mathcal{D}'([0,T[\times[0,X[)$ with
initial-boundary data $c_0,c_b,u_b$, then we show that $(c,u)$ is necessarily the
solution previously built by the method of characteristics.
\\
Choosing the convex functions $\psi(c) = \pm 1$ and $\psi(c) = \pm c$ we obtain
(\ref{sysad}).
The main ingredient to conclude the proof is the fact that $u$ admits
a classical partial derivative only with respect to $x$. Thus classical
computations with smooth functions to obtain (\ref{smooth})
as in the proof of Proposition \ref{PropRi} are still valid.
Now $(c,u)$ satisfies (\ref{smooth}), which implies
from the beginning of the proof of Theorem \ref{thewessc}
that $(c,u)$ is our previous solution.
\cqfd
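The characteristic construction of this proof is straightforward to implement. The sketch below assumes, for illustration only, the inert-gas/Langmuir model $q_1=0$, $q_2(c)=(1-c)/(2-c)$, constant compatible boundary data $c_b\equiv c_0(0)$, $u_b\equiv 1$, and the smooth increasing initial datum $c_0(x)=0.2+0.3x$; on $\Omega^+$ the concentration is recovered by solving $x=\xi+b(t)\,F'(c_0(\xi))$ by bisection:

```python
import math

# Assumed illustrative model: inert gas (q1 = 0), Langmuir isotherm with
# Q = K = 1; then h'(c) = -1/(2 - c)^2 and H(c) = 1 + c/(2 - c)^2.
def H(c): return 1.0 + c / (2.0 - c) ** 2

def g(c, n=2000):                               # g' = -h'/H = 1/((2-c)^2 + c)
    dx = c / n
    ys = [1.0 / ((2.0 - k * dx) ** 2 + k * dx) for k in range(n + 1)]
    return dx * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

def G(c):  return math.exp(g(c))
def Fp(c): return 1.0 / (H(c) * G(c))           # F'(c) = 1/(H G)

def c0(x): return 0.2 + 0.3 * x                 # smooth increasing initial data
cb, ub = c0(0.0), 1.0                           # compatible constant boundary data

def b(t): return t * ub * G(cb)                 # b(t,0) = int_0^t u_b G(c_b)

def concentration(t, x):                        # solve x = xi + b(t) F'(c0(xi))
    lo, hi = 0.0, 1.0
    for _ in range(80):                         # bisection: the map is increasing
        mid = 0.5 * (lo + hi)
        if mid + b(t) * Fp(c0(mid)) < x:
            lo = mid
        else:
            hi = mid
    return c0(0.5 * (lo + hi))
```

For small $t$ the implicit map $\xi\mapsto \xi+b(t)F'(c_0(\xi))$ remains increasing, so the bisection is well posed before any shock forms.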
\begin{remark}~
\begin{enumerate}
\item
Notice that $T$ and $X$ depend only on $ \|\ln(u_b) \|_{L^\infty}$, $ \|
c_b \|_{W^{1,\infty}}$ and $ \| c_0 \|_{W^{1,\infty}}$. Thus,
if $\displaystyle \left(u_b^ \varepsilon \right)_{0< \varepsilon \leq 1}$ is a sequence of
boundary velocity data such that $\displaystyle \left(\ln u_b^ \varepsilon \right)$ is
uniformly bounded in $L^\infty(0,T_0) $, and if $(c^ \varepsilon_0), (c^ \varepsilon_b)$ are
some initial and boundary concentration data uniformly bounded in $W^{1,\infty}$
with the compatibility condition at the corner $c_0(0)=c_b(0)$, then there
exist $T>0$ and $X>0$ and Lipschitz bounds for $c^ \varepsilon, \ln u^ \varepsilon$
on $[0,T]\times [0,X]$ independent of $ \varepsilon$.
\item
As in \cite{BGJ06}, we have a global solution with smooth concentration
if $\lambda$ is genuinely nonlinear (for instance an inert case and a Langmuir
isotherm), with monotonicity assumptions on $c_0$ and $c_b$.
\end{enumerate}
\end{remark}
\subsection{Strong stability with respect to velocity}\label{ssStcs}
In case of a Lipschitz continuous concentration, we now give a strong stability
result for the concentration with respect to a weak limit of the boundary
velocity.
\begin{theorem}[Strong stability for {\it smooth} concentration]~\\
\label{thess}
Let $T_0 > 0$, $ X> 0$,
$ c_0 \in W^{1,\infty}([0,X],[0,1])$,
$ c_b \in W^{1,\infty}([0,T_0],[0,1])$ such that $c_0(0)= c_b(0)$,
and $(\ln u_b^ \varepsilon)_{0< \varepsilon\leq 1}$ a bounded
sequence in $L^\infty(0,T_0)$.
Then, there exists $T\in ]0, T_0[$
such that System (\ref{sysad}) admits
a unique weak entropy solution
$(c^ \varepsilon,u^ \varepsilon)$
with $ c^ \varepsilon \in W^{1,\infty}([0,T]\times[0,X],[0,1]),
\ln u^ \varepsilon \in L^{\infty}([0,T], W^{1,\infty}([0,X],\mathbb{R}))$
and
with initial and boundary values:
\begin{equation} \label{sysad0eps}
\left\{ \begin{array}{ccl}
\vspace{2mm}c^ \varepsilon(0,x)&=&c_0(x) \in [0,1], \quad x > 0,\\
\vspace{2mm}c^ \varepsilon(t,0) &=&c_b(t) \in [0,1],\quad t>0,\\
u^ \varepsilon(t,0)&=& \displaystyle u_b^ \varepsilon\left(t \right)>0,\quad t>0.
\end{array}\right.
\end{equation}
If $(u_b^ \varepsilon)$ converges towards $ \overline{u}_b$
in $L^\infty(0,T_0)$ weak$-*$ when $ \varepsilon$ goes to $0$,
then $(c^ \varepsilon)$ converges in \newline $L^\infty([0,T]\times[0,X])$ towards the
unique
smooth solution of
\begin{eqnarray} \label{eqclim}
\partial_t c + \overline{u}_b(t)\, G(c_b(t))\, \partial_x F(c)= 0,
\qquad c(t,0)=c_b(t), \; c(0,x)=c_0(x).
\end{eqnarray}
Furthermore we have:
\begin{eqnarray*}
\lim_{ \varepsilon \rightarrow 0}
\left \| u^ \varepsilon(t,x) - u_b^ \varepsilon(t)\frac{ G(c_b(t))}{G(c(t,x))}
\right\|_{L^\infty ([0,T]\times[0,X])} = 0.
\end{eqnarray*}
\end{theorem}
{\bf \textit{Proof: }}
Thanks to Theorem \ref{thewessc}, there exists $T>0$ such that
System (\ref{sysad}), with initial and boundary values (\ref{sysad0eps})
admits the unique weak entropy solution $(c^ \varepsilon,u^ \varepsilon)$
with smooth concentration in the previous sense
on $[0,T]\times[0,X]$. \\
Since $(c^ \varepsilon)$ is bounded in $W^{1,\infty}$, up to a subsequence,
$(c^ \varepsilon)$ converges strongly in $L^\infty $ to $c$.
Using (\ref{eqRIw}) in conservative form, we can pass to the limit
and get (\ref{eqclim}).
Problem (\ref{eqclim}) has a unique solution by the method of characteristics.
Thus, the whole sequence $(c^ \varepsilon)$ converges. We recover the last limit
for $u^ \varepsilon$ thanks to $\partial_x (u\,G(c)) =0$.
\cqfd
Notice that if $\overline{u}_b$ is a constant function,
for instance $u_b^ \varepsilon(t)=u_b(t/ \varepsilon)$ with $u_b$ periodic,
we can compute the concentration using only a constant velocity
(the mean velocity), as in liquid chromatography.
\\
\underline{An example from geometric optics}:
if $u_b^ \varepsilon(t) =\displaystyle u_b\left(t,\frac{t}{ \varepsilon} \right)$
where $u_b(t,\theta) \in L^\infty((0,T),C^0(\mathbb{R}/\mathbb{Z}))$, $\inf u_b >0$,
we have a similar result with Equation
(\ref{eqclim}) for $c$
where
$\displaystyle
\overline{u}_b(t)= \int_0^1 u_b(t,\theta)\, d\theta
$
and a profile $U$:
\begin{eqnarray*}
\lim_{ \varepsilon \rightarrow 0}
\left \| u^ \varepsilon(t,x) - U\left(t,x,\frac{t}{ \varepsilon} \right) \right\|_{L^\infty}= 0&
\mbox{ where }
U(t,x,\theta)= \displaystyle u_b(t,\theta) \frac{G(c_b(t))}{G(c(t,x))}.
\end{eqnarray*}
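The weak-$*$ convergence underlying this example is easy to observe numerically. In the sketch below, the particular data $u_b(\theta)=\frac32+\frac12\sin(2\pi\theta)$ (mean $\frac32$) and the test function $\varphi(t)=e^{-t}$ are illustrative assumptions; the pairing against $u_b^\varepsilon$ approaches the pairing against the mean at rate $O(\varepsilon)$:

```python
import math

# Illustrative oscillating data (assumed for the sketch, not from the text):
# u_b(theta) = 3/2 + (1/2) sin(2 pi theta), 1-periodic, with mean 3/2.
def ub(theta): return 1.5 + 0.5 * math.sin(2.0 * math.pi * theta)
ub_bar = 1.5

def phi(t): return math.exp(-t)                 # smooth test function on [0, 1]

def pairing(eps, n=200000):                     # int_0^1 phi(t) ub(t/eps) dt
    dt = 1.0 / n
    s = 0.5 * (phi(0.0) * ub(0.0) + phi(1.0) * ub(1.0 / eps))
    for k in range(1, n):
        t = k * dt
        s += phi(t) * ub(t / eps)
    return s * dt

limit = ub_bar * (1.0 - math.exp(-1.0))         # pairing against the mean
err1 = abs(pairing(0.1) - limit)
err2 = abs(pairing(0.01) - limit)
```

Refining $\varepsilon$ by a factor of $10$ reduces the error by roughly the same factor, as expected from integration by parts.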
\section{Front Tracking Algorithm}\label{sFTA}
In Section \ref{skf}, where $c$ is smooth and $ \ln u_b $ is in
$L^\infty(0,T)$, we have seen that there exists a {\it smooth} function $v$
such that
\begin{eqnarray} \label{udecomposition}
u(t,x) & = & u_b(t)\,v(t,x).
\end{eqnarray}
Furthermore $c$ satisfies the {\it scalar} conservation law (\ref{eqRIw}).
For data that are merely $BV$ we cannot expect to obtain
such a scalar conservation law for the concentration, except in the case of
linear isotherms. In that case, the scalar conservation law (\ref{eqRIw}) and
System (\ref{sysad}) have the same solution of the Riemann Problem, but linear
isotherms are of little interest from the chemical engineering point of view.
The first interesting case is that of an inert gas and a
Langmuir isotherm, first studied mathematically in \cite{BGJ06}.
Nevertheless we expect that (\ref{udecomposition}) is still true with $v \in
BV$. In \cite{BGJ06,BGJ07} we already obtained $BV$ regularity with respect to
$x$ with a Godunov scheme. To get $BV$ regularity with respect to $t$ we will
use a more precise algorithm to study wave interactions, namely a Front Tracking
Algorithm (FTA).
The Front Tracking method for scalar conservation laws was introduced by
Dafermos, \cite{D72}. The method was extended to genuinely nonlinear systems of
two conservation laws by DiPerna \cite{D76}. For our purpose, we do not use the
generalisation to genuinely nonlinear systems of any size by Bressan \cite{Br92} or Risebro
\cite{R93}.
The FTA is much more complicated when an eigenvalue is piecewise genuinely nonlinear, see
\cite{AM07, AM04, GLF07}. We therefore restrict ourselves to the case where $\lambda$ is
genuinely nonlinear, which allows us to treat some relevant cases from the point of view of chemical
engineering, like an inert gas with a Langmuir isotherm or two active gases with a binary Langmuir
isotherm. For this purpose we work in the framework of the now
classical book of Bressan \cite{Br00}. In this framework we assume $f'' \geq
0$; then a Riemann
problem produces only two waves:
\begin{enumerate}
\item
a contact discontinuity with speed $0$,
\item
a rarefaction wave with speed $\lambda > 0$
\\
or a shock wave with speed between $\lambda^-$ and $\lambda^+$, characteristic speeds associated to
the left and right states, respectively.
\end{enumerate}
Let $\delta >0$. A $\delta$-approximate Front Tracking solution
of System (\ref{sysad}) is a pair of piecewise constant functions $c^\delta(t,x),
u^\delta(t,x)$ whose jumps are located along finitely many straight lines
$t=t_\alpha(x)$ in the $t-x$ plane and which approximately satisfy the entropy
conditions. For each $x >0$ and $\psi'' \geq 0$, one should thus have an
estimate of the form:
\begin{eqnarray}
\label{weakEC}
\displaystyle \sum_\alpha
\left( [u^\delta\,\psi(c^\delta)] - \frac{d t_\alpha}{dx}[Q(c^\delta)] \right) (t_\alpha,x)& \leq &
{\cal O}(\delta),
\end{eqnarray}
where $[u]=u^+-u^-$ denotes the jump across a jump line,
and the sum is taken over all jumps at fixed $x$.
Inequality (\ref{weakEC}) implies that $(c^\delta,u^\delta)$ is ``almost an entropy
solution'':
\begin{equation}\label{IEA}
\partial_x \left(u^\delta\,\psi(c^\delta)\right) +\partial_t Q(c^\delta) \leq {\cal O}(\delta).
\end{equation}
This is enough to obtain an entropy solution ``issued from the FTA'' when $\delta$ goes to zero.
Since we only want to use piecewise constant functions, it is convenient to
approximate a continuous rarefaction wave by a piecewise constant function.
For this purpose, the rarefaction curve is discretized with a step of order
$\delta$, and then (\ref{weakEC}) still holds.
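This discretization of a rarefaction fan can be sketched as follows (the inert-gas/Langmuir model with $Q=K=1$ is again an illustrative assumption). Since $\partial_x w=0$ across a smooth fan, the intermediate velocities follow from $u\,G(c)=u^+G(c^+)$, and each small jump is assigned an approximate characteristic speed $z=H(c)/u$:

```python
import math

# Assumed illustrative model: inert gas (q1 = 0), Langmuir isotherm, Q = K = 1.
def H(c): return 1.0 + c / (2.0 - c) ** 2

def g(c, n=2000):                               # g' = -h'/H = 1/((2-c)^2 + c)
    dx = c / n
    ys = [1.0 / ((2.0 - k * dx) ** 2 + k * dx) for k in range(n + 1)]
    return dx * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

def G(c): return math.exp(g(c))

cm, cp, up, delta = 0.2, 0.6, 1.0, 0.05         # rarefaction data and step

n = max(1, math.ceil((cp - cm) / delta))        # jumps of size at most ~delta
cs = [cm + (cp - cm) * k / n for k in range(n + 1)]

fronts = []                                     # (speed z, left state, right state)
for cl, cr in zip(cs[:-1], cs[1:]):
    cmid = 0.5 * (cl + cr)
    u = up * G(cp) / G(cmid)                    # w constant: u G(c) = u^+ G(c^+)
    fronts.append((H(cmid) / u, cl, cr))        # approximate speed z = H/u

speeds = [z for z, _, _ in fronts]
```

The front speeds come out strictly increasing, so the piecewise constant fan is ordered, and each jump stays below the prescribed step $\delta$.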
We now briefly describe an algorithm which generates these Front Tracking approximations. The
construction starts on the initial line $x=0$ and the boundary $t=0$ by taking piecewise constant
approximations of the initial values $c_b(t), u_b(t) $ and of the boundary values $ c_0(x)$.
Let $ t_1 <\cdots < t_N$, $\tilde{x}_{1} < \cdots <\tilde{x}_M$ be the points where the initial-boundary
values are discontinuous. For each $\alpha = 1,\cdots,N$, the Riemann problem generated by the jump
of the initial constant values at $(t_\alpha,x=0)$ is approximately solved on a forward neighborhood
of $(t_\alpha,0)$ in the $t-x$ plane by a piecewise constant function, invariant on each line
$ t-t_\alpha= a\,x$, $a > 0$. Notice that the boundary is characteristic, hence we
have only one wave, associated with the speed $\lambda$, in the corner $(0,0)$.
The approximate solution $(c^\delta,u^\delta)$ can then be prolonged until a value $x_1 > 0$ is reached,
where the first
set of interactions between two wave-fronts takes place. If $x_1 > \tilde{x}_{1}$ we first have to
solve the characteristic boundary Riemann problem at $ (t=0,x=\tilde{x}_{1})$.
Since $(c^\delta,u^\delta)(\cdot,x_1)$ is still a piecewise constant function, the corresponding
Riemann problems can
again be approximately solved within the class of piecewise constant functions. The solution is
then continued up to a value $x_2$ where the next characteristic boundary Riemann problem occurs or
the second set of wave interactions takes place, and so on.
According to this algorithm, contact discontinuity fronts travel with speed zero, shock fronts
travel exactly with Rankine-Hugoniot speed, while rarefaction fronts travel with an approximate
characteristic speed. However, one exception to this rule must be allowed if three or more fronts
meet at the same point. To avoid this situation, we slightly change the speed of one
of the incoming shock or rarefaction fronts. Of course, this change of speed can be chosen
arbitrarily small, so Inequality (\ref{weakEC}) still holds.
Notice that, for a $2\times 2 $ system, the number of wave-fronts cannot blow up at a finite
$x>0$: DiPerna shows in \cite{D76} that the process of regenerating the solution
by solving local Riemann problems yields an approximate solution within the class of piecewise
constant functions that is globally defined and contains only a finite number of
discontinuities in any compact subset of the $t-x$ quarter plane
$ t\geq 0, x\geq 0$. We therefore do not need the non-physical fronts used in \cite{Br00} for general $
n\times n $ systems with $ n\geq 3$.
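The FTA construction above repeatedly replaces a rarefaction by a fan of small fronts. The following sketch (an illustration, not part of the paper) performs this discretization for a hypothetical increasing Riemann invariant $g$, using the relation $[L]=-[g]$ along a rarefaction:

```python
import math

def discretize_rarefaction(c_left, c_right, delta, g):
    """Replace the rarefaction joining c_left < c_right by constant states
    whose jumps in the Riemann invariant g are at most delta.
    Returns a list of (c, L) states with L normalized so that L(c_left) = 0;
    along a rarefaction the jump of L is -[g].
    NOTE: g is a hypothetical increasing invariant, used for illustration."""
    assert c_left < c_right
    total = g(c_right) - g(c_left)        # total strength of the rarefaction
    n = max(1, math.ceil(total / delta))  # number of small fronts
    states = []
    for k in range(n + 1):
        gk = g(c_left) + k * total / n
        # invert g on [c_left, c_right] by bisection (g is increasing)
        lo, hi = c_left, c_right
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if g(mid) < gk:
                lo = mid
            else:
                hi = mid
        c = 0.5 * (lo + hi)
        states.append((c, -(g(c) - g(c_left))))
    return states
```

Each of the roughly $[g]/\delta$ resulting fronts then travels with an approximate characteristic speed, as described above.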
\section{About the shock and rarefaction curves}\label{sRP}
In this section we state the assumptions under which we use the FTA
with large data. Precisely, we work in the classical hyperbolic setting,
namely with eigenvalues that are either linearly degenerate or genuinely nonlinear. We assume:
\begin{eqnarray} \label{Hconvex}
\lambda=\frac{H(c)}{u}
\mbox{ is {\bf genuinely nonlinear}}
& \quad &
\mbox{ i.e }
f \mbox{ is {\bf convex} on } [0,1].
\end{eqnarray}
Actually $\lambda$ is genuinely nonlinear for $f''\neq 0$, but since
$f=c_1q_2-c_2q_1$ (see (\ref{eqf})) we can assume that
$f'' > 0$ exchanging the gas labels $1$ and $2$ if necessary.
\\
Our analysis of wave interactions in Section \ref{sIE} is more precise with monotone
$\lambda$-wave curves, so we also assume:
\begin{eqnarray} \label{Hmonotonous}
\mbox{$\lambda$-wave curves are {\bf monotone}}.
\end{eqnarray}
To state this last assumption precisely, let us introduce some notation. Let $(c_-,L_-)$ be a left
constant state connected to a right constant state $(c_+,L_+)$ by a $\lambda$-wave curve. In the
genuinely nonlinear case, under Assumption (\ref{Hconvex}), the $\lambda$-wave curve is a rarefaction
curve with $ c_- < c_+$ or a shock curve with $c_-> c_+$. The sign of $[c]= c_+-c_-$ comes from the
general study of the Riemann problem in \cite{BGJ07}. From the Riemann invariant $w=\ln u + g(c) $
and the Rankine-Hugoniot conditions a $\lambda$-wave curve can be written as follows (see
\cite{BGJ07}):
\begin{eqnarray}
\label{eqlwave}
[L]= L_+-L_- = \ln u_+ - \ln u_-
& = &
T(c_+,c_-) =
\left \{ \begin{array}{ll}
- [g]= -(g(c_+) - g(c_-)) & \mbox{ if } c_- < c_+ \\
S(c_+,c_-) & \mbox{ else }
\end{array} \right.
\end{eqnarray}
We give an explicit formula for $S$ in Lemma \ref{lRH}.
\\
Notice that we use only one Riemann invariant, namely $c$, to parameterize
$\lambda$-wave curves. Indeed $L=\ln u$ and $c$ have quite different behaviors,
as seen in \cite{BGJ06,BGJ07} and in this paper. Furthermore, we can give simple
criteria for $\lambda$-wave curves to be monotone. For instance, since
$g'=-h'/H$, the rarefaction curve is monotone if and only if $h$ is
monotone.
A chemical example, investigated in \cite{BGJ06}, is the case of an inert gas
($q_1=0$) and an active gas with a Langmuir isotherm:
$ \displaystyle q_2^*(c_2) = Q_2 \frac{K_2 c_2}{1+K_2 c_2}$.
For this case we have
\begin{eqnarray}
\label{likeLangmuir}
f'' > 0,\qquad
& h'<0, \qquad &
\displaystyle \frac{\partial S}{\partial c_-} \geq 0
\geq \frac{\partial S}{\partial c_+}.
\end{eqnarray}
The first condition of (\ref{likeLangmuir}) gives us (\ref{Hconvex}) and the
last one gives us (\ref{Hmonotonous}).
\\
Notice that if we exchange labels $1$ and $2$ for gas, Inequalities
(\ref{likeLangmuir}) simply become:
$$
f''< 0 < h', \quad
\frac{\partial S}{\partial c_-} \leq 0
\leq \frac{\partial S}{\partial c_+}.
$$
Let us give some isotherm examples such that
(\ref{likeLangmuir}) is satisfied.
\begin{proposition} \label{PHtrue}
For the following examples, Assumptions (\ref{Hconvex}), (\ref{Hmonotonous}) are
valid:
\begin{enumerate}
\item
one gas is inert: $q_1=0$, and the other has a concave isotherm: $q_2^{''} \leq
0$,
\item
two active gases with linear isotherms:
$\displaystyle q_i^*(c_1,c_2)
=\displaystyle K_i c_i,
$
$i=1,2$,
\item
two active gases with binary Langmuir isotherms:
$ \displaystyle q_i^*(c_1,c_2)
=\displaystyle \frac{Q_i K_i c_i}{1+K_1 c_1 +K_2c_2},
$
$i=1,2$,
where positive constants
$Q_1, Q_2 ,K_1 \geq K_2 $ satisfy:
$Q_1 K_1 < Q_2 K_2$.
\end{enumerate}
Furthermore, for two active gases with binary Langmuir isotherms, $\lambda$ is
genuinely nonlinear, i.e. (\ref{Hconvex}) is satisfied, as soon as $Q_1 K_1 \neq Q_2
K_2$.
\end{proposition}
The first case is the most classical one: only one gas is active and its
isotherm has no inflexion point, as for instance the Langmuir isotherm.
\\
The second case is less interesting in chemistry and only valid when
concentrations are near constant states.
\\
For the third case, notice that $K_1 \geq K_2$ is not really an assumption
(exchange the labels if necessary).\\[2mm]
{\bf \textit{ Proof of Proposition} \ref{PHtrue}:} we use some technical Lemmas
postponed to Subsection \ref{teclem}. The point is to satisfy
(\ref{likeLangmuir}).
{\it 1.} \underline{Case with an inert gas:}
we have $h=q_2$, $f(c)=-c \, h(c)$, $f'=-h-ch'$, $f''=-2h'-c \, h''$, which
implies $ h' =q_2' \leq 0$, $h''=q_2'' \leq 0$ and then $f''\geq 0$.
We conclude thanks to Lemmas \ref{lc+} and \ref{Lpourinertgas}.
\medskip
{\it 2.} \underline{Case with linear isotherms:}
linear isotherms are
$q_1(c)=K_1 c$, $q_2(c)=K_2(1-c)$ with $K_1 \geq 0, \, K_2 \geq 0$
then $q'_1(c)=K_1 \geq 0$, $q'_2(c)=-K_2 \leq 0$,
$h'(c)=q'_1(c)+q'_2(c)=K_1-K_2$, $f''(c)=2(K_2-K_1)$.
We assume $K_1 \leq K_2$, then we have $h' \leq 0 \leq f''$. Since $q_i" = 0$,
$i=1,2$, we conclude thanks to Lemmas \ref{lc+} and \ref{derivee}.
\medskip
{\it 3.} \underline{Case with a binary Langmuir isotherm:}
we have
$\displaystyle q_1(c)=\displaystyle \frac{Q_1 K_1 c}{D}, \, q_2(c)=\displaystyle \frac{Q_2 K_2(1-c)}{D}$
where
$D=1+K_1 c +K_2(1-c).$
Then
$ \displaystyle q'_1(c)=\displaystyle \frac{Q_1 K_1(1+K_2)}{D^2} \geq 0, \, q'_2(c)=-\displaystyle \frac{Q_2
K_2(1+K_1)}{D^2} \leq 0, $
\\
$h'(c)=q'_1(c)+q'_2(c) \leq 0$ if and only if $Q_1 K_1(1+K_2) \leq Q_2
K_2(1+K_1)$,
\\
$q''_1(c)=\displaystyle \frac{2 Q_1 K_1(1+K_2)(K_2-K_1)}{D^3} \leq 0$ if and only if $K_1
\geq K_2$,
\\
$q''_2(c)=\displaystyle \frac{2 Q_2 K_2(1+K_1)(K_1-K_2)}{D^3} \geq 0$ if and only if $K_1
\geq K_2$,
\\
$f''(c)=\displaystyle \frac{2(Q_2 K_2-Q_1 K_1)(1+K_1)(1+K_2)}{D^3} \geq 0$ if and only if
$Q_2 K_2 \geq Q_1 K_1$.
\\
Since $Q_1K_1 \leq Q_2 K_2$,
we get $f''\geq 0$ and
$\displaystyle \frac{Q_1}{Q_2}\leq \frac{K_2}{K_1}$.
Moreover $\displaystyle 1 \leq \frac{1+K_1}{1+K_2}$ because $K_1\geq K_2$,
so we have
$\displaystyle \frac{Q_1}{Q_2}\leq \frac{K_2}{K_1}\,\frac{1+K_1}{1+K_2}$, i.e. $h' \leq 0$.
Now we conclude with Lemmas \ref{lc+} and \ref{derivee}.
\cqfd
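As a quick numerical sanity check (not in the original text), the sign conditions (\ref{likeLangmuir}) can be tested by finite differences for the binary Langmuir isotherms of the proof, with the convention $f=q_1-c\,h$ used there; the parameter values in the test are illustrative.

```python
def langmuir_pair(Q1, K1, Q2, K2):
    """Binary Langmuir isotherms in the single variable c = c_1 (c_2 = 1 - c),
    as in case 3 of the proof above."""
    D = lambda c: 1.0 + K1 * c + K2 * (1.0 - c)
    q1 = lambda c: Q1 * K1 * c / D(c)
    q2 = lambda c: Q2 * K2 * (1.0 - c) / D(c)
    h = lambda c: q1(c) + q2(c)
    f = lambda c: q1(c) - c * h(c)   # convention f = q1 - c h of the proofs
    return h, f

def check_signs(h, f, n=200, eps=1e-4):
    """Finite-difference check of h' <= 0 and f'' >= 0 on (0, 1)."""
    for i in range(1, n):
        c = i / n
        hp = (h(c + eps) - h(c - eps)) / (2.0 * eps)           # ~ h'(c)
        fpp = (f(c + eps) - 2.0 * f(c) + f(c - eps)) / eps**2  # ~ f''(c)
        if hp > 1e-8 or fpp < -1e-8:
            return False
    return True
```

In agreement with Proposition \ref{PHtrue}, the check succeeds when $K_1\geq K_2$ and $Q_1K_1<Q_2K_2$, and fails when $Q_1K_1>Q_2K_2$.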
\subsection{Technical lemmas about shock curves}\label{teclem}
We express the shock curves as follows.
\begin{lemma}\label{lRH}
We have
$
\exp\{S(c_+,c_-)\}
=\displaystyle \frac{u_+}{u_-}
=\displaystyle \frac{\alpha+h_+}{\alpha+h_-},
$
where
$h_\pm=h(c_\pm)$ and $\alpha= \displaystyle \frac{[f]}{[c]}+1$.
\end{lemma}
{\bf \textit{Proof: }}
first, from the Rankine-Hugoniot conditions:
$\displaystyle \frac{[uc]}{[c+q_1(c)]}= \displaystyle \frac{[u]}{[h]}$,
i.e.
$[h]=\displaystyle \frac{[u][c+q_1(c)]}{[uc]}$, we obtain
\begin{eqnarray} \label{eqRH1}
\displaystyle \frac{u_+}{u_-} & = &
\displaystyle \frac{[c+q_1(c)]-c_-[h]}{[c+q_1(c)]-c_+[h]}
\end{eqnarray}
where $[c]=c_+-c_-$ and $[h]=h(c_+)-h(c_-)=h_+-h_-$,
and we get (\ref{eqRH1}) thanks to the following computations:
\begin{eqnarray*}
[c+q_1(c)]-c_-[h] & = &
[c+q_1(c)]-c_-\displaystyle \frac{[u][c+q_1(c)]}{[uc]}
=\displaystyle \frac{[c+q_1(c)]}{[uc]} \left( [uc]-c_-[u] \right)
\\
& =& \displaystyle \frac{[c] u_+}{[uc]}[c+q_1(c)],
\end{eqnarray*}
\begin{eqnarray*}
[c+q_1(c)]
- c_+ [h]
& = &
[c+q_1(c)]-c_+ \displaystyle \frac{[u][c+q_1(c)]}{[uc]}
=\displaystyle \frac{[c+q_1(c)]}{[uc]} \left( [uc]-c_+[u] \right)
\\ & =& \displaystyle \frac{[c] u_-}{[uc]}[c+q_1(c)].
\end{eqnarray*}
Rewriting (\ref{eqRH1}) we get
\begin{eqnarray*}
\displaystyle \frac{u_-}{u_+}
& = &
\displaystyle \frac{[c+q_1(c)]-c_+[h]}{[c+q_1(c)]-c_-[h]}
= \displaystyle \frac{[q_1]+[c]-c_+[h]}{[q_1]+[c]-c_-[h]}
=\displaystyle \frac{[q_1]+[c]+c_+(h_--h_+)}{[q_1]+[c]+c_-(h_- - h_+)}
\\
& = &
\displaystyle \frac{[q_1]-c_+h_+ + [c]+h_-c_+}{[q_1]+c_-h_-+[c]-h_+c_-}
=\displaystyle \frac{[f]+[c]+h_-[c]}{[f]+[c]+h_+ [c]}
= \displaystyle \frac{\alpha+h_-}{\alpha+h_+},
\end{eqnarray*}
which concludes the proof.
\cqfd
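The chain of algebraic identities in the proof can be verified numerically for arbitrary (here, made-up) isotherms: with $f=q_1-c\,h$ and $\alpha=[f]/[c]+1$, the Rankine-Hugoniot ratio of (\ref{eqRH1}) must coincide with the ratio built from $\alpha$.

```python
def ratio_identities(q1, q2, cm, cp):
    """Compare the two expressions of the velocity ratio in Lemma lRH:
    ([c]+[q1]-c_-[h]) / ([c]+[q1]-c_+[h])   (= u_+/u_-, from (eqRH1))
    versus (alpha+h_+)/(alpha+h_-), with alpha = [f]/[c]+1, f = q1 - c h."""
    h = lambda c: q1(c) + q2(c)
    f = lambda c: q1(c) - c * h(c)
    jc, jq1, jh = cp - cm, q1(cp) - q1(cm), h(cp) - h(cm)
    alpha = (f(cp) - f(cm)) / jc + 1.0
    lhs = (jc + jq1 - cm * jh) / (jc + jq1 - cp * jh)
    rhs = (alpha + h(cp)) / (alpha + h(cm))
    return lhs, rhs
```

The identity is purely algebraic, so the two values agree for any choice of isotherms and states.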
We need to know the sign of $\alpha + h_\pm$ before studying the sign of $\displaystyle
\frac{\partial S}{\partial c_\pm}$.
\begin{lemma}\label{signe}
If $h' \leq 0$ and $ c_+ < c < c_-$ then
$\displaystyle\alpha +h(c_+)\geq
\alpha +h(c)\geq \alpha +h(c_-) >0$.
\end{lemma}
{\bf \textit{Proof: }}
since $h' \leq 0$ and $c_+ < c_-$ we have $h(c_+) \geq h(c_-)$ and it is enough
to show that
$\displaystyle \frac{[f]}{[c]}+1 +h(c_-) >0$.
This inequality is equivalent to $[f]+[c] +[c]h(c_-) <0$ because
$[c]=c_+-c_-<0$.
Since $f(c)=q_1(c)-ch(c)$ the inequality is equivalent to $[q_1]+[c] < c_+ [h]$.
We know that $q'_1 \geq 0$, $ c_+< c_-$ and
$h' \leq 0$, hence $[q_1] \leq 0, \, [c]<0, \, [h] \geq 0$,
and then
$[q_1]+[c] < 0 \leq c_+ [h]$.
\cqfd
\begin{lemma}\label{lc+}
If $h' \leq 0$, if $f$ is convex and if $c_+ < c_-$ then we have
$\displaystyle \frac{\partial S}{\partial c_+} (c_+,c_-)\leq 0$.
\end{lemma}
{\bf \textit{Proof: }}
we have $S(c_+,c_-)=[L]=\ln(u_+)-\ln(u_-)=\ln(\displaystyle \frac{u_+}{u_-})$
and
$\displaystyle \frac{\partial}{\partial c_+} \displaystyle \frac{u_+}{u_-}=
\displaystyle \frac{\partial}{\partial c_+} \displaystyle \frac{\alpha+h_+}{\alpha+h_-}$
thanks to Lemma \ref{lRH}.
A direct computation gives
$\displaystyle \frac{\partial}{\partial c_+} \displaystyle \frac{\alpha+h_+}{\alpha+h_-}
=\frac{1}{(\alpha+h_-)^2}\left( - \frac{\partial \alpha}{\partial c_+} [h]
+h'(c_+)(\alpha+h_-) \right)$.
Now $\displaystyle \frac{\partial \alpha}{\partial c_+} \geq 0 $ because $f$ is convex,
next
$[h]\geq 0$ since $ h'\leq 0$ and $ c_+ < c_-$. Lastly
$ \alpha+h_- > 0$ from Lemma \ref{signe} and we get $\displaystyle \frac{\partial
S}{\partial c_+}
(c_+,c_-)\leq 0$.\cqfd
The following result concerns the case with an inert gas:
\begin{lemma}\label{Lpourinertgas}
If $q_1=0$ and $q''_2 \leq 0$ then $\displaystyle \frac{\partial S}{\partial c_-} \geq 0$
for $c_- > c_+$.
\end{lemma}
{\bf \textit{Proof: }}
if $q_1=0$ then $f(c)=-ch(c), \, h(c)=q_2(c)$ then $h'(c)=q'_2(c) \leq 0$.
By a direct computation and thanks to Lemma \ref{lRH}, we have $$\displaystyle
\frac{u_+}{u_-}=\displaystyle \frac{[c]-c_-[h]}{[c]-c_+[h]}=\displaystyle
\frac{[c]-c_+[h]+[c][h]}{[c]-c_+[h]}=1+ \displaystyle \frac{1}{\displaystyle
\frac{1}{[h]}-\frac{c_+}{[c]}}.$$
But $\displaystyle \frac{\partial}{\partial c_-} \frac{1}{[h]} <0$, $- \displaystyle
\frac{c_+}{[c]}$ decreases, then $\displaystyle \frac{u_+}{u_-}$ increases with respect to
$c_-$.
\cqfd
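The monotonicity of Lemma \ref{Lpourinertgas} is easy to observe numerically. The sketch below (an illustration only) uses an inert gas with a Langmuir isotherm for the active one, and the sign convention $S=L_+-L_-$ of (\ref{eqlwave}):

```python
import math

def S_inert(q2, cp, cm):
    """S(c_+, c_-) = ln(u_+/u_-) for an inert gas (q1 = 0, h = q2), using
    u_+/u_- = ([c] - c_-[h]) / ([c] - c_+[h]) as in the proof above."""
    jc = cp - cm
    jh = q2(cp) - q2(cm)
    return math.log((jc - cm * jh) / (jc - cp * jh))
```

For the illustrative isotherm $q_2(c)=2(1-c)/(2-c)$ and a fixed $c_+$, $S$ is positive and increasing in $c_-$, as the lemma predicts.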
In the case of two active components we
have the following result:
\begin{lemma}\label{derivee}
If $ q_1^{''} \leq 0 \leq q_2^{''}$ and if $f$ is convex then
$\displaystyle \frac{\partial S}{\partial c_-} (c_+,c_-)\geq 0$.
\end{lemma}
{\bf \textit{Proof: }}
let $c$ be between $c_+$ and $c_-$. From Lemma \ref{signe} we have:
\begin{eqnarray*}
u(c)=\displaystyle \frac{f(c_+)-f(c)}{c_+-c}+1+h(c_+)>0,
& \quad &
v(c)=\displaystyle \frac{f(c_+)-f(c)}{c_+-c} +1+h(c)>0.
\end{eqnarray*}
We rewrite $S$ using the functions $u,v$. With Lemma \ref{lRH} we get
immediately:
\begin{eqnarray*}
\displaystyle
S(c_+,c_-) & =& \ln \left( \displaystyle
\frac{[f]/[c]+1+h_+}{[f]/[c]+1+h_-}\right)\nonumber
\\
& =& \ln \left( \displaystyle \frac{u(c_-)}{v(c_-)} \right).
\end{eqnarray*}
The function $f$ is convex, so $u$ is increasing.
From the equality $f(c)=q_1(c)-ch(c)$ we have
\begin{eqnarray*} (v(c)-1)(c_+-c) & = &
q_1(c_+) -c_+h(c_+)-q_1(c)+ch(c) +h(c)(c_+-c) \\
&=&q_1(c_+)-q_1(c)-c_+(h(c_+)-h(c)).
\end{eqnarray*}
Recall that $h(c)=q_1(c)+q_2(c)$, so we have:
\begin{eqnarray*} (v(c)-1)(c_+-c) &=&
q_1(c_+)-q_1(c)-c_+(q_1(c_+)+q_2(c_+)-q_1(c)-q_2(c))\\
& =&
(1-c_+)(q_1(c_+)-q_1(c))-c_+(q_2(c_+)-q_2(c)).
\end{eqnarray*}
Finally,
$v(c)-1= (1-c_+) \displaystyle \frac{q_1(c_+)-q_1(c)}{c_+-c} -c_+ \displaystyle
\frac{q_2(c_+)-q_2(c)}{c_+-c}$
with $0 \leq c_+ \leq 1$.
Now, $q_1$ is concave and $q_2$ is convex,
so $v$ is decreasing.
Hence
$\displaystyle \frac{u}{v}$ is increasing
and $\displaystyle \frac{\partial S}{\partial c_-} \geq 0$.\cqfd
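To close the subsection, the inequalities $\partial S/\partial c_-\geq 0\geq \partial S/\partial c_+$ can also be observed numerically in the binary Langmuir case. The sketch below builds $S$ from Lemma \ref{lRH} (convention $S=L_+-L_-$, $f=q_1-c\,h$) with the illustrative parameters $Q_1=1$, $K_1=2$, $Q_2=3$, $K_2=1$, which satisfy the assumptions of Proposition \ref{PHtrue}.

```python
import math

def make_S(q1, q2):
    """Shock branch S(c_+, c_-) = ln((alpha+h_+)/(alpha+h_-)),
    with alpha = [f]/[c] + 1, f = q1 - c h, h = q1 + q2 (Lemma lRH)."""
    h = lambda c: q1(c) + q2(c)
    f = lambda c: q1(c) - c * h(c)
    def S(cp, cm):
        alpha = (f(cp) - f(cm)) / (cp - cm) + 1.0
        return math.log((alpha + h(cp)) / (alpha + h(cm)))
    return S

# Binary Langmuir with Q1=1, K1=2, Q2=3, K2=1, so D = 2 + c:
q1 = lambda c: 2.0 * c / (2.0 + c)
q2 = lambda c: 3.0 * (1.0 - c) / (2.0 + c)
S = make_S(q1, q2)
```

Finite differences then confirm that $S$ is nonincreasing in $c_+$ and nondecreasing in $c_-$ on the shock branch $c_+<c_-$.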
\section{Interactions estimates}\label{sIE}
In this section we study the evolution of the total variation of $L=\ln u$,
denoted $TV L$, through wave interactions. This is a key point to obtain
$BV$ bounds and a special structure for the velocity.
Let us denote $(c_0,L_0)$, $(c_1,L_1)$, $(c_2,L_2)$, three constant states such
that:
\begin{itemize}
\item
the Riemann problem with $(c_0,L_0)$ for the left state and
$(c_1,L_1)$ for the right state is solved by a simple wave $\mathcal{W}_1$,
\item
the Riemann problem with $(c_1,L_1)$ for the left state and
$(c_2,L_2)$ for the right state is solved by a simple wave $\mathcal{W}_2$,
\item
$\mathcal{W}_1$ and $\mathcal{W}_2$ interact.
\end{itemize}
Just after the interaction we have two outgoing waves
$\mathcal{W}_1^*$, $\mathcal{W}_2^*$,
and the intermediary constant state
$(c_1^*,L_1^*)$.
We denote by $TV L$ the total variation of $\ln u$ just before interaction:
$$ TV L = |L_0-L_1| + |L_1-L_2|.$$
We denote by $TV L^*$ the total variation of $\ln u$ just {\it after }
the interaction:
$$ TV L^* = |L_0-L_1^*| + |L_1^*-L_2|.$$
We use similar notation for the concentration.
\\
For a real number $a$, denote by $a_-$ its negative part:
$ a_-=\max(0,-a)=-\min(0,a) \geq 0.$
\\
We have the following key estimates:
\begin{theorem}[Variation on $TV \ln u$ and $TV c$
through two waves interaction]\label{2wi}~\\
Assume (\ref{Hconvex}). Then there exists a constant $\Gamma> 0$, depending only
on the isotherms, such that:
\label{thWI}
\begin{eqnarray} \label{TVLquad}
TV L^* & \leq & TV L + \Gamma\, |c_0 - c_1|\,|c_1 - c_2|, \\
TV c^* & \leq & TV c. \label{TVcdec}
\end{eqnarray}
Furthermore, if (\ref{Hmonotonous}) is also satisfied then:
\begin{eqnarray} \label{TVLshock}
TV L^* & \leq & TV L + \Gamma (c_1 - c_0)_- \times (c_2 - c_1)_-,
\end{eqnarray}
in addition, if $S$, from (\ref{eqlwave}), satisfies the following triangular
inequality:
$$S(c_2,c_0) \leq S(c_2,c_1)+S(c_1,c_0)$$ when $ c_0> c_1> c_2$, then
\begin{eqnarray} \label{TVLdec}
TV L^* & \leq & TV L.
\end{eqnarray}
\end{theorem}
Inequality (\ref{TVcdec}) means that the total variation of $c $ does not
increase, and Inequality (\ref{TVLshock}) means that the total variation of
$\ln u $ does not increase through a wave interaction except when two shocks
interact. In this last case the increase of $TV \ln u$ is quadratic with respect
to the concentration variation.
Such estimates are only valid when $f$ has no inflexion point. Otherwise,
$\lambda$-wave curves are only Lipschitz and we lose the quadratic control of
the total variation of $L$.
\medskip
\underline{\bf Proof of Inequality (\ref{TVcdec})}:
the decay of the total variation of the concentration is straightforward
since $c$ is constant through a contact discontinuity,
i.e. $c_1^*= c_0$ :\\
$
TV c^*=|c_2-c_1^*| + |c_1^*-c_0|
= |c_2-c_0 | \leq |c_0 -c_1|+|c_1-c_2| = TV c.
$
\cqfd
\medskip
\underline{\bf Proof of Inequality (\ref{TVLquad})}:
this proof is much more complicated. We only assume (\ref{Hconvex}). The proof
is a consequence of the following lemmas.
\begin{lemma}\label{lDDl}
If a $\lambda$-wave interacts with a contact discontinuity,
then $TV L^* = TV L $.
\end{lemma}
{\bf \textit{Proof: }}
it is the simplest case.
We have $c_1=c_2$ from the contact discontinuity, so, with $T$ defined in
(\ref{eqlwave}),
$L_1-L_0= T(c_1,c_0)=T(c_2,c_0)$ and,
since $c_1^*= c_0$, we have
$L_2-L_1^*=T(c_2,c_1^*)= T(c_2,c_0)$.
Then $$ L_2-L_1^* = L_1-L_0, $$ which implies
$ L_2-L_1 = L_1^*-L_0$ and
$TV L^* = TV L .$
\cqfd
\begin{lemma} \label{TRI}
There exists a constant $\Gamma > 0$ such that, for all
$c_0,c_1,c_2 \in [0,1]$:
$$ |T(c_2,c_0)-T(c_2,c_1)-T(c_1,c_0)|
\leq \Gamma \, |c_2-c_1| \, |c_1 - c_0|.$$
\end{lemma}
{\bf \textit{Proof: }}
we define $R$ by
$R(\alpha,\beta)
=T(c_2,c_0)-T(c_2,c_1)-T(c_1,c_0)$.
We have to prove that
$R(\alpha,\beta)={\cal O}(\alpha \beta),$
where $\alpha=c_0-c_1, \, \beta =c_1-c_2$.
We denote $c=c_2, \, b=c_1, \, a=c_0$.
We have $T \in {\cal C}^3([0,1]^2,\mathbb{R})$ since $\lambda$ is genuinely nonlinear,
and $T(b,b)=0$. We apply Taylor's formula:
\begin{eqnarray*}
T(c,a) & = & T(b-\beta, b + \alpha)
=T(b,b)-\beta \partial_1 T(b,b)+ \alpha \partial_2 T(b,b)
\\ & & +
\int_0^1(1-t)(\beta^2 \partial_1^2T + \alpha^2 \partial_2^2 T -2 \alpha \beta
\partial_{12}^2T)(b-t \beta, b+t \alpha)dt,
\\
T(b,a) & =& T(b, b +\alpha)=T(b,b)+\alpha \partial_2T(b,b)+\int_0^1 (1-t)
\alpha^2 \partial_2^2 T(b, b+t \alpha)dt,
\\
T(c,b)& =& T(b-\beta, b)=T(b,b)-\beta \partial_1 T(b,b)+\int_0^1(1-t)\beta^2
\partial_1^2 T (b-t \beta, b)dt,
\\
R(\alpha,\beta)&=&T(c,a)-T(c,b)-T(b,a)
\\ & =&
-T(b,b)+\int_0^1(1-t) ( \beta^2 (\partial_1^2T(b-t \beta,b + t
\alpha)-\partial_1^2T(b-t \beta,b))+\\
&& \alpha^2(\partial_2^2T(b-t \beta,b+t \alpha)-\partial_2^2 T(b,b+t \alpha ))
-2 \alpha \beta \partial_1 \partial_2 T(b-t \beta,b+t \alpha) )dt.
\end{eqnarray*}
Since
\begin{eqnarray*} \partial_1^2T(b-t \beta,b +t \alpha)-\partial_1^2T(b-t
\beta,b) & = & {\cal O}(t\alpha)={\cal O}(\alpha),\\
\partial_2^2T(b-t \beta,b+t \alpha)-\partial_2^2T(b,b+t \alpha)& = & {\cal O}(t
\beta)={\cal O}(\beta), \\
\partial_1 \partial_2 T(b-t \beta,b+t \alpha)
& = & {\cal O}(1),
\end{eqnarray*}
we conclude that
$R(\alpha,\beta)={\cal O}(\beta^2 \alpha+\alpha^2 \beta+\alpha \beta)={\cal
O}(\alpha \beta)$.\cqfd
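On the shock branch, the quadratic estimate of Lemma \ref{TRI} can be illustrated numerically: the ``triangle defect'' of $S$ divided by $|c_2-c_1|\,|c_1-c_0|$ stays bounded over a grid of triples. The sketch below uses the inert-gas Langmuir example as an illustrative choice.

```python
import math

def shock_T(q1, q2):
    # S(c_+, c_-) on the shock branch, as in Lemma lRH (f = q1 - c h)
    h = lambda c: q1(c) + q2(c)
    f = lambda c: q1(c) - c * h(c)
    def S(cp, cm):
        alpha = (f(cp) - f(cm)) / (cp - cm) + 1.0
        return math.log((alpha + h(cp)) / (alpha + h(cm)))
    return S

def triangle_defect_ratio(S, triples):
    """max over triples c2 < c1 < c0 of
    |S(c2,c0) - S(c2,c1) - S(c1,c0)| / (|c2-c1| |c1-c0|),
    a numerical stand-in for the constant Gamma of the lemma."""
    worst = 0.0
    for c2, c1, c0 in triples:
        defect = abs(S(c2, c0) - S(c2, c1) - S(c1, c0))
        worst = max(worst, defect / (abs(c2 - c1) * abs(c1 - c0)))
    return worst
```

For this example the observed ratio stays of order one, which is consistent with the quadratic bound.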
To conclude the proof of Inequality (\ref{TVLquad}) it suffices to use the next
lemma.
\begin{lemma}\label{llDl} If two $\lambda$-waves interact then we have
$TV L^* \leq TVL+ \Gamma |c_{2}-c_{1}|\, |c_1-c_{0}|.$
\end{lemma}
{\bf \textit{Proof: }} by definition of $TV L $ and $TVL^*$ it suffices to prove that
$$ L_1^* = L_0 + \mathcal{O}(|c_{2}-c_{1}|\, |c_1-c_{0}|),$$
since
$
TV L^*= |L_2-L_1^*| + |L_1^*-L_0|
\leq |L_2 -L_0| +2 |L_1^*-L_0|
\leq TV L + 2 |L_1^*-L_0|.
$
\\
Indeed, we have:
$L_1 -L_0 = T(c_1,c_0)$, $L_2 -L_1 = T(c_2,c_1)$, $L_2 -L_1^* = T(c_2,c_1^*)=
T(c_2,c_0) $. Next:
$ L_2 -L_0 = T(c_2,c_1)+T(c_1,c_0)$
and then
$$ L_1^* -L_0 = T(c_2,c_1)+T(c_1,c_0) - T(c_2,c_0) ,$$
which allows us to conclude the proof of
Lemma \ref{llDl} with Lemma \ref{TRI}.
\cqfd
The proof of Inequality (\ref{TVLquad}) is now complete. \cqfd
\underline{\bf Proof of Inequalities (\ref{TVLshock}),
(\ref{TVLdec})}:
\medskip
we now also assume (\ref{Hmonotonous}), together with (\ref{likeLangmuir}) to fix the
signs. There are more cases to study:
\begin{itemize}
\item
first, we have already studied, in Lemma \ref{lDDl}, the interaction
of a shock wave or a rarefaction wave ($\lambda$-wave) with a
contact discontinuity (1-wave): the contact discontinuity is ``transparent''
since $TV L^*= TV L$, and the concentration variation is also invariant.
\item
second, we study the interaction of a shock wave with a rarefaction wave
($\lambda$-waves of different types): see Lemmas
\ref{RSDR}, \ref{RSDS}, \ref{SRDR} and \ref{SRDS}. We get
$TV L^* < TV L$ and the concentration variation decreases. It is the only
case where $TVL$ and $TVc $ decrease.
\item
finally, we study the interaction of two shock waves. In this situation
$TV L^* \geq TV L$ and $TVc $ is invariant.
\\
Furthermore, if $S$ satisfies some ``triangular inequality'', we get $TV L^* =
TV L$.
\end{itemize}
In order to simplify the notations we denote by D a contact discontinuity, R a
rarefaction wave and S a shock wave. ``RD $\rightarrow$ DR'' means that a
rarefaction wave coming from the left interacts with a contact discontinuity and
produces a new left wave, namely a contact discontinuity, and a new right wave,
namely a rarefaction.
\\
Since a contact discontinuity has a null speed and a $\lambda$-wave has a
positive speed, the only cases for $\mathcal{W}_1,\,\mathcal{W}_2$ are: RD, SD,
RS, SR and SS.
\\
For the resulting waves $ \mathcal{W}_1^*, \mathcal{W}_2^*$, there are 7
cases.
\\
The first two cases {RD $\rightarrow$ DR} and {SD $\rightarrow$ DS} have already
been treated in Lemma \ref{lDDl}.
\begin{lemma}
In the case {RS $\rightarrow$ DR}\label{RSDR},
$TVL$ decreases i.e. $TVL^* < TVL$.
\end{lemma}
{\bf \textit{Proof: }}
at the beginning, we have a rarefaction, so
$c_0 < c_1$, $L_0 > L_1$, followed by a shock, so $c_2 < c_1$, $L_2 > L_1$.
After the interaction, we have a contact discontinuity, so $c_0=c_1^*$, and a
rarefaction, so $c_1^* < c_2$, $L_1^* > L_2$.
Finally, we have $c_0=c_1^* < c_2 < c_1$, hence $g(c_0) =g(c_1^*) \leq g(c_2)
\leq g(c_1)$. We can write
\begin{eqnarray*}
TVL &=&\mid L_0 -L_1 \mid + \mid L_1 -L_2 \mid=L_0-L_1 + L_2 - L_1,
\\
TVL^*& = & \mid L_0 -L_1^*\mid + \mid L_2 - L_1^* \mid =\mid L_0-L_1^*\mid +
L_1^*-L_2.
\end{eqnarray*}
There are two cases:
\begin{itemize}
\item
the simplest is $L_0 > L_1^*$,
then $TVL^*=L_0-L_1^*+L_1^*-L_2=L_0-L_2 < L_0-L_1 < TVL$,
\item
the second case is
$L_0 < L_1^*$.
Let us define $\tilde L_2$ by
$$ L_0-\tilde L_2=L_1^*-L_2 ,$$
then
$ L_0-\tilde L_2=L_1^*-L_2
=g(c_2)-g(c_1^*)=g(c_2)-g(c_0) \leq g(c_1)-g(c_0)= L_0-L_1$ because $[L]=-[g]$
for a rarefaction and $c_1^*=c_0$. Since shock curves are decreasing, we know
that $\tilde L_2 > L_1$, so
$TVL^*=L_1^*-L_0+L_1^*-L_2 =L_2-\tilde L_2 +L_0 -\tilde L_2
< L_2-L_1+L_0-L_1=TVL.$\cqfd
\end{itemize}
\begin{lemma}
In the case {RS $\rightarrow$ DS} \label{RSDS} we get
$TVL^* \leq TVL$.
\end{lemma}
{\bf \textit{Proof: }}
this case needs the assumption $\displaystyle \frac{\partial S}{\partial c_-} \geq 0$.
At the beginning, we have a rarefaction ($c_1 > c_0$, $L_1 < L_0$) followed by a
shock ($c_2 < c_1$, $L_2 > L_1$).
The state $(c_2,L_2)$ is connected to $(c_1^*,L_1^*)$ by a shock, so $c_2 < c_1^*$
and $L_1^* < L_2$.
The state $(c_0,L_0)$ is connected to $(c_1^*,L_1^*)$ by a contact discontinuity,
so $c_0 = c_1^*$.
Finally, we have $c_2 < c_0=c_1^* < c_1.$
Then $TVL=\mid L_0 -L_1 \mid + \mid L_1 -L_2 \mid=L_0-L_1 + L_2 - L_1$
and
$TVL^*= \mid L_0 -L_1^*\mid + \mid L_2 - L_1^* \mid =L_2-L_1^*+\mid
L_1^*-L_0\mid.$
But, by the assumption $\displaystyle \frac{\partial S}{\partial c_-} \geq 0$,
$S(c_2,c_0)=S(c_2,c_1^*)=L_2-L_1^* < S(c_2,c_1)=L_2-L_1$,
hence $L_1^* > L_1$.
\\
There are two cases:
\begin{itemize}
\item
if $L_0 > L_1^*$ then $TV L^*=L_0-L_1^*+L_2-L_1^* < L_0-L_1+L_2-L_1=TVL$,
\item
else
$L_0 < L_1^*$ then $TV L^*=-L_0+L_1^*+L_2-L_1^*=L_2-L_0< L_2-L_1< TVL$.\cqfd
\end{itemize}
\begin{lemma}
In the case {SR $\rightarrow$ DR}\label{SRDR} we have $TV L^* \leq TVL$.
\end{lemma}
{\bf \textit{Proof: }}
at the beginning, we have a shock, so $c_1 < c_0$, $L_1>L_0$,
which interacts with a rarefaction, so
$c_2 > c_1$, $L_1 > L_2$.
\\
After the interaction, we have a contact discontinuity, so $c_0=c_1^*$, and a
rarefaction, so $c_2 > c_1^*$ and $L_1^* > L_2$.
Finally, we have $c_1 < c_0 =c_1^* < c_2.$
Since $g' \geq 0$, we have
$g(c_1) \leq g(c_0) \leq g(c_2)$.
\\
For a rarefaction $[L]=-[g]$ then
$L_2-L_{1} ^{*}=g(c_1^*)-g(c_2)=g(c_0)-g(c_2)$ because $c_1^*=c_0$,
\\
$L_2-L_1=g(c_1)-g(c_2) \leq g(c_0)-g(c_2)$ because $c_1 < c_0$ and $g' \geq 0$.
\\
So we have: $L_2-L_1 \leq g(c_1^*)-g(c_2)=L_2-L_1^*$ and
\\
$TVL=\mid L_1-L_0 \mid + \mid L_2-L_1 \mid =L_1-L_0+L_1-L_2
\geq L_1-L_2$,
\\
$TV L^*=\mid L_1^*-L_0 \mid +\mid L_2-L_1^* \mid=\mid L_1^*-L_0 \mid
+L_1^*-L_2.$
\\
There are two cases:
\begin{itemize}
\item
the first is $L_1^* > L_0$ then $TV L^*=L_1^*-L_0-L_2+L_1^*=2
L_1^*-L_0-L_2=-(L_2-L_1^*)+L_1^*-L_0$
$<-(L_2-L_1)+L_1^*-L_2+L_2-L_0 < -L_2+L_1-L_2+L_1+L_2-L_0 = 2L_1-L_2-L_0 =TVL$,
\item
the second case is $L_1^* < L_0$ then $TV L^*=-L_1^*+L_0-L_2+L_1^*=L_0-L_2 \leq
L_1-L_2 \leq TVL$.
\cqfd
\end{itemize}
\begin{lemma} In the case {SR $\rightarrow$ DS}\label{SRDS},
$TVL$ decreases i.e. $TV L ^* \leq TVL$.
\end{lemma}
This situation is illustrated in Fig. \ref{SR-DS}.
{\bf \textit{Proof: }}
it is the most difficult case.
At the beginning, we have a shock, so $c_1 < c_0$ and $L_1 >L_0$.
The shock interacts with a rarefaction, so $c_2 > c_1$ and $L_2 < L_1$.
\\
We then have
$TVL = \mid L_1 -L_0 \mid + \mid L_2-L_1
\mid=L_1-L_0+L_1-L_2.$
\\
The state $(c_2,L_2)$ is connected to $(c_1^*,L_1^*)$ by a shock, so $c_2 <
c_1^*$ and $L_1^* < L_2$.
\\
The state $(c_0,L_0)$ is connected to $ (c_1^*,L_1^*)$ by a contact
discontinuity, so $c_0 = c_1^*$.
\\
Finally, we have $c_1 < c_2 < c_1^*=c_0$,
$S(c_1,c_0)=S_{10}> S(c_2,c_0)=S_{20}
=S(c_2,c_1^*)=L_2-L_1^*$,
\\
$L_1-L_0 =S_{10} > S_{20}=L_2-L_1^*$,
because $\displaystyle \frac{\partial S}{\partial c_+} <0$.
\\
There are two cases:
\begin{itemize}
\item
if $L_0 < L_1^*$ (see Fig. \ref{1and2}, left) then $ L_2 < L_1$ and
$$TV L^*= \mid L_1^* -L_0 \mid + \mid L_2-L_1^* \mid = L_1^* -L_0 + L_2-L_1^*
=L_2-L_0< L_1-L_0 < TVL,$$
\item
if $L_1^* < L_0$ (see Fig. \ref{1and2}, right) then we define $\tilde L_2$ by
$\tilde L_2 -L_0
=S_{20}=L_2-L_1^*< S_{10}=L_1-L_0$ and
$TV L^* = \mid L_1^* -L_0 \mid + \mid L_2-L_1^* \mid =L_0-L_1^*+L_2-L_1^*=\tilde
L_2 -L_2+ S_{20} < L_1-L_2+L_1-L_0=TVL.$
\cqfd
\end{itemize}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{SRDS.eps}
\caption{case SR $\rightarrow$ DS. \label{SR-DS}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{CL1.eps}
\includegraphics[scale=0.4]{CL2.eps}
\caption{SR $\rightarrow$ DS: first case, left, and second case,
right.\label{1and2}}
\end{figure}
\newpage
The following case is the only one where $TVL $ increases,
except if $S$ satisfies a ``triangular inequality''.
\begin{lemma} In the case {SS $\rightarrow$ DS}\label{SSDS}
we have $$TV L^*= TVL+2 \max(S_{20}-S_{21}-S_{10},0) = TVL +2 \max
(L_0-L_1^*,0)\geq TVL.$$
\end{lemma}
{\bf \textit{Proof: }}
at the beginning, we have a shock ($c_1 < c_0$ and $L_1 >L_0$) which interacts
with another shock
($c_2 < c_1$ and $L_2 > L_1$).
\\
The state $(c_2,L_2)$ is connected to $(c_1^*,L_1^*)$ by a shock, so $c_2 <
c_1^*$ and $L_1^* < L_2$.
\\
The state $(c_0,L_0)$ is connected to $(c_1^*,L_1^*)$ by a contact
discontinuity, so $c_0 = c_1^*$.
\\
Finally, we have $c_2 < c_1 < c_0 =c_1^*$
and $L_0 < L_1 < L_2.$
\\
With $L_2-L_1=S_{21}>0$, $L_1-L_0=S_{10}>0$,
$L_2-L_1^*=S_{20}>0$,
we have:
\\
$TVL= \mid L_2-L_1 \mid + \mid L_1 - L_0 \mid =L_2-L_1+L_1-L_0=S_{21}+S_{10},$
\\
$TV L^*=\mid L_2-L^*_1 \mid + \mid L_1^*-L_0 \mid
=\mid S_{20} \mid +\mid L_1^*-L_2+L_2-L_0 \mid$
$
=S_{20}+\mid -S_{20}+L_2-L_0\mid.$
\\
There are two cases to study:
\begin{itemize}
\item
if $-S_{20}+L_2-L_0 \geq 0$
i.e. $S_{20}=L_2-L_1^* \leq S_{21}+S_{10}=L_2-L_0$
i.e. $L_0 < L_1^*$
\\
then
$TV L^*=S_{20}-S_{20}+L_2-L_0=L_2-L_0=TVL$,
\item
else $L_1^* < L_0$ and we have
\begin{eqnarray*}
TV L^* & = &S_{20}+S_{20}-L_2+L_0=2S_{20}-2L_2+2L_0+L_2-L_0
\\ & =& 2(S_{20}-(L_2-L_0))+TVL
= 2(S_{20}-S_{21}-S_{10})+TVL
\\ &= &2 (L_0-L_1^*) +TVL,
\end{eqnarray*}
\end{itemize}
which concludes the proof of Lemma \ref{SSDS}.\cqfd
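The bookkeeping of this proof can be reproduced numerically. The sketch below builds a two-shock interaction from the shock relation of Lemma \ref{lRH} (convention $S=L_+-L_-$), with the inert-gas Langmuir isotherm as an illustrative choice, and checks the identity of Lemma \ref{SSDS}.

```python
import math

def shock_S(q1, q2):
    # S(c_+, c_-) = L_+ - L_- on the shock branch (Lemma lRH, f = q1 - c h)
    h = lambda c: q1(c) + q2(c)
    f = lambda c: q1(c) - c * h(c)
    def S(cp, cm):
        alpha = (f(cp) - f(cm)) / (cp - cm) + 1.0
        return math.log((alpha + h(cp)) / (alpha + h(cm)))
    return S

def interact_two_shocks(S, c0, c1, c2, L0=0.0):
    """SS -> DS: incoming shocks (c0,L0)-(c1,L1) and (c1,L1)-(c2,L2) with
    c2 < c1 < c0; outgoing contact discontinuity (c constant: c1* = c0)
    followed by the single shock S(c2, c0).
    Returns (TVL, TVL*, L0 - L1*)."""
    L1 = L0 + S(c1, c0)
    L2 = L1 + S(c2, c1)
    L1s = L2 - S(c2, c0)        # intermediate state after the interaction
    tvl = abs(L1 - L0) + abs(L2 - L1)
    tvls = abs(L1s - L0) + abs(L2 - L1s)
    return tvl, tvls, L0 - L1s
```

In every tested configuration $TV L^* = TV L + 2\max(L_0-L_1^*,0)$ holds to machine precision, as the lemma asserts.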
The proof of Theorem \ref{thWI} is now complete.\cqfd
\section{$BV$ estimates with respect to time for the velocity} \label{suBV}
In System (\ref{un})-(\ref{deux})-(\ref{trois}), there is no partial derivative
of $u$ with respect to $t$. Nevertheless, the hyperbolicity of this
system (with $x$ as the evolution variable) suggests that a $BV$ regularity of
the ``initial'' data $u_b$ at $x=0$ is propagated. Furthermore, in the case of a
smooth concentration, the Riemann invariant $u\,G(c)$ suggests that, when $\ln
u_b$ is only in $L^\infty(0,T)$, we can still hope $ u(t,x)/u_b(t)$ to be $BV$
in time for almost all $x$.
We prove that this $BV$ structure of the velocity is still valid with some convexity
assumptions, using a Front Tracking Algorithm (FTA).
We conjecture that this structure is still valid for the general case without
convexity assumption or, better, with a piecewise genuinely nonlinear eigenvalue
$\lambda= H(c)/u$. But, in this last case, the FTA becomes very complicated
(see Dafermos' comments in \cite{D00}).
\subsection{The case $\ln u_b \in BV(0,T)$}\label{BV}
We first make precise the notation used in the
next theorem. We define the function $c_I$ on $(-T,X)$ by
\begin{eqnarray*}
c_I(s)= \left \{ \begin{array}{cl}
c_0(s) & \mbox{ if }0 < s < X \\
c_b(-s) & \mbox{ if } 0 < -s < T
\end{array} \right.,
\end{eqnarray*}
and we set $TV c_I = TV c_I[-T,X]$.
\\
There exists a positive constant $\gamma$ such that,
if $(c_-,L_-)$ is connected to $(c_+,L_+)$ by a $\lambda$-wave, then
$|L_+-L_-|\leq \gamma \,|c_+-c_-|$. This is an easy consequence of
(\ref{eqlwave}); indeed, it has already been proven in \cite{BGJ06}, Lemma 3.1, with
an inert gas, and in \cite{BGJ07}, Lemma 4.1, for two active gases.
\\
The constant $\Gamma$ comes from Theorem \ref{thWI}.
\begin{theorem}{\bf [Propagation of BV regularity in time for the velocity]}
\label{thbvp}\\
Assume (\ref{Hconvex}). If $\ln u_b \in BV(0,T)$, if
$c_0,c_b \in BV $ and if $(u,c)$ is a weak entropy solution of
System (\ref{un})-(\ref{deux})-(\ref{trois}), coming from the Front Tracking
Algorithm, then $c \in BV ((0,T)\times (0,X))$ and $ u\in
L^\infty((0,T),BV(0,X))\cap
L^\infty((0,X),BV(0,T))$. More precisely:
\begin{eqnarray*}
\max \left(\sup_{0 <t <T} TV_x c(t,.)[0,X],
\sup_{0 <x <X} TV_t c(.,x)[0,T] \right)
& \leq & TV c_I, \\
\sup_{0 <t <T} TV_x \ln u(t,.)[0,X] & \leq &
TV \ln u_b + \gamma\, TV c_I,
\\
\sup_{0 <x <X} TV_t \ln u(.,x)[0,T] & \leq &
TV \ln u_b
+ 2 \gamma \,TV c_I + \frac{\Gamma}{2}\, (TV c_I)^2.
\end{eqnarray*}
\end{theorem}
Compared to \cite{BGJ06,BGJ07}, the new result is that $u(t,x)$ is $BV$ with respect to
time if $u_b$ is in $BV(0,T)$, i.e. the last inequality of Theorem \ref{thbvp}. With the
Godunov scheme used in \cite{BGJ06,BGJ07} we do not obtain such time regularity for the velocity.
This is the reason why we use the FTA to get more precise estimates. Notice that we consider a
problem local in time and space for reasons of realism: we could consider a global one as well, i.e.
for $(t,x)\in(0,+\infty)^2$.
\medskip
{\bf \textit{Proof: }}
The easiest $BV$ estimate on the concentration $c$ after an interaction (estimate (\ref{TVcdec}) in
Theorem \ref{2wi}), which is always valid independently of the velocity $u$, yields a control of
$c$ in $L^\infty_tBV_x \cap L^\infty_xBV_t$ as in \cite{BGJ07}, since $\lambda$-waves always have
a positive speed. From Lemma 4.8 of \cite{BGJ07} p. 80 (or more simply Lemma 3.1 of \cite{BGJ06} p.
557) we get $ L^\infty_{t,x} \cap L^\infty_tBV_x$ bounds for the velocity $u$. It follows, from a
natural adaptation of the estimates and compactness arguments of the proof of Theorem 5.1 p. 563 in
\cite{BGJ06} or Theorem 6.1 p. 83 in \cite{BGJ07}, that there exists a subsequence which converges to
a solution of the initial boundary value problem with the prescribed data $ c_0,c_b,u_b$ when
$\delta$ goes to zero, thanks to the approximate entropy inequality (\ref{IEA}). Furthermore, as in
\cite{BGJ06,BGJ07}, we recover strong traces at $t=0$ and $x=0$.
Notice that this existence proof is also valid without any $BV$ assumption on the
velocity at the boundary: we only need $\ln u_b$ in $L^\infty(0,T)$.
The $BV$ estimate with respect to time for $\ln u$, i.e. the third estimate in the theorem, is a
consequence of the two following lemmas.\cqfd
Let $(u,c)$ be an entropy solution coming from FTA.
For $\delta > 0$, representing the distance from the boundary
$x=0$ or $t=0$, let us define:
\begin{eqnarray*}
L(s,\delta) & = & \displaystyle \left \{ \begin{array}{cc}
\ln u(t=|s|,x=\delta) & \mbox{if } -T < s <0 \\
\ln u(t=\delta,x=s) & \mbox{if } 0 < s <X
\end{array} \right., \\
TV L(0) & =& \limsup_{\delta \rightarrow 0} TV L(.,\delta)[-T,X].
\end{eqnarray*}
For piecewise data, $ TV L(0)$ is the total variation of $\ln u$ just before
the first interaction.
\begin{lemma}
Before wave-interactions
we have $TVL(0) \leq TV \ln u_b + 2 \gamma\, TV c_I$.
\end{lemma}
{\bf \textit{Proof: }}
it suffices to prove this inequality for a piecewise constant approximate
solution issued from the FTA.
We discretize $[0,T]$ and $[0,X]$ as follows:
\\
$ T=s_1 > s_2 \cdots > s_m >s_{m+1}= 0 < s_{m+2} < \cdots < s_N=X$.
\\
For $i=1,\cdots,m$ let us define the following piecewise approximations of $c$
and $\ln u$:
\begin{eqnarray*}
c_{i} = \frac{1}{s_i-s_{i+1}}\int_{s_{i+1}}^{s_{i}} c_b(t)dt,
&&
L_{i} = \frac{1}{s_{i}-s_{i+1}}\int_{s_{i+1}}^{s_{i}} \ln (u_b(t))dt.
\end{eqnarray*}
Since $t=0$ is a characteristic boundary we define only $c_i$
for $i=m+1, \cdots, N-1$ by:
\begin{eqnarray*}
c_{i} = \frac{1}{s_{i}-s_{i+1}}\int^{s_i}_{s_{i+1}} c_0(x)dx.
\end{eqnarray*}
For $i<m$ we solve the $i^{th}$ Riemann problem
with left state $(c_i,L_i)$ and right state $(c_{i+1},L_{i+1})$
and we denote by $(c_i^*,L_i^*)$ the intermediate state.
Indeed $c_i^*=c_{i+1}$ since $c$ is constant through a contact discontinuity.
From Lemma 3.1 p. 557 of \cite{BGJ06}
(or Lemma 4.1 p.78-79 of \cite{BGJ07} for two active gases) we know
that:
\begin{eqnarray*}
|L_i - L_i^*| & \leq & \gamma\, |c_i -c_i^*|= \gamma \,|c_i-c_{i+1}|.
\end{eqnarray*}
We now estimate the total variation of $\ln u$ for the $i^{th}$ Riemann problem:
\begin{eqnarray*}
|L_i - L_i^*|+ |L_i^*-L_{i+1}| & \leq &
|L_i - L_i^*|+ \left( | L_i^*-L_i|+ |L_i-L_{i+1}| \right)\\
&\leq&
2 \gamma\,|c_i-c_{i+1}|+ |L_i-L_{i+1}| .
\end{eqnarray*}
Now, we look at the corner $t=0$, $x=0$ and $i=m$.
There is only a $\lambda$-wave since the boundary is characteristic.
With the left state $(c_m,L_m)$ and only $(c_{m+1})$ for the right state,
the resolution of the Riemann problem gives us a new constant value for $\ln u$,
namely $L_{m+1}=L^*_m$.
We have again the estimate
$|L_m - L_m^*|= |L_m -L_{m+1}| \leq \gamma \,|c_m-c_{m+1}|$.
So, for $i=m+1, m+2,\cdots, N-1$,
we define $L_{i+1}$ by solving the characteristic Riemann problems,
with the estimate:
\begin{eqnarray*} |L_i -L_{i+1}| & \leq & \gamma\, |c_i-c_{i+1}|.
\end{eqnarray*}
Summing up with respect to $i$, we obtain the total variation on $L$ just before
the first wave interaction:
\begin{eqnarray*}
TVL & \leq &
\sum_{i<m} \left( 2 \gamma\,|c_i-c_{i+1}|+ |L_i-L_{i+1}| \right)
+ \sum_{i\geq m} \gamma\,|c_i-c_{i+1}|\\
&\leq &TV \ln u_b + 2 \gamma \,TV c_I.
\end{eqnarray*}
\cqfd
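The triangle-inequality bookkeeping of this proof can be checked numerically. The following Python sketch is illustrative only: the piecewise data are made up, and the intermediate states $L_i^* = L_i + \gamma\,(c_{i+1}-c_i)$ are a model choice that saturates the Lipschitz bound $|L_i - L_i^*| \leq \gamma\,|c_i - c_{i+1}|$, not the actual Riemann-problem solution.

```python
# Illustrative check of TVL(0) <= TV ln u_b + 2*gamma*TV c_I with made-up
# piecewise constant data.  The intermediate states L_i^* below are a model
# choice saturating |L_i - L_i^*| <= gamma*|c_i - c_{i+1}|; they are not the
# actual Riemann-problem solution.
gamma = 0.7
c = [0.2, 0.5, 0.1, 0.9, 0.4]     # piecewise constant concentration data
L = [0.0, -0.3, 0.6, 0.2, -0.1]   # piecewise constant ln(u_b) data

def tv(seq):
    """Total variation of a piecewise constant sequence of states."""
    return sum(abs(b - a) for a, b in zip(seq, seq[1:]))

# States read along the boundary after solving each Riemann problem:
# L_1, L_1^*, L_2, L_2^*, ..., with L_i^* = L_i + gamma*(c_{i+1} - c_i).
states = []
for i in range(len(L) - 1):
    states += [L[i], L[i] + gamma * (c[i + 1] - c[i])]
states.append(L[-1])

assert tv(states) <= tv(L) + 2 * gamma * tv(c) + 1e-12
```

The assertion holds for any choice of data, since each intermediate state contributes at most $2\gamma\,|c_i-c_{i+1}|$ on top of $|L_i-L_{i+1}|$.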
\begin{lemma}
We have the following estimate:
$TVL \leq TV L(0)+ \displaystyle\frac{\Gamma}{2}(TV c_I)^2.$
\end{lemma}
{\bf \textit{Proof: }}
we prove this estimate for any constant piecewise approximation
built from the FTA. The same estimate is still true passing to the limit.
First, we enumerate the jumps of the concentration initial-boundary data
from the left to the right:
$$\alpha_i = c_i - c_{i-1}, \qquad i=1,\cdots,N. $$
Notice that we have $N+1$ constant states for the initial-boundary data:
$ (c_0,L_0),\cdots, (c_N,L_N).$
From Theorem \ref{thWI},
the increase of the total variation of $\ln u$
is governed by the following inequality:
$TV L^* \leq TV L + \Gamma |\alpha_{i-1}| |\alpha_i|$
if the wave number $i-1$ interacts with the wave number $i$.
Since $c$ is constant through a contact discontinuity
($c$ is a 2-Riemann invariant)
and the jumps of $c$ add up if two $\lambda$-waves interact,
we consider only interactions between $\lambda$-waves.
Indeed, we neglect the fact that interactions with rarefaction waves tend
to reduce $TV L$.
\\
We measure the strength of a $\lambda$-wave by the jump of $c$ through the
wave, with a positive or negative sign according to whether it is a
rarefaction wave or a shock wave.
\\
Let $\displaystyle \left( \alpha_i^k\right)_{1 \leq i \leq N-k}$
be the strength of the $\lambda$-wave number $i$
(labeled from the left to the right) after the interaction number $k$.
We have $ \alpha_i^0= \alpha_i$ and denote by $j^k$
the index such that
the interaction number $k$ occurs
between the $\lambda$-waves number $j^k$ and $j^k+1$, where
$ 1 < j^k \leq N-k$.
For $1\leq i < N-k$,
the strengths of $\lambda$-waves after the interaction number $k > 0$
are given by:
\begin{eqnarray*}
\alpha_i^k & =& \displaystyle
\left\{
\begin{array}{cc}
\alpha_i^{k-1} & \mbox{ if } i < j^k \\
\alpha_{i}^{k-1}+\alpha_{i+1}^{k-1} & \mbox{ if } i = j^k \\
\alpha_{i+1}^{k-1} & \mbox{ if } i > j^k
\end{array}
\right.,
\end{eqnarray*}
and the increase of $TV L$ is less than or equal to
$\Gamma S^k$ where, from Theorem \ref{thWI},
$$S^0 = 0,\qquad S^{k} = S^{k-1} + | \alpha_{j^k}^{k-1}||\alpha_{j^k+1}^{k-1}|.$$
Let us define the integers $l_i^k$ as follows:
$l^0_i = i$ and at each interaction
$$
l^k_i = \left\lbrace
\begin{array}{rcl}
l^{k-1}_i & \mbox{if} & i < j^k,\\
l^{k-1}_{i+1} & \mbox{if} & i=j^k,\ldots, N-k+1.
\end{array}\right.
$$
Notice that after each interaction between two $\lambda$-waves, there is only one outgoing
$\lambda$-wave. Thus, the number of $\lambda$-waves decreases at each interaction, which proves
again (see \cite{D76}) that the number of interactions is finite and the FTA is well
posed.
By induction, we see that
$\displaystyle \alpha_i^k = \sum_{l^k_i\leq l < l^{k}_{i+1}} \alpha_l$,
where
$ \displaystyle l^k_1=1 < l^k_2 < \cdots < l^k_{N-k+1}=N-k+1$,
$l^0_i=i$ and $l^k_i$ is nondecreasing with respect to $k$.
Now, from the definition of $S^k$, we can deduce that:
\begin{eqnarray}
\label{eqSkind}
\displaystyle S^k & = &
S^{k-1} + \sum_{(i,j)\in J^k}|\alpha_i||\alpha_j|,
\end{eqnarray}
where $ \displaystyle J^k =\{ (i,j);\;
l^{k-1}_{j^k}\leq i < l^{k-1}_{j^k+1}\leq j < l^{k-1}_{j^k+2}\}$.
\\
Let us check that:
\begin{eqnarray} \label{eqSk}
S^k &=& \sum_{(i,j)\in I^k}|\alpha_i||\alpha_j|,
\end{eqnarray}
where
$ \emptyset = I^0 \subset I^1 \subset \cdots \subset
I^{k-1} \subset I^k \subset \cdots \subset
I =\{(i,j);\; 1 \leq i < j \leq N\}$.
It is true for $k=0$. It is true for all $k$ if
$ I^{k-1} \cap J^k = \emptyset$, with then
$ I^{k}= I^{k-1} \cup J^k $.
The only point is to prove that $ I^{k-1} \cap J^k = \emptyset$.
Terms $ |\alpha_i||\alpha_j|$ in the last sum of
(\ref{eqSkind}) have indexes $i$ and $j$
which belong to two consecutive intervals, i.e.
$ l^{k-1}_{j^k}\leq i < l^{k-1}_{j^k+1}\leq j < l^{k-1}_{j^k+2}$,
and after the interaction, for $i=j^k$,
$ l^k_i = l^{k-1}_i$ and $ l^k_{i+1}=l^{k-1}_{i+2}$.
So $i$ and $j$ belong to the same interval, and
the terms $ |\alpha_i||\alpha_j|$ cannot appear again
in $S^{k+1}$, $S^{k+2}$, \ldots,
since such intervals are nondecreasing.
\\
The same is true for all indexes in $I^k$:
they can appear at most once in $S^k$.
We then have $ I^{k-1} \cap J^k = \emptyset$
and (\ref{eqSk}) is true.
We easily estimate $S^k$, which concludes the proof:
$$S^k \leq \sum_{(i,j)\in I}|\alpha_i||\alpha_j|
\leq \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N |\alpha_i||\alpha_j|
= \frac{1}{2}\left( \sum_{i=1}^N|\alpha_i|\right)^2
\leq \frac{1}{2}\left( TV c_I\right)^2.$$
\cqfd
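The interaction bookkeeping above is purely combinatorial and can be simulated. In the Python sketch below (illustrative only: the signed wave strengths are made up, and interactions are performed in an arbitrary, random order) the accumulated sum $S^k$ nevertheless stays below $\frac{1}{2}(TV c_I)^2$, whatever the order of interactions, as the proof predicts.

```python
import random

def interaction_sum(alphas, seed=0):
    """Merge adjacent lambda-wave strengths in a random order and
    accumulate S^k = S^{k-1} + |alpha_{j^k}| * |alpha_{j^k + 1}|,
    following the interaction bookkeeping of the proof."""
    rng = random.Random(seed)
    waves = list(alphas)
    S = 0.0
    while len(waves) > 1:
        j = rng.randrange(len(waves) - 1)   # the pair (j, j+1) interacts
        S += abs(waves[j]) * abs(waves[j + 1])
        waves[j:j + 2] = [waves[j] + waves[j + 1]]  # strengths add up
    return S

alphas = [0.3, -0.1, 0.25, -0.4, 0.15]           # made-up signed strengths
bound = 0.5 * sum(abs(a) for a in alphas) ** 2   # (1/2) * (TV c_I)^2
assert all(interaction_sum(alphas, seed=s) <= bound + 1e-12
           for s in range(100))
```

Each merged product $|a||b|$ expands into products $|\alpha_i||\alpha_j|$ of original strengths over disjoint index pairs, which is exactly why the bound is order-independent.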
\subsection{The case $\ln u_b \in L^\infty(0,T)$}
For $\ln u_b \in L^\infty$ and $c_0,c_b \in BV$ we get a $BV$ structure
for the velocity.
\begin{theorem}\label{thbvu}{\bf [ BV structure for the velocity]}
We assume (\ref{Hconvex}).
\\
If $\ln u_b \in L^\infty(0,T)$, if $c_0, \, c_b \in BV$ and if $(c,u)$ is a weak entropy solution
issued from the FTA, then
$$\max \left(\sup_{0 <t <T} TV_x c(t,.)[0,X],\sup_{0 <x <X} TV_t c(.,x)[0,T]
\right) \leq TV c_I $$
and there exists a function $v$ and constants $\gamma,\,\Gamma>0$ such that
$u(t,x) = u_b(t) \times v(t,x)$ with
$$ \ln v \in \left \{ L^\infty((0,X),BV(0,T)) \cap
L^\infty((0,T),BV(0,X))\right\}\subset BV((0,T)\times(0,X)),$$
\begin{eqnarray*}
\sup_{0 <t <T} TV_x \ln v(t,.)[0,X] & \leq & \gamma\, TV c_I, \\
\sup_{0 <x <X} TV_t \ln v(.,x)[0,T] & \leq &
2\,\gamma\, TV c_I + \frac{\Gamma}{2} \left(TV c_I \right)^2.
\end{eqnarray*}
\end{theorem}
The new result in this theorem is that $\displaystyle \frac{u(t,x)}{u_b(t)} $ is BV with
respect to time, although $u_b$ is not assumed to be BV, but only in
$L^\infty$. The other regularity properties have already been proved in
\cite{BGJ06,BGJ07}.\\[2mm]
{\bf \textit{Proof: }}
the first estimates for $c$ are easily obtained as in Theorem \ref{thbvp}
since the total variation of the concentration does not increase after an
interaction. The existence proof of such entropy solution follows the beginning of the
proof of Theorem \ref{thbvp} which is a natural adaptation of the existence proof from \cite{BGJ06,
BGJ07} with only $L^\infty$ velocity.\\
We now study the new $BV$ estimates for $v$.
We can define $v$ by the relation $u(t,x)=u_b(t)v(t,x)$ because $u_b>0$.
Let $ M = \ln v$ and $M_b=\ln v(\cdot, x=0)$. The initial total variation of $M$ on $x=0$ is
$TV M_b =0$ since $v(t,x=0) = 1$.
\\
We approximate $u_b$ by piecewise constant data (thus in $BV$)
and we show that the $BV$ estimate for $M$ is independent of $u_b$.
Notice the fundamental relation:
$$[L] = \ln u_+ -\ln u_-
= \ln ( u_b(t)\, v_+) - \ln (u_b(t)\, v_-)
= \ln v_+ -\ln v_-= [M].$$
The equality $[L]=[M]$ implies that the $\lambda$-waves (\ref{eqlwave}) are the
same
in coordinates $(c,L)$ and $(c,M)$.
Then, Theorem \ref{thWI} is still valid replacing $L$ by $M$.
We then can repeat the proof of Theorem \ref{thbvp}
to get $BV$ estimates for $v$.
\cqfd
\section{Weak limit for velocity with $BV$ concentration}\label{sks}
When $c$ is only in $BV$, we cannot reduce System (\ref{sysad})
to a scalar conservation law for $c$ as in section \ref{skf}.
Indeed, since the shock speeds depend on the velocity, we have a true $
2\times 2$ hyperbolic system.
Nevertheless, we can state the following stability result.
\begin{theorem}[Stability with respect to weak limit for the velocity
\label{thsswlv}]~\\
Let $ \left( \ln (u_b^ \varepsilon) \right)_{0< \varepsilon<1}$ be a bounded sequence
in $L^\infty(0,T)$, such that
$$u_b^ \varepsilon \rightharpoonup \overline{u}_b
\mbox{ in } L^\infty(0,T) \mbox{ weak *}.$$
Let $c_0\in BV((0,X),[0,1])$ and $c_b \in BV((0,T),[0,1])$.
Let $(c^ \varepsilon,u^ \varepsilon)$ be a weak entropy solution of System (\ref{sysad})
on $(0,T)\times (0,X)$ issued from the FTA
with initial and boundary values:
\begin{equation*} \label{sysad0epsBV}
\left\{ \begin{array}{ccl}
\vspace{2mm}c^ \varepsilon(0,x)&=&c_0(x), \quad X> x > 0,\\
\vspace{2mm}c^ \varepsilon(t,0) &=&c_b(t),\quad T> t>0,\\
u^ \varepsilon(t,0)&=&u_b^ \varepsilon\left(t \right),\quad T>t>0.
\end{array}\right.
\end{equation*}
Then, there exists $ (u(t,x), c(t,x))$, weak entropy solution of System
(\ref{sysad})
supplemented by initial and boundary values:
\begin{equation*} \label{eqcu0}
\left\{ \begin{array}{ccl}
\vspace{2mm}c(0,x)&=&c_0(x), \quad x > 0,\\
\vspace{2mm}c(t,0) &=&c_b(t),\quad t>0,\\
u(t,0)&=&\overline{u}_b(t),\quad t>0,
\end{array}\right.
\end{equation*}
such that, when $ \varepsilon$ goes to $0$ and up to a subsequence:
\begin{eqnarray*}
c^ \varepsilon(t,x) & \rightarrow & c(t,x)
\mbox{ strongly in } L^1( [0,T]\times[0,X]), \\
u^ \varepsilon(t,x) & \rightharpoonup & u(t,x)
\mbox{ weakly in } L^\infty( [0,T]\times[0,X]) \mbox{ weak *}, \\
u^ \varepsilon(t,x) & = & u_b^ \varepsilon(t)\times v(t,x) + o(1)
\mbox{ strongly in } L^1( [0,T]\times[0,X]),
\mbox{ where } \displaystyle v(t,x)= \frac{u(t,x)}{\overline{u}_b(t)}.
\end{eqnarray*}
\end{theorem}
For the convergence of the whole sequence we need the uniqueness
of the entropy solution for the initial-boundary value problem
(\ref{sysad}), (\ref{sysad0}).
\\
{\bf \textit{Proof: }}
from Theorem \ref{thbvu} we know that $u^ \varepsilon (t,x) = u_b^ \varepsilon(t)
v^ \varepsilon(t,x)$ where the sequences $(\ln v^ \varepsilon )_{0< \varepsilon}$
and $( c^ \varepsilon )_{0< \varepsilon}$ are uniformly bounded in $BV((0,T)\times(0,X))$.
Then, up to a subsequence, we have the following strong convergence
in $ L^1((0,T)\times(0,X))$:
$ v^ \varepsilon \rightarrow v $,
$ c^ \varepsilon \rightarrow c $.
Saying that $(c^ \varepsilon,u^ \varepsilon)$ is a weak entropy solution of (\ref{sysad}) means
that, for all $\psi$ such that $\psi''\geq 0$ and $Q$ such that
$ Q'=h'\psi+ H\psi'$, we have in the distribution sense:
$$
\partial_x\left(u^ \varepsilon(t,x) \psi(c^ \varepsilon) \right)
+\partial_t Q(c^ \varepsilon) \leq 0,
$$
which is rewritten as follows:
$$
\partial_x\left(u_b^ \varepsilon(t)v^ \varepsilon(t,x) \psi(c^ \varepsilon) \right)
+\partial_t Q(c^ \varepsilon) \leq 0.
$$
Passing again to the weak limit against a strong limit, we get:
$$
\partial_x\left(\overline{u}_b(t)v(t,x) \psi(c) \right)
+\partial_t Q(c) \leq 0,
$$
i.e. $(c,u=\overline{u}_b\times v)$ is a weak entropy solution of System
(\ref{sysad}).
We can also pass to the limit in the initial-boundary data.
Since there exists $\delta $ such that $ 0 < \delta < u_b^ \varepsilon < \delta^{-1}$,
the convergence $ v^ \varepsilon(t,x) \rightarrow v(t,x)$ means
$ u^ \varepsilon(t,x)/u_b^ \varepsilon(t) - v(t,x) \rightarrow 0$,
and thus also
$ u^ \varepsilon(t,x)- u_b^ \varepsilon(t) \times v(t,x) \rightarrow 0$,
which concludes the proof.
\cqfd
\underline{An example of high oscillations
for the velocity}:
as an example of weak limit we consider the case of high oscillations
of the velocity on the boundary.
Let $u_b(t,\theta) \in L^\infty((0,T),C^0(\mathbb{R}/\mathbb{Z},\mathbb{R}))$,
$\overline{u}_b(t)= \displaystyle \int_0^1 u_b(t,\theta)d\theta$ and assume
$\inf u_b > 0$.
With $u_b^ \varepsilon(t) = \displaystyle u_b\left(t,\displaystyle \frac{t}{ \varepsilon} \right)$
and the same notations as in Theorem \ref{thsswlv} we have:
\begin{itemize}
\item
first, oscillations do not affect the behavior of the concentration
since $(c^ \varepsilon) $ converges strongly in $L^1 $ towards $c$ and
the limiting system depends only on the average $\overline{u}_b$ and not on
oscillations;
\item
second, $(u^ \varepsilon)$ converges weakly towards
$\overline{u}_b(t) \times v(t,x)$
and we have a strong profile for $u^ \varepsilon$:
$$\lim_{ \varepsilon \rightarrow 0}
\left \| u^ \varepsilon(t,x) - U\left(t,x,\frac{t}{ \varepsilon} \right)
\right\|_{L^1((0,T)\times(0,X))} = 0,
\mbox{ where }
U(t,x,\theta)= \displaystyle u_b(t,\theta) \times v(t,x).$$
\end{itemize}
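This two-scale behavior can be observed numerically. The Python sketch below assumes an explicit profile $u_b(t,\theta)=\frac{3}{2}+\frac{1}{2}\sin(2\pi\theta)$ (an assumption for illustration, so that $\overline{u}_b\equiv\frac{3}{2}$ and $\inf u_b = 1 > 0$); it checks that the pairing of $u_b^\varepsilon$ with a smooth test function converges to the pairing with the average, while the $L^1$ distance to the average does not vanish.

```python
import math

def u_b(t, theta):
    """Assumed oscillating profile u_b(t, theta) = 1.5 + 0.5*sin(2*pi*theta),
    whose average over theta is u_bar(t) = 1.5, with inf u_b = 1 > 0."""
    return 1.5 + 0.5 * math.sin(2.0 * math.pi * theta)

def pairing(eps, phi, n=200000):
    """Midpoint-rule approximation of int_0^1 u_b(t, t/eps) * phi(t) dt."""
    h = 1.0 / n
    return h * sum(u_b((i + 0.5) * h, (i + 0.5) * h / eps) * phi((i + 0.5) * h)
                   for i in range(n))

def l1_distance_to_average(eps, n=200000):
    """Midpoint-rule approximation of int_0^1 |u_b(t, t/eps) - 1.5| dt."""
    h = 1.0 / n
    return h * sum(abs(u_b((i + 0.5) * h, (i + 0.5) * h / eps) - 1.5)
                   for i in range(n))

phi = lambda t: t * (1.0 - t)   # smooth test function vanishing at 0 and 1
target = 1.5 / 6.0              # int_0^1 1.5 * phi(t) dt

# Weak* convergence: the pairing approaches the pairing with the average.
assert abs(pairing(1e-3, phi) - target) < 1e-3

# No strong convergence: the L^1 distance stays near 0.5*(2/pi) ~ 0.318.
assert l1_distance_to_average(1e-3) > 0.3
```

The first assertion illustrates the weak* limit $u_b^\varepsilon \rightharpoonup \overline{u}_b$; the second shows why only a two-scale profile, not a strong limit of $u^\varepsilon$ itself, can be expected.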
The Port of Corabia is a port on the Danube with a history of over 130 years.
Originally built 2 km upstream of its current location, the old port was mainly intended for shipping agricultural products.
Since 1984 the port has been relocated to its current site; it now has a 1,126-metre berthing front on the Danube and 15 berths for mooring and cargo handling.
Notes
External links
Official website
Corabia
Q: Selenium *read* text in prompt

We're using Selenium for implementing web tests. In one scenario, our application comes up with a browser prompt. It is possible to handle browser prompts with:
var alert = driver.SwitchTo().Alert();
alert.Accept(); // accept prompt
alert.Dismiss(); // dismiss prompt
alert.Text; // get text from prompt
alert.SendKeys("text"); // fill out the input element on the prompt
Is there any chance we can read the pre-populated text from the input element?
A: The JavaScript code for the prompt() method which displays this type of dialog looks like this:
window.prompt("prompt text", "default value");
This will display a dialog box with a label and an input text box where the user is expected to type the value. In the example above, the label will have the text of "prompt text"; the input box will be prepopulated with "default value". The WebDriver Alert.getText() method returns the prompt text, but you're right, there is no way to get the default value at the moment.
Baudoin (Chair), R. Banuelos, D. Danielli, N. Garofalo\n\n### Probability Seminar, TBA, REC 226\n\nThursday, April 24, 2014, 3:30 - 4:30 PM EDT\n\nTBA\n\n### Computational & Applied Mathematics Colloquium, Professor Leszek Demkowicz,The University of Texas at Austin, REC 316\n\nFriday, April 25, 2014, 3:30 - 4:30 PM EDT\n\n## Discontinuous Petrov Galerkin (DPG) Method with Optimal Test Functions Fundamentals\n\nAbstract: The coming June will mark the fifth anniversary of the first two papers in which Jay Gopalakrishnan and I proposed a novel Finite Element (FE) technology based on what we called the ultra-weak variational formulation'' and the idea of computing (approximately) optimal test functions on the fly [1,2]. We called it the Discontinuous Petrov Galerkin Method''. Shortly afterward we learned that we owned neither the concept of the ultra-weak formulation nor the name of the DPG method, both introduced in a series of papers by colleagues from Milano: C. L. Bottasso, S. Micheletti, P. Causin and R. Sacco, several years earlier. The name ultra-weak'' was stolen from O. Cessenat and B. Despres. But the idea of computing optimal test functions was new... From the very beginning we were aware of the fact that the Petrov-Galerkin formulation is equivalent to a Minimum Residual Method (generalized Least Squares) in which the (minimized) residual is measured in a dual norm, the idea pursued much earlier by colleagues from Texas A&M: J. Bramble, R. Lazarov and J. Pasciak. Jay and I were lucky; a few months after putting [1,2] on line, Wolfgang Dahmen and Chris Schwab presented essentially the same approach pointing to a connection with mixed methods and the fact that the use of discontinuous test functions is not necessary. The lecture will focus on fundamentals of the DPG method. We will discuss the equivalence of several formulations: Petrov-Galerkin method with optimal test functions, minimum residual formulation and a mixed formulation. 
We will summarize well-posedness results for formulations with broken test functions: the ultra-weak formulation based on first order systems and the formulation derived from standard second order equations. Standard model problems: Poisson, linear elasticity, Stokes, linear acoustics and Maxwell equations, will be used to illustrate the methodology with h-, p-, and hp-convergence tests. The DPG method comes with a posteriori-error evaluator (not estimator...) built in which provides a natural framework for adaptivity. Take home message: The DPG method guarantees stability for any well-posed linear problem. [1] L. Demkowicz and J. Gopalakrishnan, A class of discontinuous Petrov-Galerkin methods. PartI: The transport equation,'' CMAME: 199, 23-24, 1558-1572, 2010. [2] L. Demkowicz and J. Gopalakrishnan, A class of discontinuous Petrov-Galerkin methods. Part II: Optimal test functions,'' Num. Meth. Part. D.E.:27, 70-105, 2011.\nRefreshments will be served in the Math Library Lounge at 3:00 PM.\n\n## May\n\n### Computational and Applied Mathematics Seminar, Professor Shari Moskow, Drexel University, REC 316\n\nFriday, May 2, 2014, 3:30 - 4:30 PM EDT","date":"2014-03-11 21:07:16","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5467086434364319, \"perplexity\": 1850.272818934003}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": 
true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-10\/segments\/1394011278480\/warc\/CC-MAIN-20140305092118-00051-ip-10-183-142-35.ec2.internal.warc.gz\"}"}
| null | null |
De Lijnmarkt is a street in the inner city of the Dutch city of Utrecht.
The roughly 150-metre-long street runs between the Maartensbrug and the Gaardbrug, a short distance from and parallel to the Oudegracht. The buildings on the east side of the Lijnmarkt back directly onto this waterway, without an intervening street, wharf cellars, or wharf. In earlier times there was still an open wharf along the canal, but over the centuries permission was given to build over it, provided a public path remained. The arcaded gallery is the result of this adaptation.
The Lijnmarkt originated in the Middle Ages. From early times, trade, crafts, and markets were concentrated in the merchant quarter Stathe, situated at the foot of the stronghold Trecht. The markets for trade goods usually each had a specific location, per product, in and directly around the merchant quarter. Around 1200 the entire site of the Lijnmarkt already hosted a market. Presumably a market hall stood here, and grain was traded.
Images
See also
List of rijksmonuments in the Lijnmarkt
External link
Various photographs of the Lijnmarkt - Het Utrechts Archief
Street in Utrecht (Binnenstad)
Q: Set Azure SQL Server Active Directory admin to an AD group using an Azure PowerShell inline task in Azure DevOps I am trying to execute the command below in Azure DevOps to set an AD group as the SQL Server Active Directory admin.
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "xyz" -ServerName "xyzsqlserver" -DisplayName "ADgroup" -ObjectId "27f75d8c-xxxx-xxxx-xxxx-xxxxxxxxxx"
Below are the error logs:
2020-05-07T15:55:05.2211587Z ##[command]Disconnect-AzAccount -Scope Process -ErrorAction Stop
2020-05-07T15:55:05.6167436Z ##[command]Clear-AzContext -Scope Process -ErrorAction Stop
2020-05-07T15:55:05.9479005Z ##[error]Cannot find the Azure Active Directory object 'Adgroup'. Please make sure that the user or group you are authorizing is registered in the current subscription's Azure Active directory. To get a list of Azure Active Directory groups use Get-AzADGroup, or to get a list of Azure Active Directory users use Get-AzADUser.
2020-05-07T15:55:06.0117846Z ##[section]Finishing: Azure PowerShell script: InlineScript
Note: I checked the AD group, and the corresponding object ID is correct.
powershell task 4.0 and version 3.1.0
A: I can reproduce your issue. First, make sure the group is in the same tenant as your service connection.
Then navigate to the Azure portal -> Azure Active Directory -> App registrations -> find the app registration related to your service connection, and add the application permission Directory.Read.All of Azure Active Directory Graph (not Microsoft Graph). Don't forget to click the Grant admin consent for xxx button at the end.
After adding the permission, there is some delay (30 minutes to 1 hour); after that, the command works.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,384
|
\section{Introduction}
Experimental and simulated solar devices yield a certain current under a given applied voltage and illumination.
Due to the underlying semiconductor physics of p-n junctions, this relation is highly non-linear \cite{shockley1949theory}.
Most of these current-voltage (IV) curves can be described by a simple electronic model with a lumped series and shunt resistance, a diode, and an ideal current source.
The numerical challenge is to find the five fitting parameters for all electric components within the single-diode equivalent-circuit model from a set of experimental data given at $n$ discrete voltages $V_1$, ..., $V_n$ and their corresponding current values $I_1(V_1)$, ..., $I_n(V_n)$.
There exist many algorithms in literature \cite{hegedus2004thin,diantoro2018shockley,toledo2018two,mehta2019accurate} and even their errors are analysed \cite{phang1986review}.
However, the goal of this work is to find a numerically robust method that also works for IV data perturbed by noise, for data with outliers or sparse sampling, and for modules instead of cells, where the higher voltages imply a significantly higher diode ideality factor.
The following section will present the methodology of fitting, while the one after introduces an executable program capable of fitting measured data sets.
\section{Methods}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{OneDiodeModel.pdf}
\caption{\textbf{Single-diode equivalent-circuit model.} While the photo current represents the generated current, the voltage-dependent shunt and diode currents flow in the opposing direction reducing the produced power.}
\label{fig:onediodeModel}
\end{figure}
In general, the current-voltage relation\footnote{In this work, all currents are treated as absolute currents $I$ with the SI unit Amperes ($\mathrm A$). However, the same methodology also works for current densities $j$ with the SI unit Ampere per square meter $\left( \frac{\mathrm A}{\mathrm m^2} \right)$.} of regular solar devices at the temperature $T$ as seen in Figure \ref{fig:onediodeModel} can be described by the implicit relation \cite{hegedus2004thin}
\begin{align}
I(V) &= -I_{\mathrm{ph}} + I_\mathrm{sh}(V) + I_\mathrm{d}(V) \notag\\
&=-I_{\mathrm{ph}} + \frac{V - I(V) \cdot R_{\mathrm{s}}}{R_{\mathrm{sh}}} \notag\\
&\;\;\;\:\,+ I_0 \cdot \left(\exp\left(\frac{q_{\mathrm{e}} \cdot\left(V - I(V) \cdot R_{\mathrm{s}}\right)}{n_{\mathrm{d}} {k_{\mathrm{B}}} T}\right) - 1\right)\label{diodeEQ}.
\end{align}
Here, $I_{\mathrm{ph}}$ is the generated photo current, $R_{\mathrm{s}}$ and $R_{\mathrm{sh}}$ the lumped resistances in series and shunt, $I_0$ the reverse saturation current, $n_{\mathrm{d}}$ the diode ideality factor, ${k_{\mathrm{B}}}$ the Boltzmann constant, and $q_{\mathrm{e}}$ the elementary charge.
By using the Lambert W function \cite{lambert1758observationes}, it can be converted into an explicit equation \cite{jain2004exact,ghani2013numerical,zinsser2022finite}.
\begin{align}
I(V) =\;&\frac{n_{\mathrm{d}} {k_{\mathrm{B}}} T}{q_{\mathrm{e}} R_{\mathrm{s}}} \cdot \mathcal{W}\left(f_{\mathrm{Lam}}\right) + \frac{V - R_{\mathrm{sh}} \cdot \left(I_{\mathrm{ph}} + I_0\right)}{R_{\mathrm{s}}+R_{\mathrm{sh}}} \label{diodeEQexplicit},
\end{align}
where $\mathcal{W}\left(x\right)$ is the Lambert W function and
\begin{align}
f_{\mathrm{Lam}} = \;& \frac{q_{\mathrm{e}} I_0 R_{\mathrm{sh}} R_{\mathrm{s}}}{n_{\mathrm{d}} {k_{\mathrm{B}}} T \left(R_{\mathrm{s}}+R_{\mathrm{sh}}\right)} \notag\\
&\cdot \exp\left({\frac{q_{\mathrm{e}} R_{\mathrm{sh}} \Big(R_{\mathrm{s}} \left(I_{\mathrm{ph}} + I_0\right)+V\Big)}{n_{\mathrm{d}} {k_{\mathrm{B}}} T \left(R_{\mathrm{s}}+R_{\mathrm{sh}}\right) }}\right).
\end{align}
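Equation \eqref{diodeEQexplicit} can be evaluated numerically in a few lines. The following sketch assumes SciPy's implementation of the Lambert W function; the parameter values used in the usage note below are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy.special import lambertw

Q_E = 1.602176634e-19  # elementary charge (C)
K_B = 1.380649e-23     # Boltzmann constant (J/K)

def diode_current(V, I_ph, I_0, n_d, R_s, R_sh, T=300.0):
    """Explicit single-diode current I(V) via the Lambert W function."""
    V_t = n_d * K_B * T / Q_E  # thermal voltage scaled by the ideality factor
    f_lam = (I_0 * R_sh * R_s / (V_t * (R_s + R_sh))
             * np.exp(R_sh * (R_s * (I_ph + I_0) + V) / (V_t * (R_s + R_sh))))
    return (V_t / R_s * lambertw(f_lam).real
            + (V - R_sh * (I_ph + I_0)) / (R_s + R_sh))
```

For illustrative parameters ($I_{\mathrm{ph}}=30$ mA, $I_0=1$ nA, $n_{\mathrm{d}}=1.5$, $R_{\mathrm{s}}=1\,\Omega$, $R_{\mathrm{sh}}=1\,\mathrm{k}\Omega$), the current at $V=0$ comes out close to $-I_{\mathrm{ph}}$, as expected for a device with a large shunt resistance.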
This section starts with the actual process of fitting and finishes with the calculation of the characteristic data, such as the open circuit voltage from the fitted parameters.
In Table \ref{tab:variables}, all variables used in this work are explained.
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\centering
\begin{tabular}{ l p{6cm} }
\toprule
variable & meaning\\
\midrule
$V$ & voltage \\
$V_i$ & $i$-th voltage of the experimental data set to be fitted \\
$V_{\mathrm{oc}}$ & open circuit voltage \\
$V_\mathrm{MPP}$ & voltage at the maximum power point \\
$I$ & current \\
$I_i$ & $i$-th current of the experimental data set to be fitted \\
$I_{\mathrm{ph}}$ & generated photo current \\
$I_\mathrm{d}$ & current flowing across the diode \\
$I_\mathrm{sh}$ & shunt current \\
$I_0$ & reverse saturation current \\
$I_{\mathrm{sc}}$ & short circuit current \\
$I_\mathrm{MPP}$ & current at the maximum power point \\
$P$ & power \\
$P_i$ & $i$-th power of the experimental data set to be fitted (calculated from $V_i$ and $I_i$) \\
$P_\mathrm{MPP}$ & power at the maximum power point \\
$R_{\mathrm{s}}$ & series resistance \\
$R_{\mathrm{sh}}$ & shunt resistance \\
$n_{\mathrm{d}}$ & diode ideality factor \\
$\mathrm{FF}$ & fill factor \\
$q_{\mathrm{e}}$ & elementary charge \\
${k_{\mathrm{B}}}$ & Boltzmann constant \\
$\mathcal{W}(x)$ & Lambert W function \cite{lambert1758observationes} \\
$\mathcal{L}_i$ & function value of $\mathcal W(x)$ in the $i$-th iteration of approximation \\
$R^2$ & coefficient of determination \\
$n$ & amount of experimental data points \\
\bottomrule
\end{tabular}
\caption{\textbf{Table of all variables and symbols in this work.} All current values can also be replaced by current density values.}
\label{tab:variables}
\end{table}
\renewcommand{\arraystretch}{1}
\subsection{Fitting Procedure}
The fitting process is divided into two sections.
The first one being the initial guess for the start values of the fitting parameters and the second one being the actual fitting algorithm itself.
Since IV characteristics are highly non-linear, getting a sophisticated initial guess is crucial for the subsequent fitting algorithm and therefore is the most important task.
As already described in the introduction, the goal of the outlined initial guess is not to be as precise as possible, but to be as robust and universal as possible.
\subsubsection{Initial Guess}
The initial guess for the fitting algorithm is obtained by the following procedure.
\begin{enumerate}
\item As a first step, a cubic Savitsky-Golay filter \cite{savitzky1964smoothing} with a window size of 9 is applied to the experimental data in order to smooth the data and not be sensitive to outliers and noise. Moreover, the current values are multiplied by $-1$ if necessary in order to put the data in the appropriate quadrant.
\item The maximum power point (MPP) is roughly estimated as the discrete data point with the maximum power calculated via $P_i = V_i \cdot I_i$.
\item The open circuit voltage $V_{\mathrm{oc}}$ is estimated by linear interpolation between the last data point with a negative current and the first data point with a positive current. If there are only data points with negative currents, it is calculated via $V_{\mathrm{oc}} = 1.2 \cdot V_\mathrm{MPP}$.
\item The diode ideality factor is estimated to be $n_{\mathrm{d}} = 2 \cdot \frac{V_{\mathrm{oc}}}{\left[\mathrm{Volt}\right]}$.
\item All data points with a voltage below 20\,\% of $V_{\mathrm{oc}}$ are fitted with a linear regression. The inverse slope is considered as the shunting resistance $R_{\mathrm{sh}}$ and the y-intercept as the photo current $I_{\mathrm{ph}}$.
\item The 5 data points with the largest voltage are fitted linearly. The inverse slope of the regression is taken as the initial guess for the series resistance $R_{\mathrm{s}}$.
\item Finally, the reverse saturation current is calculated by the diode equation \eqref{diodeEQ} at $V = V_{\mathrm{oc}}$ and hence $I(V_{\mathrm{oc}}) = 0$ via $I_0 = \frac{I_{\mathrm{ph}} - V_{\mathrm{oc}}/R_{\mathrm{sh}}} {\exp\left(q_{\mathrm{e}} V_{\mathrm{oc}}/(n_{\mathrm{d}} {k_{\mathrm{B}}} T)\right) - 1}.$
\end{enumerate}
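Steps 1–7 above can be condensed into the following Python sketch. It is not the paper's implementation; the sign conventions (produced current negative, so the produced power is $-V \cdot I$) and the handling of array edge cases are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

Q_E, K_B = 1.602176634e-19, 1.380649e-23

def initial_guess(V, I, T=300.0):
    """Initial guess for (I_ph, I_0, n_d, R_s, R_sh); V, I sorted by voltage."""
    I = savgol_filter(I, 9, 3)           # step 1: cubic Savitzky-Golay smoothing
    if I[0] > 0:                         # flip sign so the produced current is negative
        I = -I
    V_mpp = V[np.argmax(-V * I)]         # step 2: discrete maximum power point
    i = np.where(I < 0)[0][-1]           # step 3: V_oc from the sign change
    if i + 1 < len(V) and I[i + 1] > 0:
        V_oc = V[i] - I[i] * (V[i + 1] - V[i]) / (I[i + 1] - I[i])
    else:
        V_oc = 1.2 * V_mpp
    n_d = 2.0 * V_oc                     # step 4: ideality factor, V_oc in volts
    low = V < 0.2 * V_oc                 # step 5: linear fit of the shunt region
    slope, intercept = np.polyfit(V[low], I[low], 1)
    R_sh, I_ph = 1.0 / slope, -intercept
    slope_s = np.polyfit(V[-5:], I[-5:], 1)[0]   # step 6: series resistance
    R_s = 1.0 / slope_s
    I_0 = (I_ph - V_oc / R_sh) / (np.exp(Q_E * V_oc / (n_d * K_B * T)) - 1)
    return I_ph, I_0, n_d, R_s, R_sh     # step 7: I_0 from Eq. (1) at V_oc
```

On a smooth synthetic IV curve, this recovers the photo current and shunt resistance to within a few percent, which is accurate enough to seed the Levenberg–Marquardt stage.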
\subsubsection{Convergent Fitting Method}
Starting from the initial guess described in the previous section, the partial derivatives with respect to all 5 fitting parameters are derived analytically via the Lambert W function.
Using them, a Levenberg–Marquardt algorithm \cite{levenberg1944method,marquardt1963algorithm} is used in order to perform a regression to all data points.
Since the initial guesses for the photo current and the shunt resistance are typically accurate within a few percent, they are not fitted with the other parameters.
Hence, only $I_0$, $n_{\mathrm{d}}$, and $R_{\mathrm{s}}$ are fitted in this subprocedure.
At this point, the regression curve usually fits the data points very well.
However, in a second procedure, all 5 parameters are fitted with the Levenberg–Marquardt algorithm to give the program the chance to adapt every parameter at the same time.
\subsection{Calculate Characteristic Data}
Once all fitted diode parameters $I_{\mathrm{ph}}$, $I_0$, $n_{\mathrm{d}}$, $R_{\mathrm{s}}$, and $R_{\mathrm{sh}}$ are determined, the solar parameters open circuit voltage $V_{\mathrm{oc}}$, short circuit current $I_{\mathrm{sc}}$, and fill factor $\mathrm{FF}$ need to be calculated.
This section describes their direct determination from the fitting parameters.
\subsubsection{Open Circuit Voltage $V_{\mathrm{oc}}$}
At the open circuit point, the current $I(V)$ vanishes.
Therefore, the open circuit voltage $V_{\mathrm{oc}}$ can be determined by Equation \eqref{diodeEQ} via
\begin{align}
V_{\mathrm{oc}} &= R_{\mathrm{sh}} I_{\mathrm{ph}} - \frac{n_{\mathrm{d}} {k_{\mathrm{B}}} T}{q_{\mathrm{e}}} \cdot \mathcal{W}\left(f_{\mathrm{Lam}}^{\mathrm{oc}}\right)
\end{align}
with
\begin{align}
f_{\mathrm{Lam}}^{\mathrm{oc}} &= \frac{q_{\mathrm{e}} I_0 R_{\mathrm{sh}}} {n_{\mathrm{d}} {k_{\mathrm{B}}} T} \cdot \exp\left(\frac{q_{\mathrm{e}} R_{\mathrm{sh}} I_{\mathrm{ph}}} {n_{\mathrm{d}} {k_{\mathrm{B}}} T} \right)\label{eq:Voc}.
\end{align}
If the exponent in Equation \eqref{eq:Voc} is larger than the maximum exponent of double value ($\sim 308$ for IEEE double precision \cite{ieee2019standard}) the Lambert W function $\mathcal{W}\left(f_{\mathrm{Lam}}\right)$ is calculated via the approximation for large numbers given in Appendix \ref{apd:lambert}.
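A numerical sketch of this evaluation is given below. The large-argument expansion $\mathcal{W}(x) \approx \ln x - \ln\ln x + \ln\ln x / \ln x$ is a standard asymptotic formula used here in place of the paper's appendix (which is not reproduced in this section); the cutoff value of 700 for the exponent is likewise an assumption chosen to stay below the IEEE double overflow limit.

```python
import numpy as np
from scipy.special import lambertw

Q_E, K_B = 1.602176634e-19, 1.380649e-23

def open_circuit_voltage(I_ph, I_0, n_d, R_sh, T=300.0):
    """V_oc from the Lambert W expression, with an overflow-safe branch."""
    V_t = n_d * K_B * T / Q_E
    log_f = np.log(I_0 * R_sh / V_t) + R_sh * I_ph / V_t  # log of f_Lam^oc
    if log_f > 700.0:              # exp(log_f) would overflow an IEEE double
        L1, L2 = log_f, np.log(log_f)
        w = L1 - L2 + L2 / L1      # asymptotic expansion of the Lambert W function
    else:
        w = lambertw(np.exp(log_f)).real
    return R_sh * I_ph - V_t * w
```

Near the cutoff the two branches agree closely, since the relative error of the truncated expansion decays with the argument.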
\subsubsection{Short Circuit Current $I_{\mathrm{sc}}$}
The short circuit point is defined as the state without an externally applied voltage and therefore shorted contacts.
Hence, $V=0$ is plugged into Equation \eqref{diodeEQexplicit}, which can be solved via the Lambert W function, yielding
\begin{align}
I_{\mathrm{sc}} &= \frac{n_{\mathrm{d}} {k_{\mathrm{B}}} T}{q_{\mathrm{e}} R_{\mathrm{s}}} \cdot \mathcal{W}\left(f_{\mathrm{Lam}}^{\mathrm{sc}}\right) - \frac{R_{\mathrm{sh}} (I_{\mathrm{ph}} + I_0)}{R_{\mathrm{sh}} + R_{\mathrm{s}}}
\end{align}
with
\begin{align}
f_{\mathrm{Lam}}^{\mathrm{sc}} &= \frac{q_{\mathrm{e}} I_0 R_{\mathrm{sh}} R_{\mathrm{s}}}{n_{\mathrm{d}} {k_{\mathrm{B}}} T (R_{\mathrm{sh}}+R_{\mathrm{s}})} \cdot \exp\left( \frac{q_{\mathrm{e}} R_{\mathrm{sh}} R_{\mathrm{s}} (I_{\mathrm{ph}} + I_0)}{n_{\mathrm{d}} {k_{\mathrm{B}}} T (R_{\mathrm{sh}}+R_{\mathrm{s}})} \right).
\end{align}
\subsubsection{Maximum Power Point MPP}
In order to obtain the power $P$ from a current-voltage characteristic, the current $I$ is multiplied by the voltage $V$.
The maximum power point (MPP) is defined as the point in the IV curve with the highest produced power.
To calculate the MPP, Equation \eqref{diodeEQexplicit} is multiplied by $V$ and afterwards the maximum is determined via
\begin{align}
\frac{\mathrm dP(V)}{\mathrm dV} = \frac{\mathrm d(I(V) \cdot V)}{\mathrm dV} = 0
\end{align}
by using the Newton-Raphson method.
This yields the MPP voltage $V_\mathrm{MPP}$, resulting in the MPP current $I_\mathrm{MPP} = I(V_\mathrm{MPP})$ via Equation \eqref{diodeEQexplicit} and MPP power $P_\mathrm{MPP} = P(V_\mathrm{MPP})$.
\subsubsection{Fill Factor FF}
Based on the calculation of the MPP, the fill factor $\mathrm{FF}$ is the area ratio of the two rectangles drawn from the origin towards the MPP and from the origin towards the axis intercepts.
It is therefore given as
\begin{align}
\mathrm{FF} &= \frac{V_\mathrm{MPP}I_\mathrm{MPP}}{V_{\mathrm{oc}}I_{\mathrm{sc}}}.
\end{align}
\subsubsection{Coefficient of Determination $R^2$}
To quantify how well the fit matches the experimental data, a gauge is introduced.
This happens to be the coefficient of determination $R^2$ and is defined as
\begin{align}
R^2 &= 1 - \frac{\sum_i^n \Big(I_i(V_i) - I(V_i)\Big)^2}{\sum_i^n \Big(I_i(V_i) - I_\mathrm{mean}\Big)^2},
\end{align}
where
\begin{align}
I_\mathrm{mean} &= \frac 1n \sum_i^n I_i(V_i).
\end{align}
It typically ranges from 0 (fit always predicts $I_\mathrm{mean}$) to 1 (fit perfectly matches all given data points).
A negative $R^2$ means that the model is even worse than the mean value.
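In vectorized form, the gauge is a straightforward two-liner:

```python
import numpy as np

def r_squared(I_data, I_fit):
    """Coefficient of determination between measured and fitted currents."""
    ss_res = np.sum((I_data - I_fit) ** 2)            # residual sum of squares
    ss_tot = np.sum((I_data - np.mean(I_data)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives exactly 1, and a fit that always predicts the mean gives exactly 0, matching the interpretation above.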
\section{Program}
The executable program of the described algorithm is adaptable to certain circumstances as seen in Figure \ref{fig:options}.
Both formatting preferences and units of the voltage and the current (density) can be selected.
Moreover, a certain temperature is required.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{options.png}
\caption{\textbf{Screenshot of the preferences section.} Besides the format symbols and the temperature, the units of the voltage and the current or current density can be selected.}
\label{fig:options}
\end{figure}
The main window of the program is shown in Figure \ref{fig:program}.
Within the left side, the experimental current-voltage data can be inserted.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{program.png}
\caption{\textbf{Screenshot of the main menu of the program.} On the left side, experimental data is inserted. The middle section contains the fitted diode parameters, the resulting solar cell parameters, all buttons and the logging output box. On the right side, a plot with the experimental data (black points), the fitted curve (dark red) and the area covering the MPP power (light red) are shown.}
\label{fig:program}
\end{figure*}
The middle section shows the fitted diode parameters and the calculated characteristic parameters.
Furthermore, there are all buttons to operate the program.
The main button "Fit" simply fits the experimental data on the left with the above described methodology.
There is also the possibility to only get the initial guess from the data.
The button "Fit from above parameters" does not calculate an initial guess but rather takes the already defined diode parameters as starting values.
The check boxes determine which parameters are fitted.
On the right side, the experimental data is plotted as black points and the fitted curve as red line.
Additionally, a red box represents the covered power at the MPP.
The source code and executable file can be found under \url{https://github.com/Pixel-95/SolarCell_DiodeModel_Fitting} within a GitHub repository.
\section{Conclusion}
This work introduces a methodology of fitting current-voltage characteristics of solar devices with the single-diode equivalent-circuit model.
Due to the general calculation of the initial guess it is applicable even to perturbed data sets of cells and modules.
Afterwards a Levenberg–Marquardt algorithm is applied in order to convergently determine the required fitting parameters.
\section{Acknowledgements}
The author acknowledges fruitful and productive discussions with Tim Helder, Erwin Lotter, and Andreas Bauer.
This work was inspired and supported by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under the contract number 0324353A (CIGSTheoMax).
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 5,931
|
Seven New Movies Added to the Cannes Film Festival Line-up
Vesna Sunrider
As we already know, the Cannes Film Festival opens in France on May 16th, and a documentary about the world's waste will get a special screening. All in all, seven new movies have been added to the official line-up, as follows: Trashed, by British director Candida Brady; Australian director Wayne Blair's The Sapphires, a musical about an Australian Aboriginal singing group, which joins the midnight movies roster along with Franck Khalfoun's Franco-American horror remake Maniac. Organizers also added three films Monday to the festival's Un Certain Regard sidebar: Djeca, by Aida Begic from Bosnia-Herzegovina; Renoir, by France's Gilles Bourdos; and Gimme the Loot, Adam Leon's film about Bronx graffiti artists. The Cannes line-up also includes Final Cut, a montage film produced by Hungarian auteur Bela Tarr. Finally, the jury presided over by Italian filmmaker Nanni Moretti consists of: German actress Diane Kruger, British actor Ewan McGregor, fashion designer Jean-Paul Gaultier, The Descendants director Alexander Payne, Palestinian actress Hiam Abbass, British writer Andrea Arnold, French actress Emmanuelle Devos, and Haitian filmmaker Raoul Peck.
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,372
|
import sys
from setuptools import setup, find_packages, Extension
name = 'ikp3db'
version = '1.4.1'
if sys.version_info[:2] < (3,6):
sys.exit('Sorry, IKPdb only supports Python 3.6 and above.')
long_description = (
open('README.rst').read()
)
iksettrace3_module = Extension('iksettrace3', sources=['iksettrace3.c'])
setup(
name = name,
version = version,
#packages = find_packages('src'), # We use py_modules below
py_modules = ['ikp3db'],
#package_dir={'': 'src'},
license='MIT',
author='Cyril MORISSE',
author_email='cmorisse@boxes3.net',
description="A hackable CPython 3.6+ remote debugger designed for the Web and online IDE integration. Fork of IKPdb.",
long_description = long_description,
keywords = "debugger debug remote tcp",
include_package_data=True,
url = 'https://github.com/cmorisse/ikp3db',
classifiers=[
#'Framework :: Buildout',
'Development Status :: 5 - Production/Stable',
'Environment :: Web Environment',
'Intended Audience :: Developers',
'Topic :: Software Development :: Debuggers',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
'Natural Language :: English',
],
ext_modules=[iksettrace3_module]
)
\section{Introduction}
In Quantum Mechanics, quantum states are modelled by vectors in a complex Hilbert space. In the presence of an environment, one uses the formalism of density matrices to describe quantum states. The dimension of the corresponding Hilbert space grows exponentially with the number of quantum particles one wants to describe, hence it is very difficult to precisely characterize the interesting physical properties of arbitrary density matrices. One resorts then to understanding the properties of \emph{typical quantum states}, in the sense that one would like to find properties which hold with large probability, for some suitable (natural from a physical perspective) probability measure on the set of density matrices.
Several such natural probability measures have been considered in the literature: the induced measure (closely related to the Wishart ensemble from random matrix theory) \cite{zyczkowski2001induced}, the Bures measure (motivated by statistical considerations) \cite{hall1998random}, random matrix product states (motivated by condensed matter theory) \cite{garnerone2010typicality}, and random graph states (which encode a given entanglement structure) \cite{collins2010randoma}. Our first contribution in this work is to introduce density matrix ensembles for bipartite quantum systems which have a given \emph{symmetry}. We consider respectively, the symmetric and the anti-symmetric subspace of a tensor product of Hilbert spaces, and introduce random density matrices supported on these subspaces. Since the construction is very similar to that of the induced ensemble \cite{zyczkowski2001induced}, we call these the \emph{bosonic}, respectively the \emph{fermionic induced ensembles} of mixed quantum states.
We then study the entanglement properties of states from these ensembles, focusing on the bipartite case (two fermions and two bosons). We analyze the spectrum of the \emph{partial transposition} \cite{horodecki1996separability,peres1996separability} of these random density matrices in the large $N$ limit. For the bipartite fermionic ensemble, we find (see Corollary \ref{cor:fermionic-threshold}) that a typical fermionic mixed state is entangled. This is due to the presence of a large negative eigenvalue in the spectrum of the partial transposition of these states.
In the bosonic case, the situation is more complicated, since the symmetry of the state is responsible for a large positive eigenvalue of the partial transposition. This outlier eigenvalue makes the study of the spectrum of the partial transposition more delicate. The asymptotic spectrum of the partial transposition of a random bosonic density matrix is computed in Theorem \ref{thm:main}, which we state informally here:
\begin{theorem}
Let $\rho_N$ be an $N^2 \times N^2$ element from the bosonic ensemble. The spectrum of the partial transposition of $\rho_N$ contains, in the large $N$ limit:
\begin{itemize}
\item a large, \emph{outlier}, positive eigenvalue, of order $1/N$
\item a \emph{bulk}, containing $N^2-1$ eigenvalues, of order $1/N^2$, whose empirical distribution follows, when properly normalized, a \emph{shifted semicircle distribution}.
\end{itemize}
The bulk of the spectrum is characterized by the shape parameter of the bosonic induced ensemble, and we observe a \emph{threshold phenomenon}: the limiting probability distribution of the bulk eigenvalues has negative support if and only if the shape parameter $c$ is smaller than 4.
\end{theorem}
We discuss the implications of this result for the entanglement of typical bosonic states in Corollary \ref{cor:bosonic-threshold}: when the shape parameter of the bosonic induced ensemble is smaller than $4$, a typical symmetric mixed state will be entangled. We also show that the \emph{realignment criterion} \cite{chen2002matrix, rudolph2003cross} gives exactly the same entanglement threshold as the positive partial transposition (PPT) criterion.
We analyze the asymptotic spectrum of the random matrices we consider (and their partial transposition) using the moment method. Due to the imposed symmetry of the models, direct computations are cumbersome; we put forward a novel connection between such moment computations and a well-known graph polynomial: the \emph{circuit counting polynomial} of directed graphs. In combinatorics and graph theory, this polynomial has received a lot of attention recently, in relation to the \emph{interlace polynomial} \cite{arratia2004interlace}. We use this connection in our derivations to bound the asymptotic moments of the random matrices we are considering, showcasing the importance of the newly discovered parallel between moment computations and graph polynomials.
The paper is organized as follows. In Section \ref{sec:first-def} we recall some basic definitions from quantum information theory and from the theory of symmetric operators. In Sections \ref{sec:fermionic} and \ref{sec:bosonic} we introduce, respectively, the fermionic and the bosonic induced ensembles. In Section \ref{sec:main-result} we state the main result on
the asymptotic spectrum of a random bosonic quantum state. In Section \ref{sec:graph-poly} we recall the main facts on the circuit counting graph polynomial, relating it to generalized traces of the symmetric projection. This connection is used in Section \ref{sec:moments} to express the moments of the partial transposition of a random bosonic density matrix to evaluations of the circuit counting graph polynomial. Sections \ref{sec:t-channel}, \ref{sec:diagrammatics}, \ref{sec:bounds}, \ref{sec:tensor-eval} contain technical results used in Section \ref{sec:proof-main-result} for the proof of the main result. We conclude in Section \ref{sec:conclusion}, where we also lay out some directions for future research.
\section{Quantum states, entanglement, symmetry}\label{sec:first-def}
In this section we recall the basic notions from quantum information theory which motivate our work, and then gather some basic mathematical facts regarding symmetric operators needed later. We start with a brief overview of quantum states, the notion of quantum entanglement, and the partial transposition entanglement criterion.
In quantum mechanics, quantum states are modelled by \emph{density matrices}. Each quantum system comes with a complex Hilbert space $\mathcal H$, which we shall consider here to be finite dimensional: $\mathcal H \approx \mathbb C^N$; we say that the quantum system has $N$ degrees of freedom. A density matrix is a positive semidefinite operator acting on $\mathcal H$, with unit trace:
$$\rho \in \mathcal M_N(\mathbb C), \, \rho \geq 0, \, \Tr \rho = 1.$$
The Hilbert space corresponding to two quantum systems is obtained by taking the \emph{tensor product} of the individual Hilbert spaces:
$$\mathcal H_{AB} = \mathcal H_A \otimes \mathcal H_B.$$
A bipartite quantum state $\rho_{AB}$ is said to be \emph{separable} if it can be written as a convex mixture of product states:
$$\rho_{AB} = \sum_{i=1}^k p_i \alpha_i \otimes \beta_i,$$
for density matrices $\alpha_i$, $\beta_i$, and probabilities $p_i$. Non-separable quantum states are called \emph{entangled}. Deciding whether a given bipartite density matrix is separable or entangled is a computationally hard problem \cite{gurvits2004classical}. For this reason, several computationally efficient necessary or sufficient conditions for entanglement have been developed \cite[Section VI.B]{horodecki2009quantum}.
In this paper, we shall be concerned with the most famous entanglement criterion, the \emph{positive partial transpose} criterion \cite{peres1996separability,horodecki1996separability} (we shall also briefly discuss the realignment criterion in Section \ref{sec:bosonic}). The starting observation is that the partial transposition of a separable state is positive semidefinite:
$$\rho_{AB}^\Gamma := [\operatorname{id} \otimes \mathrm{transp}](\rho_{AB}) = \sum_{i=1}^k p_i \, \alpha_i \otimes \beta_i^\top \geq 0.$$
Hence, if the partial transposition of a quantum state $\rho_{AB}$ is \emph{not} positive semidefinite, the state must be entangled. This criterion is known to be exact only if the total Hilbert space dimension is at most 6; we refer the reader to \cite[Section VI.B.1]{horodecki2009quantum} for further properties.
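As a concrete illustration of the PPT criterion (a numerical sketch, not part of the formal development; all sizes and names are our own choices), the following NumPy snippet computes the spectrum of the partial transposition for the maximally entangled state and for the maximally mixed state at $N=2$:

```python
import numpy as np

def partial_transpose(rho, N):
    """Transpose the second tensor factor of an (N^2 x N^2) bipartite matrix."""
    r4 = rho.reshape(N, N, N, N)                           # indices (i, j; k, l)
    return r4.transpose(0, 3, 2, 1).reshape(N * N, N * N)  # (i, l; k, j)

N = 2
omega = np.zeros((N * N, 1))
for i in range(N):
    omega[i * N + i] = 1 / np.sqrt(N)
rho_ent = omega @ omega.T              # maximally entangled (Bell) state
rho_sep = np.eye(N * N) / (N * N)      # maximally mixed, hence separable, state

ev_ent = np.linalg.eigvalsh(partial_transpose(rho_ent, N))
ev_sep = np.linalg.eigvalsh(partial_transpose(rho_sep, N))
print(ev_ent.min())   # -0.5 < 0: entanglement detected
print(ev_sep.min())   #  0.25 >= 0: PPT, as expected for a separable state
```

The negative eigenvalue $-1/2$ of the Bell state's partial transposition certifies its entanglement, while the maximally mixed state remains PPT.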
\bigskip
We shall now discuss several mathematical considerations regarding symmetry in quantum information theory; more precisely, we shall investigate the symmetric subspace of a tensor product of Hilbert spaces, and the related operators. Consider the $r^{\textrm{th}}$ tensor power $\mathcal{H}^{\otimes r}$ of $\mathcal H$. A natural basis of $\mathcal{H}^{\otimes r}$ is given by the set $\{e_{i_1}\otimes \ldots \otimes e_{i_r} \, : \, i_1,\ldots, i_r \in [N]\}$ where the $e_i$'s form a basis of $\mathcal{H}$, and $[N]$ denotes the set $\{1,2, \ldots, N\}$. Let $\sigma\in S_r$. We define the action of $\sigma$ on $\mathcal{H}^{\otimes r}$ by extending linearly its action on basis elements, so that,
\begin{equation}
\sigma(e_{i_1}\otimes \ldots \otimes e_{i_r})=e_{i_{\sigma(1)}}\otimes \ldots \otimes e_{i_{\sigma(r)}}.
\end{equation}
We now define two orthogonal projectors
\begin{equation}
P_{s,r}=\frac 1 {r!}\sum_{\sigma\in S_r} \sigma \quad \textrm{ and } \quad P_{a,r}=\frac1{r!}\sum_{\sigma\in S_r} \epsilon(\sigma) \sigma
\end{equation}
where $\epsilon(\sigma)$ is the signature of the permutation $\sigma$. The operator $P_{s,r}$ is the orthogonal projector onto the symmetric subspace $\mathrm{Sym}_r \mathcal{H}$, while $P_{a,r}$ is the orthogonal projector onto the antisymmetric subspace (or $r^{\textrm{th}}$ exterior power) $\wedge^r\mathcal{H}$.
We define the $r^{\textrm{th}}$ symmetric power (resp. the $r^{\textrm{th}}$ exterior power) of $\mathcal H$ as the subspace $P_{s,r}(\mathcal{H}^{\otimes r})$ (resp. $P_{a,r}(\mathcal{H}^{\otimes r})$). We recall
\begin{equation}
N_s[r]:=\mathrm{dim }\ \mathrm{Sym}_r \mathcal{H}= \binom{N+r-1}{r} \ \mathrm{ and } \ N_a[r]:= \mathrm{dim }\ \wedge^r\mathcal{H}=\binom{N}{r}.
\end{equation}
The orthogonal projection on the symmetric subspace can be written as \cite{harrow2013church}
\begin{equation}\label{eq:def-P-sym}
P_{s,r} = \binom{N+r-1}{r} \int_{\|x\|=1} (xx^*)^{\otimes r} \mathrm{d}x,
\end{equation}
where $\mathrm{d}x$ denotes the uniform probability measure on the unit sphere of $\mathcal H$. For example, in the bipartite case, we have
$$P_{s,2} = \frac 1 2 (I+F) \ \textrm{and } P_{a,2}=\frac12(I-F)$$
where $F$ is the \emph{flip} (or \emph{swap}) operator, $F \, x \otimes y = y \otimes x$, and $I$ is the identity operator, represented by the $N^2\times N^2$ identity matrix. Bipartite tensors can be identified with matrices in a straightforward way, and, under this identification, we have $P_{s,2} A = (A+A^\top)/2$ and $P_{a,2} A=(A-A^\top)/2$. Note that this corresponds to symmetrizing and anti-symmetrizing matrices; in the complex case, however, we do not obtain self-adjoint or skew-adjoint matrices, since we are not taking complex conjugates of the entries. Since most of our work will use these projectors for $r=2$, we use the simplified notation $P_s=P_{s,2}$ and $P_a=P_{a,2}$.
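The $r=2$ identities above are easy to verify numerically. The following sketch (illustrative only; the dimension $N=4$ is arbitrary) builds the flip operator, checks that $P_{s,2}$ and $P_{a,2}$ are projectors of ranks $N(N\pm1)/2$, and that $P_{s,2}$ acts as matrix symmetrization without complex conjugation:

```python
import numpy as np

N = 4
I = np.eye(N * N)
# flip operator: F (x ⊗ y) = y ⊗ x
F = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        F[i * N + j, j * N + i] = 1

P_s, P_a = (I + F) / 2, (I - F) / 2
assert np.allclose(P_s @ P_s, P_s) and np.allclose(P_a @ P_a, P_a)
print(np.trace(P_s), np.trace(P_a))   # N(N+1)/2 = 10.0 and N(N-1)/2 = 6.0

# identification of bipartite tensors with matrices: P_s vec(A) = vec((A + A^T)/2)
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
assert np.allclose(P_s @ A.reshape(-1), ((A + A.T) / 2).reshape(-1))
```

Note that the transpose $A^\top$ (not the conjugate transpose) appears, matching the remark in the text.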
We state below a computation which will be useful later (see \cite{holmes2018partial} for a related problem, stated in representation theoretic language).
\begin{lemma}\label{lem:partial-trace-P-sym}
The partial trace of the projection on the symmetric subspace can be computed as follows: (below, $0 \leq r \leq k$):
$$[\id_r \otimes \Tr_{k-r}]P_{s, k} = \frac{N_s[k]}{N_s[r]} P_{s,r}.$$
\end{lemma}
\begin{proof}
We use the integral representation from eq.~\eqref{eq:def-P-sym}:
\begin{align*}[\id_r \otimes \Tr_{k-r}]P_{s, k} &= N_s[k] \int_{\|x\|=1} [\id_r \otimes \Tr_{k-r}]\left((xx^*)^{\otimes k}\right) \mathrm{d}x\\
&= N_s[k] \int_{\|x\|=1} (xx^*)^{\otimes r} \mathrm{d}x\\
&= \frac{N_s[k]}{N_s[r]} P_{s,r}.
\end{align*}
\end{proof}
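A quick numerical check of the lemma (an illustrative sketch, not part of the proof) for $r=1$, $k=2$, where it reads $[\id_1 \otimes \Tr_1]P_{s,2} = \frac{N+1}{2} I_N$:

```python
import numpy as np

N = 5
# projector on the symmetric subspace of C^N ⊗ C^N
F = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        F[i * N + j, j * N + i] = 1
P_s2 = (np.eye(N * N) + F) / 2

# partial trace over the second tensor factor of an (N^2 x N^2) matrix
ptr = P_s2.reshape(N, N, N, N).trace(axis1=1, axis2=3)
assert np.allclose(ptr, (N + 1) / 2 * np.eye(N))   # (N+1)/2 = N_s[2]/N_s[1]
```

The ratio $(N+1)/2$ is exactly $N_s[2]/N_s[1] = \binom{N+1}{2}/N$.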
\section{The fermionic induced ensemble}\label{sec:fermionic}
In this section we introduce the first random matrix model for quantum states that we shall study in this paper, that is the matrix model for random fermionic states. Random quantum states have played an important role in quantum physics and quantum information theory, being used to study properties of typical quantum systems and also as a source of examples of states with interesting properties.
Ensembles of random density matrices have been considered by several authors, having different physical and mathematical motivations \cite{zyczkowski2001induced,zyczkowski2003hilbert,nechita2007asymptotics,garnerone2010typicality,zyczkowski2011generating}. The obvious candidate, the \emph{Lebesgue measure} on the convex body of density matrices, is part of a one-parameter family of probability measures, called the \emph{induced ensembles} \cite{zyczkowski2001induced}. The general setting is as follows. Consider a random pure state $\ket \psi \in \mathcal H \otimes \mathcal H_E$, uniformly distributed on the unit sphere of the tensor product between the Hilbert space of interest $\mathcal H$ and an auxiliary Hilbert space $\mathcal H_E$, called the \emph{environment}. Define a random density matrix
$$\rho := \Tr_E \ketbra{\psi}{\psi}$$
by tracing out the environment. In this way, we obtain a random matrix acting on $\mathcal H$; the dimension of the environment, $\dim \mathcal H_E$, is a parameter of the model. It was recognized in \cite{sommers2004statistical,nechita2007asymptotics} that the induced measure can be understood as a \emph{normalized Wishart measure}:
$$\rho = \frac{Y}{\Tr Y},$$
where $Y = GG^*$, for $G : \mathcal H_E \to \mathcal H$ a random Gaussian matrix (an element of the \emph{Ginibre ensemble}).\\
We start by introducing the random matrix model for fermionic quantum states, which we call the \emph{fermionic induced ensemble}. We then establish entanglement results for this model of random fermionic states.\\
In practice, we study an antisymmetric random Gaussian tensor $G_a\in \wedge^r \mathcal{H}\otimes \mathcal{H}_E$. We identify the space of antisymmetric tensors
\begin{equation}\label{eq:identif_fermions}
\wedge^r\mathcal{H}\cong \textrm{Ran}(P_{a,r})\subseteq \mathcal{H}^{\otimes r},
\end{equation}
where $P_{a,r}$ is defined in Section \ref{sec:first-def} to be the projector on the antisymmetric subspace of $\mathcal{H}^{\otimes r}$. We use this identification to construct the antisymmetric random Gaussian tensor from a complex random standard Gaussian tensor $G \in \mathcal{H}^{\otimes r}\otimes \mathcal{H}_E$, that is a tensor whose entries are standard normal random variables. We will be interested in the marginals obtained by tracing out the environmental degrees of freedom. We study the random matrix $A:= \Tr_E(G_a G_a^*)$. Note that, using the identification from \eqref{eq:identif_fermions}, we can also write
\begin{equation}\label{eq:fermionic-Wishart}
A=P_{a,r}\Tr_E(GG^*)P_{a,r},
\end{equation}
where $G$ is the complex standard Gaussian tensor. In the language of quantum information theory, normalizing matrices $A$ yields random density matrices having an \emph{antisymmetry} constraint, reflecting their fermionic character on the $r$ copies of $\mathcal H$.
\begin{definition}
The \emph{fermionic induced ensemble} of parameters $(N,M,r)$ is the set of random density matrices of size $N^r$ defined by
\begin{equation}
\rho^{(A)}=\frac{A}{\Tr A},
\end{equation}
where $A$ is an antisymmetrized Wishart random matrix from Eq.~\eqref{eq:fermionic-Wishart}. Matrices from the fermionic induced ensemble are supported on the antisymmetric subspace of $(\mathbb{C}^{N})^{\otimes r}$.
\end{definition}
Let us now comment on our choice of model. As already detailed in the introduction, our main motivation is that these types of states are more realistic than arbitrary (possibly non-symmetric) random states, since quantum experiments often involve states of electrons or ${}^{3}\mathrm{He}$ atoms (which are fermions), or states of photons, ${}^{87}\mathrm{Rb}$ or ${}^4\mathrm{He}$ atoms (which are bosons, whose ensemble we introduce later in Section \ref{sec:bosonic}). The corresponding degrees of freedom must have antisymmetric/symmetric wave functions, since they are fermions/bosons, while the environment has no reason to have any kind of symmetry. This explains why we impose the symmetry condition on the tensor product $\mathcal{H}^{\otimes r}$, while no symmetry is imposed on the tensor product with the environment system $\mathcal H_E$. \\
In the rest of this section, we set $r=2$, for technical reasons. While in the fermionic case we could rather easily extend our results to general $r$, it does not appear feasible with our current techniques to study the bosonic case for general $r$. For consistency throughout the paper, we present our results in the $r=2$ case for both the fermionic and the bosonic induced ensembles.\\
We set the dimension of the Hilbert spaces as follows
\begin{align*}
\dim \mathcal{H}&=:N \\
\dim \mathcal{H}_E&=:M.
\end{align*}
We shall consider an asymptotic regime where both $N,M \to \infty$ in such a way that $M = cN^2$ for some fixed constant $c \in (0, \infty)$. This choice of scaling is the standard one needed for the convergence of standard Wishart random matrices to the Mar\u{c}enko-Pastur distribution:
\begin{equation}\label{eq:Marchenko-Pastur}
\mathrm{d}\mathrm{MP}_c=\max (1-c,0)\delta_0+\frac{\sqrt{(b-x)(x-a)}}{2\pi x} \; \mathbf{1}_{[a,b]}(x) \, \mathrm{d}x,
\end{equation}
with $a = (1-\sqrt c)^2$ and $b=(1+\sqrt c)^2$.\\
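As a sanity check of this scaling (an illustrative sketch only; the dimensions are chosen small for speed), one can sample a complex Wishart matrix with unit-variance entries and compare the first two spectral moments and the support edges with those of $\mathrm{MP}_c$, namely $\int x \,\mathrm{d}\mathrm{MP}_c = c$ and $\int x^2 \,\mathrm{d}\mathrm{MP}_c = c^2 + c$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c = 500, 2.0
M = int(c * d)
# complex Ginibre matrix with E|g_ij|^2 = 1
G = (rng.standard_normal((d, M)) + 1j * rng.standard_normal((d, M))) / np.sqrt(2)
ev = np.linalg.eigvalsh(G @ G.conj().T / d)

print(ev.mean())            # ≈ c = 2.0
print((ev ** 2).mean())     # ≈ c^2 + c = 6.0
print(ev.min(), ev.max())   # ≈ (1-√c)^2 ≈ 0.17 and (1+√c)^2 ≈ 5.83
```

The empirical spectrum concentrates on $[a,b]$ with $a=(1-\sqrt c)^2$ and $b=(1+\sqrt c)^2$, as in Eq.~\eqref{eq:Marchenko-Pastur}.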
The limiting eigenvalue distribution of the properly normalized random matrix $A$ is the same as that of a non-symmetrized Wishart matrix.
\begin{proposition}\label{prop:moments-A}
The random matrix $A$ converges in moments, as $N \to \infty$, $M \sim cN^2$, to a Mar\u{c}enko-Pastur distribution of parameter $2c$:
$$\forall p \geq 1, \qquad \lim_{N \to \infty} \mathbb{E} \frac{1}{N_a[2]} \Tr \left[ \left(\frac{A}{N_a[2]}\right)^p \right] = \int x^p \mathrm{d}\mathrm{MP}_{2c}(x).$$
\end{proposition}
\begin{proof}
After restricting it to its support ($\wedge^2(\mathbb C^N)$), the random matrix $A$ is indeed a Wishart matrix of parameters $(N_a[2], M)$. The result follows from the standard theorem of Mar\u{c}enko and Pastur, see \cite{marcenko1967distribution}, \cite[Theorem 3.6]{bai2010spectral}, or \cite[Proposition 2.4]{dartois2020joint}. The parameter of the limiting distribution is given by
$$\lim_{N \to \infty} \frac{M}{N_a[2]} = 2c.$$
\end{proof}
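The following numerical sketch (illustrative only; $N=30$, $c=1$ are chosen for speed) checks Proposition \ref{prop:moments-A} at moderate size: the nonzero eigenvalues of $A/N_a[2]$ have first two moments close to the Mar\u{c}enko-Pastur values $2c$ and $(2c)^2 + 2c$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, c = 30, 1.0
M = int(c * N ** 2)

# antisymmetrizer P_a on C^N ⊗ C^N
F = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        F[i * N + j, j * N + i] = 1
P_a = (np.eye(N * N) - F) / 2

G = (rng.standard_normal((N * N, M)) + 1j * rng.standard_normal((N * N, M))) / np.sqrt(2)
A = P_a @ (G @ G.conj().T) @ P_a

Na = N * (N - 1) // 2                          # dim of the antisymmetric subspace
ev = np.linalg.eigvalsh(A)[-Na:] / Na          # nonzero eigenvalues, rescaled
print(ev.mean())        # ≈ 2c = 2
print((ev ** 2).mean()) # ≈ (2c)^2 + 2c = 6
```

At this size the ratio $M/N_a[2] = 900/435 \approx 2.07$ is already close to its limit $2c = 2$.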
In order to understand the entanglement properties of the fermionic random states we just introduced, we proceed by first studying the average of their partial transposition. We show that $A^{\Gamma}$ has, on average, a negative eigenvalue, on a larger scale than the rest of the spectrum, for all $c>0$ (in the asymptotic regime where $M \sim cN^2$). Before we commence, recall that the \emph{maximally entangled state} is the unit norm vector
$$\ket \Omega = \frac{1}{\sqrt N} \sum_{i=1}^N \ket{ii} \in \mathbb C^N \otimes \mathbb C^N,$$
where $\{\ket i \}_{i \in [N]}$ is the canonical orthonormal basis of $\mathbb C^N$. We also write $\omega = \ketbra{\Omega}{\Omega}$ for the corresponding density matrix.
\begin{proposition}\label{prop:fermionic-large-eigenvalue}
The average of the partial transpose $A^{\Gamma}$ is given by
\begin{equation}
\mathbb{E} A^{\Gamma}=MP_{a,2}^{\Gamma}=\frac{M}{2}I_{N^2}-\frac{MN}{2} \omega.
\end{equation}
The eigenvalues of $\mathbb{E} A^{\Gamma}$ are as follows:
\begin{equation}
\mathrm{spec}(\mathbb{E} A^{\Gamma})=\begin{cases} -\frac{M(N-1)}{2} & \textrm{ with multiplicity } 1 \\
\frac{M}{2} & \textrm{with multiplicity } N^2-1.
\end{cases}
\end{equation}
\end{proposition}
\begin{proof}
The proof is a direct computation. Using the Wick-Isserlis theorem, we compute $\mathbb{E} A=\frac{M}{2}(I_{N^2}-F)$, where $F$ is the flip (or swap) operator
$$F=\sum_{i,j=1}^N\ketbra{ij}{ji}.$$
Applying the partial transpose to the flip operator, we deduce that
$$F^{\Gamma}=\sum_{i,j=1}^N(\ketbra{ij}{ji})^{\Gamma}=\sum_{i,j=1}^N\ketbra{ii}{jj}=N\ketbra{\Omega}{\Omega} = N \omega.$$
Writing $P_{a,2} = (I-F)/2$ yields the formula for $\mathbb{E} A^\Gamma$.
The exact form of the spectrum follows from the fact that $\omega$ is a rank-one projection.
\end{proof}
We take away from the above proof that the maximally entangled vector $\lvert \Omega \rangle$ is an eigenvector of $\mathbb{E} A^{\Gamma}$ with a negative eigenvalue for all $N\ge2$ and $c>0$, asymptotically of the form $-\frac{c}{2}N^3+O(N^2)$. The next theorem shows that the same holds asymptotically for the random matrix $A^\Gamma$.
\begin{theorem}
The maximally entangled vector $\lvert \Omega \rangle$ is an approximate eigenvector of the random matrix $N^{-3}A^\Gamma$ with corresponding approximate eigenvalue $-c/2$: if
\begin{equation}
\delta_N:=\|N^{-3}A^\Gamma\lvert \Omega \rangle+c/2\lvert \Omega \rangle \|^2,
\end{equation}
then $\lim_{N\rightarrow\infty}\mathbb{E}\delta_N=0$. In particular, the random variable $\delta_N$ converges in probability towards zero as $N\rightarrow \infty$.
\end{theorem}
\begin{proof}
We start with a more explicit formula of $\delta_N$:
\begin{equation}
\delta_N=N^{-6}\langle \Omega\rvert (A^\Gamma)^2\lvert \Omega\rangle+cN^{-3}\langle \Omega \rvert A^{\Gamma} \lvert \Omega \rangle+\frac{c^2}{4}.
\end{equation}
Computing the average, we obtain, thanks to Proposition \ref{prop:fermionic-large-eigenvalue},
\begin{equation}
\mathbb{E}\delta_N=N^{-6}\mathbb{E}\langle \Omega\rvert (A^\Gamma)^2\lvert \Omega\rangle-\frac{c^2}{4}+o(1).
\end{equation}
We then use the Wick-Isserlis theorem to compute $\mathbb{E}\langle \Omega\rvert (A^\Gamma)^2\lvert \Omega\rangle$, and we find an expression involving the partial trace $\Tr_1 P_{a,2}$,
\begin{align*}
\mathbb{E}\langle \Omega\rvert (A^\Gamma)^2\lvert \Omega\rangle&= M^2\langle \Omega | (P_{a,2}^\Gamma)^2| \Omega \rangle + \frac{M}{N} \Tr\left[ (\Tr_1 P_{a,2})^2 \right] \\
&=\frac{M^2}{N}\Tr\left[ (\Tr_1 P_{a,2})^2 \right]+\frac{M}{N} \Tr\left[ (\Tr_1 P_{a,2})^2 \right].
\end{align*}
We compute $\Tr_1(P_{a,2})=\frac12(N-1)I_N$ from which we deduce
\begin{align*}
\mathbb{E}\langle \Omega\rvert (A^\Gamma)^2\lvert \Omega\rangle&=\left[ \frac{c^2}{4}N^6+\frac{c}{4}N^4\right]\left( 1-\frac2{N}+\frac1{N^2}\right)=\frac{c^2}{4}N^6+O(N^5),
\end{align*}
showing that $\mathbb{E}\delta_N\to 0$. The convergence in probability then follows from Markov's inequality: for all $a>0$, we have
\begin{equation}
\mathbb{P}(\delta_N\ge a)\le \frac1{a}\mathbb{E}\delta_N \to 0,
\end{equation}
concluding the proof.
\end{proof}
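The theorem above can be illustrated numerically (a sketch, not part of the proof; $N=30$, $c=1$ are our choices for speed): at moderate size, $\delta_N$ is already small and the smallest eigenvalue of $N^{-3}A^\Gamma$ is close to $-c/2$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, c = 30, 1.0
M = int(c * N ** 2)

F = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        F[i * N + j, j * N + i] = 1
P_a = (np.eye(N * N) - F) / 2

G = (rng.standard_normal((N * N, M)) + 1j * rng.standard_normal((N * N, M))) / np.sqrt(2)
A = P_a @ (G @ G.conj().T) @ P_a
AG = A.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(N * N, N * N)  # A^Γ

omega = np.eye(N).reshape(-1) / np.sqrt(N)      # maximally entangled vector
delta = np.linalg.norm(AG @ omega / N ** 3 + (c / 2) * omega) ** 2
lam_min = np.linalg.eigvalsh(AG).min() / N ** 3

print(delta)      # small: Ω is an approximate eigenvector
print(lam_min)    # ≈ -c/2 = -0.5, the negative outlier of A^Γ
```

The negative outlier dominates the (order $N^2$) bulk of the spectrum of $A^\Gamma$, as claimed.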
These two results suffice to conclude that the random matrix $A^{\Gamma}$ is not positive semidefinite, as it has a negative eigenvalue at scale $N^3$. As a consequence, the corresponding normalized random state $\rho^{(A)}$ is entangled. More precisely, we have the following corollary.
\begin{corollary}\label{cor:fermionic-threshold}
Consider a sequence $(\rho^{(A)}_N)_{N\ge2}$ of density matrices from the fermionic induced ensemble of parameters $(N,M,2)$, with $M=cN^2$. Then, for all $c>0$,
\begin{equation}
\lim_{N\rightarrow \infty} \mathbb{P}[(\rho^{(A)}_N)^\Gamma\ge 0]=0,
\end{equation}
hence,
\begin{equation}
\lim_{N\rightarrow \infty} \mathbb{P}[\rho^{(A)}_N \textrm{ is entangled}\,]=1.
\end{equation}
\end{corollary}
This is in contrast with the result of Aubrun \cite{aubrun2012partial}, who considered random states of distinguishable particles. In his framework, the partial transposition is asymptotically positive semidefinite if and only if the environment Hilbert space is large enough (\textit{i.e.}~for $c>4$). What we show here is that typical states of fermions are always entangled: this does not depend on the size of the environment, as long as it is commensurate with the dimension of the two-particle Hilbert space $\wedge^2\mathcal{H}$. In the next section we study the same problem for random states of bosons. Bosonic states prove to be considerably more difficult to study, and the rest of this paper is devoted to them.
\section{The bosonic induced ensemble}\label{sec:bosonic}
In this section, we shall consider a setting analogous to that of the previous Section \ref{sec:fermionic}, only now the system Hilbert space has a \emph{symmetric tensor product structure}. In other words, we shall start from a random Gaussian tensor $G_s \in \mathrm{Sym}_r(\mathcal{H})\otimes \mathcal{H}_E$. In practice, we shall use the identification
\begin{equation}\label{eq:sym-subspace-range-Psym}
\mathrm{Sym}_r (\mathcal{H}) \cong \operatorname{Ran} (P_{s,r}) \subseteq \mathcal{H}^{\otimes r},
\end{equation}
where we recall that $P_{s,r}$ is the orthogonal projection on the symmetric subspace of $\mathcal{H}^{\otimes r}$. The tensor factor $\mathcal{H}_E$ is again the environment Hilbert space. These random vectors $G_s$ can be seen as (un-normalized) bosonic states on $r$ copies of $\mathcal H$ in contact with an environment $\mathcal H_E$. Similarly, we will be interested in the marginals obtained by tracing out the environmental degrees of freedom. We study the corresponding random matrix $W=\Tr_E(G_sG_s^*)$. Again, using the identification from \eqref{eq:sym-subspace-range-Psym}, we can also write
\begin{equation}\label{eq:def-W}
W=P_{s,r}\Tr_E(G G^*)P_{s,r},
\end{equation}
where $G$ is a random (complex) Gaussian vector in the Hilbert space $\mathcal{H}^{\otimes r}\otimes \mathcal{H}_E$; see Figure \ref{fig:W} for a pictorial representation of the random matrix $W$ at $r=2$. Normalizing matrices $W$ yields random density matrices which have a \emph{symmetry} constraint.
\begin{definition}\label{def:bosonic-induced}
The \emph{bosonic induced ensemble} of parameters $(N,M,r)$ is the set of random density matrices of size $N^r$ defined by
$$\rho = \frac{W}{\Tr W},$$
where $W$ is the symmetrized Wishart random matrix from \eqref{eq:def-W}. Matrices from the bosonic induced ensemble are supported on the symmetric subspace of $(\mathbb{C}^N)^{\otimes r}$.
\end{definition}
\begin{figure}
\centering
\includegraphics{symmetrized-Wishart.pdf}
\caption{A graphical representation of the symmetrized Wishart matrix $W$ that we shall study in this work. The grey boxes surrounding the matrix $GG^*$ are symmetrizers.}
\label{fig:W}
\end{figure}
Mimicking the construction of Section \ref{sec:fermionic}, we set the dimension of the Hilbert spaces as follows
\begin{align*}
\dim \mathcal{H}&=:N \\
\dim \mathcal{H}_E&=:M.
\end{align*}
Doing so, the limiting eigenvalue distribution of the properly normalized random matrix $W$ is the same as in the non-symmetrized Wishart case, and also the same as in the anti-symmetrized case of Proposition \ref{prop:moments-A}.
\begin{proposition}\label{prop:moments-W}
The random matrix $W$ converges in moments, as $N \to \infty$, $M \sim cN^2$, to a Mar\u{c}enko-Pastur distribution of parameter $2c$:
$$\forall p \geq 1, \qquad \lim_{N \to \infty} \mathbb{E} \frac{1}{N_s[2]} \Tr \left[ \left(\frac{W}{N_s[2]}\right)^p \right] = \int x^p \mathrm{d}\mathrm{MP}_{2c}(x).$$
\end{proposition}
\begin{proof}
After restricting it to its support ($\mathrm{Sym}_2(\mathbb C^N)$), the random matrix $W$ is indeed a Wishart matrix of parameters $(N_s[2], M)$. The result follows from the standard theorem of Mar\u{c}enko and Pastur, see \cite{marcenko1967distribution}, \cite[Theorem 3.6]{bai2010spectral}, or \cite[Proposition 2.4]{dartois2020joint}. The parameter of the limiting distribution is given by
$$\lim_{N \to \infty} \frac{M}{N_s[2]} = 2c.$$
\end{proof}
\begin{remark}
The exact moments of order $p$ of the random matrix $W$ can be computed in different manners, either as a sum over permutations, or as a sum over bipartite combinatorial maps
with one black vertex and $p$ edges $\mathcal{M}=(\alpha, \gamma)$ where $\alpha$ is a permutation of $S_p$ and $\gamma = (1 \, 2 \, 3\, \cdots \, p)$ is the full cycle:
\begin{align*}
\mathbb{E} \Tr \left( W^p \right) &= \sum_{\alpha \in S_p} N_s[2]^{\# (\gamma^{-1}\alpha)} M^{\#\alpha} \\
&= \sum_{\mathcal{M}}N_s[2]^{F(\mathcal{M})}M^{V_{\circ}(\mathcal{M})},
\end{align*}
where $F(\mathcal{M})$ is the number of faces of $\mathcal{M}$ and $V_{\circ}(\mathcal{M})$ its number of white vertices. We refer the reader to \cite[Section 2]{dartois2020joint} for the dictionary between these two approaches.
\end{remark}
\begin{remark}
The asymptotic moments from Proposition \ref{prop:moments-W} correspond to sums over \emph{geodesic permutations} $\alpha$, or, equivalently, over \emph{planar bipartite maps, or dessins d'enfants,} $\mathcal M$ with one black vertex. The limiting moments are values of the \emph{Narayana polynomial}:
$$\mathrm{Nar}(p, 2c) = \int x^p \mathrm{d}\mathrm{MP}_{2c}(x) = \sum_{k=0}^{p-1}\frac{(2c)^{k+1}}{k+1} \binom{p-1}{k}\binom{p}{k}.$$
\end{remark}
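A minimal implementation of the Narayana polynomial, under the convention used throughout that $\int x \, \mathrm{d}\mathrm{MP}_\lambda(x) = \lambda$ (the helper name `nar` is ours, for illustration only):

```python
from math import comb

def nar(p, lam):
    """Narayana polynomial: p-th moment of the Marchenko-Pastur law of mean lam."""
    return sum(lam ** (k + 1) / (k + 1) * comb(p - 1, k) * comb(p, k)
               for k in range(p))

# sanity checks against known values
print(nar(1, 2.0))   # 2.0: the mean of MP_2
print(nar(2, 2.0))   # 2 + 4 = 6.0: second moment lam + lam^2
print(nar(3, 1.0))   # 5.0: at lam = 1 the moments are the Catalan numbers
```

At $\lambda = 1$ the Mar\u{c}enko-Pastur moments reduce to the Catalan numbers, a standard consistency check.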
\section{The partial transposition --- statement of main results}\label{sec:main-result}
In this section we state the main result of this work, the characterization of the limiting spectrum of the \emph{partial transposition} of the random matrix $W$, in the large $N$ limit. Recall that $W$ is a $N^2 \times N^2$ random matrix, supported on the symmetric subspace $\mathrm{Sym}_2(\mathbb C^N) \subseteq \mathbb C^N \otimes \mathbb C^N$. Its partial transpose is defined as the action of the transposition operator on the second tensor factor:
\begin{equation}\label{eq:def-W-Gamma}
W^\Gamma = [\mathrm{id} \otimes \mathrm{transp}](W).
\end{equation}
The next theorem is the main result of our paper, providing a description of the spectrum of $W^\Gamma$ in the large $N$ limit. Interestingly, $W^\Gamma$ has a large eigenvalue of order $N^3$, and $N^2-1$ eigenvalues of order $N^2$.
\begin{theorem}\label{thm:main}
Let $\lambda_1\ge \lambda_2\ge \cdots \ge \lambda_{N^2}$ be the eigenvalues of the random matrix $W^\Gamma$ from Eq.~\eqref{eq:def-W-Gamma}. Then, in the asymptotic regime $N \to \infty$ and $M \sim cN^2$,
\begin{itemize}
\item $N^{-3}\lambda_1 \rightarrow \frac{c}{2}$ in probability;
\item the empirical distribution $\frac{1}{N^2-1}\sum_{i=2}^{N^2}\delta_{N^{-2}\lambda_i}$ converges in moments to a semicircular distribution of mean $c/2$ and variance $c/4$.
\end{itemize}
\end{theorem}
We recall here that the \emph{semicircular distribution} with mean $m$ and variance $\sigma^2$ is given by $$\mathrm{d} \mathrm{SC}_{m,\sigma} = \frac{1}{2 \pi \sigma^2} \sqrt{4\sigma^2 - (x-m)^2}\, \mathbf{1}_{[m-2\sigma, m+2\sigma]}(x) \, \mathrm{d}x.$$
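Theorem \ref{thm:main} can be illustrated numerically (an illustrative sketch only; the size $N=30$ and shape parameter $c=1$ are our choices): the top eigenvalue of $W^\Gamma$ sits near $\frac{c}{2}N^3$, while the remaining bulk, rescaled by $N^{-2}$, has mean $\approx c/2$ and variance $\approx c/4$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, c = 30, 1.0
M = int(c * N ** 2)

# symmetrizer P_s on C^N ⊗ C^N
F = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        F[i * N + j, j * N + i] = 1
P_s = (np.eye(N * N) + F) / 2

G = (rng.standard_normal((N * N, M)) + 1j * rng.standard_normal((N * N, M))) / np.sqrt(2)
W = P_s @ (G @ G.conj().T) @ P_s
WG = W.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(N * N, N * N)  # W^Γ

ev = np.sort(np.linalg.eigvalsh(WG))
bulk = ev[:-1] / N ** 2
print(ev[-1] / N ** 3)   # outlier ≈ c/2 = 0.5
print(bulk.mean())        # ≈ c/2 = 0.5, the mean of the shifted semicircle
print(bulk.var())         # ≈ c/4 = 0.25, its variance
```

For $c=1<4$, the semicircle support $[c/2 - \sqrt c, c/2 + \sqrt c]$ extends below zero, in line with Corollary \ref{cor:bosonic-threshold}.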
Let us record here an important corollary for quantum information theory. The \emph{partial transposition criterion} \cite{peres1996separability,horodecki1996separability} states that a \emph{separable} (i.e.~non-entangled) density matrix $\rho$ has a positive semidefinite partial transposition: $\rho^\Gamma \geq 0$. This fact is mostly used in the reverse direction, as an entanglement criterion: a density matrix $\sigma$ whose partial transposition has at least one negative eigenvalue ($\sigma^\Gamma \ngeq 0$) is necessarily entangled. We have thus the following result regarding the bosonic induced ensemble defined in Definition \ref{def:bosonic-induced}.
\begin{corollary}\label{cor:bosonic-threshold}
Consider a sequence $(\rho_N)_{N \geq 1}$ of density matrices from the bosonic induced ensemble of parameters $(N,M,2)$, with $M \sim cN^2$ as $N \to \infty$. If $c < 4$, then
$$\lim_{N \to \infty} \P[\rho_N^\Gamma \geq 0] = 0.$$
In particular,
$$\lim_{N \to \infty} \P[\rho_N \text{ is entangled }] = 1.$$
\end{corollary}
\begin{remark}
Note that the other regime, where $c > 4$, is much more involved, since the absence of negative support in the limiting measure of $\rho_N^\Gamma$ does not exclude the presence of negative outliers. In the standard (non-symmetrized) Wishart case, Aubrun \cite{aubrun2012partial} computed large moments of the partial transposition in order to reach this conclusion. We leave these considerations open: in our case, large moments would not allow us to conclude. This is due to our use of the Cauchy interlacing theorem. The study of large moments of the compressed matrix $Q$, introduced later, does not retain enough information on the behaviour of the outliers of $\rho_N^\Gamma$; at the same time, we cannot study them directly because of the large eigenvalue of $W^\Gamma$, as will be seen later.
\end{remark}
\bigskip
The proof of the main theorem is rather involved, and we shall proceed in several steps in the following sections, culminating with the final proof given in Section \ref{sec:proof-main-result}. We start, in the spirit of Section \ref{sec:fermionic}, with an analysis of the average state and of the outlier eigenvalue. However, in the bosonic case, this will not be sufficient to make a statement about entanglement. Hence the need of a more detailed study of the spectrum, provided by our main result, Theorem \ref{thm:main}.
Following the approach of Section \ref{sec:fermionic}, we compute the average of the partially transposed matrix $W^\Gamma$, and show that already on average, there is an outlier, \emph{positive}, eigenvalue on a scale larger than the rest of the spectrum.
Recall that the \emph{maximally entangled state} is the unit norm vector
$$\ket \Omega = \frac{1}{\sqrt N} \sum_{i=1}^N \ket{ii} \in \mathbb C^N \otimes \mathbb C^N,$$
where $\{\ket i \}_{i \in [N]}$ is the canonical orthonormal basis of $\mathbb C^N$.
\begin{proposition}\label{prop:eigenval-eigenvec-average}
The average of the partial transpose $W^{\Gamma}$ is given by
$$\mathbb{E} W^\Gamma = M P_s^\Gamma = \frac{M}{2} I_{N^2} + \frac{MN}{2} \ketbra{\Omega}{\Omega}.$$
The eigenvalues of $\mathbb{E} W^\Gamma$ are as follows
$$\operatorname{spec}(\mathbb{E} W^\Gamma) = \begin{cases}
\frac{M(N+1)}{2} \qquad &\text{ with multiplicity $1$}\\
\frac{M}{2} \qquad &\text{ with multiplicity $N^2-1$.}
\end{cases}
$$
\end{proposition}
\begin{proof}
The proof is a direct computation. Using the Wick-Isserlis theorem, we compute $\mathbb{E} W=\frac{M}{2} (I_{N^2}+F)$. Applying the partial transposition to the flip operator
$$F=\sum_{i,j=1}^N\ketbra{ij}{ji},$$
we deduce that
$$F^{\Gamma}=\sum_{i,j=1}^N(\ketbra{ij}{ji})^{\Gamma}=\sum_{i,j=1}^N\ketbra{ii}{jj}=N\ketbra{\Omega}{\Omega}.$$
Writing $P_s = (I+F)/2$ yields the formula for $\mathbb{E} W^\Gamma$.
The spectrum follows from the fact that $\ketbra{\Omega}{\Omega}$ is a rank-one projection.
\end{proof}
We note that $\lvert \Omega \rangle$ is an eigenvector of $P_s^{\Gamma}$ with eigenvalue $(N+1)/2$. Hence, the average $\mathbb{E} W^\Gamma$ has an eigenvalue $M(N+1)/2\sim cN^3/2$ with eigenvector $\ket \Omega$. We show next that the same holds, asymptotically, for $W^\Gamma$.
\begin{theorem}\label{thm:eigenvec-convergence}
The maximally entangled vector $\ket \Omega$ is an approximate eigenvector of the random matrix $N^{-3}W^\Gamma$ with corresponding approximate eigenvalue $c/2$: if
$$\epsilon_N:=\|N^{-3} W^\Gamma \ket \Omega - c/2 \ket \Omega\|^2,$$
then $\lim_{N \to \infty} \mathbb{E} \epsilon_N = 0$. In particular, the random variable $\epsilon_N$ converges in probability towards zero as $N\to\infty$.
\end{theorem}
\begin{proof}
Manipulating the expression of $\epsilon_N$ we have
$$\epsilon_N=N^{-6}\langle \Omega | (W^\Gamma)^2 | \Omega \rangle - cN^{-3} \langle \Omega | W^\Gamma | \Omega \rangle+ c^2/4.$$
Computing the average and using Proposition \ref{prop:eigenval-eigenvec-average} we have
$$\mathbb{E} \epsilon_N=N^{-6} \mathbb{E}\langle \Omega | (W^{\Gamma})^2| \Omega \rangle - c^2/4 + o(1).$$
We compute $\mathbb{E}\langle \Omega | (W^{\Gamma})^2| \Omega \rangle$ via the Wick-Isserlis theorem and find, using Lemma \ref{lem:partial-trace-P-sym},
\begin{align*}
\mathbb{E}\langle \Omega | (W^{\Gamma})^2| \Omega \rangle &= M^2\langle \Omega | (P_s^\Gamma)^2| \Omega \rangle + \frac{M}{N} \Tr\left[ (\Tr_1 P_s)^2 \right] \\
&=\frac{M^2}{N}\Tr\left[ (\Tr_1 P_s)^2 \right]+\frac{M}{N} \Tr\left[ (\Tr_1 P_s)^2 \right]\\ &=\left[N^6\frac{c^2}{4}+N^4\frac{c}{4}\right]\left(1+\frac2N+\frac1{N^2}\right)= \frac{c^2}{4}N^6+O(N^5),
\end{align*}
establishing the first claim.
The convergence in probability follows by Markov's inequality.
\end{proof}
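Theorem \ref{thm:eigenvec-convergence} can be illustrated by a quick Monte Carlo experiment. The sketch below assumes the induced bosonic model $W = P_s G G^* P_s$, with $G$ an $N^2 \times M$ matrix of i.i.d.~standard complex Gaussians (a normalization consistent with $\mathbb{E}W = M P_s$ used above); the threshold $0.1$ on $\epsilon_N$ is a generous margin for $N=8$, $c=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c = 8, 1.0
M = int(c * N**2)
d = N * N
F = np.eye(d).reshape(N, N, N, N).transpose(0, 1, 3, 2).reshape(d, d)   # flip
Ps = (np.eye(d) + F) / 2                                                # symmetrizer
omega = np.eye(N).reshape(-1) / np.sqrt(N)                              # |Omega>
# assumed model: standard complex Ginibre, E|G_{ij}|^2 = 1
G = (rng.normal(size=(d, M)) + 1j * rng.normal(size=(d, M))) / np.sqrt(2)
W = Ps @ G @ G.conj().T @ Ps
WG = W.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(d, d)          # W^Gamma
eps = np.linalg.norm(WG @ omega / N**3 - (c / 2) * omega) ** 2
assert eps < 0.1     # epsilon_N is already small at N = 8
```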
\bigskip
Let us finish this section by addressing another important entanglement criterion: \emph{realignment} \cite{chen2002matrix, rudolph2003cross}. The realignment criterion states that any separable bipartite density matrix $\rho$ satisfies the following inequality:
$$\|\rho^R\|_1 \leq 1,$$
where $\|\cdot \|_1$ denotes the Schatten 1-norm or the nuclear norm (i.e.~the sum of the singular values of a matrix), while $\rho^R$ denotes the realignment (or the reshuffling, see \cite[Chapter 10.2]{bengtsson2006geometry}) of a matrix $\rho$, defined algebraically by
$$\langle ij | \rho^R | kl \rangle = \langle ik | \rho | jl \rangle,$$
or graphically in Figure \ref{fig:realignment}.
\begin{figure}
\centering
\includegraphics{realignment.pdf}
\caption{The realignment of a bipartite matrix $\rho$.}
\label{fig:realignment}
\end{figure}
The key observation here is that, for matrices which are invariant under the flip operator (such as our bosonic $W$, for which $WF = W$ since $P_s F = P_s$), the realignment and the partial transposition are related as follows:
$$\rho F = \rho \quad \implies \quad \rho^R = \left(\rho^\Gamma\right)\cdot F \quad \text{and hence} \quad \|\rho^R\|_1 = \|\rho^\Gamma\|_1.$$
In the case of a (non-normalized) random matrix $W$ from the induced bosonic ensemble, entanglement follows from the inequality $\|W^\Gamma\|_1 > \Tr W$. Note that, asymptotically as $N \to \infty$ and $M \sim cN^2$, the trace behaves as $\Tr W \sim (c/2) N^4$, while Theorem \ref{thm:main} states that
$$\|W^\Gamma\|_1 \sim (c/2) N^3 + N^4 \int |x| \, \mathrm{d} \mathrm{SC}_{c/2, \sqrt c/2}(x).$$
We note that the large outlier eigenvalue does not contribute asymptotically to the value of the Schatten 1-norm. Hence, in order for the realignment criterion to detect entanglement in the random bosonic matrix $W$, the following inequality needs to hold:
$$\int |x| \, \mathrm{d} \mathrm{SC}_{c/2, \sqrt c/2}(x) > \frac c 2 = \int x \, \mathrm{d} \mathrm{SC}_{c/2, \sqrt c/2}(x).$$
But the above happens precisely when the probability measure $\mathrm{SC}_{c/2, \sqrt c/2}$ has negative support, which is also the condition for the partial transpose criterion to detect the entanglement of $W$. Hence, we conclude that, in the case of the induced bosonic ensemble, the partial transpose and the realignment criteria have \emph{identical thresholds} ($c_0 = 4$). This situation is to be contrasted with the case of the usual induced ensemble, where the threshold for the partial transpose criterion ($c^\Gamma_0 = 4$ \cite{aubrun2012partial}) is strictly larger than the threshold for the realignment criterion ($c_0^R = (8/3\pi)^2 \approx 0.72$ \cite{aubrun2012realigning}), proving that, in some sense, the former criterion is stronger than the latter.
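The relation $\rho^R = \rho^\Gamma F$ and the resulting equality of Schatten 1-norms can be tested numerically on a sample from the bosonic ensemble; the sketch below again assumes the model $W = P_s G G^* P_s$ with a standard complex Ginibre $G$ (any matrix satisfying $WF = W$ would do).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 5
d = N * N
F = np.eye(d).reshape(N, N, N, N).transpose(0, 1, 3, 2).reshape(d, d)  # flip
Ps = (np.eye(d) + F) / 2
G = (rng.normal(size=(d, M)) + 1j * rng.normal(size=(d, M))) / np.sqrt(2)
W = Ps @ G @ G.conj().T @ Ps          # bosonic sample: satisfies W F = W
WG = W.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(d, d)  # partial transpose
WR = W.reshape(N, N, N, N).transpose(0, 2, 1, 3).reshape(d, d)  # realignment
assert np.allclose(WR, WG @ F)        # rho^R = rho^Gamma F (uses W F = W)
assert np.isclose(np.linalg.norm(WR, 'nuc'), np.linalg.norm(WG, 'nuc'))
```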
\section{The circuit counting graph polynomial}\label{sec:graph-poly}
In this section we discuss some basic facts about the \emph{circuit counting polynomial} of directed graphs. Of special importance to us is Theorem \ref{thm:graph-poly-trace-Psym}, relating the generalized bipartite trace of the symmetric projector $P_s = P_{s, 2}$ to the circuit counting polynomial $J$ of an associated graph.
We start by recalling the definition of a digraph.
\begin{definition}
A digraph is a pair $G=(V,A)$ where $V$ is a set of vertices and $A$ is a multi-set of ordered pairs of (possibly non-distinct) elements of $V$. The fact that $A$ is a multi-set allows for multiple edges.
\end{definition}
In this paper, we will denote when needed the elements of $A$ as $\{i\rightarrow j\}$ for $i,j \in V$ to better picture the ordering. Allowing non-distinct elements in the pairs means that digraphs can have loops. \\
Note that incoming edges of a vertex $i$ are the edges of the form $\{a\rightarrow i\}$, while outgoing edges are the edges of the form $\{i\rightarrow a\}$. A loop edge is both incoming and outgoing for the vertex it is adjacent to. A digraph is said to be $k$-in/$k$-out if each vertex has $k$ incoming edges and $k$ outgoing edges.\\
We introduce the circuit counting polynomial $J$ of a digraph $G$ as
\begin{equation}
J(G;x)=\sum_{k\ge 1}j_kx^k,
\end{equation}
with $j_k=|\{ \textrm{covers of } G \textrm{ in } k \textrm{ cycles}\}|$. Note that the cycles must follow the edge orientations.
The circuit counting polynomial possesses several properties useful to us. As we will be interested only in the case of digraphs that are $2$-in/$2$-out, we state these properties only in the case of $2$-in/$2$-out digraphs. The circuit counting polynomial has the following trivial multiplicativity property.
\begin{lemma}\label{lem:mult-J}
Let $G$ be a 2-in/2-out digraph having connected components $G_1, G_2, \ldots, G_k$ (which are connected 2-in/2-out digraphs). Then
$$J(G; x) = \prod_{i=1}^k J(G_i; x).$$
\end{lemma}
Next, we consider skein relations. The reason is that a cycle cover of $G$ corresponds to a choice of state for each vertex of $G$ (as defined in \cite[Definition 2.1]{ellis2004identities} and below; the case we are interested in is the Eulerian case). These skein relations read graphically
\begin{equation}\label{eq:skein-relation-J}
J\left(\raisebox{-16mm}{\includegraphics[scale=0.4]{J-before-move.pdf}};x\right) \ = \ J\left(\raisebox{-16mm}{\includegraphics[scale=0.4]{J-after-move-1.pdf}};x\right) \ + \ J\left( \raisebox{-16mm}{\includegraphics[scale=0.4]{J-after-move-2.pdf}};x\right).
\end{equation}
Additionally, when evaluated on connected graphs, it is of maximal degree for graphs that have only cut vertices, where we recall that a cut vertex is a vertex whose removal disconnects the graph. This fact is a simple extension of \cite{las1983polynome}; we give a proof below, using the graphical skein relations.
\begin{theorem}\label{thm:cut-vertex-relation}
Let $G$ be a $2$-in/$2$-out connected digraph and $v$ be a cut vertex. Denote by $G\backslash v$ the \emph{connected} digraph obtained from $G$ by erasing the vertex $v$ and reconnecting the edges in an orientation preserving way. Then
\begin{equation}\label{eq:cut-vertex-relation}
J(G;x)=(x+1)J(G\backslash v;x).
\end{equation}
\end{theorem}
\begin{proof}
We have using the skein relations described above at the cut vertex $v$
\begin{equation}\label{eq:move-J}
J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex.pdf}};x\right)\ = \ J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex-move1.pdf}};x\right) \ + \ J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex-move2.pdf}};x\right)
\end{equation}
where the gray areas represent the rest of the graph. Since $v$ is a cut vertex, these gray areas are connected only through $v$. The blue dashed lines inside the gray parts show how a cycle passing through the edges entering and exiting the gray parts would behave. Notice in particular that, due to the $2$-in/$2$-out constraint, a cycle going through one edge entering a gray part has to leave this gray part through the edge exiting it. Now notice that one of the choices of state for $v$ (the leftmost term of the right hand side) leads to a disconnected graph that has one more cycle than the other choice of state, which leads to a connected graph (rightmost term of the right hand side). Thus we have, graphically,
\begin{equation}
J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex-move1.pdf}};x\right) = x J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex-move2.pdf}};x\right).
\end{equation}
Therefore,
\begin{equation}
J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex.pdf}};x\right)\ = \ (x+1)J\left(\raisebox{-10mm}{\includegraphics[scale=0.4]{J-cut-vertex-move2.pdf}};x\right).
\end{equation}
\end{proof}
Let $G$ be a $2$-in/$2$-out digraph with $p$ vertices such that every vertex is a cut vertex. Then repeated use of equation \eqref{eq:cut-vertex-relation} leads to
\begin{equation}\label{eq:only-cut-J}
J(G;x)=(x+1)^pJ(L;x)=x(x+1)^p,
\end{equation}
where $L$ is the trivial digraph made of an edge looping on itself with no vertices. We later show that digraphs with only cut vertices are the ones maximizing the degree of the circuit counting polynomial.
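The closed form \eqref{eq:only-cut-J} can be checked by brute force: a choice of state at each vertex is a matching of the two incoming edges with the two outgoing ones, and $j_k$ counts the state assignments whose successor map on edges has $k$ cycles. The sketch below (Python) does this for a small digraph to which \eqref{eq:cut-vertex-relation} applies at each vertex (a loop at each of two vertices, plus edges $0 \to 1$ and $1 \to 0$), recovering $J(G;x) = x(x+1)^2$.

```python
from itertools import product
from collections import Counter

def circuit_poly_coeffs(edges, n_vertices):
    """Coefficients j_k of J(G;x) for a 2-in/2-out digraph G, by enumerating,
    at each vertex, the two matchings of incoming to outgoing edges and
    counting the cycles of the induced successor map on edges."""
    inc = [[] for _ in range(n_vertices)]
    out = [[] for _ in range(n_vertices)]
    for e, (u, v) in enumerate(edges):
        out[u].append(e)
        inc[v].append(e)
    counts = Counter()
    for states in product((0, 1), repeat=n_vertices):
        succ = {}
        for v, s in enumerate(states):
            succ[inc[v][0]] = out[v][s]
            succ[inc[v][1]] = out[v][1 - s]
        seen, cycles = set(), 0
        for e in range(len(edges)):
            if e not in seen:
                cycles += 1
                while e not in seen:
                    seen.add(e)
                    e = succ[e]
        counts[cycles] += 1
    return counts

# a loop at each of the two vertices, plus edges 0 -> 1 and 1 -> 0
edges = [(0, 0), (1, 1), (0, 1), (1, 0)]
assert dict(circuit_poly_coeffs(edges, 2)) == {1: 1, 2: 2, 3: 1}  # J = x(x+1)^2
```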
In the case of $2$-in/$2$-out digraphs, the circuit counting polynomial is related to the \emph{interlace polynomial} \cite{arratia2004interlace}, defined here according to \cite{brijder2011nullity}. For a (non-oriented) graph $H=(V,E)$, the interlace polynomial $q(H;x)$ is
\begin{equation}
q(H;x)=\sum_{\substack{S\subseteq V}} (x-1)^{\textrm{dim }\textrm{ker } A_{H[S]}},
\end{equation}
where $H[S]$ is the sub-graph induced by the subset $S$ of vertices of $H$ and $A_{H[S]}$ denotes the adjacency matrix of $H[S]$. Note that $\textrm{dim }\textrm{ker } A_{H[S]}$ is often called the nullity of $H[S]$. Note also that the degree of the interlace polynomial of a graph is bounded by its number of vertices:
\begin{equation}\label{eq:UB-deg-q}
\deg q(H; x) \leq |H|.
\end{equation}
We will use the following property repeatedly. For connected $2$-in/$2$-out digraphs, the circuit counting polynomial is related to the interlace polynomial $q(x)$ of a corresponding interlace graph via the following theorem.
\begin{theorem}[{\cite[Theorem 24]{arratia2004interlace}}]\label{thm:Arratia}
Consider $G$ an Eulerian $2$-in/$2$-out digraph, $C$ any Euler circuit of $G$ and $H=H(C)$ its interlace graph. Then
\begin{equation}
J(G;x)=x q(H;x+1)=x m(G;x+1)
\end{equation}
with $q$ the interlace polynomial, and $m$ the Martin polynomial.
\end{theorem}
Let us recall a bit of terminology. An Euler circuit $C$ of a connected digraph $G$ is a circuit passing through every edge of $G$ exactly once. For a $2$-in/$2$-out digraph $G$, the interlace graph $H(C)$ of one of its Euler circuits $C$ is the (non-oriented!) graph having the same vertex set as $G$ and an edge between vertices $a$ and $b$ whenever those are interlaced in $C$, that is, they appear in the following way: $C = \ldots a \ldots b \ldots a \ldots b \ldots$; see \cite{arratia2004interlace} for these definitions. It is easy to deduce that connected $2$-in/$2$-out digraphs $G$ having an Euler circuit with no interlace pairs have an interlace graph $H(C)$ maximizing the degree of $q(H(C);x)$, since $H(C)$ is empty (thus $\textrm{dim }\textrm{ker } A_{H(C)[S]}=|S|$ for all $S\subseteq V$).\\
In \cite{arratia2004interlace} the following result is established.
\begin{lemma}
If a $2$-in/$2$-out digraph $G$ has an Euler circuit $C$ with no interlace pairs, then $G$ has only one Euler circuit.
\end{lemma}
A simple consequence is that $2$-in/$2$-out digraphs $G$ maximizing the degree of the interlace polynomial of their interlace graphs have a unique Euler circuit, see also Lemma \ref{lem:max-J-only-cut}. We can now give a very useful bound for the degree of the circuit counting polynomial. We denote by $K(G)$ the number of connected components of the graph $G$; here, we do not care about the orientation of the edges in $G$ when defining connected components.
\begin{lemma}\label{lem:UB-deg-J}
Given a 2-in/2-out digraph $G$ on $n$ vertices, we have
$$\deg J(G;x) \leq n + K(G).$$
\end{lemma}
\begin{proof}
We shall use the relation between the circuit counting polynomial and the interlace polynomial from Theorem \ref{thm:Arratia} on the connected components $G_1, G_2, \ldots, G_{K(G)}$ of $G$. The degree of the interlace polynomial of a connected $2$-in/$2$-out digraph with $p$ vertices is bounded by $p$ (see Eq.~\eqref{eq:UB-deg-q}), hence $\deg J(G_i) \leq n_i+1$, where $n_i$ is the number of vertices of $G_i$. Using the multiplicativity property of $J$ from Lemma \ref{lem:mult-J}, we have
$$\deg J(G; x) = \sum_{i=1}^{K(G)} \deg J(G_i; x) \leq \sum_{i=1}^{K(G)} (n_i + 1) = n + K(G).$$
\end{proof}
We then use the above bound to show the next lemma.
\begin{lemma}\label{lem:max-J-only-cut}
Among all connected digraphs with a given number of vertices, the ones maximizing the degree of the circuit-counting polynomial $J$ are precisely the ones having only cut vertices.
\end{lemma}
\begin{proof}
Assume that a connected $2$-in/$2$-out digraph $G$ has only $k<p$ cut vertices among its $p$ vertices. Then, using the skein relations \eqref{eq:skein-relation-J}, we can pick a vertex of $G$ that is not a cut vertex and reduce it. We obtain two connected $2$-in/$2$-out digraphs $G'$ and $G''$ with $p-1$ vertices. Indeed, they are connected because the vertex we chose to reduce is not a cut vertex, hence neither move disconnects the digraph. Then, using the bound of Lemma \ref{lem:UB-deg-J} above, we have that $\deg J(G'), \deg J (G'')\le p$. We also know from equation \eqref{eq:only-cut-J} that connected $2$-in/$2$-out digraphs with $p$ vertices having only cut vertices have a circuit counting polynomial of degree $p+1$, proving the claim.
\end{proof}
We introduce a practical notation for the rest of this paper. We denote by $G_{\sigma_1,\sigma_2}$, for $\sigma_1, \sigma_2\in S_p$, the $2$-in/$2$-out digraph on $p$ vertices whose (oriented) edge (multi-)set is
\begin{equation}\label{eq:def-G-a-b}
E=\{i\rightarrow \sigma_1(i) \, : \, i\in [p]\} \sqcup \{ i\rightarrow\sigma_2(i) \, : \, i \in [p]\}.
\end{equation}
Note in particular that there is an edge of the form ${i\rightarrow k}$ if either the matrix of the permutation $\sigma_1$ has element $(k,i)$ equal to $1$ or the matrix of the permutation $\sigma_2$ has element $(k,i)$ equal to $1$. If there are two edges of the form ${i\rightarrow k}$ then both matrices of the permutations $\sigma_1, \sigma_2$ have elements $(k,i)$ equal to $1$. Finally, there is no edge from $i$ to $k$ if and only if both $\sigma_1, \sigma_2$ have elements $(k,i)$ equal to $0$. These facts together with a moment of reflection reveal that the adjacency matrix $A_{G_{\sigma_1,\sigma_2}}$ of the $2$-in/$2$-out digraph is the sum of the two matrices of the permutations $\sigma_1, \sigma_2$
\begin{equation}
A_{G_{\sigma_1,\sigma_2}}= \sigma_1+\sigma_2,
\end{equation}
where we used the same notation for the permutations $\sigma_{1,2}$ and their matrix representations. The converse also holds, in a more general setting:
\begin{proposition}
Given a $k$-in/$k$-out digraph $G$ with $p$ vertices, there exists $k$ permutations $\sigma_1,\sigma_2,\ldots, \sigma_k\in S_p$ such that
\begin{equation}
A_{G}=\sum_{i=1}^{k}\sigma_i
\end{equation}
where $A_G$ is the adjacency matrix of the digraph $G$.
\end{proposition}
\begin{proof}
First notice that, due to the regularity of the digraph $G$, $A_G$ is, up to normalization, a bistochastic matrix. Indeed, the $k$-in/$k$-out property implies that every row and every column of $A_G$ must sum to $k$. From this remark, the proposition is a trivial consequence of the Birkhoff-von Neumann algorithm applied to $A_G$.
\end{proof}
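The Birkhoff-von Neumann step in the proof is constructive; a minimal sketch, extracting one permutation at a time from the support of $A_G$ via Kuhn's augmenting-path matching (the matrix convention, with entry $(v,u)$ counting edges $u \to v$, follows the discussion above):

```python
def find_perm(A):
    """A permutation inside the support of a nonnegative integer matrix A,
    found by Kuhn's augmenting-path algorithm (one Birkhoff-von Neumann step)."""
    n = len(A)
    match = [-1] * n                     # match[c] = row currently matched to column c
    def augment(r, seen):
        for c in range(n):
            if A[r][c] > 0 and c not in seen:
                seen.add(c)
                if match[c] == -1 or augment(match[c], seen):
                    match[c] = r
                    return True
        return False
    for r in range(n):
        assert augment(r, set())         # regularity guarantees a perfect matching
    sigma = [0] * n
    for c, r in enumerate(match):
        sigma[r] = c
    return sigma                         # row r is matched to column sigma[r]

def birkhoff(A, k):
    """Write the adjacency matrix of a k-in/k-out digraph as a sum of k permutations."""
    A = [row[:] for row in A]
    perms = []
    for _ in range(k):
        sigma = find_perm(A)
        for r, c in enumerate(sigma):
            A[r][c] -= 1
        perms.append(sigma)
    assert all(x == 0 for row in A for x in row)
    return perms

# adjacency matrix of G_{gamma, gamma^{-1}} on 4 vertices (entry (v,u) counts edges u -> v)
n = 4
A = [[int((u + 1) % n == v) + int((u - 1) % n == v) for u in range(n)] for v in range(n)]
perms = birkhoff(A, 2)
recon = [[sum(int(sigma[r] == c) for sigma in perms) for c in range(n)] for r in range(n)]
assert recon == A
```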
This proposition implies that by choosing to index $2$-in/$2$-out digraphs with two permutations we did not restrict the set of digraphs we are looking at. However note that the choice of permutations, given a digraph, is not unique and there may be different collections of permutations representing the same digraph. It is possible to describe classes of permutations that lead to equivalent digraphs, but this will be of no use to us. \\
\begin{figure}
\centering
\includegraphics[scale=0.8]{digraph-gamma-gamma-inv.pdf}
\caption{Structure of a digraph $G_{\gamma,\gamma^{-1}}$.}
\label{fig:digraph-gamma-gamma-inv}
\end{figure}
Let $\gamma=(1,2,3,\ldots, n)\in S_n$ be the full cycle permutation. Then:
\begin{proposition}\label{prop:J-alpha=id}
Consider the $2$-in/$2$-out digraph $G_{\gamma, \gamma^{-1}}$, where $\gamma\in S_n$ is the full cycle. The number of covers of $G_{\gamma, \gamma^{-1}}$ having $k$ cycles is given by
\begin{equation}
j_k=\binom{n}{k}+\delta_{2,k}.
\end{equation}
In particular,
\begin{equation}
\deg J(G_{\gamma, \gamma^{-1}})=n.
\end{equation}
\end{proposition}
\begin{proof}
The proof follows from the fact that
$G_{\gamma,\gamma^{-1}}$ is of the form depicted in Fig.~\ref{fig:digraph-gamma-gamma-inv}. Then it is sufficient to notice that at each vertex of $G_{\gamma,\gamma^{-1}}$ we have the choice to either come back to where we came from (vertex state $1$) or to keep going in the same direction (vertex state $2$). The case in which we decide to keep going at all vertices (\textit{i.e.}~we pick state $2$ at all vertices) leads to two cycles (responsible for the $\delta_{2,k}$ in the above formula), while if we decide to have one vertex with state $1$ and the others with state $2$, then we get one cycle. From that, a moment of reflection reveals that each time we assign the state $1$ to an additional vertex we create an additional cycle. Thus there are $\binom{n}{k}$ choices of states leading to a cycle cover of $G_{\gamma,\gamma^{-1}}$ with $k$ cycles. \\
It follows that the degree of $J(G_{\gamma, \gamma^{-1}})$ is $n$. This last result can also easily be obtained by constructing an interlace graph $H_{\gamma,\gamma^{-1}}$ from an Eulerian cycle $C_{\gamma,\gamma^{-1}}$ of $G_{\gamma, \gamma^{-1}}$.
It is easy to exhibit such an Eulerian cycle: $C_{\gamma,\gamma^{-1}}=1,2,3,\ldots,n,1,n,\ldots,3,2$. A pair $(1,q)$ is clearly interlaced for any $q=2,\ldots, n$, while any other pair $(a,b)$ with $a, b\neq 1$ is not interlaced. Thus $H_{\gamma,\gamma^{-1}}$ is the star graph with the vertex $1$ at its center, so that $\max_{S\subseteq V}\textrm{dim }\textrm{ker } A_{H_{\gamma,\gamma^{-1}}[S]}=n-1$, the maximum being attained by the induced subgraph obtained from $H_{\gamma,\gamma^{-1}}$ by removing vertex $1$; Theorem \ref{thm:Arratia} then gives again $\deg J(G_{\gamma, \gamma^{-1}})=n$.
\end{proof}
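The count $j_k = \binom{n}{k} + \delta_{2,k}$ can be confirmed by exhaustively enumerating the $2^n$ choices of vertex states; a brute-force sketch (Python, with $0$-indexed permutations and the edge convention of \eqref{eq:def-G-a-b}):

```python
from itertools import product
from collections import Counter
from math import comb

def circuit_counts(sigma1, sigma2):
    """j_k for the digraph G_{sigma1, sigma2} with edges i -> sigma1(i), i -> sigma2(i)."""
    n = len(sigma1)
    edges = [(i, sigma1[i]) for i in range(n)] + [(i, sigma2[i]) for i in range(n)]
    inc = [[] for _ in range(n)]
    out = [[] for _ in range(n)]
    for e, (u, v) in enumerate(edges):
        out[u].append(e)
        inc[v].append(e)
    counts = Counter()
    # one state per vertex = one of the two matchings of incoming to outgoing edges
    for states in product((0, 1), repeat=n):
        succ = {}
        for v, s in enumerate(states):
            succ[inc[v][0]] = out[v][s]
            succ[inc[v][1]] = out[v][1 - s]
        seen, cycles = set(), 0
        for e in range(2 * n):
            if e not in seen:
                cycles += 1
                while e not in seen:
                    seen.add(e)
                    e = succ[e]
        counts[cycles] += 1
    return counts

for n in (3, 4, 5):
    gamma = [(i + 1) % n for i in range(n)]          # full cycle (0-indexed)
    gamma_inv = [(i - 1) % n for i in range(n)]
    j = circuit_counts(gamma, gamma_inv)
    assert all(j[k] == comb(n, k) + (k == 2) for k in range(1, n + 1))
```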
Finally, we now relate the circuit counting graph polynomial $J$ to a generalized trace of the projector on the symmetric subspace $P_s$ (see also Proposition \ref{prop:expanded-Wick-thm} for a more general result obtained using the formalism of tensor networks).
\begin{theorem}\label{thm:graph-poly-trace-Psym}
For all permutations $\alpha, \beta \in S_p$, we have
$$\Tr_{\alpha,\beta}(P_{s}, \ldots, P_{s}) = 2^{-p} J(G_{\alpha,\beta},N).$$
\end{theorem}
\begin{proof}
We start from $P_s = (I + F)/2$ and expand the left hand side of the equation in the statement as a sum over all possible choices of terms:
$$\Tr_{\alpha,\beta}(P_{s}, \ldots, P_{s}) = 2^{-p} \sum_{f : [p] \to \{I,F\}} \Tr_{\alpha,\beta}\left(\bigotimes_{i=1}^p f(i) \right).$$
Above, the generalized trace of the right hand side, in diagrammatic notation, is a collection of loops evaluating to $N$ raised to the power of the number of loops. Clearly, each such loop corresponds to a circuit in the digraph $G$ given by the permutations $\alpha$ and $\beta$, once the moves $f(i)$ have been applied at each vertex $i \in [p]$, see \eqref{eq:move-J}.
\end{proof}
\section{Moments of \texorpdfstring{$W^\Gamma$}{W Gamma}}\label{sec:moments}
In this section, we shall express the moments of the random matrix $W^\Gamma$ (see Figure \ref{fig:W-Gamma}) as a sum of circuit counting polynomials, allowing us to determine their asymptotic behaviour. On the way, we shall establish several useful properties of the aforementioned polynomials, which shall also be useful in the later sections. The main result, Theorem \ref{thm:moments-WGamma-3-asympt}, showcases the power of the relation between the generalized bipartite traces of $P_s$ and the circuit counting polynomial from Theorem \ref{thm:graph-poly-trace-Psym}.
\begin{figure}
\centering
\includegraphics{symmetrized-Wishart-PT.pdf}
\caption{The partial transposition $W^\Gamma$ of a symmetrized Wishart matrix.}
\label{fig:W-Gamma}
\end{figure}
We first establish the exact form of the moments of $W^\Gamma$ in terms of $J$ polynomials.
\begin{proposition}\label{prop:moments-W-Gamma}
The moments of the random matrix $W^\Gamma \in \mathcal M_{N^2}(\mathbb C)$ from Eq.~\eqref{eq:def-W-Gamma} are given by:
\begin{equation}\label{eq:moments-W-Gamma}
\forall p \geq 1, \qquad \mathbb{E}\Tr\left((W^{\Gamma})^p\right) = 2^{-p} \sum_{\alpha\in S_p}M^{\#\alpha}J(G_{\gamma\alpha, \gamma^{-1}\alpha};N).
\end{equation}
\end{proposition}
\begin{proof}
The proof is a rather simple application of the Wick formula, in its tensor network incarnation \cite{collins2011gaussianization}. We need to evaluate the expectation of the trace of the product of $p$ copies of the matrix from Figure \ref{fig:W-Gamma}. By the Wick theorem, the result is a sum over permutations $\alpha \in S_p$ of diagrams obtained by connecting the $i$-th $G$ box to the $\alpha(i)$-th $G^*$ box.
Consider now the 2-in/2-out digraph $G_{\gamma\alpha, \gamma^{-1}\alpha}$ having edges (see \eqref{eq:def-G-a-b} for the general case)
\begin{equation}\label{eq:def-G-ga-gma}
\{ i \mapsto \gamma(\alpha(i)) \}_{i \in [p]} \sqcup \{ i \mapsto \gamma^{-1}(\alpha(i)) \}_{i \in [p]}.
\end{equation}
The formula in the statement follows now from Theorem \ref{thm:graph-poly-trace-Psym}.
\end{proof}
Let us compute next the exact form of the first two moments. For the first moment, we have
$$\mathbb{E}\Tr W^{\Gamma}= \frac{M}{2}J(\, \includegraphics[valign=c]{G-1-1.pdf}\, ;N) = \frac{MN(N+1)}{2} \sim \frac c 2 N^4.$$
Note that this computation is consistent with the result from Proposition \ref{prop:eigenval-eigenvec-average}.
For the second moment ($p=2$), we have to sum over the cases $\alpha=\id$ and $\alpha=(12)$:
\begin{align*}\mathbb{E} \Tr\left((W^{\Gamma})^2\right) &= \frac{M^2}{4}J(\, \includegraphics[valign=c]{G-t-t.pdf}\, ;N)+ \frac{M}{4}J(\, \includegraphics[valign=c]{G-2-2.pdf}\, ;N)\\
&=\frac{M^2}{4}2N(N+1)+\frac{M}{4}N^2(N+1)^2 \sim \left(\frac{c^2}{2} + \frac c 4\right)N^6,
\end{align*}
where, for the case $\alpha = \id$, we have used Proposition \ref{prop:J-alpha=id} with $n=2$.
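These two $J$ evaluations can be cross-checked numerically: using $\mathbb{E}W^\Gamma = M P_s^\Gamma$, the $\alpha = \id$ term above corresponds to $M^2 \Tr[(P_s^\Gamma)^2]$ and the $\alpha = (12)$ term to $M (\Tr P_s)^2$ (a reading of the Wick expansion consistent with the displayed values, not a derivation). The sketch below verifies the two underlying traces.

```python
import numpy as np

N = 4
d = N * N
F = np.eye(d).reshape(N, N, N, N).transpose(0, 1, 3, 2).reshape(d, d)   # flip
Ps = (np.eye(d) + F) / 2                                                # symmetrizer
PsG = Ps.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(d, d)        # P_s^Gamma
assert np.isclose(np.trace(PsG), N * (N + 1) / 2)                # first moment / M
assert np.isclose(np.trace(PsG @ PsG), N * (N + 1) / 2)          # = (1/4) * 2N(N+1)
assert np.isclose(np.trace(Ps) ** 2, N**2 * (N + 1) ** 2 / 4)    # = (1/4) * N^2(N+1)^2
```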
To analyze larger moments, we need the following technical result.
\begin{proposition}\label{prop:properties-J-G-alpha}
For a given permutation $\alpha\in S_p$, consider the 2-in/2-out digraph $G_{\gamma\alpha, \gamma^{-1}\alpha}$ defined in Eq.~\eqref{eq:def-G-ga-gma}. Then, $G_{\gamma\alpha, \gamma^{-1}\alpha}$ has either one or two connected components. It has two connected components if and only if the permutation $\alpha$ is \emph{parity changing}: for all $i \in [p]$, $\alpha(i) \neq i (\mathrm{mod}\, 2)$.
\end{proposition}
\begin{proof}
In the non-oriented graph $G_{\gamma \alpha, \gamma^{-1}\alpha}$, we have the following property: for any permutation $\sigma \in \langle \gamma\alpha, \gamma^{-1}\alpha\rangle$, and every $i \in [p]$, there is a path between the vertices $i$ and $\sigma(i)$. Note also that $\gamma^2\in \langle \gamma\alpha, \gamma^{-1}\alpha\rangle$. If $p$ is odd, $\gamma^2$ is a full cycle, hence $G_{\gamma \alpha, \gamma^{-1}\alpha}$ is connected. If $p$ is even, then $\gamma^2$ acts transitively on $\{2,4,\ldots, p\}$ and $\{1,3,\ldots, p-1\}$ separately. Thus $K(G_{\gamma \alpha, \gamma^{-1}\alpha})=1$ or $2$, depending on $\alpha$. Since $\gamma$ is a permutation that changes the parity, $K(G_{\gamma\alpha,\gamma^{-1}\alpha})=2$ if and only if $\alpha$ also changes the parity. This concludes the proof.
\end{proof}
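Proposition \ref{prop:properties-J-G-alpha} is easy to verify exhaustively for small $p$; the sketch below (Python, $0$-indexed, with $\gamma(i) = i+1 \bmod p$) checks the parity criterion for all $\alpha \in S_4$, and the connectedness for all $\alpha \in S_5$, via a union-find on the underlying undirected graph.

```python
from itertools import permutations

def n_components(p, alpha):
    """Number of connected components of the (undirected) graph underlying
    G_{gamma alpha, gamma^{-1} alpha}, gamma the full cycle on [p] (0-indexed)."""
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for i in range(p):
        union(i, (alpha[i] + 1) % p)     # edge i -> gamma(alpha(i))
        union(i, (alpha[i] - 1) % p)     # edge i -> gamma^{-1}(alpha(i))
    return len({find(i) for i in range(p)})

for alpha in permutations(range(4)):     # even p: 2 components iff parity changing
    parity_changing = all(alpha[i] % 2 != i % 2 for i in range(4))
    assert (n_components(4, list(alpha)) == 2) == parity_changing
for alpha in permutations(range(5)):     # odd p: always connected
    assert n_components(5, list(alpha)) == 1
```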
We describe next the asymptotic behaviour of the larger moments of $W^\Gamma$.
\begin{theorem}\label{thm:moments-WGamma-3-asympt}
For $p \geq 3$, in the limit $N \to \infty$, the moments of $W^{\Gamma}$ asymptotically behave as
\begin{equation}\label{eq:asymptotics-partial}
\mathbb{E}\Tr\left((W^{\Gamma})^p\right) = \frac{c^p}{2^p}N^{3p}+O(N^{3p-1}).
\end{equation}
\end{theorem}
\begin{proof}
We start from the sum over permutations from \eqref{eq:moments-W-Gamma}, and we distinguish three cases, as follows:
\begin{itemize}
\item $\#\alpha = p$, i.e.~$\alpha=\id$. We know from Proposition \ref{prop:J-alpha=id} that $J(G_{\gamma,\gamma^{-1}};N) \sim N^p$, and thus the term $\alpha = \id$ behaves as $(c/2)^p N^{3p}$.
\item $\#\alpha=p-1$, i.e.~$\alpha$ is a transposition. According to Lemma \ref{lem:UB-deg-J}, such a term is bounded by a polynomial whose leading term in $N$ has exponent
$$2\#\alpha + p+ K(G_{\gamma\alpha,\gamma^{-1}\alpha}) = 3p-2 + K(G_{\gamma\alpha,\gamma^{-1}\alpha}).$$
Since $\alpha$ is a transposition and $p \geq 3$, $\alpha$ must have a fixed point, so it cannot be parity changing. Hence, using Proposition \ref{prop:properties-J-G-alpha}, $K(G_{\gamma\alpha,\gamma^{-1}\alpha}) =1$, proving that such terms are subdominant with respect to the previous case $\alpha = \id$.
\item $\#\alpha\le p-2$. Reasoning as in the previous case, these terms are clearly subdominant, since $K(G_{\gamma\alpha,\gamma^{-1}\alpha}) \leq 2$.
\end{itemize}
In conclusion, all the terms with $\alpha \neq \id$ are subdominant with respect to the term $\alpha = \id$, finishing the proof.
\end{proof}
To finish this section, note that the asymptotic behaviour from \eqref{eq:asymptotics-partial}, corresponding to the moments $p \geq 3$, does not match the ones for $p=1,2$. This is a signature of the presence of an outlier, an eigenvalue on a larger scale, which is the phenomenon described in our main result, Theorem \ref{thm:main}.
\section{\texorpdfstring{$t$}{t}-channel random matrix and graphical representation}\label{sec:t-channel}
In this section we introduce the \emph{$t$-channel}\footnote{in reference to the $s$, $t$, $u$ channels of particle physics, as the diagrammatic representation is similar. In this spirit $W$ corresponds to the $s$-channel.} random matrix that we denote $W_{t}$. We do so as it is better suited for our aims in Sections \ref{sec:diagrammatics}, \ref{sec:bounds} and \ref{sec:tensor-eval}. We define $W_t$ component-wise by
\begin{equation}
(W_{t})_{ij,kl}:=(W)_{jl,ki}
\end{equation}
and we provide a diagrammatic representation of the above definition below
\begin{equation}\label{eq:diag-Wst}
(W_{t})_{ij,kl}= \raisebox{-11mm}{\includegraphics[scale=0.6]{W_St.pdf}}.
\end{equation}
In the above diagrammatic representation, the red arc represents the symmetrizer $P_s$. The black square represents the complex conjugate $\bar G$ of the Ginibre matrix $G$ of Section \ref{sec:bosonic}, while the white square represents $G$ itself. $\bar G$ is seen as a linear form on the Hilbert space, hence acting on input vectors, which is why we display the corresponding tensor legs with ingoing edges. For a similar reason, we display the tensor legs of $G$ with outgoing edges. The letter $E$ denotes the environmental Hilbert space that is traced out. Note also that we have the following equality:
\begin{equation}\label{eq:diag-Wst*}
(W_{t}^*)_{ij,kl}= \raisebox{-11mm}{\includegraphics[scale=0.6]{W_St-star.pdf}}.
\end{equation}
We prefer to use black and white squares here to simplify the representations, since we will use them intensively in the coming sections.
\\
We introduce the projector on the complement of $\mathbb C \ket \Omega$
\begin{equation}\label{eq:proj_complemet_omega_def}
P_{\overline{\Omega}}=I-\lvert\Omega\rangle\langle \Omega \rvert.
\end{equation}
By the projector property we have $P_{\overline{\Omega}}^2=P_{\overline{\Omega}}$. We want to study the sequence of moments $\mathbb{E}(\Tr(Q^p))$ of the random matrix
\begin{equation}
Q:=P_{\overline{\Omega}} W^{\Gamma} P_{\overline{\Omega}}.
\end{equation}
In order to use graphical methods for tensor networks evaluation (see \cite{biamonte2017tensor} for an overview of tensor networks techniques and ideas), we will use the following relation
\begin{equation}
\Tr(Q^p)=\Tr((F P_{\overline{\Omega}} W_{t} P_{\overline{\Omega}})^{p\textrm{ mod }2}(P_{\overline{\Omega}} W_{t}^*P_{\overline{\Omega}} W_{t} P_{\overline{\Omega}})^{\lfloor p/2 \rfloor}).
\end{equation}
This is easily shown by using the fact that $F^2 = I$, $F W_{t} F=W_{t}^*$ and $[P_{\overline{\Omega}},F]=0$. These moments have a graphical representation as ladder diagrams introduced in the next Section \ref{sec:diagrammatics}, and their Wick expansion produces terms that are indexed by tensor networks that are quotients of the ladder diagram by the action of Wick pairings. When there are no projectors $P_{\overline{\Omega}}$, the tensor network evaluates to the circuit counting polynomial of the tensor network graph. However, due to the presence of the projectors, there are two types of tensors appearing in the tensor network, and this will give a different answer. This is the route we follow in Section \ref{sec:tensor-eval}. Note, however, that if we expand all projectors, then we obtain, for each term of the expansion, a $2$-in/$2$-out digraph which represents a tensor network evaluating to the circuit counting polynomial of the graph. This is the method we use in this section and in the next Sections \ref{sec:diagrammatics} and \ref{sec:bounds}.\\
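The identities used here ($F^2 = I$, $W^\Gamma = F W_t$, $F W_t F = W_t^*$, $[P_{\overline\Omega}, F] = 0$) and the resulting rewriting of $\Tr(Q^p)$ can be checked numerically for small sizes; the sketch below assumes the bosonic model $W = P_s G G^* P_s$ (any Hermitian $W$ would do for the last three identities) and tests $p = 2, 3$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 3, 4
d = N * N
F = np.eye(d).reshape(N, N, N, N).transpose(0, 1, 3, 2).reshape(d, d)  # flip
Ps = (np.eye(d) + F) / 2
omega = np.eye(N).reshape(-1) / np.sqrt(N)
P = np.eye(d) - np.outer(omega, omega)            # projector P_{bar Omega}
G = (rng.normal(size=(d, M)) + 1j * rng.normal(size=(d, M))) / np.sqrt(2)
W = Ps @ G @ G.conj().T @ Ps                      # assumed bosonic model
T = W.reshape(N, N, N, N)
WG = T.transpose(0, 3, 2, 1).reshape(d, d)        # (W^Gamma)_{ij,kl} = W_{il,kj}
Wt = T.transpose(3, 0, 2, 1).reshape(d, d)        # (W_t)_{ij,kl}     = W_{jl,ki}
assert np.allclose(WG, F @ Wt)                    # W^Gamma = F W_t
assert np.allclose(F @ Wt @ F, Wt.conj().T)       # F W_t F = W_t^*
assert np.allclose(P @ F, F @ P)                  # [P_{bar Omega}, F] = 0
Q = P @ WG @ P
A2 = P @ Wt.conj().T @ P @ Wt @ P                 # P W_t^* P W_t P
assert np.isclose(np.trace(Q @ Q), np.trace(A2))                       # p = 2
assert np.isclose(np.trace(Q @ Q @ Q), np.trace(F @ P @ Wt @ P @ A2))  # p = 3
```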
We denote by $P\models \{1,\ldots, p\}$ an \emph{interval partition} of $\{1,\ldots, p\}$. That is, $P$ is a set of subsets $P_i\subseteq \{1,\ldots, p\}$ such that $\bigsqcup_i P_i=\{1,\ldots, p\}$ and each $P_i$ is a sub-interval of $\{1,\ldots, p\}$ seen cyclically. For instance,
\begin{itemize}
\item $p=4$, $P=\{\{1,2\},\{3,4\}\},\ P_1=\{1,2\},\ P_2=\{3,4\}$
\item $p=4$, $P=\{\{2,3\},\{4,1\}\},\ P_1=\{2,3\}, \ P_2=\{4,1\}$
\item $p=4$, $P=\{\{2\},\{3\},\{4,1\}\},\ P_1=\{2\},\ P_2=\{3\}, \ P_3=\{4,1\}$.
\item $p=3$, $P=\{\{1,2,3\}\}, \ P_1=\{1,2,3\}$,
\end{itemize}
a non-example is given by the partition
\begin{itemize}
\item $p=5$, $P=\{\{1,2,4\},\{3,5\}\}$.
\end{itemize}
Assume we have a word $f\in \{0,1\}^p$ such that there exists $i$ with $f(i)=1$. We associate to $f$ an interval partition $P_f\models\{1,\ldots, p\}$ by putting a bar between the elements $i-1$ and $i$ of $\{1,\ldots, p\}$ (seen cyclically) if and only if $f(i)=1$; the block $P_j$ is the interval consisting of the elements between the $(j-1)^{\mathrm{th}}$ and the $j^{\mathrm{th}}$ bar.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Bars-diagrams.pdf}
\caption{Examples of interval subdivisions of $\{1,\ldots, 4\}$ associated to words $f\in \{0,1\}^4$ using the bars construction.}
\label{fig:bars-to-intervals}
\end{figure}
See Fig. \ref{fig:bars-to-intervals} for a pictorial description. \\
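The bars construction can be made concrete in a few lines; the sketch below implements one consistent reading of it (a bar between $i-1$ and $i$ starts a new block at $i$, cyclically) and reproduces interval partitions matching the examples listed above.

```python
def intervals_from_word(f):
    """Cyclic interval partition of {1,...,p} from a word f in {0,1}^p:
    a bar sits between i-1 and i exactly when f[i] == 1 (1-based i)."""
    p = len(f)
    starts = sorted(i for i in range(1, p + 1) if f[i - 1] == 1)
    assert starts, "f must contain at least one 1"
    blocks = []
    for a, b in zip(starts, starts[1:] + [starts[0] + p]):
        blocks.append([(x - 1) % p + 1 for x in range(a, b)])
    return blocks

assert intervals_from_word([0, 1, 0, 1]) == [[2, 3], [4, 1]]       # P = {{2,3},{4,1}}
assert intervals_from_word([0, 1, 1, 1]) == [[2], [3], [4, 1]]     # P = {{2},{3},{4,1}}
```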
Expanding the projectors $P_{\overline{\Omega}}$ appearing in the definition of $Q$, we have
\begin{multline}\label{eq:Pcomp-expansion}
\Tr(Q^p)= \Tr((F W_{t})^{p \textrm{ mod }2}(W_{t}^*W_{t})^{\lfloor p/2\rfloor})\\
+\sum_{\substack{f\in \{0,1\}^p \\ \exists i, f(i)=1}}\left(\frac{-1}{N}\right)^{|P_f|} \prod_{P_{f,i}\in P_f}N\langle \Omega \vert (W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor} \lvert \Omega \rangle.
\end{multline}
This expansion forms the basis for the expansion of the moments of $Q$ as sums of circuit counting polynomials of $2$-in/$2$-out digraphs. The terms of the form
\begin{equation}\label{eq:exp-moments-Q}
N\langle \Omega \vert (W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor} \lvert \Omega \rangle
\end{equation}
can be represented diagrammatically, and we will use this diagrammatic representation to understand the Wick expansion of the moments of $Q$.\\
\begin{remark}\label{rem:correspondance-W-Partial}
Note also that the first term of the right hand side of equation \eqref{eq:Pcomp-expansion} above is $\Tr((W^{\Gamma})^p)$. We use this remark later to deduce the diagrammatics of $\mathbb{E}\left( \Tr((W^{\Gamma})^p) \right)$.
\end{remark}
\section{Diagrammatics for the moments of \texorpdfstring{$Q$}{Q} and \texorpdfstring{$W_{t}$}{Wt}}\label{sec:diagrammatics}
We start by considering the diagrammatic representation of the expressions involving $W_{t}$ in the expansion \eqref{eq:Pcomp-expansion}. They are easily obtained by stacking the building blocks of equations \eqref{eq:diag-Wst} and \eqref{eq:diag-Wst*}. We have
\begin{multline}\label{eq:trace-to-ladder-even}
\forall p \in 2\mathbb{N}, \ \Tr((F W_{t})^{p \textrm{ mod }2}(W_{t}^*W_{t})^{\lfloor p/2\rfloor}) = \Tr((W_{t}^*W_{t})^{p/2})\\
= \raisebox{-11mm}{\includegraphics[scale=0.5]{trace-to-ladder-even.pdf}}\
\end{multline}
If $p\in 2\mathbb{N}+1$ is odd, we have the slightly twisted representation
\begin{equation}\label{eq:trace-to-ladder-odd}
\Tr((F W_{t})(W_{t}^*W_{t})^{\lfloor p/2\rfloor}) \ = \ \raisebox{-12mm}{\includegraphics[scale=0.5]{trace-to-ladder-odd.pdf}}
\end{equation}\\
The gray boxes highlight the copies of $W_{t}$ (resp. $W_{t}^*$) which appear in the product inside the trace. We number these copies as shown on the graphical representation. The black and white squares inherit the numbering of the gray box they lie in. This diagrammatics allows us to describe the Wick pairings as permutations $\alpha \in S_p$. Indeed, since $G$ is normally distributed with vanishing pseudo-variance, Wick pairings only pair instances of $G$ with instances of $\bar G$. We represent such a pairing by adding oriented dotted edges from black squares to white ones in diagrams of the type appearing in equations \eqref{eq:trace-to-ladder-even} and \eqref{eq:trace-to-ladder-odd}. Each Wick pairing can thus be indexed by a permutation $\alpha \in S_p$: the black square number $i$ is paired with the white square $j$ if and only if $\alpha(i)=j$. This translates diagrammatically as a dotted edge between black square $i$ and white square $j$, oriented from black to white. See examples on Fig. \ref{fig:pairing-example-trace-to-ladder}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{example_Wick_pairing_ladderform.pdf}
\caption{Two examples of Wick pairings indexed by permutations $\alpha$. The dotted lines allow one to track how the random states are paired together.}
\label{fig:pairing-example-trace-to-ladder}
\end{figure}\\
Now we realize that each such pairing tells us about the composition of symmetrizers (denoted as red circle arcs), whose inputs and outputs are kept track of using the orientations of the black edges in the diagrams. In fact, these Wick pairings tell us how to match the indices of these symmetrizers together. In particular, note that if $\alpha(i)=j$, then the symmetrizer associated to the black box $i$ is composed (in the usual operator sense) with the symmetrizer associated with the white box $j$. Since the symmetrizer is a projector, we are left with a unique symmetrizer for each dotted edge, whose inputs are the black edges ending on the black box $i$ and whose outputs are the black edges exiting the white box $j$. Since a symmetrizer symmetrizes over both the inputs and the outputs, it is not necessary to keep track of the difference between a first input (resp. output) and a second input (resp. output). Thus, the corresponding contraction of symmetrizers is indexed by a directed graph (digraph) whose vertices are $2$-in/$2$-out and represent two\footnote{the factor of $2$ is conventional. Its purpose is to recover the definition of the $J$ polynomial later.} times $P_{s}$. Such a digraph can be seen as a tensor network for the tensor $P_s$. The value of the tensor network is just the evaluation of the corresponding contraction. We now explain this evaluation combinatorially. Since each vertex represents twice a symmetrizer $P_{s}=\frac12(I+F)$, we can evaluate the contraction by choosing a \emph{state}\footnote{we use here the terminology that is familiar to the graph polynomial literature. Note though that vertex states should not be confused with quantum states.
We hope that the context will be clear enough to avoid confusion.}, that is, an assignment of either the identity $I:\mathbb{C}^N\otimes \mathbb{C}^N \rightarrow \mathbb{C}^N\otimes \mathbb{C}^N$ or the flip $F:\mathbb{C}^N\otimes \mathbb{C}^N \rightarrow \mathbb{C}^N\otimes \mathbb{C}^N$, at each vertex. The possible states $\{S_1, S_2\}$ are
\begin{equation}
\raisebox{-6mm}{\includegraphics[scale=0.7]{state_no_state.pdf}}\longrightarrow \raisebox{-12mm}{\includegraphics[scale=0.7]{state_S_1.pdf}}, \quad \raisebox{-12mm}{\includegraphics[scale=0.7]{state_S_2.pdf}}.
\end{equation}
For each state assignment to all the vertices of the graph, we obtain a collection of cycles, called a cycle cover of the graph. Each of these cycles is weighted by $N$. Hence the weight of a particular state assignment is $N$ raised to the number of resulting cycles. Summing over all possible state assignments results in the circuit counting polynomial of the graph. Hence the weight of the tensor network we constructed is just the circuit counting polynomial of the underlying graph times $\left( \frac12 \right)^p$, with $p$ the number of vertices. This last factor just takes into account the normalization factor $\frac12$ of the symmetrizer. \\
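To make this evaluation concrete, the following sketch (our own illustration, not part of the source; the function name and the convention for pairing the two incoming edges with the two outgoing edges at a vertex are implementation choices) computes the state sum $\sum_{\text{states}} N^{\#\text{cycles}}$ for a $2$-in/$2$-out digraph given by two out-neighbour maps:

```python
from itertools import product

def circuit_polynomial_value(sigma1, sigma2, N):
    """State-sum evaluation of the circuit counting polynomial J(G;N) of the
    2-in/2-out digraph whose vertex i has outgoing edges to sigma1[i] and
    sigma2[i]: sum over vertex states of N**(number of resulting cycles)."""
    p = len(sigma1)
    sigmas = (sigma1, sigma2)
    edges = [(i, s) for i in range(p) for s in (0, 1)]  # edge (i, s): i -> sigmas[s][i]
    in_edges = {v: [] for v in range(p)}
    for (i, s) in edges:
        in_edges[sigmas[s][i]].append((i, s))
    total = 0
    for state in product((0, 1), repeat=p):
        succ = {}
        for v in range(p):
            a, b = in_edges[v]            # the two edges entering vertex v
            if state[v] == 0:             # identity state: direct pairing
                succ[a], succ[b] = (v, 0), (v, 1)
            else:                         # flip state: swapped pairing
                succ[a], succ[b] = (v, 1), (v, 0)
        seen, cycles = set(), 0           # count the cycles of the edge successor map
        for e in edges:
            if e not in seen:
                cycles += 1
                while e not in seen:
                    seen.add(e)
                    e = succ[e]
        total += N ** cycles
    return total
```

For instance, a single vertex carrying two loops gives two cycles in the identity state and one cycle in the flip state, hence the state sum $N^2+N$; two such disconnected vertices give $(N^2+N)^2$.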
The digraph resulting from the Wick pairing $\alpha$ can be indexed by the data of two permutations that also allow us to construct directly the adjacency matrix of the digraph.
We now describe this permutation representation of $2$-in/$2$-out digraphs obtained from a Wick pairing. These digraphs are indexed by $\sigma_1=\gamma\alpha, \ \sigma_2=\gamma^{-1}\alpha$ with $\gamma=(1,2,\ldots, p)$. Indeed, a vertex labeled $i$ corresponds, by definition, to a pair $(i,\alpha(i))$. The edges entering this vertex are the edges entering the black square of the pair $i$ in the ladder graph, while the edges exiting this vertex are the edges exiting the white square of the gray pair $\alpha(i)$. Note that the edges exiting from vertex $i$ of the digraph are adjacent to the black boxes of the gray pairs $\gamma(\alpha(i))$ and $\gamma^{-1}(\alpha(i))$, that is, in the digraph they are adjacent to the vertices $\gamma(\alpha(i))$ and $\gamma^{-1}(\alpha(i))$, oriented as $i\rightarrow \gamma(\alpha(i))$ and $i\rightarrow \gamma^{-1}(\alpha(i))$. Hence, $\sigma_1=\gamma\alpha, \ \sigma_2=\gamma^{-1}\alpha$. See the local construction of the vertices on Fig. \ref{fig:local-construct}.\\
\begin{figure}
\centering
\includegraphics[scale=0.9]{local-construction-vertices.pdf}
\caption{On this figure we show how the vertices of the digraph are constructed from the pairing $\alpha$ by identifying the black box $i$ with the white box $\alpha(i)$. We forget the environment (green) lines in the digraph. We highlight the edges adjacent to the black and white boxes being identified in orange and blue, with their orientations, so that they are easier to spot in the digraph.}
\label{fig:local-construct}
\end{figure}
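A minimal sketch of this permutation representation (ours, not part of the source; permutations are stored as 0-indexed lists, a convention the text does not use) builds $\sigma_1=\gamma\alpha$ and $\sigma_2=\gamma^{-1}\alpha$ from a pairing $\alpha$:

```python
def compose(f, g):
    """Composition (f o g)(i) = f(g(i)) for permutations given as lists."""
    return [f[g[i]] for i in range(len(f))]

def ladder_digraph(alpha):
    """Out-neighbour maps sigma1 = gamma.alpha and sigma2 = gamma^{-1}.alpha
    of the 2-in/2-out digraph encoding the Wick pairing alpha (0-indexed)."""
    p = len(alpha)
    gamma = [(i + 1) % p for i in range(p)]      # the full cycle (1,2,...,p)
    gamma_inv = [(i - 1) % p for i in range(p)]
    return compose(gamma, alpha), compose(gamma_inv, alpha)
```

For $\alpha=(12)(34)$ on $p=4$ points this returns $\sigma_1=(13)$ and $\sigma_2=(24)$ (in 1-indexed cycle notation), and one can check that every vertex indeed receives exactly two incoming edges.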
We follow a similar train of thought for the terms in the sum
\begin{equation}
\sum_{\substack{f\in \{0,1\}^p \\ \exists i, f(i)=1}}\left(\frac{-1}{N}\right)^{|P_f|} \prod_{P_{f,i}\in P}N\langle \Omega \vert (W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor} \lvert \Omega \rangle
\end{equation}
from equation \eqref{eq:Pcomp-expansion}. Indeed, the product $(W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor}$ is also represented diagrammatically by stacking building blocks of equations \eqref{eq:diag-Wst} and \eqref{eq:diag-Wst*}. The only difference is that the vector $\lvert \Omega \rangle$ has the following representation
\begin{equation}
\lvert \Omega \rangle =\frac1{\sqrt{N}} \ \raisebox{-3mm}{\includegraphics[scale=0.6]{omega-vector.pdf}}
\end{equation}
Hence, we have the diagrammatic representation below
\begin{equation}
N \langle \Omega \rvert(W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor}\lvert\Omega\rangle = \ \raisebox{-12mm}{\includegraphics[scale=0.48]{overlaps-general.pdf}}
\end{equation}
We now introduce permutations that we use to describe the digraphs appearing in the computation of the above sum. We start with the following permutations on $l$ elements, whose definitions differ between the odd and even cases,
\begin{align}
\gamma_{a,l}&=\begin{cases}
(1)(23)(45)\ldots(2k,2k+1), \ \textrm{if }l=2k+1 \textrm{ is odd} \\
(1)(23)(45)\ldots(2k-2,2k-1)(2k), \ \textrm{if }l=2k \textrm{ is even}.
\end{cases}\\
\gamma_{b,l}&=\begin{cases}
(12)(34)\ldots(2k-1,2k)(2k+1), \ \textrm{if }l=2k+1 \textrm{ is odd} \\
(12)(34)\ldots(2k-1,2k), \ \textrm{if }l=2k \textrm{ is even}
\end{cases}
\end{align}
and to each $P_i\in P \models \{1,\ldots, p\}$ we associate the bijection $C_{P_i}:P_i\rightarrow \{1,\ldots, \lvert P_i\rvert\}$ defined by
\begin{equation}
C_{P_i}(P_i(q))=q
\end{equation}
where $P_i(q)$ is the $q^{\textrm{th}}$ element of $P_i$. Then we define for each $P_i$ the permutations $\gamma_{a,P_i}=C_{P_i}^{-1}\gamma_{a,\lvert P_i\rvert}C_{P_i}$ and $\gamma_{b,P_i}=C_{P_i}^{-1}\gamma_{b,\lvert P_i\rvert}C_{P_i}$. Finally, we construct two further permutations, $\Gamma_{a,P} := \prod_{i}\gamma_{a,P_i}$ and $\Gamma_{b,P} := \prod_{i}\gamma_{b,P_i}$. We have several figures to illustrate the relations with the diagrams. See Fig. \ref{fig:overlaps-gamma-ab} for $\gamma_{a,\lvert P_i\rvert}$ and $\gamma_{b,\lvert P_i\rvert}$. See also Fig. \ref{fig:example_GammaP} for an example of the permutations $\Gamma_{a,P}, \Gamma_{b,P}$.\\
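These definitions can be implemented directly; the sketch below (ours, with permutations stored as dictionaries on their underlying sets, an arbitrary data-structure choice) builds $\gamma_{a,l}$, $\gamma_{b,l}$, the conjugations by $C_{P_i}$, and the products $\Gamma_{a,P}$, $\Gamma_{b,P}$. For instance, for $P=\{\{2,3\},\{1,4\}\}$ the definitions give $\Gamma_{a,P}=\mathrm{id}$ and $\Gamma_{b,P}=(23)(14)$, since $\gamma_{a,2}$ is the identity and $\gamma_{b,2}=(12)$:

```python
def gamma_a(l):
    """gamma_{a,l}: fixes 1, then pairs (2,3),(4,5),...; as a dict on {1,...,l}."""
    g = {i: i for i in range(1, l + 1)}
    i = 2
    while i + 1 <= l:
        g[i], g[i + 1] = i + 1, i
        i += 2
    return g

def gamma_b(l):
    """gamma_{b,l}: pairs (1,2),(3,4),...; the last point is fixed when l is odd."""
    g = {i: i for i in range(1, l + 1)}
    i = 1
    while i + 1 <= l:
        g[i], g[i + 1] = i + 1, i
        i += 2
    return g

def conjugated(gamma_l, block):
    """gamma_{.,P_i} = C_{P_i}^{-1} gamma_{.,|P_i|} C_{P_i}, where C_{P_i}
    sends the q-th smallest element of the block to q."""
    block = sorted(block)
    C = {x: q + 1 for q, x in enumerate(block)}
    C_inv = {q: x for x, q in C.items()}
    return {x: C_inv[gamma_l[C[x]]] for x in block}

def Gamma(blocks, gamma_fn):
    """Gamma_{.,P}: the product over the blocks of P of the conjugated permutations."""
    out = {}
    for b in blocks:
        out.update(conjugated(gamma_fn(len(b)), b))
    return out
```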
\begin{figure}
\centering
\includegraphics[scale=0.6]{overlaps-general-gamma_a_b_action.pdf}
\caption{Illustration of the relation between the diagrams and the permutations $\gamma_{a,|P_i|}$ and $\gamma_{b,|P_i|}$. On the topmost picture: the even $|P_i|$ case. On the bottom-most picture: the odd $|P_i|$ case.}
\label{fig:overlaps-gamma-ab}
\end{figure}
Then we have, as a consequence of Wick's theorem, that
\begin{equation}\label{eq:Wick-prod-overlaps}
\mathbb{E}\left( \prod_{P_{f,i}\in P}N\langle \Omega \vert (W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor} \lvert \Omega \rangle\right) = \sum_{\alpha \in S_p}N^{2\# \alpha}\frac{c^{\#\alpha}}{2^p} J(G_{\Gamma_{a,P}\alpha,\Gamma_{b,P}\alpha};N)
\end{equation}
where we recall that $\sum_i \lvert P_{f,i}\rvert =p$.\\
In the same way as in the previous case, Wick's theorem identifies black box $i$ with white box $j$ for any pairing such that $\alpha(i)=j$. For the same reason as before, it leads to a $2$-in/$2$-out digraph. However, now the edges exiting a vertex $i$ of the digraph are the edges exiting the white box $\alpha(i)$, while these edges are adjacent to the black boxes $\Gamma_{a,P}(\alpha(i)), \Gamma_{b,P}(\alpha(i))$, which correspond to the vertices $\Gamma_{a,P}(\alpha(i))$ and $\Gamma_{b,P}(\alpha(i))$ in the digraph. This is sufficient to conclude that we have the relation \eqref{eq:Wick-prod-overlaps}. \\
We summarize the content of the above discussion by the proposition below:
\begin{proposition}\label{prop:expanded-Wick-thm}
As a consequence of Wick's theorem we have
\begin{equation}
\mathbb{E}\left(\Tr((F W_{t})^{p \textrm{ mod }2}(W_{t}^*W_{t})^{\lfloor p/2\rfloor})\right) = \sum_{\alpha\in S_p}N^{2\# \alpha}\frac{c^{\#\alpha}}{2^p}J(G_{\gamma\alpha, \gamma^{-1}\alpha};N)
\end{equation}
and
\begin{multline}
\sum_{\substack{f\in \{0,1\}^p \\ \exists i, f(i)=1}}\left(\frac{-1}{N}\right)^{|P_f|} \prod_{P_{f,i}\in P}N\langle \Omega \vert (W_{t})^{|P_{f,i}| \textrm{ mod }2}(W_{t}^*W_{t})^{\left\lfloor |P_{f,i}|/2\right\rfloor} \lvert \Omega \rangle =\\
\sum_{\substack{f\in \{0,1\}^p \\ \exists i, f(i)=1}}\sum_{\alpha \in S_p} \left(-1\right)^{|P_f|} N^{2\#\alpha - \lvert P_f\rvert } \frac{c^{\#\alpha}}{2^p}J(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha };N).
\end{multline}
\end{proposition}
Reformulating the above proposition and using Remark \ref{rem:correspondance-W-Partial}, we can recover the moments of the random matrix $W^\Gamma$ from Proposition \ref{prop:moments-W-Gamma}
\begin{equation}
\mathbb{E}\left(\Tr\left((W^{\Gamma})^p\right)\right) = \sum_{\alpha\in S_p}N^{2\# \alpha}\frac{c^{\#\alpha}}{2^p}J(G_{\gamma\alpha, \gamma^{-1}\alpha};N).
\end{equation}
We use the short-hand notation, for $\alpha\in S_p$
\begin{equation}
w(\alpha,\gamma):=N^{2\# \alpha}\frac{c^{\#\alpha}}{2^p}J(G_{\gamma\alpha,\gamma^{-1}\alpha};N).
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.55]{example-GammaP-permutation.pdf}
\caption{Example of permutations $\Gamma_{a,P}$, $\Gamma_{b,P}$ for $P=\{\{2,3\},\{1,4\}\}$.}
\label{fig:example_GammaP}
\end{figure}
\section{Circuit polynomial techniques for \texorpdfstring{$\mathbb{E}\Tr(Q^p)$}{E Tr Qp}}\label{sec:bounds}
In this section we prove a sequence of technical bounds and combinatorial properties that we use to determine the set of permutations $\alpha$ contributing to the asymptotics of the random matrices $W^\Gamma$ and $Q$. \\
We have the following bound
\begin{proposition}\label{prop:bound_prop}
Let $\alpha\in S_p$. Then there exists $R_{\alpha,f}>0$, which may depend on $\alpha$ and on $f$ but not on $N$, such that
\begin{equation}
N^{2\#\alpha-\lvert P_f \rvert }J(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha }; N)\le R_{\alpha, f}N^{2\#\alpha - \lvert P_f\rvert +p + K(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha })},
\end{equation}
with $K(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha })$ the number of connected components of the corresponding digraph. Moreover, we have: $$K(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha })-\lvert P_f\rvert \le 0.$$
\end{proposition}
\begin{proof}
We now come to the second bound, in the case of $2$-in/$2$-out digraphs $G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha }$. The main difference lies in their number of connected components. Indeed, $\Gamma_{a,P_f}\Gamma_{b,P_f}\in \langle \Gamma_{a,P_f}\alpha, \Gamma_{b,P_f}\alpha\rangle$ acts transitively on each set $P_{f,i}$ separately, but is not transitive on $\{1,\ldots,p\}$. Thus the number of connected components is bounded from above by $\lvert P_f \rvert$. Hence we have
\begin{equation}
K(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha })-\lvert P_f\rvert \le 0.
\end{equation}
Again, using the relation between the circuit polynomial and the interlace polynomial on each connected component of $G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha }$, we obtain
\begin{equation}
\textrm{deg }J(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha }; N)\le p+K(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha }).
\end{equation}
\end{proof}
We also prove the next two technical lemmas, Lemma \ref{lem:maximal-scaling-cycle-type} and Lemma \ref{lem:crossing-length2}. They prove useful in the last section to determine the limiting empirical eigenvalue distribution of $Q$.
\begin{lemma}\label{lem:maximal-scaling-cycle-type}
Let $\alpha \in S_p$ such that $2\#\alpha +p + K(G_{\gamma\alpha,\gamma^{-1}\alpha})= 2p+e$ with $e$ a positive integer. Then, if $K(G_{\gamma\alpha,\gamma^{-1}\alpha})=1$
\begin{equation}
e=(\#_1\alpha+1)+\sum_{i\ge 3}(2-i)\#_i\alpha\le \#_1\alpha+1,
\end{equation}
where $\#_i\alpha$ is the number of cycles of length $i$ of $\alpha$. If $K(G_{\gamma\alpha,\gamma^{-1}\alpha})=2$,
\begin{equation}
e=2+\sum_{k\ge 2}2(1-k)\#_{2k}\alpha\le 2.
\end{equation}
Similarly, let $e'$ be the integer such that $2\#\alpha - \lvert P\rvert + p + K(G_{\Gamma_{a,P}\alpha,\Gamma_{b,P}\alpha})=2p+e'$. Then,
\begin{equation}
e'=\#_1\alpha+\sum_{i\ge3}(2-i)\#_i\alpha+\left(K(G_{\Gamma_{a,P}\alpha,\Gamma_{b,P}\alpha})-\lvert P \rvert \right).
\end{equation}
In particular, if $\alpha$ has no fixed points, then $e'\le 0$. If we have equality $e'=0$, then $\#_i\alpha=0$ for all $ i \ge 3$.
\end{lemma}
\begin{proof}
We first focus on $e$. We denote $\#_i\alpha$ the number of cycles of length $i$ of $\alpha$. We have the following relations
\begin{equation}
p=\sum_{i\ge 1} i \#_i\alpha \quad \textrm{and} \quad \#\alpha =\sum_{i\ge 1}\#_i\alpha.
\end{equation}
From those we deduce
\begin{equation}
e= \sum_{i\ge 1}(2-i)\#_i\alpha+K(G_{\gamma\alpha,\gamma^{-1}\alpha}).
\end{equation}
We have two cases. First assume that $K(G_{\gamma\alpha,\gamma^{-1}\alpha})=1$. Then, a consequence of Proposition \ref{prop:properties-J-G-alpha} is that $\alpha$ is allowed to have fixed points, thus
\begin{equation}
e=(\#_1\alpha+1)+\sum_{i\ge 3}(2-i)\#_i\alpha \le \#_1\alpha+1.
\end{equation}
Now, assume that $K(G_{\gamma\alpha,\gamma^{-1}\alpha})=2$. A consequence of Proposition \ref{prop:properties-J-G-alpha} is that $\alpha$ is not allowed to have fixed points (as those are not parity changing). More generally, $\alpha$ is not allowed to have cycles of odd length. Therefore,
\begin{equation}
e=2+\sum_{k\ge 2}2(1-k)\#_{2k}\alpha\le 2.
\end{equation}
The above inequality is saturated when $\alpha$ has only cycles of length $2$, as any longer cycle contributes negatively to the sum on the right hand side. This proves the first two statements of the lemma.\\
We now come to $e'$. We have
\begin{equation}
e'=\#_1\alpha+\sum_{i\ge 3}(2-i)\#_i\alpha+\left(K(G_{\Gamma_{a,P}\alpha, \Gamma_{b,P}\alpha})-\lvert P \rvert \right),
\end{equation}
where the second and last terms of the right hand side are bounded from above by $0$. Hence, $e'\le \#_1\alpha$. In particular, if $\alpha$ does not have fixed points, $e'\le 0$, with equality exactly when $\#_i\alpha=0 \ \forall i \ge 3$ and $\alpha(P_i)=P_i, \ \forall P_i \in P$.
\end{proof}
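The bookkeeping in this lemma lends itself to a numerical cross-check. The sketch below (ours, not part of the proof; permutations are 0-indexed lists, and $K$ is computed as the number of orbits of the group generated by $\gamma\alpha$ and $\gamma^{-1}\alpha$, via union-find on the two generators) evaluates $e=2\#\alpha+p+K-2p$:

```python
def orbit_count(perms, p):
    """Number of orbits on {0,...,p-1} of the group generated by `perms`
    (union-find over the generator edges suffices for counting orbits)."""
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for perm in perms:
        for i in range(p):
            parent[find(i)] = find(perm[i])
    return sum(1 for i in range(p) if find(i) == i)

def excess(alpha):
    """e such that 2#alpha + p + K(G_{gamma.alpha, gamma^{-1}.alpha}) = 2p + e,
    for a 0-indexed permutation alpha given as a list."""
    p = len(alpha)
    gamma = [(i + 1) % p for i in range(p)]
    gamma_inv = [(i - 1) % p for i in range(p)]
    sigma1 = [gamma[alpha[i]] for i in range(p)]
    sigma2 = [gamma_inv[alpha[i]] for i in range(p)]
    K = orbit_count([sigma1, sigma2], p)
    seen, n_cycles = set(), 0          # count the cycles of alpha
    for i in range(p):
        if i not in seen:
            n_cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = alpha[j]
    return 2 * n_cycles + p + K - 2 * p
```

For the parity-changing pairing $\alpha=(12)(34)$ one finds $K=2$ and $e=2$, saturating the bound of the lemma, while for the identity on two points $e=\#_1\alpha+1=3$.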
\noindent As we understand from the above lemma, fixed points and cycles of length $2$ of $\alpha$ play an important role. Indeed, the scaling of a given Wick pairing $\alpha$ can exceed $N^{2p+2}$ only if there are enough fixed points and not too many cycles of length greater than or equal to $3$. The lemma below is an important step toward the proof that we discuss fully in the next sections.
\begin{lemma}\label{lem:crossing-length2}
Assume $\alpha\in S_p$ has only cycles of length $2$ and is parity changing (\textit{i.e.} $\alpha(i) \textrm{ mod }2 \neq i \textrm{ mod }2$). Assume also that there exist two cycles of $\alpha$ of the form $(a,i+1)$ and $(i,b)$ with $a<i<i+1<b$. Then, $\textrm{deg }J(G_{\gamma\alpha,\gamma^{-1}\alpha};N)\le p$.
\end{lemma}
We use the parity-changing assumption together with the constraint that $\alpha$ has only cycles of length $2$ solely to simplify the proof and avoid having to consider several cases. Indeed, a stronger form of this result can easily be proven from similar considerations. However, the above form of the lemma is sufficient for our purposes.
\begin{proof}
The parity-changing assumption ensures that $G_{\gamma\alpha,\gamma^{-1}\alpha}$ has $2$ connected components. We note that these two connected components must be isomorphic graphs. Indeed, we can exhibit an isomorphism $\phi$: for each cycle of the form $(k,l)$ of $\alpha$, we set $\phi(k)=l$. Hence, we can focus on one connected component only and bound the degree of the circuit counting polynomial of this connected component. We use Fig.~\ref{fig:crossing-deg-bound} to show the structure of $G_{\gamma\alpha,\gamma^{-1}\alpha}$ around the vertices $a,b,i,i+1$. From this figure we note that neither vertex $a$ nor vertex $i$ is a cut vertex. Moreover, since the figure we show is minimally connected (the gray parts inside each connected component could be further connected if $\alpha$ had more crossings), vertices $a$ and $i$ cannot become cut vertices by changing $\alpha$ while keeping the cycles $(a,i+1)$ and $(i,b)$ fixed.\\
Denote by $G^{(a)}_{\gamma\alpha,\gamma^{-1}\alpha}$ the connected component containing vertex $a$. We focus on this connected component for now. By Lemma \ref{lem:max-J-only-cut}, we have $\textrm{deg }J(G_{\gamma\alpha,\gamma^{-1}\alpha}^{(a)};N)\le p/2$. Indeed, since for instance $a$ is not a cut vertex, once we reduce vertex $a$ by summing over its two possible states using the skein relations of Eq.~\eqref{eq:skein-relation-J} on $a$, we are left with two graphs $G'$ and $G''$ such that
\begin{equation}
J(G^{(a)}_{\gamma\alpha,\gamma^{-1}\alpha};N)=J(G';N)+J(G''; N).
\end{equation}
Both $G'$ and $G''$ have a maximum of $p/2-1$ vertices. Therefore, from Lemma \ref{lem:UB-deg-J}
\begin{equation}
\textrm{deg }J(G^{(a)}_{\gamma\alpha,\gamma^{-1}\alpha};N)=\textrm{max}(\textrm{deg }J(G';N), \textrm{deg }J(G''; N)) \le p/2.
\end{equation}
By the isomorphism argument, we have the same bound on the degree of $J(G^{(b)}_{\gamma\alpha, \gamma^{-1}\alpha};N)$, hence
\begin{equation}
\textrm{deg } J(G_{\gamma\alpha, \gamma^{-1}\alpha};N)\le p.
\end{equation}
\end{proof}
In the next section, we show that the contribution of fixed points is smaller than the previous bounds would suggest. In order to study fixed points and cycles of length $2$, we introduce more general tensor networks.\\
\begin{figure}
\centering
\includegraphics[scale=0.6]{aibi+1.pdf}
\caption{Local structure of the graph $G_{\gamma\alpha,\gamma^{-1}\alpha}$ for $\alpha$ satisfying the constraints of Lemma \ref{lem:crossing-length2}. The axis of symmetry corresponds to the axis of symmetry of the ladder. The figure shows the minimally connected case: the gray part of each connected component could be further connected if $\alpha$ had more crossings. For readability we color the edges differently depending on which vertices they are adjacent to. The colors change in the middle when edges are shared between vertices $a$ and $i$ or between vertices $b$ and $i+1$.}
\label{fig:crossing-deg-bound}
\end{figure}
\section{Tensor network evaluation of \texorpdfstring{$\mathbb{E}\Tr(Q^p)$}{E Tr Qp}}\label{sec:tensor-eval}
In this section, we express the moments of the random matrix $Q$ as a sum over evaluations of tensor networks made of the tensors $P_{s}$ and $P_{\overline \Omega}$. In particular, we do not expand the projectors $P_{\overline \Omega}$ appearing in the expression of $Q$. In that case, for each Wick pairing $\alpha$ we obtain a contraction of two types of tensors: $P_{s}:\mathbb{C}^{N}\otimes \mathbb{C}^N\rightarrow \mathbb{C}^{N}\otimes \mathbb{C}^N$ as before, and $P_{\overline{\Omega}}:\mathbb{C}^{N}\otimes \mathbb{C}^N\rightarrow \mathbb{C}^{N}\otimes \mathbb{C}^N$, the projector onto the orthogonal complement of $\textrm{span}\{\lvert \Omega \rangle\}$ introduced earlier in Eq.~\eqref{eq:proj_complemet_omega_def}.
We work here with a graphical representation that is very similar to the ones introduced in previous paragraphs. Here, we represent $\Tr(Q^p)$ directly, without expanding. We do so by stacking the building blocks introduced in \eqref{eq:diag-Wst}, \eqref{eq:diag-Wst*} together with an additional building block representing $P_{\overline \Omega}$
\begin{equation}
P_{\overline \Omega} = I- \lvert \Omega \rangle \langle \Omega \rvert = \raisebox{-6mm}{\includegraphics[scale=0.8]{proj_comp_omega.pdf}}
\end{equation}
These building blocks are stacked in between \eqref{eq:diag-Wst}, \eqref{eq:diag-Wst*} blocks to represent the operator $P_{\overline \Omega}$. The diagram thus obtained is a tensor network representing $\Tr(Q^p)$
\begin{equation}
\Tr(Q^p)=\raisebox{-12mm}{\includegraphics[scale=0.50]{trace-Q-p.pdf}}
\end{equation}
To compute $\mathbb{E}(\Tr(Q^p))$ we proceed as before: each Wick pairing identifies black box $i$ with white box $\alpha(i)$, leading to a vertex labeled $i$. This vertex represents a $P_{s}$ operator. However, the resulting oriented graph also contains $\overline \Omega$-boxes that represent $P_{\overline \Omega}$. This leads to a slightly more complicated connection pattern. We show on Fig. \ref{fig:local-structure-TN} the local structure around a vertex $i$. \\
\begin{figure}
\centering
\includegraphics[scale=0.7]{local-structure-tensor-network.pdf}
\caption{Local structure of a tensor network given a Wick pairing $\alpha$. Left: the Wick pairing on the diagram representing $\mathbb{E}(\Tr(Q^p))$. Right: the corresponding local structure of the tensor network of $P_{s}$ and $P_{\overline \Omega}$ operators.}
\label{fig:local-structure-TN}
\end{figure}
\noindent The only change is that now vertices are connected only to $\overline \Omega$-boxes. Locally, it is simple to check that vertex $i$ is connected to $\overline \Omega$-boxes $i$, $\gamma(i)$ \textit{via} ingoing edges, while it is connected to $\overline \Omega$-boxes $\alpha(i)$, $\gamma(\alpha(i))$ \textit{via} outgoing edges. This is due to the fact that the white box $\alpha(i)$ (whose adjacent edges are outgoing) is connected to $\overline \Omega$-boxes $\alpha(i)$ and $\gamma(\alpha(i))$, while the black box $i$ (whose adjacent edges are ingoing) is connected to $\overline \Omega$-boxes $i$ and $\gamma(i)$. In the rest of this paper we denote by $T_{\alpha}$ the tensor network (and its evaluation) corresponding to a pairing $\alpha$ constructed in the way described above.\\
Using this tensor network construction, we have
\begin{equation}
\mathbb{E} \Tr(Q^p)=\sum_{\alpha \in S_p} M^{\#\alpha} T_{\alpha}.
\end{equation}
Note that we have the relation
\begin{equation}\label{eq:TN-ev-vs-Graph-ev}
T_{\alpha}=\frac1{2^p}\left(J(G_{\gamma\alpha,\gamma^{-1}\alpha}; N) + \sum_{\substack{f\in \{0,1\}^p \\ \exists i, f(i)=1}} \left(-1\right)^{|P_f|} N^{- \lvert P_f\rvert } J(G_{\Gamma_{a,P_f}\alpha,\Gamma_{b,P_f}\alpha };N)\right).
\end{equation}
In the next paragraph, we prove reduction relations for the evaluation of such tensor networks at fixed points, and simple transpositions of $\alpha$. This will allow us to consider permutations that do not contain fixed points and only have cycles of length $2$.\\
\noindent{\bf Reduction property for tensor network evaluation}\\
\noindent Let $c_q$ be a cycle of a permutation $\sigma = c_1 c_2 \ldots c_{\#\sigma}\in S_p$. We denote by $\sigma \div c_q:=c_1c_2\ldots \hat{c_q}\ldots c_{\# \sigma}$ the permutation acting on the set $\{1,\ldots,p\}\setminus \{i \, : \, i\in c_q\}$. We have the following reduction properties for tensor networks $T_{\alpha}$ such that $\alpha$ has the corresponding feature.
\begin{proposition}\label{prop:TN-red-moves}
Let $\alpha \in S_p$. Then,
\begin{itemize}
\item if $\alpha$ has a fixed point, which we denote $i$, then
\begin{equation}\label{eq:TN-to-reduced-TN}
T_{\alpha}=\frac12 T_{\alpha\div (i)},
\end{equation}
In particular, if $\alpha$ has several fixed points, consider $\alpha_R$ the reduced permutation acting on $\{1,\ldots, p\}\backslash \operatorname{Stab}(\alpha)$, where $\operatorname{Stab}(\alpha)$ denotes the set of fixed points of $\alpha$; then one has
\begin{equation}
T_{\alpha}=\left( \frac12 \right)^{|\operatorname{Stab}(\alpha)|}T_{\alpha_R}
\end{equation}
\item if $\alpha$ contains a cycle of the form $(i,i+1)$, then
\begin{equation}
T_{\alpha}=\Lambda(N)T_{\alpha \div (i,i+1)},
\end{equation}
where $\Lambda(N)=\frac14(N^2+2(N-\frac1{N}))$.
\end{itemize}
Note that the special cases where $\alpha=(1)$ and $\alpha=(12)$ are dealt with by setting $T_{\alpha\div(1)}=\Tr(P_{\overline \Omega})=N^2-1$ for the first case and $T_{\alpha\div (12)}=\Tr(P_{\overline \Omega})=N^2-1$ for the second case. For convenience, we denote $T_{\emptyset}:=\Tr(P_{\overline \Omega})$.
\end{proposition}
\begin{proof}
We proceed by diagrammatic manipulation of the tensor network. Fixed points $i$ of $\alpha$ can be treated locally by realizing that the corresponding tensor network always has the same structure around vertex $i$ (see the left hand side of equation \eqref{eq:fixed-point-reduction})
\begin{align}\label{eq:fixed-point-reduction}
T_{\alpha}\ = \ \raisebox{-20mm}{\includegraphics[scale=0.42]{Fixed-points-red-lhs.pdf}}\ &= \ \frac12 \ \raisebox{-20mm}{\includegraphics[scale=0.42]{Fixed-points-red-rhs-1.pdf}} + \frac12 \ \raisebox{-20mm}{\includegraphics[scale=0.42]{Fixed-points-red-rhs-2.pdf}} \ = \frac12 \ \raisebox{-20mm}{\includegraphics[scale=0.4]{Fixed-points-red-rhs-1.pdf}}
\end{align}
where we used the fact that
\begin{equation}\label{eq:PcompOmega-graphic}
\raisebox{-8mm}{\includegraphics[scale=0.7]{PcompOmega.pdf}}\ = \ \sqrt{N}P_{\overline{\Omega}} \lvert \Omega \rangle=0.
\end{equation}
Together with the fact that $P_{\overline{\Omega}}^2=P_{\overline{\Omega}}$, this shows that
\begin{equation}
T_{\alpha}=\frac12 T_{\alpha\div (i)}.
\end{equation}
The second statement can be obtained by looking at what happens locally at the vertices $i$ and $i+1$ when $\alpha$ has a cycle of the form $(i,i+1)$. One has the following diagrammatic relation
\begin{equation}\label{eq:i-i+1-expansion}
T_{\alpha} \ = \ \raisebox{-20mm}{\includegraphics[scale=0.42]{i-i+1-red-lhs.pdf}}\ = \ \raisebox{-20mm}{\includegraphics[scale=0.42]{i-i+1-red-rhs-1a.pdf}} -\frac1{N} \ \raisebox{-20mm}{\includegraphics[scale=0.42]{i-i+1-red-rhs-1b.pdf}}
\end{equation}
obtained by using the definition of $P_{\overline{\Omega}}$. After further expanding the two black vertices, the leftmost term of the right hand side of equation \eqref{eq:i-i+1-expansion} leads to
\begin{equation}
\raisebox{-20mm}{\includegraphics[scale=0.42]{i-i+1-red-rhs-1a.pdf}} = \frac14 (N+1)^2 \ \raisebox{-20mm}{\includegraphics[scale=0.4]{Fixed-points-red-rhs-1.pdf}}
\end{equation}
while the rightmost term of the right hand side of equation \eqref{eq:i-i+1-expansion} can also be expanded further. Using again the relation \eqref{eq:PcompOmega-graphic}, we have
\begin{equation}
\frac1{N} \ \raisebox{-20mm}{\includegraphics[scale=0.42]{i-i+1-red-rhs-1b.pdf}} \ = \ \frac14 \left(\frac{N+1}{N}+\frac1{N}\right) \ \raisebox{-20mm}{\includegraphics[scale=0.4]{Fixed-points-red-rhs-1.pdf}}.
\end{equation}
Putting everything together we obtain
\begin{equation}
T_{\alpha}= \frac14 \left((N+1)^2-\frac{N+1}{N}-\frac1{N}\right) \ \raisebox{-20mm}{\includegraphics[scale=0.4]{Fixed-points-red-rhs-1.pdf}}
\end{equation}
hence $\Lambda(N)=\frac14(N^2+2(N-\frac1{N}))$.
\end{proof}
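As a quick sanity check of the algebra at the end of the proof (ours, not part of the argument; function names are our own), one can verify with exact rational arithmetic that $\frac14\left((N+1)^2-\frac{N+1}{N}-\frac1{N}\right)$ equals $\frac14\left(N^2+2\left(N-\frac1{N}\right)\right)$:

```python
from fractions import Fraction

def Lambda(N):
    """Lambda(N) = (1/4) (N^2 + 2 (N - 1/N)), the stated reduction factor."""
    N = Fraction(N)
    return Fraction(1, 4) * (N**2 + 2 * (N - 1 / N))

def Lambda_from_proof(N):
    """The intermediate form (1/4) ((N+1)^2 - (N+1)/N - 1/N) from the proof."""
    N = Fraction(N)
    return Fraction(1, 4) * ((N + 1)**2 - (N + 1) / N - 1 / N)
```

Both expressions, multiplied by $N$, are polynomials of degree $3$ in $N$, so agreement at a handful of integer values already implies the identity.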
Thanks to this proposition we can restrict our attention to permutations without fixed points, remembering that for every $\alpha\in S_p$ there is a reduced permutation $\alpha_R$ without fixed points such that equation \eqref{eq:TN-to-reduced-TN} is satisfied. In particular, it is important to note that by getting rid of the fixed points, we did not produce any factor of $N$. In terms of the right hand side of \eqref{eq:TN-ev-vs-Graph-ev}, this is due to cancellations between terms in the sum over words $f$. The tensor network formulation allows us to express these cancellations compactly. One particular case is when $\alpha\in S_p$ is the identity permutation. Then we have
\begin{equation}T_{\alpha=id}=\frac1{2^p}(N^2-1).\end{equation}
Another particular case is when $\alpha\in \textrm{NC}_{2}(p)$: in that case it is always possible to find a cycle of the form $(i,i+1)$, and reducing it leads to another permutation $\alpha'$ which is in $\textrm{NC}_2(p-2)$. Using this property we can recursively reduce all the cycles of such a permutation and we obtain
\begin{equation}
T_{\alpha\in \textrm{NC}_2(p)}=\Lambda(N)^{p/2}(N^2-1)=\frac1{2^p}N^{p+2}+O(N^{p+1}).
\end{equation}
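The leading term can be checked numerically. The sketch below (ours; the function name is our own) evaluates the ratio of $\Lambda(N)^{p/2}(N^2-1)$ to $\frac1{2^p}N^{p+2}$ exactly and confirms that it approaches $1$ for large $N$:

```python
from fractions import Fraction

def nc2_leading_ratio(p, N):
    """Ratio of T_alpha = Lambda(N)^(p/2) (N^2 - 1), for alpha in NC_2(p),
    to the claimed leading term N^(p+2) / 2^p (exact rational arithmetic)."""
    N = Fraction(N)
    Lam = Fraction(1, 4) * (N**2 + 2 * (N - 1 / N))
    return (Lam ** (p // 2) * (N**2 - 1)) / (N ** (p + 2) / Fraction(2) ** p)
```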
\section{Proof of the main result}\label{sec:proof-main-result}
We now have all the tools we need to prove the main theorem, Theorem \ref{thm:main}. The proof follows these steps:
\begin{enumerate}
\item Determining the limiting spectrum of $Q$. This is Theorem \ref{thm:eigs-Q}. It is proved by nested applications of Proposition \ref{prop:TN-red-moves} and Lemmas \ref{lem:maximal-scaling-cycle-type} and \ref{lem:crossing-length2}.
\item Showing a localization result for the overlap $\left\langle \Omega \left\lvert \frac{W^{\Gamma}}{N^3}\right\rvert \Omega\right \rangle$ in Proposition \ref{prop:moments-overlap}.
\item Proving the main theorem by using Proposition \ref{prop:moments-overlap} and Theorem \ref{thm:moments-WGamma-3-asympt} to control the largest and smallest eigenvalues of $W^\Gamma$ and using Cauchy's interlacing theorem on $\frac{W^\Gamma}{N^3}$ and $Q$ to obtain the convergence of the empirical distribution of the $N^2-1$ smallest eigenvalues of $W^{\Gamma}$.
\end{enumerate}
\noindent{\bf Limiting spectrum of $Q$.} We tackle the first step of our three-step proof plan.
Assume that the permutation $\alpha$ does not contain fixed points. By virtue of Lemma \ref{lem:maximal-scaling-cycle-type}, we know that $e\le 2$ and $e'\le 0$ for such $\alpha$. Moreover, these inequalities are saturated when $\alpha$ has only cycles of length $2$. This implies that there exists a positive constant $R$, independent of $N$, such that
\begin{equation}
\lvert T_{\alpha} \rvert \le R N^{2p+2}
\end{equation}
which can be attained if $\alpha$ has only length-$2$ cycles. This remark allows us to prove the following theorem.
\begin{theorem}\label{thm:eigs-Q}
The limiting moments of the matrix $Q$ are given by
\begin{equation}
\lim_{N\to\infty}\frac1{N^{2p+2}}\mathbb{E}\left(\Tr(Q^p)\right)=\frac1{2^p}\sum_{\alpha\in \textrm{NC}_{1,2}(p)} c^{\#\alpha}.
\end{equation}
This completely determines the limiting law of eigenvalues of $Q$ as being a shifted semi-circular law of mean $c/2$ and variance $c/4$. In particular, the limiting support is $[-\sqrt{c}+c/2,\sqrt{c}+c/2]$ which is contained in the positive real line if and only if $c\ge4$.
\end{theorem}
\begin{proof}
We start with the relation
\begin{equation}
\mathbb{E}\left( \Tr(Q^p)\right)=\sum_{\alpha\in S_p}N^{2\#\alpha} c^{\#\alpha} T_{\alpha}.
\end{equation}
Thanks to Proposition \ref{prop:TN-red-moves} we can reduce to permutations $\alpha'$ that do not contain fixed points at the cost of a factor $\frac12$, that is
\begin{equation}
\mathbb{E}\left( \Tr(Q^p)\right)=\sum_{q=0}^p\frac1{2^q}\binom{p}{q}\sum_{\alpha'\in S_{p-q}}N^{2q +2\#\alpha'} c^{q+ \#\alpha'} T_{\alpha'}.
\end{equation}
According to Lemma \ref{lem:maximal-scaling-cycle-type}, we have
\begin{equation}
\sum_{q=0}^p\frac1{2^q}\binom{p}{q}\sum_{\alpha'\in S_{p-q}}N^{2q +2\#\alpha'} c^{q+ \#\alpha'} T_{\alpha'}=\sum_{q=0}^p\frac1{2^q}\binom{p}{q}\sum_{\substack{\alpha'\in S_{p-q}\\ \#_2\alpha'=\frac{p-q}{2}}}N^{2q +2\#\alpha'} c^{q+ \#\alpha'} T_{\alpha'} +O(N^{2p+1})
\end{equation}
as indeed all the cycles of $\alpha'$ must be of length $2$ since $\alpha'$ is not allowed to have fixed points. Making use of Lemma \ref{lem:crossing-length2} and the relation of equation \eqref{eq:TN-ev-vs-Graph-ev}, we can further reduce the sum
\begin{equation}
\sum_{\substack{\alpha'\in S_{p-q}\\ \#_2\alpha'=\frac{p-q}{2}}}N^{2q +2\#\alpha'} c^{q+ \#\alpha'} T_{\alpha'}=\sum_{\alpha'\in \textrm{NC}_2(p-q)} N^{2q +2\#\alpha'} c^{q+ \frac{p-q}{2}} T_{\alpha'} + O(N^{2p+1})
\end{equation}
because if $\alpha'$ only has cycles of length $2$ and $\alpha'\notin \textrm{NC}_2(p-q)$, then $\alpha'$ can be reduced to a non-trivial permutation $\tilde \alpha$ that does not contain cycles of the form $(i,i+1)$, by using Proposition \ref{prop:TN-red-moves} and recursively reducing cycles of this type. Then, $\tilde \alpha$ must have two cycles of the form $(a,i+1), (i,b)$ with $a<i<i+1<b$. This allows us to use Lemma \ref{lem:crossing-length2} to bound the contribution of $T_{\tilde \alpha}$. Finally, using the consequence of Proposition \ref{prop:TN-red-moves} for permutations in $\textrm{NC}_2(k)$, we have that
\begin{align}
\sum_{\alpha'\in \textrm{NC}_2(p-q)} N^{2q +2\#\alpha'} c^{q+ \frac{p-q}{2}} T_{\alpha'} &=\textrm{Cat}_{\frac{p-q}{2}}N^{p+q} c^{q+\frac{p-q}{2}}\Lambda(N)^{\frac{p-q}{2}}T_{\emptyset}\\
&=\frac1{2^{p-q}}c^{q+\frac{p-q}{2}}\textrm{Cat}_{\frac{p-q}{2}}N^{2p}(N^2-1),
\end{align}
provided $p-q$ is even; otherwise the sum is empty and thus zero. Here we used that every $\alpha'\in \textrm{NC}_2(p-q)$ has $\#\alpha'=\frac{p-q}{2}$. Putting these nested sums together, we have that
\begin{align}
\mathbb{E}(\Tr(Q^p))=\frac1{2^p}\sum_{q=0}^p\binom{p}{q}\mathbf{1}_{p-q=0 \textrm{ mod }2}c^{q+\frac{p-q}{2}}\textrm{Cat}_{\frac{p-q}{2}}N^{2p+2}+O(N^{2p+1}).
\end{align}
This sum can be rewritten as
\begin{equation}
\mathbb{E}(\Tr(Q^p))=\frac{N^{2p+2}}{2^p}\sum_{\alpha\in \textrm{NC}_{1,2}(p)}c^{\#\alpha}+O(N^{2p+1}).
\end{equation}
\end{proof}
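As a quick numerical sanity check of Theorem \ref{thm:eigs-Q} (purely illustrative, not part of the proof), one can compute the weighted count $\sum_{\alpha\in \textrm{NC}_{1,2}(p)}c^{\#\alpha}$ by a standard first-element recursion and compare $\frac1{2^p}$ times it with the moments of the semicircular law of mean $c/2$ and variance $c/4$, obtained by direct numerical integration. All function names below are ours.

```python
import math

def nc12_weighted_count(p, c):
    # g[n] = sum over non-crossing partitions of {1,...,n} into blocks of
    # size 1 or 2, each partition weighted by c^(number of blocks).
    # Recursion on element 1: either it is a singleton, or it is paired
    # with some j, splitting the rest into independent inside/outside parts.
    g = [1.0] * (p + 1)
    for n in range(1, p + 1):
        total = c * g[n - 1]
        for j in range(2, n + 1):
            total += c * g[j - 2] * g[n - j]
        g[n] = total
    return g[p]

def shifted_semicircle_moment(p, c, steps=100_000):
    # p-th moment of the semicircular law with mean c/2 and variance c/4,
    # supported on [c/2 - sqrt(c), c/2 + sqrt(c)] (midpoint rule).
    a, b = c / 2 - math.sqrt(c), c / 2 + math.sqrt(c)
    h = (b - a) / steps
    total = 0.0
    for k in range(steps):
        x = a + (k + 0.5) * h
        total += x**p * (2.0 / (math.pi * c)) * math.sqrt(max(c - (x - c / 2)**2, 0.0))
    return total * h

for p in range(1, 7):
    lhs = nc12_weighted_count(p, 5.0) / 2**p
    rhs = shifted_semicircle_moment(p, 5.0)
    assert abs(lhs - rhs) / lhs < 1e-3, (p, lhs, rhs)
```

For instance, at $p=3$ and $c=5$ both sides equal $25 = (c/2)^3 + 3(c/2)(c/4)$, matching the moments of the shifted semicircle.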
\noindent{\bf Localization of the overlap.} We now come to the second step of our three-step plan. We state and prove the localization result for the overlap below.
\begin{proposition}\label{prop:moments-overlap}
For all $p \geq 1$,
$$\mathbb{E} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle^p = N^{-4p} \sum_{\alpha \in S_p} M^{\#\alpha} N[2]^{\#\alpha}.$$
In particular, we have
$$\mathbb{E} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle = \frac c 2, \qquad\qquad \operatorname{Var} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle = \frac{M N[2]}{N^8} \sim \frac c 2 N^{-4},$$
and
$$\lim_{N \to \infty} \mathbb{E} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle^p = \left(\frac c 2\right)^p.$$
\end{proposition}
\begin{proof}
To show the first statement, use the Wick formula to write
$$\mathbb{E} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle^p = N^{-4p} \sum_{\alpha \in S_p} M^{\#\alpha} \Tr_{\alpha, \alpha}(P_s).$$
Indeed, we have, graphically,
\begin{center}
\includegraphics{symmetrized-Wishart-PT-overlap.pdf}
\end{center}
Now use the fact that $P_s$ is a projection, and thus
$$\Tr_{\alpha, \alpha}(P_s) = (\Tr P_s)^{\#\alpha} = N[2]^{\#\alpha}.$$
\end{proof}
\noindent {\bf Interlacing of eigenvalues and asymptotics of $W^{\Gamma}$.} We are now ready to prove the main result of the paper. As a reminder, let us start by stating a classical result, namely Cauchy's interlacing theorem.
\begin{theorem}[Cauchy's interlacing theorem]\label{thm:cauchy-interlacing}
Let $A$ be a Hermitian operator on an $n$-dimensional Hilbert space $\mathcal{H}$, and let $B$ be its compression to a subspace $\mathcal{N}$ of codimension $k$. Then, for $j=1,2,\ldots, n-k$,
\begin{equation}
\lambda_j(A)\ge \lambda_{j}(B)\ge \lambda_{j+k}(A).
\end{equation}
\end{theorem}
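For concreteness, the interlacing pattern is easy to observe numerically: compress a random real symmetric matrix to a coordinate subspace of codimension $1$ and compare the sorted spectra. The stdlib-only sketch below (purely illustrative; the textbook Jacobi eigensolver and all names are ours) is not part of the argument.

```python
import math, random

def symmetric_eigenvalues(a, sweeps=1000, tol=1e-10):
    # Classical Jacobi iteration: repeatedly annihilate the largest
    # off-diagonal entry with a Givens rotation A <- R A R^T.
    n = len(a)
    a = [row[:] for row in a]
    for _ in range(sweeps):
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    p, q, big = i, j, abs(a[i][j])
        if big < tol:
            break
        th = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(th), math.sin(th)
        for k in range(n):                       # rows: A <- R A
            apk, aqk = a[p][k], a[q][k]
            a[p][k], a[q][k] = c * apk - s * aqk, s * apk + c * aqk
        for k in range(n):                       # columns: A <- A R^T
            akp, akq = a[k][p], a[k][q]
            a[k][p], a[k][q] = c * akp - s * akq, s * akp + c * akq
    return sorted((a[i][i] for i in range(n)), reverse=True)

def interlaces(n=5, seed=0):
    rng = random.Random(seed)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            a[i][j] = a[j][i] = rng.uniform(-1.0, 1.0)
    b = [row[:n - 1] for row in a[:n - 1]]   # compression, codimension 1
    lam, mu = symmetric_eigenvalues(a), symmetric_eigenvalues(b)
    return all(lam[j] >= mu[j] - 1e-8 and mu[j] >= lam[j + 1] - 1e-8
               for j in range(n - 1))

assert interlaces(seed=0) and interlaces(seed=1) and interlaces(seed=2)
```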
We use this theorem to recover the behaviour of the limiting eigenvalues of $W^{\Gamma}$ from the limiting spectrum of $Q$ and the large eigenvalue of $W^{\Gamma}$. We now come to the proof of Theorem \ref{thm:main}.\\
\begin{proof}[Proof of the Theorem \ref{thm:main}]
Let us first show that the largest eigenvalue of the matrix $W^\Gamma$ behaves, asymptotically as $N \to \infty$, as $\frac c 2 N^3$. To this end, we shall compare it with two quantities: the overlap $\langle \Omega | W^\Gamma | \Omega \rangle$ and with the moments of the random matrix $W^\Gamma$.
On the one hand, we have (we denote $\lambda_{\max} = \lambda_1$ for clarity), for all $p \geq 2$,
$$\Tr\left( \frac{W^\Gamma}{N^3}\right)^{2p}\ge \lambda_{\max}^{2p}\left( \frac{W^\Gamma}{N^3}\right)\ge \left\langle \Omega\left\lvert \frac{W^\Gamma}{N^3} \right\rvert \Omega\right\rangle^{2p}.$$
Using Theorem \ref{thm:moments-WGamma-3-asympt} and Proposition \ref{prop:moments-overlap}, we conclude that
$$\lim_{N \to \infty} \lambda_{\max}^{2p}\left( \frac{W^\Gamma}{N^3}\right) = \left(\frac c 2 \right)^{2p}.$$
By Markov's inequality, we thus have
$$\lim_{N \to \infty}\lambda_{\max}^{2}\left( \frac{W^\Gamma}{N^3}\right) = \left(\frac c 2 \right)^{2}$$
in probability. But since $\Tr W^\Gamma = \Tr W \geq 0$, $\lambda_{\max}(W^\Gamma) \geq 0$ almost surely, so we also have, in probability,
$$\lim_{N \to \infty}\lambda_{\max}\left( \frac{W^\Gamma}{N^3}\right) = \frac c 2,$$
establishing the first claim.
In order to prove the second part of the statement, concerning the lower $N^2-1$ eigenvalues of $W^\Gamma$, we shall make use of Theorems \ref{thm:eigs-Q} and \ref{thm:cauchy-interlacing}. Indeed, if we denote by $\mu_1 \geq \cdots \geq \mu_{N^2-1}$ the (non-zero) eigenvalues of the matrix $N^{-2}Q$ and by $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{N^2-1} \geq \lambda_{N^2}$ those of $N^{-2}W^\Gamma$, by Cauchy's interlacing theorem, we have
$$\lambda_1 \geq \mu_1 \geq \lambda_2 \geq \mu_2 \geq \cdots \geq \lambda_{N^2-1} \geq \mu_{N^2-1} \geq \lambda_{N^2}.$$
On the other hand, we know by Theorem \ref{thm:eigs-Q} that the empirical distribution of the $\mu$ eigenvalues converges to the desired shifted semicircle distribution. Hence, in order to conclude, we need to control the smallest eigenvalue $\lambda_{N^2}$ and show that it does not contribute asymptotically to moments. We have
\begin{align*} \mathbb{E} \lambda_{\min}^4\left(\frac{W^\Gamma}{N^3}\right)& \leq \mathbb{E} \Tr\left(\frac{W^\Gamma}{N^3}\right)^4 - \mathbb{E} \lambda_{\max}^4\left(\frac{W^\Gamma}{N^3}\right) \\
& \leq \mathbb{E} \Tr\left(\frac{W^\Gamma}{N^3}\right)^4 - \mathbb{E} \left\langle \Omega \left\vert \frac{W^{\Gamma}}{N^3} \right\vert \Omega \right\rangle^4 \leq o(1) + \left(\frac c 2 \right)^4 - \left(\frac c 2 \right)^4 = o(1).
\end{align*}
Hence, $\mathbb{E}|\lambda_{\min}(W^\Gamma)| = o(N^{3})$, which implies that $\mathbb{E} |\lambda_{N^2}| = o(N)$, completing the proof.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
In this paper we have introduced the ensembles of bosonic and fermionic density matrices, which are random quantum states supported on the symmetric, resp.~anti-symmetric subspaces of a tensor product space. We have then studied the entanglement of typical states from these ensembles, focusing on the partial transposition criterion.
In the fermionic case, we have found that the partial transposition of such a random density matrix has a large negative eigenvalue, hence a typical fermionic bipartite density matrix is entangled. The bosonic case is more delicate, since the symmetry condition imposes a large positive eigenvalue for the partial transposition, so a finer analysis of the spectrum is required.
We use the moment method to completely describe the spectrum of the partial transposition of a symmetric random density matrix. We find that the bulk of the spectrum follows a shifted semicircular distribution at scale $N^2$, similarly to the non-symmetric case discussed in \cite{aubrun2012partial}. We also find a large outlier positive eigenvalue, on the scale $N^3$, which is a signature of the symmetry. Our method relies on establishing a connection between the moments of symmetric random matrices and the circuit counting graph polynomial. This allows us to characterize the asymptotic ratio between the system size and the environment size for which the PPT criterion is strong enough to certify entanglement.
The current paper is a first step in the study of the entanglement properties of the bosonic and fermionic ensembles of quantum states introduced here, several questions being left for future work. In the bipartite case (which is the main focus of this paper), the question of the convergence of the smallest eigenvalue of the partial transposition towards the left edge of the spectrum is left open. Probably, this would require the detailed analysis of large moments of the corresponding random matrix, similarly to the method used in \cite{aubrun2012partial}; such an analysis is hindered in the current situation by the presence of the outlier eigenvalue on a larger scale.
We have not addressed the multipartite case, $r \geq 3$, mainly because the tools used in this work (connection to graph polynomials) are not efficient enough in the general case. However, first investigations of this general case suggest that an interesting phenomenon is at play. Indeed, we expect the spectrum of the partial transpose to split at several scales, such that the empirical eigenvalues distribution has several ($r-1$) connected dense parts with each part appearing at a specific scale. We expect that the rigorous study of this general case will make heavy use of representation theory.
Another interesting direction for future study is the de Finetti theorem for the bosonic ensemble. This would require analyzing the separability property of the partial trace of a symmetric random density matrix in the regime where the parameter $r$ is large, and the Hilbert space dimension $N$ is fixed. This asymptotic regime usually requires completely different techniques in random matrix theory.
In conclusion, we have set up and studied basic entanglement properties of symmetric (and anti-symmetric) random density matrices. Several questions regarding these ensembles are left open, and we hope that our work will generate interest in these ensembles of random matrices / tensors, and their connection to combinatorics and graph polynomials.
\bigskip
\paragraph*{\bf Acknowledgments.}
S.~Dartois has been partially supported by the Australian Research Council grant DP170102028, the French ANR project ANR-18-CE47-0010 (\href{https://www.irif.fr/~magniez/qudata/}{QUDATA}). I.~Nechita has been supported by the ANR project \href{https://esquisses.math.cnrs.fr/}{ESQuisses}, grant number ANR-20-CE47-0014-01. Both S.~Dartois and I.~Nechita were supported on this project by the ANR project ``Investissements d'avenir'' ANR-11-LABX-0040. A. Tanasa has been partially supported by the ANR-20-CE48-0018 ``3DMaps" grant.
\bibliographystyle{alpha}
Racism at School – Some Personal Reflections on Causes and Effects with Levi Tebbutt (OR 2006-2013)
Reading School was delighted to invite Levi Tebbutt (OR 2006-2013) to speak to our Year 13 students about his experiences of racism and his reflection on the causes of prejudice. He spoke most powerfully and honestly about the impact that our words, actions and inactions can have.
After the tragic death of George Floyd and the consequent increase in public awareness about racism and racial tensions in different parts of the world, Levi published his reflections in a detailed LinkedIn post regarding the racial abuse he received during his time at primary school, at Reading School, at the University of Oxford and even in the early stages of his career at KPMG.
Levi, who left Reading School in 2013, grew up in a mixed family in Reading with a White British mother and Black British father. During his inspiring and powerful talk, Levi traced the roots of his very first racist encounters as a young child; through his years at school and early adulthood.
Levi candidly shared his experiences of racist bullying at the hands of his peers, and the ways he perpetuated, participated in or pretended not to hear racism around him. He recalled the numerous forms of racism he had to endure, including the use of utterly unacceptable racist stereotypes and pejorative slurs based on his appearance, culture or heritage.
He shared how these painful and intrusive encounters were a constant part of his experience, and explained how such persistent bullying and discrimination can make people feel inadequate and invisible. His encounters provided a very real challenge for today's students and made everyone who heard his talk more aware of the impact of their action or inaction.
During the very reflective Q&A session, Levi touched on the importance of ally-ship, reflecting on how easy it is to be a bystander in racist encounters, and he confronted how negative internalised feelings diminish the development of identity which includes positive views of race and heritage.
Levi's experiences of racism at Reading School may have occurred nearly 10 years ago but, unfortunately, his insights remain relevant today.
Mr AM Robson said:
"Levi is a real role model. He is willing to stand up and confront what is wrong, and his leadership is an outstanding example to our students. As you would expect, it pains me to hear of Levi's experience of racism at Reading School. Although it has been seven years since Levi left Reading to go to Oxford, no institution should imagine that prejudice and discrimination are entirely behind us. As an educational institution, we continue to strive to educate our students against prejudice, to cultivate positive change and to nurture empathy. Levi's leadership will help us, and our students, to do better and be better.
Reading School welcomes boys from a wide variety of ethnic and cultural backgrounds, with over fifty different languages and all major religions represented. We regard the diversity and richness that this brings as a real strength of the school. We are profoundly grateful to Levi for his invaluable contribution."
Rev'd Dr C Evans commented:
"I am personally hugely grateful to Levi. His honesty is liberating and his insight and integrity are a source of hope for us. None of us like to see evidence of our own historic faults or failings, but the growing understanding of the insidious and persistent nature of institutional racism within UK society means we owe it to all our students to talk honestly about these issues.
Racism in any form is abhorrent and we are committed to eradicating ignorance, inequality and discrimination.
Levi has done our students a huge favour by helping them become more conscious of micro-aggressions, of the impact of thoughtless words or actions, and of their own responsibility to challenge what is inappropriate and unfair."
Levi Tebbutt (OR 2006-2013) added:
"It was great to speak to the evidently bright cohort of Year 13s at Reading School. The school holds a lot of memories for me and was a second home of sorts from 2006-13. While this was painful to talk about, the school has so much potential to inspire and build leaders for the future. To this end, I hope all who watch this talk will benefit from hearing a candid tale of the pain inflicted by racism, and will hopefully feel encouraged to stand up in their own lives and make all the differences - however small - that they can.
I remember so many small things that teachers and staff did for me, without which I could never have hoped to get to where I am today. Those small things are things we can all go forwards and do for our peers, those who look up to us, and even those who we look up to.
I am far from perfect and often reflect upon regrets I have, but we can always do better. I hope to continue to do better, it's a journey which has no end."
Reading School would like to thank Levi for his candid and courageous talk. Open dialogue about race, identity and behaviour helps us to create a more equitable, self-aware and inclusive school community. We happily encourage all our alumni to share their own stories and to help us in our goal to 'build good men'. We relish the opportunity for spiritual, moral and cultural dialogue that shapes our students' consciousness and supports them to be the best they can be.
Mr Tebbutt is happy to talk to pupils further who may have questions. Please direct these to the Society Office at communications@reading-school.co.uk.
If you are an Old Redingensian and would like to share your professional expertise, academic specialism or research background with our students through a short talk, master class, tutorial or inspire lecture, please complete the form here: Remote Inspire Lectures
If you have any further queries please contact Piatrice or Jas at events@reading-school.co.uk
#ReadingSchoolFamily #BuildingGoodMen #valuesmatter #education #diversityinclusion #community #ViaRedingensis #BetterTogether
Race Report Stage 2 - Cambodia >
< Breaking News - GlobalLimits Cambodia 2017 -
Category: Cambodia
Race Report Stage 1 - Cambodia
GlobalLimits Cambodia - The Ancient Khmer Path 6th edition commenced today with runners from 16 countries. The start was located at a local Buddhist temple in Kampong Thom province. Local monks blessed our runners before the race started and, in return, we paid respect to the country by singing Cambodia's national anthem "Majestic Kingdom".
The race course extends northward towards the ancient capital of Angkor (Siem Reap) through the ancient Khmer Path. The runners will experience running through and spending overnights in ancient temples of the Khmer Kingdom, local villages, waterfalls and schools in the next 6 days.
Running through the heart of Cambodia's network of waterways adorned with beautiful lotus sanctuary on the side is an enchanting experience. As it is a Sunday, local families and children came to cheer for our runners.
In today's 30.6km stage, Xavi Marina from Spain, who was also a podium winner in Sri Lanka in 2016, led the race and finished in 2:22hrs. He was followed by first-time GlobalLimits runner Isabelle Sauve from Canada, who set an impressive pace from the start and won the female category in a new female course record time of 2:39hrs. Michael Traub from the USA followed closely and took third place in this stage at 2:43hrs. Joint third male were Benjamin Pralong from Switzerland and Tony Andrades from Spain. Second woman was third-time starter Teresa Lam from Hong Kong, and third was Belinda van der Riet from South Africa.
All the runners arrived safely at our first camp in a remote village where they will share the traditional wooden houses with local families and prepare for tomorrow's 36km stage.
\section{Preliminaries}
A piece-wise polynomial representation of the trajectory is adopted. Any segment of the trajectory can be written as an $N$-th order polynomial
\begin{equation}
p(t)=\mathbf{c}^{\mathrm{T}} \beta(t), t\in[0, T]
\end{equation}
where $\mathbf{c}\in\mathbb{R}^{(N+1)\times{3}}$ is the coefficient matrix, $T$ is the duration and
\begin{equation}
\beta(t)=\rBrac{1, t, t^2, \cdots, t^N}^{\mathrm{T}}
\end{equation}
is the polynomial basis. It is worth noting that $N$ is assumed to be an odd number hereafter, which makes the mapping between the coefficient matrix and the boundary conditions bijective.
Consider the derivatives of $p(t)$ up to order $(N-1)/2$,
\begin{equation}
\mathbf{d}(t)=\rBrac{p(t), \dot{p}(t), \cdots, p^{\rBrac{\frac{N-1}{2}}}(t)}^{\mathrm{T}},
\end{equation}
we have
\begin{equation}
\mathbf{d}(t)=\mathbf{B}(t)\mathbf{c}
\end{equation}
where
\begin{equation}
\mathbf{B}(t)=\rBrac{\beta(t), \dot{\beta}(t), \cdots, \beta^{\rBrac{\frac{N-1}{2}}}(t)}^{\mathrm{T}}.
\end{equation}
We denote $\mathbf{d}(0)$ and $\mathbf{d}(T)$ by $\mathbf{d}_{start}$ and $\mathbf{d}_{end}$, respectively. The boundary condition is described by the tuple $\rBrac{\mathbf{d}^{\mathrm{T}}_{start}, \mathbf{d}^{\mathrm{T}}_{end}}^{\mathrm{T}}$. The following mapping holds:
\begin{equation}
\label{eq:RepresentationMapping}
\rBrac{\mathbf{d}^{\mathrm{T}}_{start}, \mathbf{d}^{\mathrm{T}}_{end}}^{\mathrm{T}}=\mathbf{A}(T)\mathbf{c}
\end{equation}
where
\begin{equation}
\mathbf{A}(T)=\rBrac{\mathbf{B}^{\mathrm{T}}(0),\mathbf{B}^{\mathrm{T}}(T)}^{\mathrm{T}}
\end{equation}
is the mapping matrix. Since $N$ is an odd number, it is easy to see that $\mathbf{A}(T)$ is a non-singular square matrix. In other words, the mapping in Eq.~\ref{eq:RepresentationMapping} is bijective. Therefore, any segment of a trajectory can be equivalently expressed by the tuple $\rBrac{\mathbf{d}_{start}, \mathbf{d}_{end}, T}$ or the tuple $\rBrac{\mathbf{c}, T}$.
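As an illustration, the basis $\beta(t)$, the stacked derivative matrix $\mathbf{B}(t)$ and the mapping matrix $\mathbf{A}(T)$ can be assembled directly from their definitions. The sketch below (function names and the determinant check are ours, not from the text) builds them for an odd $N$ and verifies non-singularity numerically:

```python
import math

def beta_deriv(t, N, d):
    # d-th derivative of the basis (1, t, ..., t^N) evaluated at t
    return [0.0 if k < d else
            math.factorial(k) / math.factorial(k - d) * t**(k - d)
            for k in range(N + 1)]

def B(t, N):
    # rows beta(t), beta'(t), ..., beta^((N-1)/2)(t)
    return [beta_deriv(t, N, d) for d in range((N - 1) // 2 + 1)]

def A(T, N):
    # mapping matrix: (d_start^T, d_end^T)^T = A(T) c
    return B(0.0, N) + B(T, N)

def det(m):
    # determinant by Gaussian elimination, to check non-singularity
    m = [row[:] for row in m]
    n, sign, d = len(m), 1.0, 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        if piv != c:
            m[c], m[piv], sign = m[piv], m[c], -sign
        if m[c][c] == 0.0:
            return 0.0
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

M = A(2.0, 5)              # N = 5: A is a 6x6 square matrix
assert len(M) == 6 and all(len(row) == 6 for row in M)
assert abs(det(M)) > 1e-9  # non-singular, so the mapping is bijective
```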
Consequently, we consider an $M$-segment trajectory $\mathbf{P}$ parametrized by time allocation $\mathbf{T}=\rBrac{T_1, T_2, \cdots, T_M}^{\mathrm{T}}$ as well as boundary conditions $\mathbf{D}=\rBrac{d^{\mathrm{T}}_1, d^{\mathrm{T}}_2, \cdots, d^{\mathrm{T}}_{M+1}}^{\mathrm{T}}$ of all segments. The trajectory is defined by
\begin{equation}
\mathbf{P}(t):=\mathbf{d}_m^{\mathrm{T}}\mathbf{A}(T_m)^{-\mathrm{T}}\beta( t-\sum_{i=1}^{m-1}{T_i})
\end{equation}
where $t$ lies in the $m$-th segment and $\mathbf{d}_m=\rBrac{d^{\mathrm{T}}_{m}, d^{\mathrm{T}}_{m+1}}^{\mathrm{T}}$ is a boundary condition of the $m$-th segment. Normally, some entries in $\mathbf{D}$ are fixed while the others are to be optimized. We split $\mathbf{D}$ into two parts, the fixed part $\mathbf{D}_F$ which is viewed as constant, and the free part $\mathbf{D}_P$ which is to be optimized.
Then, the whole trajectory can be fully determined by
\begin{equation}
\mathbf{P}=\mathbf{\Phi}(\mathbf{D}_P, \mathbf{T}).
\end{equation}
\section{Optimization Objective}
The following time regularized quadratic objective function is used:
\begin{equation}
\label{eq:IntegralFormObjective}
J(\mathbf{P})=\int_{0}^{\sum_{m=1}^{M}{T_m}}\rBrac{{\rho + \sum_{i=D_{min}}^{D_{max}}{w_i\Norm{\mathbf{P}^{(i)}(t)}^2}}}\mathrm{d}t
\end{equation}
where $D_{min}$ and $D_{max}$ are the lowest and the highest order of derivative to be penalized, respectively, $w_i$ is the weight of the $i$-th order derivative and $\rho$ is the weight of the time regularization. When $D_{max} > (N-1)/2$, some derivatives on the boundary of each segment may not exist; hence we instead sum up the objectives over all segments, which have the form
\begin{equation}
\label{eq:KthSegmentObjective}
J_m(\mathbf{d}_m, T_m):=\rho T_m+\trace\cBrac{\mathbf{d}_m^{\mathrm{T}}\mathbf{A}(T_m)^{-\mathrm{T}} \mathbf{Q}(T_m)\mathbf{A}(T_m)^{-1}\mathbf{d}_m}
\end{equation}
for the $m$-th segment, where $\mathbf{Q}(T_m)$ is a symmetric matrix \cite{Bry2015AggressiveFO} consisting of high powers of $T_m$, and $\trace\cBrac{\cdot}$ is the trace operator. The overall objective $J(\mathbf{D}_P, \mathbf{T}):=J(\mathbf{\Phi}(\mathbf{D}_P, \mathbf{T}))$ is formulated as
\begin{equation}
\label{eq:ObjectiveStructure}
J(\mathbf{D}_P, \mathbf{T})=\rho\Norm{\mathbf{T}}_1+\trace\cBrac{\begin{pmatrix}\mathbf{D}_F\\\mathbf{D}_P\end{pmatrix}^{\mathrm{T}}\mathbf{C}^{\mathrm{T}}\mathbf{H}(\mathbf{T})\mathbf{C}\begin{pmatrix}\mathbf{D}_F\\\mathbf{D}_P\end{pmatrix}}
\end{equation}
\begin{equation}
\mathbf{H}(\mathbf{T})=\bigoplus_{m=1}^{M}{\mathbf{A}(T_m)^{-\mathrm{T}}\mathbf{Q}(T_m)\mathbf{A}(T_m)^{-1}}
\end{equation}
where $\mathbf{H}(\mathbf{T})$ is the direct sum of its $M$ diagonal blocks, and $\mathbf{C}$ is a permutation matrix.
In Eq.~\ref{eq:ObjectiveStructure}, $N, M, D_{min}, D_{max}, \rho, w_i, \mathbf{D}_F$ and $\mathbf{C}$ are all parameters that directly determine the structure of $J(\mathbf{D}_P, \mathbf{T})$. It is important to note that not all parameter settings for $J$ are legal. Instead of restricting those parameters, we make the following assumption on the objective function to ensure that the setting is meaningful.
\begin{assumption}
\label{asm:LegalityCondition}
For any finite $\alpha$, the corresponding $\alpha$-sublevel set of $J$
\begin{equation}
\mathit{L}_{\alpha}^{-}(J):=\cBrac{(\mathbf{D}_P, \mathbf{T})~\Big|~J(\mathbf{D}_P, \mathbf{T})\leq{\alpha}}
\end{equation}
is bounded and satisfies
\begin{equation}
\label{eq:StrictPositiveness}
\mathit{L}_{\alpha}^{-}(J)\subset\cBrac{(\mathbf{D}_P, \mathbf{T})~\Big|~\mathbf{T}\in\mathbb{R}_+^M}.
\end{equation}
\end{assumption}
Intuitively, Assumption \ref{asm:LegalityCondition} forbids the objective from taking a meaningful value when the decision variables are extremely large or any duration is extremely small. For example, consecutive repeating waypoints with identical boundary conditions fixed in $\mathbf{D}_F$ are illegal, because the optimal duration on the corresponding segment becomes $0$, which violates condition (\ref{eq:StrictPositiveness}). In other words, the segment should not exist if the objective is to be minimized. Another example is that $\rho\leq{0}$ is also illegal: a non-positive weight on the total duration means that the objective can be kept low when the duration of each segment is arbitrarily large. In such a case, the boundedness condition is violated.
\section{Unconstrained Optimization Algorithm}
\begin{algorithm}
\caption{Unconstrained Spatial-Temporal AM}
\label{alg:UnconstrainedSpatialTemporalAM}
\KwIn{$\mathbf{D}_P^0, K\in\mathbb{Z}_+, \delta>0$}
\KwOut{$\mathbf{D}_P^*, \mathbf{T}^*$}
\Begin
{
$\mathbf{T}^0 \leftarrow \argmin_{\mathbf{T}}{J(\mathbf{D}_P^0, \mathbf{T})}$\;
$J_l \leftarrow J(\mathbf{D}_P^0, \mathbf{T}^0), k \leftarrow 0$\;
\While{$k<K$}
{
$\mathbf{D}_P^{k+1} \leftarrow \argmin_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T}^{k})}$\;
$\mathbf{T}^{k+1} \leftarrow \argmin_{\mathbf{T}}{J(\mathbf{D}_P^{k+1}, \mathbf{T})}$\;
$J_{c} \leftarrow J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})$\;
\If{$\abs{J_l-J_c}<\delta$}{\textbf{break}}
$J_l \leftarrow J_c, k \leftarrow k+1$\;
}
$\mathbf{D}_P^* \leftarrow \mathbf{D}_P^{k}, \mathbf{T}^* \leftarrow \mathbf{T}^{k}$\;
\Return{$\mathbf{D}_P^*, \mathbf{T}^*$};
}
\end{algorithm}
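The alternating structure of Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} can be sketched generically, with the two phase solvers passed in as black boxes. In the toy example below (ours; a strictly convex bivariate quadratic stands in for $J$, and each phase minimizer is closed-form), the loop and the stopping rule mirror the pseudocode:

```python
def alternating_minimization(D0, solve_T, solve_D, J, K=100, delta=1e-10):
    # Mirrors Algorithm 1: alternately minimize over D_P and T until the
    # decrease of J falls below delta or K iterations are reached.
    T = solve_T(D0)
    D, J_last = D0, J(D0, T)
    for _ in range(K):
        D = solve_D(T)
        T = solve_T(D)
        J_cur = J(D, T)
        if abs(J_last - J_cur) < delta:
            break
        J_last = J_cur
    return D, T

# Toy objective J(d, t) = (d-1)^2 + (t-2)^2 + 0.1*d*t: each phase has a
# closed-form minimizer, and AM converges to the joint minimum.
J = lambda d, t: (d - 1.0)**2 + (t - 2.0)**2 + 0.1 * d * t
solve_D = lambda t: 1.0 - 0.05 * t   # argmin_d J(d, t)
solve_T = lambda d: 2.0 - 0.05 * d   # argmin_t J(d, t)
d_star, t_star = alternating_minimization(0.0, solve_T, solve_D, J)
assert abs(d_star - 0.9 / 0.9975) < 1e-6
assert abs(t_star - (2.0 - 0.045 / 0.9975)) < 1e-6
```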
To optimize Eq.~\ref{eq:ObjectiveStructure}, Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} is proposed.
Initially, $\mathbf{T}^{0}$ is solved for any provided $\mathbf{D}_P^0$. After that, the minimization of the objective function proceeds by a two-phase process, in which one of $\mathbf{D}_P$ and $\mathbf{T}$ is optimized while the other is fixed.
In the first phase, the sub-problem
\begin{equation}
\label{eq:OptimizeD}
\mathbf{D}_P^*(\mathbf{T})=\argmin_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T})}
\end{equation}
is solved for each $\mathbf{T}^k$. We employ the unconstrained QP formulation by Richter et al. \cite{Bry2015AggressiveFO}, which we briefly introduce here. The matrix $\mathbf{R}(\mathbf{T})=\mathbf{C}^{\mathrm{T}}\mathbf{H}(\mathbf{T})\mathbf{C}$ is partitioned as
\begin{equation}
\mathbf{R}(\mathbf{T})=\begin{pmatrix}\mathbf{R}_{FF}(\mathbf{T}) & \mathbf{R}_{FP}(\mathbf{T}) \\
\mathbf{R}_{PF}(\mathbf{T}) & \mathbf{R}_{PP}(\mathbf{T})\end{pmatrix}.
\end{equation}
Then the solution is obtained analytically as
\begin{equation}
\mathbf{D}_P^*(\mathbf{T})=-\mathbf{R}_{PP}(\mathbf{T})^{-1}\mathbf{R}_{FP}(\mathbf{T})^{\mathrm{T}}\mathbf{D}_F.
\end{equation}
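Setting the gradient of $J$ with respect to $\mathbf{D}_P$ to zero yields the linear system $\mathbf{R}_{PP}(\mathbf{T})\mathbf{D}_P = -\mathbf{R}_{FP}(\mathbf{T})^{\mathrm{T}}\mathbf{D}_F$, which the first phase solves in closed form. A stdlib-only sketch on a toy instance (the tiny Gauss-Jordan solver and all names are ours):

```python
def solve(mat, rhs):
    # Gauss-Jordan elimination with partial pivoting; rhs is a list of rows.
    n = len(mat)
    m = [mat[i][:] + rhs[i][:] for i in range(n)]
    w = len(m[0])
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                for k in range(col, w):
                    m[r][k] -= f * m[col][k]
    return [[m[i][k] / m[i][i] for k in range(n, w)] for i in range(n)]

def optimal_free_part(R_pp, R_fp, D_f):
    # Stationarity: R_pp D_p + R_fp^T D_f = 0  =>  D_p = -R_pp^{-1} R_fp^T D_f
    nP, dim = len(R_pp), len(D_f[0])
    rhs = [[-sum(R_fp[i][j] * D_f[i][k] for i in range(len(D_f)))
            for k in range(dim)] for j in range(nP)]
    return solve(R_pp, rhs)

# Toy instance with positive definite R_pp.
R_pp = [[2.0, 0.0], [0.0, 2.0]]
R_fp = [[1.0, 1.0]]          # shape (fixed = 1) x (free = 2)
D_f  = [[4.0]]
D_p  = optimal_free_part(R_pp, R_fp, D_f)
# The gradient 2 R_fp^T D_f + 2 R_pp D_p vanishes at the optimum.
for j in range(2):
    g = 2.0 * R_fp[0][j] * D_f[0][0] \
        + 2.0 * sum(R_pp[j][k] * D_p[k][0] for k in range(2))
    assert abs(g) < 1e-12
```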
In the second phase, the sub-problem
\begin{equation}
\label{eq:OptimizeT}
\mathbf{T}^*(\mathbf{D}_P)=\argmin_{\mathbf{T}}{J(\mathbf{D}_P, \mathbf{T})}
\end{equation}
is solved for each $\mathbf{D}_P^k$. In this phase, the sub-problem decouples across segments.
Due to our representation of the trajectory, once $\mathbf{D}_P$ is fixed, the boundary conditions $\mathbf{D}$ isolate each entry of $\mathbf{T}$ from the others. Therefore, each $T_m$ can be optimized individually to obtain all entries of $\mathbf{T}^*(\mathbf{D}_P)$. As for the $m$-th segment, its cost $J_m$ in (\ref{eq:KthSegmentObjective}) is in fact a rational function of $T_m$. We state the structure of $J_m$ and omit the straightforward derivation for brevity:
\begin{equation}
\label{eq:RationalCostOnSegment}
J_m(T)=\rho T+\frac{1}{T^{p_{n}}}\sum\limits_{i=0}^{p_{d}}{\alpha_i T^i}
\end{equation}
where $p_{n}=2D_{max}-1$ and $p_{d}=2(D_{max}-D_{min})+N-1$ are the orders of the denominator and the numerator, respectively. The coefficient $\alpha_i$ is determined by $\mathbf{d}_m$. It is clear that $J_m(T)$ is smooth on $T\in\rbrac{0, +\infty}$. Due to the positivity of $J_m(T)$, we have $J_m(T)\rightarrow+\infty$ as $T\rightarrow+\infty$ or $T\rightarrow0^+$. Therefore, a minimizer exists for
\begin{equation}
T_m^*(\mathbf{D}_P)=\argmin_{T\in\rbrac{0, +\infty}}{J_m(T)}.
\end{equation}
To find all candidates, we compute the derivative of (\ref{eq:RationalCostOnSegment}):
\begin{equation}
\frac{\mathrm{d}J_m(T)}{\mathrm{d}T}=\rho+\frac{1}{T^{1+p_{n}}}\sum\limits_{i=0}^{p_{d}}{(i-p_n)\alpha_i T^i}.
\end{equation}
The minimizer lies in the solution set of ${\mathrm{d}J_m(T)}/{\mathrm{d}T}=0$, which can be computed through any modern univariate polynomial real-root solver \cite{Sagraloff2013ComputingRR}. The second phase is completed by updating every entry $T_m^*(\mathbf{D}_P)$ of $\mathbf{T}^*(\mathbf{D}_P)$.
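For illustration, a much simpler approach than a certified real-root solver also works for this one-dimensional problem: bracket the sign changes of $\mathrm{d}J_m/\mathrm{d}T$ on a log-spaced grid and bisect. The sketch below is our own simplification (not the solver of \cite{Sagraloff2013ComputingRR}); it takes the coefficients $\alpha_i$ of Eq.~\eqref{eq:RationalCostOnSegment}:

```python
def optimal_duration(rho, alphas, D_max):
    # J_m(T) = rho*T + T^(-p_n) * sum_i alphas[i]*T^i with p_n = 2*D_max - 1.
    p_n = 2 * D_max - 1
    def J(T):
        return rho * T + sum(a * T**i for i, a in enumerate(alphas)) / T**p_n
    def dJ(T):
        return rho + sum((i - p_n) * a * T**i
                         for i, a in enumerate(alphas)) / T**(p_n + 1)
    # Bracket sign changes of dJ on a log-spaced grid, then bisect.
    grid = [10.0**(k / 8.0) for k in range(-40, 41)]
    candidates = []
    for lo, hi in zip(grid, grid[1:]):
        if dJ(lo) * dJ(hi) <= 0.0:
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if dJ(lo) * dJ(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            candidates.append(0.5 * (lo + hi))
    return min(candidates, key=J) if candidates else None

# With D_max = 1 and alphas = [4]: J(T) = T + 4/T, minimized at T = 2.
assert abs(optimal_duration(1.0, [4.0], 1) - 2.0) < 1e-6
```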
\section{Convergence Analysis}
We first explore some basic properties of $J(\mathbf{D}_P, \mathbf{T})$, which are very useful in the convergence analysis of Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM}. We have already shown that $J(\mathbf{D}_P, \mathbf{T})$ is a rational function of each entry of $\mathbf{T}$. As for the $\mathbf{D}_P$ part, the objective is in fact partially convex, as stated in the following lemma.
\begin{lemma}
\label{lm:PartialConvexity}
$J(\mathbf{D}_P, \mathbf{T})$ is convex in $\mathbf{D}_P$ for any $\mathbf{T}\in\mathbb{R}_+^M$, provided that Assumption~\ref{asm:LegalityCondition} holds.
\end{lemma}
\begin{proof}
Assumption~\ref{asm:LegalityCondition} implies that $\rho>{0}$, that $w_i\geq{0}$ holds for all $D_{min}\leq{i}\leq{D_{max}}$, and that at least one $w_i$ is nonzero. Otherwise, the boundedness of $\mathit{L}_{\alpha}^{-}(J)$ or the positivity of its time allocations would be violated. Thus, for any $\mathbf{T}\in\mathbb{R}_+^M$, the objective function is always positive, as can be seen from (\ref{eq:IntegralFormObjective}). The non-negativity of $J(\mathbf{D}_P, \mathbf{T})$ implies the positive semidefiniteness of the symmetric matrix $\mathbf{R}(\mathbf{T})$. Since $\mathbf{R}_{PP}(\mathbf{T})$ is a principal submatrix of $\mathbf{R}(\mathbf{T})$, it is also positive semidefinite. We compute the Hessian matrix of $J(\mathbf{D}_P, \mathbf{T})$ with respect to $\mathbf{D}_P$:
\begin{equation}
\nabla^2_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T})}=2\bigoplus_{k=1}^{3}{\mathbf{R}_{PP}(\mathbf{T})},
\end{equation}
which means $\nabla^2_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T})}$ is positive semidefinite. Therefore, $J(\mathbf{D}_P, \mathbf{T})$ is convex in $\mathbf{D}_P$.
\end{proof}
\begin{lemma}
\label{lm:LGradientConvexProperty}
For any convex function $f:\mathbb{R}^{I\times{J}}\mapsto\mathbb{R}$, if the following inequality holds for any $\mathbf{X}, \mathbf{Y}\in\mathbb{R}^{I\times{J}}$
\begin{equation}
\Norm{\nabla{f(\mathbf{X})}-\nabla{f(\mathbf{Y})}}_F\leq{L\Norm{\mathbf{X}-\mathbf{Y}}_F},
\end{equation}
in which $L$ is a constant and $\Norm{\cdot}_F$ is the Frobenius norm, then
\begin{equation}
f(\mathbf{X})-f(\mathbf{Y})\geq{\trace\cBrac{\nabla{f(\mathbf{Y})}^{\mathrm{T}}(\mathbf{X}-\mathbf{Y})}+\frac{1}{2L}\Norm{\nabla{f(\mathbf{X})}-\nabla{f(\mathbf{Y})}}_F^2}.
\end{equation}
\end{lemma}
\begin{proof}
See Theorem 2.1.5 in \cite{Nesterov2018LecturesOC}.
\end{proof}
\begin{lemma}
\label{lm:BoundedDecrease}
Provided that Assumption~\ref{asm:LegalityCondition} is satisfied, then the following inequality holds for any $\mathbf{T}\in\mathbb{R}_+^M$ and any $\mathbf{D}_P$:
\begin{equation}
J(\mathbf{D}_P, \mathbf{T})-J(\mathbf{D}_P^*, \mathbf{T})\geq\frac{1}{4\sigma_P(\mathbf{T})}\Norm{\nabla_{\mathbf{D}_P}J(\mathbf{D}_P, \mathbf{T})}_F^2
\end{equation}
where
\begin{equation}
\label{eq:MinimumCondition}
\mathbf{D}_P^*=\argmin_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T})},
\end{equation}
and $\sigma_P(\mathbf{T})$ is the largest singular value of $\mathbf{R}_{PP}(\mathbf{T})$.
\end{lemma}
\begin{proof}
The gradient of $J(\mathbf{D}_{P}, \mathbf{T})$ with respect to $\mathbf{D}_P$ can be calculated as
\begin{equation}
\nabla_{\mathbf{D}_P}J(\mathbf{D}_{P}, \mathbf{T})=2\mathbf{R}_{FP}^{\mathrm{T}}(\mathbf{T})\mathbf{D}_F+2\mathbf{R}_{PP}^{\mathrm{T}}(\mathbf{T})\mathbf{D}_P.
\end{equation}
The difference in gradient at $\mathbf{D}_P$ and $\mathbf{D}_P^*$ is
\begin{equation}
\label{eq:PartialGradientDifference}
\Norm{\nabla_{\mathbf{D}_P}J(\mathbf{D}_P, \mathbf{T})-\nabla_{\mathbf{D}_P}J(\mathbf{D}_P^*, \mathbf{T})}_F=2\Norm{\mathbf{R}_{PP}^{\mathrm{T}}(\mathbf{T})\rBrac{\mathbf{D}_P-\mathbf{D}_P^*}}_F.
\end{equation}
Assumption~\ref{asm:LegalityCondition} ensures that $\mathbf{R}_{PP}(\mathbf{T})$ is a nonzero matrix, so its largest singular value satisfies $\sigma_P(\mathbf{T}) > 0$ for any $\mathbf{T}\in\mathbb{R}_+^M$. By a basic property of the spectral norm, we have
\begin{equation}
\label{eq:SpectralNormIneqaulity}
\frac{\Norm{\mathbf{R}_{PP}^{\mathrm{T}}(\mathbf{T})\rBrac{\mathbf{D}_P-\mathbf{D}_P^*}}_F}{\Norm{\mathbf{D}_P-\mathbf{D}_P^*}_F}\leq\max_{\Norm{\mathbf{X}}_F=1}\Norm{\mathbf{R}_{PP}^{\mathrm{T}}(\mathbf{T})\mathbf{X}}_F=\sigma_P(\mathbf{T}).
\end{equation}
Combining (\ref{eq:PartialGradientDifference}) and (\ref{eq:SpectralNormIneqaulity}), we get
\begin{equation}
\Norm{\nabla_{\mathbf{D}_P}J(\mathbf{D}_P, \mathbf{T})-\nabla_{\mathbf{D}_P}J(\mathbf{D}_P^*, \mathbf{T})}_F\leq{2\sigma_P(\mathbf{T})}\Norm{\mathbf{D}_P-\mathbf{D}_P^*}_F.
\end{equation}
According to Lemma \ref{lm:PartialConvexity} and Lemma \ref{lm:LGradientConvexProperty}, if we substitute $f(\cdot)$ by $J(\cdot, \mathbf{T})$, together with the fact that (\ref{eq:MinimumCondition}) implies $\nabla_{\mathbf{D}_P}J(\mathbf{D}_P^*, \mathbf{T})=\mathbf{0}$, the result follows.
\end{proof}
\begin{theorem}
\label{thm:UnconstrainedGlobalConvergence}
Consider the process in Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} started from any $\mathbf{D}_P^0$. Provided that Assumption~\ref{asm:LegalityCondition} is satisfied, the following inequality holds at the $K$-th iteration:
\[
\min_{0\leq{k}\leq{K}}{\norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}}_F^2\leq{M_c \frac{J(\mathbf{D}_P^0, \mathbf{T}^0)-J_c}{K}}
\]
where $M_c$ and $J_c$ are both constants.
\end{theorem}
\begin{proof}
It is clear that the objective function is non-increasing in any iteration, i.e., for any $k\geq{0}$, we have
\begin{equation}
J(\mathbf{D}_P^k, \mathbf{T}^k)\geq{J(\mathbf{D}_P^{k+1}, \mathbf{T}^k)}\geq{J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})}.
\end{equation}
Moreover, the objective function is non-negative, which means $J(\mathbf{D}_P^k, \mathbf{T}^k)\geq{0}$ for any $k\geq{0}$. Therefore,
\begin{equation}
\lim_{k\rightarrow+\infty}{J(\mathbf{D}_P^k, \mathbf{T}^k)}=J_c.
\end{equation}
Since $\mathbf{D}_P^{k+1}=\argmin_{\mathbf{D}_P}{J(\mathbf{D}_P, \mathbf{T}^k)}$, the following condition holds by Lemma \ref{lm:BoundedDecrease}:
\begin{equation}
\frac{\Norm{\nabla_{\mathbf{D}_P}{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}{4\sigma_{P}(\mathbf{T}^k)}\leq{J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^k)}\leq{J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})}.
\end{equation}
Notice that $\nabla_{\mathbf{T}}J(\mathbf{D}_P^k, \mathbf{T}^k)=\mathbf{0}$ in each iteration, then
\begin{equation}
\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F=\Norm{\nabla_{\mathbf{D}_P}J(\mathbf{D}_P^k, \mathbf{T}^k)}_F.
\end{equation}
Therefore,
\begin{equation}
\frac{\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}{4\sigma_P(\mathbf{T}^k)}\leq{J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})}.
\end{equation}
We simply let $\alpha=J(\mathbf{D}_P^0, \mathbf{T}^0)$, so that $(\mathbf{D}_P^k, \mathbf{T}^k)\in\mathit{L}_{\alpha}^{-}(J)$ for all $k\geq{0}$. According to Assumption~\ref{asm:LegalityCondition}, $\mathit{L}_{\alpha}^{-}(J)$ is bounded and satisfies condition (\ref{eq:StrictPositiveness}). Then there exist positive constants $m_T$ and $M_T$ such that $\mathbf{T}^k\in[m_T,M_T]^{M}$ holds for all $k\geq{0}$. Consequently, $4\sigma_{P}(\mathbf{T}^k)$ is upper bounded by a positive constant $M_c$. We have
\begin{equation}
\label{eq:LowerBoundedDecrease}
\frac{\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}{M_c}\leq\frac{\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}{4\sigma_P(\mathbf{T}^k)}\leq{J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})}.
\end{equation}
Summing over $k=0,\ldots,K$:
\begin{equation}
\frac{1}{M_c}\sum_{k=0}^{K}{\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}\leq{J(\mathbf{D}_P^0, \mathbf{T}^0)-J(\mathbf{D}_P^{K+1}, \mathbf{T}^{K+1})}\leq{J(\mathbf{D}_P^0, \mathbf{T}^0)-J_c}.
\end{equation}
Since the right hand side is bounded, we have
\begin{equation}
\lim_{K\rightarrow+\infty}{\nabla{J(\mathbf{D}_P^K, \mathbf{T}^K)}}=\mathbf{0}.
\end{equation}
Bounding the left-hand side from below by the minimum over $0\leq{k}\leq{K}$ gives
\begin{equation}
\frac{\min\limits_{0\leq{k}\leq{K}}{\norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}}_F^2}{M_c/K}\leq{J(\mathbf{D}_P^0, \mathbf{T}^0)-J_c}.
\end{equation}
Rearranging gives the result.
\end{proof}
Theorem~\ref{thm:UnconstrainedGlobalConvergence} shows that, under no assumption on convexity, Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} shares the same global convergence rate $O(1/\sqrt{K})$ as that of gradient descent with the best step-size chosen in each iteration~\cite{Nesterov2018LecturesOC}. However, the best step-size is practically unavailable.
As a contrast, Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} does not involve any step-size choosing in each iteration.
Both sub-problems (Eq.~\ref{eq:OptimizeD} and Eq.~\ref{eq:OptimizeT}) can be solved exactly and efficiently due to their algebraic convenience. Therefore, Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} is faster than gradient-based methods in practice.
Although only convergence to stationary point is guaranteed, strict saddle points are theoretically and numerically unstable \cite{Lee2019FirstorderMA} for Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM}, which is indeed a first-order method. Moreover, when the stationary point is a strict local minimum, we show that the convergence rate is faster than the general case in Theorem~\ref{thm:UnconstrainedGlobalConvergence}.
\begin{lemma}
\label{lm:QuadraticRecurrence}
Consider a positive sequence $\cBrac{\alpha_k}_{k\geq{0}}$ satisfying
\begin{equation}
\alpha_k-\alpha_{k+1}\geq{\gamma\alpha_k^2},~k=0,1,\cdots,
\end{equation}
where $\gamma>0$ is a constant. Then for all $k\geq{0}$,
\begin{equation}
\alpha_k\leq{\frac{1}{k\gamma+\alpha_0^{-1}}}.
\end{equation}
\end{lemma}
\begin{proof}
Indeed,
\begin{equation}
\frac{1}{\alpha_{k+1}}-\frac{1}{\alpha_k}\geq\frac{1}{\alpha_k-\gamma\alpha_k^2}-\frac{1}{\alpha_k}=\frac{\gamma}{1-\gamma\alpha_k}\geq\gamma,
\end{equation}
hence
\begin{equation}
\frac{1}{\alpha_{k}}-\frac{1}{\alpha_0}\geq{k\gamma}.
\end{equation}
Rearranging gives the result.
\end{proof}
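The bound can also be sanity-checked numerically (this is only an illustration, with arbitrary constants satisfying $\gamma\alpha_0<1$), iterating the recurrence with equality, which is the extreme case of the lemma's hypothesis:

```python
# Numerical check of alpha_k <= 1/(k*gamma + 1/alpha_0) for the sequence
# alpha_{k+1} = alpha_k - gamma * alpha_k**2 (equality in the recurrence).
gamma, alpha0, K = 0.1, 0.5, 200  # arbitrary constants with gamma*alpha0 < 1

alpha = [alpha0]
for _ in range(K):
    alpha.append(alpha[-1] - gamma * alpha[-1] ** 2)

bounds = [1.0 / (k * gamma + 1.0 / alpha0) for k in range(K + 1)]
violations = [k for k in range(K + 1) if alpha[k] > bounds[k] + 1e-12]
```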
\begin{theorem}
\label{thm:UnconstrainedLocalConvergence}
Provided that Assumption~\ref{asm:LegalityCondition} is satisfied, let $(\widehat{\mathbf{D}}_P, \widehat{\mathbf{T}})$ denote any strict local minimum of $J(\mathbf{D}_P, \mathbf{T})$ to which Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} converges, then there exist $K_c\in\mathbb{Z}_+$ and $\gamma\in\mathbb{R}_+$, such that
\begin{equation}
J(\mathbf{D}_P^K, \mathbf{T}^K)-J^*\leq{\frac{1}{\gamma(K-K_c)+(J(\mathbf{D}_P^{K_c}, \mathbf{T}^{K_c})-J^*)^{-1}}}
\end{equation}
for all $K\geq{K_c}$, where $J^*=J(\widehat{\mathbf{D}}_P, \widehat{\mathbf{T}})$.
\end{theorem}
\begin{proof}
Define the neighborhood as
\begin{equation}
\mathcal{B}(r)=\cBrac{(\mathbf{D}_P, \mathbf{T})~\Big|~\norm{\mathbf{D}_P-\widehat{\mathbf{D}}_P}_F^2+\norm{\mathbf{T}-\widehat{\mathbf{T}}}^2\leq{r^2}}.
\end{equation}
A strict local minimum $(\widehat{\mathbf{D}}_P, \widehat{\mathbf{T}})$ satisfies $\nabla^2{J(\widehat{\mathbf{D}}_P, \widehat{\mathbf{T}})}\succ\mathbf{0}$, then there exists $R_c\in\mathbb{R}_+$ such that $J$ is locally convex in the domain $\mathcal{B}(R_c)$. Moreover, there exists a positive integer $K_c$ such that $(\mathbf{D}_P^k, \mathbf{T}^k)\in\mathcal{B}(R_c)$ holds for all $k\geq{K_c}$, so we only consider $k\geq{K_c}$ hereafter. Due to the local convexity, we have
\begin{equation}
J(\mathbf{D}_P^k, \mathbf{T}^k)-J^*\leq\trace\cBrac{\nabla_{\mathbf{D}_P}J(\mathbf{D}_P^k, \mathbf{T}^k)^{\mathrm{T}}{(\mathbf{D}_P^k-\widehat{\mathbf{D}}_P)}}+\nabla_{\mathbf{T}}J(\mathbf{D}_P^k, \mathbf{T}^k)^{\mathrm{T}}(\mathbf{T}^k-\widehat{\mathbf{T}}).
\end{equation}
By applying the Cauchy-Schwarz inequality to the right-hand side, we have
\begin{equation}
(J(\mathbf{D}_P^k, \mathbf{T}^k)-J^*)^2\leq\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2\rBrac{\Norm{\mathbf{D}_P^k-\widehat{\mathbf{D}}_P}_F^2+\Norm{\mathbf{T}^k-\widehat{\mathbf{T}}}^2}.
\end{equation}
Notice that the distance between $(\mathbf{D}_P^k, \mathbf{T}^k)$ and the local minimum is upper-bounded by $R_c$, thus
\begin{equation}
\frac{(J(\mathbf{D}_P^k, \mathbf{T}^k)-J^*)^2}{R_c^2}\leq\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2.
\end{equation}
According to the inequality (\ref{eq:LowerBoundedDecrease}) deduced in the proof of Theorem \ref{thm:UnconstrainedGlobalConvergence}, we have
\begin{equation}
J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})\geq\frac{\Norm{\nabla{J(\mathbf{D}_P^k, \mathbf{T}^k)}}_F^2}{M_c},
\end{equation}
where $M_c$ is the upper bound of $4\sigma_P(\mathbf{T})$. Combining these two conditions, we get
\begin{equation}
J(\mathbf{D}_P^k, \mathbf{T}^k)-J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k+1})\geq\frac{(J(\mathbf{D}_P^k, \mathbf{T}^k)-J^*)^2}{M_cR_c^2}.
\end{equation}
We apply Lemma \ref{lm:QuadraticRecurrence} by defining
\begin{equation}
\alpha_k:=J(\mathbf{D}_P^{k+K_c}, \mathbf{T}^{k+K_c})-J^*,~~~\gamma:=1/{M_cR_c^2},
\end{equation}
then the result follows.
\end{proof}
When Algorithm~\ref{alg:UnconstrainedSpatialTemporalAM} converges to a strict local minimum, the above theorem shows that the local convergence rate is $O(1/K)$. Note that it is possible to accelerate our method to attain the optimal $O(1/K^2)$ rate of first-order methods, or to use higher-order methods to achieve an even faster rate.
\section{Introduction}
\label{sec:intro}
Currently, techniques available for the measurement of cosmic shear (the galaxy shape distortion due to a gravitational potential on large scales) are based on the best fitting of galaxy images, as they have been developed for optical surveys.
However, image reconstruction from radio observations via PSF deconvolution, using algorithms such as CLEAN, is a highly non-linear process and has the potential to produce spurious cosmic shear. Moreover, the noise in radio images is highly correlated. Therefore, accurate methods for radio weak lensing should work in the visibility domain, where the noise is Gaussian and the sampling function can be modelled exactly.
At present the only investigations about shear measurement with radio sources use shapelets, where galaxy shape models are decomposed through an orthonormal basis of functions corresponding to perturbations around a circular Gaussian (for an overview see~\cite{Patel15}).
In this paper we propose to use a more realistic galaxy model and to adapt a Bayesian model fitting approach called \emph{lensfit}~\cite{Miller07}, developed for optical surveys, recently improved and used for the CFHTLensS~\cite{Miller13} and KiDS surveys.
As a proof of concept, we measure galaxy shapes and evaluate the shear bias of this new method, which we call \emph{RadioLensfit}~\cite{Rivi}, by simulating SKA1-MID visibilities of individual galaxies at the phase centre.
\section{RadioLensfit Overview}
RadioLensfit is a chi-square fitting approach to measure radio galaxy shapes in the visibility domain. The galaxy model is defined only by the disc component as the analytical Fourier transform of a Sersic exponential brightness profile:
$I(r) = I_0 \exp(-r/\alpha)$,
linearly deformed according to ellipticity parameters \textbf{e} = $(e_1, e_2)$.
$I_0$ and $\alpha$ are respectively the central brightness and scalelength of the galaxy.
The likelihood is marginalised over uninteresting parameters such as flux, scalelength, and position. This way we obtain a likelihood function of the ellipticity alone, which should be well described by a Gaussian.
The galaxy shape measurement is given by the mean likelihood and the 1D standard deviation (defined as the square root of the covariance matrix determinant).
The cosmic shear is estimated as a weighted average of the galaxy ellipticities. The weights take into account the variance of the ellipticity distribution of the galaxy population and the 1D variance of each shape measurement.
For a detailed description of the method see~\cite{Rivi}.
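As a sketch of such a weighted estimator (the precise weighting is defined in~\cite{Rivi}; the inverse-variance combination of the population variance and the per-measurement variance below, and all numbers, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one ellipticity component per galaxy and its 1D measurement variance.
e_meas = rng.normal(0.02, 0.25, size=1000)   # observed ellipticities
var_i = rng.uniform(0.001, 0.01, size=1000)  # per-measurement 1D variances
sigma_pop2 = 0.25 ** 2                       # assumed population ellipticity variance

# Inverse-variance weights combining population scatter and measurement noise.
w = 1.0 / (sigma_pop2 + var_i)
g_hat = np.sum(w * e_meas) / np.sum(w)
```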
\section{Galaxy Distributions}
\label{priors}
The galaxy distributions used in our simulations are derived from the fitting to the 20~cm continuum radio observations taken with the VLA at a center frequency of 1400 MHz, covering a region of the Spitzer Wide-area InfraRed Extragalactic (SWIRE\footnote{http://heasarc.gsfc.nasa.gov/W3Browse/radio-catalog/vlasdf20cm.html}).
For a detailed analysis of this online catalog see~\cite{OM2008}.
We selected only faint sources with flux $S \le 100\mu$Jy (more than 1500 sources).
The resulting flux distribution is fitted by a power law:
$p(S) \propto S^{-1.34}$, extrapolated below 30~$\mu$Jy because of incompleteness due to several detection effects (see \cite{OM2008}).
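Such a power-law flux distribution can be drawn by inverse-transform sampling; in the sketch below, the 10~$\mu$Jy lower cut-off is an assumed value (the fit itself is only an extrapolation below 30~$\mu$Jy):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_flux(n, s_min=10.0, s_max=100.0, gamma=-1.34):
    """Inverse-transform sampling of p(S) ~ S**gamma on [s_min, s_max] (in uJy)."""
    u = rng.uniform(size=n)
    a = gamma + 1.0
    # CDF: F(S) = (S**a - s_min**a) / (s_max**a - s_min**a); invert for S.
    return (s_min ** a + u * (s_max ** a - s_min ** a)) ** (1.0 / a)

fluxes = sample_flux(5000)
```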
The Sersic scalelength $\alpha$ of each galaxy is obtained from the relation FWHM =~$2\alpha\ln(2)$.
The left panel of Fig.~\ref{fig:scale} shows the scalelength distribution independently of the source flux, where data are well fitted by a lognormal distribution with mean 0.266 arcsec and standard deviation 0.496 arcsec. Note that this is consistent with the size distribution obtained from VLA+MERLIN observations of the HDF-North~\cite{Muxlow05}.
However, as claimed in \cite{Bondi08}, for a star-forming (SF) galaxy population there should be a linear relation between the logarithm of the median major-axis scalelength $\alpha_{med}$ and the logarithm of the flux density.
Since no source classification is provided in this catalog, we can estimate this relation only from sources with flux below 30~$\mu$Jy, where such a relation appears (see right panel of Fig.~\ref{fig:scale}), as the population in this flux range seems to be dominated by SF galaxies. At higher fluxes there is a fair mix of SF galaxies and (low-luminosity) AGN, as observed in the VLA-COSMOS survey~\cite{Sm2008}, and therefore the relation is no longer evident. For this reason we extrapolate the linear relation from a least-squares fit to the median values in the range $15\mu$Jy $\le S \le 30\mu$Jy, obtaining:
$\ln{[\alpha_{med}/\textrm{arcsec}]} = -0.93 +0.33\ln{[S/\mu \textrm{Jy}]},$
which is consistent with results in the literature.
As the scalelength distribution depends on the flux, we use a lognormal distribution
whose mean is $\mu=\ln(\alpha_{med})$ and whose width is chosen in the middle of a range that appears to give a good representation of the distribution: $\sigma=0.3136$~arcsec.
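A sketch of drawing scalelengths from this flux-dependent lognormal (interpreting $\sigma=0.3136$ as the lognormal shape parameter is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scalelength(flux_uJy, n=1):
    """Scalelengths [arcsec] at flux S [uJy]: lognormal with
    mu = ln(alpha_med) = -0.93 + 0.33*ln(S) and sigma = 0.3136."""
    mu = -0.93 + 0.33 * np.log(flux_uJy)
    return rng.lognormal(mean=mu, sigma=0.3136, size=n)

samples = sample_scalelength(75.0, n=20000)
```

The median of the drawn sample should recover $\alpha_{med}=e^{\mu}$, which is a quick consistency check on the sampler.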
As we have no information about ellipticity distributions in the radio regime, the modulus of the intrinsic ellipticity is generated according to a distribution estimated from SDSS disk-dominated galaxies~\cite{Miller13}.
\begin{figure}
\includegraphics[scale=0.35]{scalelengthDist-full.png}
\includegraphics[scale=0.35]{flux-scale.png}
\caption{Distributions for radio sources with flux below 100~$\mu$Jy in the SWIRE field observed by VLA. \emph{Left panel}: Observed scalelength histogram fitted by a lognormal distribution (solid curve). \emph{Right Panel}: Median scalelength versus median flux.}
\label{fig:scale}
\end{figure}
\section{SKA Simulations}
\label{sec:ska}
By using the SKA1-MID baseline configuration, we simulated an 8-hour track radio observation at declination $\delta = 30^\circ$ of a population of individual SF galaxies. Visibilities are computed for the first 30\% of Band~2, i.e. 950 - 1190 MHz, and sampled every 60~s for 12 channels.
Galaxy fluxes range in the interval 50~$\mu$Jy--100~$\mu$Jy, corresponding to SNR $\ge 40$.
We apply a uniform gridding scheme to reduce the data volume and computational time. By testing the shape fitting, a small grid of size $800 \times 800$ appears to be the best choice for this case (see left panel of Fig.~\ref{fig:shear}).
\begin{figure}
\includegraphics[scale=0.35]{gridding.png}
\includegraphics[scale=0.35]{g2-50-100muJy.png}
\caption{\emph{Left panel}: Plot of the shape measurements best fitting slope vs grid size for a population of 1000 galaxies. \emph{Right panel}: Plot of the 2nd component of shear measurements and corresponding best-fit line.}
\label{fig:shear}
\end{figure}
To estimate the shear bias, we applied to each galaxy's intrinsic ellipticity a constant input shear with amplitude $g=0$ and $g = 0.04$ (with 8 different orientations).
We compare input \textbf{g} and measured \textbf{g}$^m$ shear ellipticity values:
$g_i^m - g_i = m_i g_i + c_i$ ($i=1,2$),
where $m_i$ and $c_i$ are the multiplicative and additive biases, respectively. They are
measured with an accuracy of 1\% by simulating a population of $10^4$ galaxies for each measurement.
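The bias parameters can be recovered with an ordinary least-squares fit of $g^m_i-g_i$ against $g_i$; the sketch below applies this to synthetic measurements with assumed true biases (all numbers are illustrative, not taken from the simulations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic shear measurements with assumed true biases and Gaussian noise.
m_true, c_true, noise = 0.01, 0.0002, 0.001
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
g_in = np.repeat(0.04 * np.cos(angles), 100)  # 8 shear orientations, one component
g_meas = (1.0 + m_true) * g_in + c_true + rng.normal(0.0, noise, g_in.size)

# Fit g_meas - g_in = m * g_in + c by linear least squares.
m_fit, c_fit = np.polyfit(g_in, g_meas - g_in, 1)
```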
Shear bias estimates show different values for the two ellipticity components probably due to the asymmetry of the $uv$ coverage:
\begin{align}
& m_1 = -0.00079 \pm 0.00750, \quad c_1 = 0.00014 \pm 0.00020; \nonumber \\
& m_2 = 0.0143 \pm 0.0073, \ \ \ \quad \quad c_2 = 0.00023 \pm 0.00019. \nonumber
\end{align}
\section{Conclusions}
RadioLensfit is a method for shear measurement from radio weak lensing surveys working in the visibility domain.
We tested this method by simulating the visibilities of individual galaxies located at the phase centre, using the SKA1-MID baseline configuration. RadioLensfit seems very promising for SKA continuum surveys because these first tests yield multiplicative and additive biases comparable with the requirements for a 5000 $\deg^2$ SKA1 survey ($m = 0.0067$, $c = 0.00082$)~\cite{Brown15}. In particular, the additive bias is on average 4 times smaller.
Further work will test this method in simulations with lower SNR and with galaxies located randomly in the field of view, taking into account frequency and time smearing effects.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 1,741
|
Airtel Payments Bank contemplates IPO
Hello reader,
It seems that October is not WhatsApp's month.
The Meta-owned social messaging giant suffered one of its longest outages, of around two hours, yesterday, as many of us rushed to other social messaging services. In the same month last year, WhatsApp, along with Facebook and Instagram, suffered a six-hour outage after a major DNS failure.
Not good news for Google either.
The Competition Commission of India (CCI) imposed another penalty on the tech giant, worth Rs 936 crore, for abusing its dominant position with regard to its Play Store policies. This comes just days after it was fined Rs 1,337 crore for anti-competitive practices.
The gloom continued in the stock markets, with beauty giant Nykaa, one of the last public companies in India's startup ecosystem whose stock had held strong, seeing its shares drop below the IPO issue price before the end of the lock-in period for anchor investors.
ICYMI: Sri Lanka could soon be home to South Asia's first Disneyland. The $18 billion investment will be a boon to the country's economy.
In today's newsletter, we will talk about
Turning brass into designer jewelry
Here is your trivia for today: Which city in the world is located on two continents?
Airtel Payments Bank (APB), the fintech arm of telecom giant Bharti Airtel, will begin the process of bringing in outside investors, CEO Anubrata Biswas said, as part of a larger plan to eventually spin off the subsidiary as a separate entity and take it to the public markets.
During an interview with YourStory, the CEO said the board has been clear about its plans to spin off the subsidiary as a separate entity in the future.
Currently, Bharti Airtel Limited and Bharti Enterprises Limited have a 70:30 partnership in APB.
The fintech subsidiary was the first of 11 players (only six are operational now) to receive a payments bank license from the Reserve Bank of India, in 2016.
Turning profitable in the September quarter of FY22, the payment bank currently has an annualized revenue of Rs 1,000 crore and is looking to post double-digit momentum in the fiscal year.
Picture this. Handmade earrings made from brass from a car, necklaces made from scrap construction equipment, and rings carved from ancient musical instruments.
This is what Gurugram-based brand Aulerth is doing: creating a range of traditional and contemporary jewellery made from recycled brass to reduce the carbon footprint of jewellery.
Fine jewelry in store:
Founder Vivek Ramabhadran believes that luxury is not about embracing excess, but about being considerate and caring for the environment.
He sources brass from junkyards, automobiles, construction equipment, musical instruments, etc., and gives it a "second life" by treating and reusing it in designer jewelry.
He has partnered with Indian designers: Suneet Varma, JJ Valaya, Shivan and Naresh, and Tribe Amrapali.
Kausalya Shankar, a survivor of the horrific 2016 honor killing case in Tamil Nadu, continues to speak out against caste-based violence.
Kausalya recently left her government job to become a businesswoman. She opened the Zha beauty salon in Velallur, Coimbatore, which was inaugurated by actor Parvathy Thiruvothu a month ago.
Standing on your feet:
After completing a "beautician course," Kausalya took out a bank loan, pledged her jewelry, and borrowed money from a friend to start Zha.
Her fight against honor killings is ongoing. She says she is encouraged that there is more awareness and that conversations are taking place, but that there is still some way to go.
Kausalya uses all available forums and arenas to talk about honor killings and believes that these cases need awareness at all levels, including among the police.
Slide back: Apple Inc is cutting production of the iPhone 14 Plus and increasing production of the iPhone 14 Pro due to low demand, market research firm TrendForce said. This could lead to a 14% year-over-year drop in production to 52 million units.
Slippery slope: Shares of US-listed Chinese companies fell sharply after President Xi Jinping was elected to serve an unprecedented third term. The Invesco Golden Dragon China ETF plunged 14.5% to hit its lowest level since 2009.
Dark days ahead: International Energy Agency (IEA) Director Faith Birol said the world was in the midst of "the first truly global energy crisis" amid the Ukraine conflict and a possible uptick in fuel demand from China.
What you should keep in mind
Smartphone and accessory maker Nothing will launch true wireless earphones called the Nothing Ear (Stick) on Myntra.
Satcom Industry Association India will host a three-day Indian Space Congress in Delhi.
Which city in the world is located on two continents?
Answer: Istanbul. It is the largest city in Turkey and straddles the Bosphorus Strait, both in Europe and Asia.
We would love to hear from you! To let us know what you liked and didn't like about our newsletter, please send an email editorial@tuhistoria.com.
If you haven't received this newsletter in your inbox yet, sign up here. For past issues of YourStory Buzz, you can check out our Daily capsule page here.
Categories Iphone 14 Tags Airtel payment bank, Aulerth, daily capsule, initial public offering, newcomer, WhatsApp Post navigation
iPhone 15 rumour: One of the worst features of the iPhone 14 is predicted to continue
Which Galaxy S22 should you buy? Regular, Plus or Ultra?
<!--
* author Jannik Richter
*
* The MIT License (MIT)
Copyright © 2014 Slenderware
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-->
<div class="container-fluid main" style="padding-left:0px;padding-right:0px;">
<!-- Columns start at 50% wide on mobile and bump up to 33.3% wide on desktop -->
<div class="row main" style="margin-top:12%;">
<div class="col-md-3 col-sm-2 col-xs-0 col-lg-4" ></div>
<div class="col-md-6 col-sm-8 col-xs-12 col-lg-4" id="loginContainer">
<div class="panel panel-default fade-down" id="loginBox" >
<form role="form">
<div class="panel-heading">
<img src="modules/login/res/loginlogo.png" style="width:100%;"/>
</div>
<div class="panel-body">
<div ng-class="validate(username) ? 'form-group has-feedback' : 'form-group has-error has-feedback'" style="margin-bottom:15px;">
<label for="txtEmail">Username</label>
<input type="text" class="form-control" id="txtEmail" placeholder="Enter username" ng-model="username">
<span class="glyphicon glyphicon-warning-sign form-control-feedback" ng-hide="validate(username)"></span>
</div>
<div ng-class="validate(password) ? 'form-group has-feedback' : 'form-group has-error has-feedback'" style="margin-bottom:15px;">
<label for="txtPassword">Password</label>
<input type="password" class="form-control" id="txtPassword" placeholder="Password" ng-model="password">
<span class="glyphicon glyphicon-warning-sign form-control-feedback" ng-hide="validate(password)"></span>
</div>
<div ng-class="result.success ? 'alert alert-success' : 'alert alert-danger'" ng-hide="result === undefined && message === undefined">{{message}}</div>
</div>
<div class="panel-footer">
<input type="submit" class="btn btn-primary" style="float:right;" value="Login" ng-click='authenticate()' ng-disabled="!validate(username) || !validate(password) || username === undefined || password === undefined"/>
<a type="submit" class="btn btn-default" style="float:right;" ng-click='register()'>Register</a>
<i class="fa fa-circle-o-notch fa-spin fa-2x" style="float:right;margin-right:0.3em;margin-top:0.3em;" ng-show="loginLoading"></i>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
(function() {
var Build, Repository, assert, client, jsonHeaders, tobi, vows;
vows = require('../spec_helper').vows;
client = require('../spec_helper').client;
tobi = require('../spec_helper').tobi;
assert = require('../spec_helper').assert;
jsonHeaders = require('../spec_helper').headers.jsonHeaders;
Repository = require('../../models/repository').Repository;
Build = require('../../models/repository').Build;
vows.describe('builds').addBatch({
'with a repository': {
topic: function() {
var repository;
repository = new Repository({
name: 'winston',
ownerName: 'indexzero'
});
        repository.save(this.callback);
      },
      'with a build': 'pending'
    }
})["export"](module);
}).call(this);
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,981
|
{"url":"https:\/\/opendatacube.readthedocs.io\/en\/latest\/data-access-analysis\/apis\/data-loading.html","text":"Once you know the products or datasets that you are interested in, you can load data using dc.load().\n\nOne way to load data from a datacube is to pass a list of datasets to dc.load(). For example, we can load the ls9_sr datasets we found in the previous Data Searching example by passing them to dc.load()\u2019s datasets parameter.\n\nTo load data for a subset of measurements, we can supply them to dc.load()\u2019s measurements parameter (here we use the red, green, blue measurement alias names that we obtained in the Product Discovery example). For indexed datacube products, we also need to supply our desired output coordinate reference system (output_crs) and output resolution (resolution). Datacube will then resample and reproject our data to match these inputs.\n\n[1]:\n\nimport datacube\n\ndc = datacube.Datacube(app=\"my_analysis\")\n\ndatasets = dc.find_datasets(\nproduct=\"ls9_sr\",\nx=(29.0, 29.01),\ny=(25.0, 25.01),\ntime=(\"2022-01-01\", \"2022-02-01\"),\n)\n\nds = dc.load(\ndatasets=datasets,\nmeasurements=[\"red\", \"green\", \"blue\"],\noutput_crs=\"EPSG:6933\",\nresolution=(-30, 30),\n)\nds\n\n[1]:\n\n<xarray.Dataset>\nDimensions: (time: 2, y: 8119, x: 7192)\nCoordinates:\n* time (time) datetime64[ns] 2022-01-15T08:31:52.404426 2022-01-31T...\n* y (y) float64 3.16e+06 3.16e+06 3.16e+06 ... 2.917e+06 2.917e+06\n* x (x) float64 2.684e+06 2.685e+06 2.685e+06 ... 2.9e+06 2.9e+06\nspatial_ref int32 6933\nData variables:\nred (time, y, x) uint16 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0\ngreen (time, y, x) uint16 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0\nblue (time, y, x) uint16 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0\nAttributes:\ncrs: EPSG:6933\ngrid_mapping: spatial_ref\n\nWe can see that dc.load has returned an xarray.Dataset containing data from our two input datasets. 
This xarray.Dataset includes:\n\nDimensions\n\n\u2022 This header identifies the number of timesteps returned (time: 2) as well as the number of resulting pixels in the x and y directions.\n\nCoordinates\n\n\u2022 time identifies the time attributed to each returned timestep.\n\n\u2022 x and y provide coordinates for each pixel within the returned data.\n\n\u2022 spatial_ref provides information about the spatial grid used to load the data\n\nData variables\n\n\u2022 These are the measurements available for the loaded product. For every timestep (time) returned by the query, the measured value at each pixel (y, x) is returned as an array for each measurement. Each data variable is itself an xarray.DataArray object.\n\nAttributes\n\nWe can also inspect our loaded data by plotting it:\n\n[2]:\n\nds[[\"red\", \"green\", \"blue\"]].to_array().plot.imshow(col=\"time\", robust=True)\n\n[2]:\n\n<xarray.plot.facetgrid.FacetGrid at 0x7fa8ba244310>\n\n\nWe can see in the image above that dc.load() has loaded the entire input datasets, which for Landsat 9 includes the extent of a full satellite path-row scene.\n\nInstead, we may prefer to load a subset of data for a specific spatial and temporal extent. To do this, we can query and load data directly with dc.load() without first searching for datasets using dc.find_datasets().\n\nTo achieve this, we can pass all the inputs we originally passed to dc.find_datasets() (e.g. product, x, y, time) to dc.load() instead:\n\n[3]:\n\nds = dc.load(\nproduct=\"ls9_sr\",\nx=(29.0, 29.1),\ny=(25.0, 25.1),\ntime=(\"2022-01-01\", \"2022-02-01\"),\nmeasurements=[\"red\", \"green\", \"blue\"],\noutput_crs=\"EPSG:6933\",\nresolution=(-30, 30),\n)\nds\n\n[3]:\n\n<xarray.Dataset>\nDimensions: (time: 2, y: 388, x: 322)\nCoordinates:\n* time (time) datetime64[ns] 2022-01-15T08:31:52.404426 2022-01-31T...\n* y (y) float64 3.103e+06 3.103e+06 ... 3.092e+06 3.092e+06\n* x (x) float64 2.798e+06 2.798e+06 ... 
2.808e+06 2.808e+06\nspatial_ref int32 6933\nData variables:\nred (time, y, x) uint16 22285 22517 22302 ... 21193 22230 20021\ngreen (time, y, x) uint16 17701 17715 17627 ... 16897 17321 16152\nblue (time, y, x) uint16 13124 13073 13038 ... 12580 12743 12352\nAttributes:\ncrs: EPSG:6933\ngrid_mapping: spatial_ref\n\nWe can see from Dimensions that a much smaller set of pixels have now been loaded compared to the previous time we called dc.load().\n\nIf we plot our new xarray.Dataset, we can see that dc.load() has now loaded data for only the specific x and y ranges we specified:\n\n[4]:\n\nds[[\"red\", \"green\", \"blue\"]].to_array().plot.imshow(col=\"time\", robust=True)\n\n[4]:\n\n<xarray.plot.facetgrid.FacetGrid at 0x7fa88e4d4d00>","date":"2022-08-08 19:20:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4905736446380615, \"perplexity\": 4008.128360620986}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882570871.10\/warc\/CC-MAIN-20220808183040-20220808213040-00569.warc.gz\"}"}
| null | null |
Champ was rescued from SBC City Shelter on the day he was to be euthanized. He is 4 years old and a puppy at heart. Champ is your classic, loving, playful pittie. He weighs about 65 pounds. He is dog friendly, potty trained, SUPER cuddly, and the perfect match for any mildly active family and/or single person. Champ is ready to be your best friend.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,657
|
---
name: Feature Request
about: Suggest an idea for this project
---
<!--
Please read contribution guideline first: https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md
Any potential security issues should be sent to security@dnnsoftware.com, rather than posted on GitHub
-->
## Description of problem
Is your feature request related to a problem? Provide a clear and concise description of the problem.
## Description of solution
Provide a clear and concise description of what you want to happen.
## Description of alternatives considered
Provide a clear and concise description of any alternative solutions or features you have considered.
## Screenshots
If applicable, provide screenshots to help explain your problem and/or feature.
## Additional context
Add any other context about the feature that may be helpful with implementation.
## Affected version
<!-- Check all that apply and add more if necessary -->
* [x] 9.3.2
* [x] 9.3.1
* [x] 9.2.2
* [x] 9.2.1
* [x] 9.2
* [x] 9.1.1
* [ ] 9.1
* [ ] 9.0
## Affected browser
<!--
Check all that apply and add more if necessary.
If possible, please also specify exact versions and mention the operating system
-->
* [ ] Chrome
* [ ] Firefox
* [ ] Safari
* [ ] Internet Explorer
* [ ] Edge
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,737
|
Planning calendar from last year (if you have one) – this might be a printed planner or it might be as simple as last year's online calendar entries!
Take your supplies to a place where you can be creative as well as introspective. Maybe it's wrapped up in a soft throw in your favorite comfy chair, outside on a sunny day, or snuggled up on the couch by a roaring fire. Wherever it is, go there on purpose. Settle in and gift yourself this time!
Start by taking some time to reflect on the last 12 months. This is where the planner or online calendar comes into play. Stroll through the last 12 months and remember what you did, where you went, what you accomplished and who you spent time with.
Social media platforms are good for this as well. Look back and see what you posted and shared. Wherever you store the photos you take is another great place to wander around and reflect.
As you take this journey, begin to write down words that capture what you are feeling, seeing or remembering. Don't limit your words…just write them down. They don't need to be complete sentences or even be listed in order. Have fun with them. Use different colors to represent the feelings a word or memory stirs up.
After you've journeyed back 12 months, pull out that stationery or a fresh piece of paper and write a letter…to YOURSELF.
This is a letter of congratulations to yourself. Using the words you jotted down, write a letter that acknowledges the good things you accomplished, the experiences you had, the things you overcame and the things you celebrated.
Don't hold back on the colors and stickers! When you are done, this letter should make you smile just looking at it, even before you've read a single word.
And then sometimes…just read it and smile!
When can you schedule time to write yourself a letter – on purpose?
What surprised you about your journey down memory lane?
How did this exercise flip your script on who you are or what you are capable of?
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 6,730
|
Q: adding extra files to vb.net project I am trying to add some 'supporting' files to my project in VB.NET Express 2012. These are additional files that are not really part of VB.NET, such as RTF files which are templates for reports. There are also some text-based template files that will get modified as part of the execution of my program.
What I have done:
*
*I have added these to the project (i.e. they appear in the Solution
Explorer under the project)
*I have set the 'Copy to Output Directory' of each of the files to
"Copy Always" in the properties window.
When I publish the project and re-install it, none of these files are included.
Any help would be appreciated. Thanks in advance.
A: Set the file's "Build Action" property to "Content". ClickOnce publishing only picks up project items marked as Content, so "Copy to Output Directory" alone is not enough for the files to be included in the published install.
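To expand on that answer (a hand-written sketch; the file path below is a placeholder, not taken from the question), the corresponding entry in the .vbproj looks like:

```xml
<ItemGroup>
  <!-- Build Action = Content makes ClickOnce publish include the file;
       CopyToOutputDirectory = Always keeps the debug/run behaviour too. -->
  <Content Include="Templates\Report.rtf">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```

Files left with Build Action "None" are skipped by publish even when "Copy Always" is set, which matches the behaviour described in the question.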
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 388
|
Q: Are auto-responses from STOP keywords in MobileConnect charged to the business that owns the SFMC account? Just like regular outbound SMS in MobileConnect, do the auto-responses triggered by STOP and opt-in keywords also incur a cost?
Thanks!
A: I don't see why it should not be charged.
Here is what I've found in the doc:
Customer will be charged for all outbound, mobile terminated (MT) SMS
messages, whether delivered or undelivered.
Source: SMS/ MMS Messages
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 2,273
|
package dataconversion
import (
"log"
"strconv"
"sync"
"time"
. "eaciit/wfdemo-git/library/helper"
. "eaciit/wfdemo-git/library/models"
. "eaciit/wfdemo-git/processapp/watcher/controllers"
hpp "eaciit/wfdemo-git/processapp/helper"
_ "github.com/eaciit/dbox/dbc/mongo"
. "github.com/eaciit/orm"
tk "github.com/eaciit/toolkit"
)
type ConvThreeExt struct {
Ctx *DataContext
}
var (
mutexX = &sync.Mutex{}
)
func NewConvThreeExt(ctx *DataContext) *ConvThreeExt {
dc := new(ConvThreeExt)
dc.Ctx = ctx
return dc
}
func (d *ConvThreeExt) Generate(file string) (errorLine tk.M) {
log.Println("Start Conversion...")
// funcName := "GenTenFromThreeSecond"
	var wg sync.WaitGroup
ctx := d.Ctx
list := []tk.M{}
pipes := []tk.M{}
match := tk.M{}
if file != "" {
match = tk.M{"file": file}
pipes = append(pipes, tk.M{"$match": match})
}
group := tk.M{
"_id": "$file",
"min_timestamp": tk.M{"$min": "$timestampconverted"},
"max_timestamp": tk.M{"$max": "$timestampconverted"},
}
pipes = append(pipes, tk.M{"$group": group})
// pipes = append(pipes, tk.M{"$sort": tk.M{"_id": 1}})
// tk.Printf("pipes: %#v \n", pipes)
csr, e := ctx.Connection.NewQuery().
From(new(ScadaThreeSecs).TableName()).
Command("pipe", pipes).
Cursor(nil)
	if e != nil {
		log.Printf("ERR: %#v \n", e.Error())
	} else {
		// Close only after a successful query; csr may be nil when e != nil.
		defer csr.Close()
e = csr.Fetch(&list, 0, false)
// log.Printf("list: %#v \n", list)
if len(list) > 0 {
for _, valList := range list {
// log.Printf("valList: %#v \n", valList)
var startTime, endTime time.Time
minTimeStamp := valList.Get("min_timestamp").(time.Time).UTC()
maxTimeStamp := valList.Get("max_timestamp").(time.Time).UTC()
// log.Printf("valList: %v | %v \n", minTimeStamp, maxTimeStamp)
startTime, _ = time.Parse("20060102 15:04", minTimeStamp.Format("20060102 15:04"))
endTime, _ = time.Parse("20060102 15:04", maxTimeStamp.Format("20060102 15:04"))
for {
				if startTime.After(endTime) {
break
}
// log.Printf("startTime: %v \n", startTime)
startTimeInt, _ := strconv.ParseInt(startTime.Format("200601021504"), 10, 64)
Fast_CurrentL3List := d.getAvg(ctx, startTime, "fast_currentl3")
Fast_CurrentL3Map := d.getMap(Fast_CurrentL3List, "fast_currentl3")
// log.Printf("Fast_CurrentL3Map: %#v \n", Fast_CurrentL3Map)
Fast_ActivePower_kWList := d.getAvg(ctx, startTime, "fast_activepower_kw")
Fast_ActivePower_kWMap := d.getMap(Fast_ActivePower_kWList, "fast_activepower_kw")
Fast_CurrentL1List := d.getAvg(ctx, startTime, "fast_currentl1")
Fast_CurrentL1Map := d.getMap(Fast_CurrentL1List, "fast_currentl1")
Fast_ActivePowerSetpoint_kWList := d.getAvg(ctx, startTime, "fast_activepowersetpoint_kw")
Fast_ActivePowerSetpoint_kWMap := d.getMap(Fast_ActivePowerSetpoint_kWList, "fast_activepowersetpoint_kw")
Fast_CurrentL2List := d.getAvg(ctx, startTime, "fast_currentl2")
Fast_CurrentL2Map := d.getMap(Fast_CurrentL2List, "fast_currentl2")
Fast_DrTrVibValueList := d.getAvg(ctx, startTime, "fast_drtrvibvalue")
Fast_DrTrVibValueMap := d.getMap(Fast_DrTrVibValueList, "fast_drtrvibvalue")
Fast_GenSpeed_RPMList := d.getAvg(ctx, startTime, "fast_genspeed_rpm")
Fast_GenSpeed_RPMMap := d.getMap(Fast_GenSpeed_RPMList, "fast_genspeed_rpm")
Fast_PitchAccuV1List := d.getAvg(ctx, startTime, "fast_pitchaccuv1")
Fast_PitchAccuV1Map := d.getMap(Fast_PitchAccuV1List, "fast_pitchaccuv1")
Fast_PitchAngleList := d.getAvg(ctx, startTime, "fast_pitchangle")
Fast_PitchAngleMap := d.getMap(Fast_PitchAngleList, "fast_pitchangle")
Fast_PitchAngle3List := d.getAvg(ctx, startTime, "fast_pitchangle3")
Fast_PitchAngle3Map := d.getMap(Fast_PitchAngle3List, "fast_pitchangle3")
Fast_PitchAngle2List := d.getAvg(ctx, startTime, "fast_pitchangle2")
Fast_PitchAngle2Map := d.getMap(Fast_PitchAngle2List, "fast_pitchangle2")
Fast_PitchConvCurrent1List := d.getAvg(ctx, startTime, "fast_pitchconvcurrent1")
Fast_PitchConvCurrent1Map := d.getMap(Fast_PitchConvCurrent1List, "fast_pitchconvcurrent1")
Fast_PitchConvCurrent3List := d.getAvg(ctx, startTime, "fast_pitchconvcurrent3")
Fast_PitchConvCurrent3Map := d.getMap(Fast_PitchConvCurrent3List, "fast_pitchconvcurrent3")
Fast_PitchConvCurrent2List := d.getAvg(ctx, startTime, "fast_pitchconvcurrent2")
Fast_PitchConvCurrent2Map := d.getMap(Fast_PitchConvCurrent2List, "fast_pitchconvcurrent2")
Fast_PowerFactorList := d.getAvg(ctx, startTime, "fast_powerfactor")
Fast_PowerFactorMap := d.getMap(Fast_PowerFactorList, "fast_powerfactor")
Fast_ReactivePowerSetpointPPC_kVArList := d.getAvg(ctx, startTime, "fast_reactivepowersetpointppc_kvar")
Fast_ReactivePowerSetpointPPC_kVArMap := d.getMap(Fast_ReactivePowerSetpointPPC_kVArList, "fast_reactivepowersetpointppc_kvar")
Fast_ReactivePower_kVArList := d.getAvg(ctx, startTime, "fast_reactivepower_kvar")
Fast_ReactivePower_kVArMap := d.getMap(Fast_ReactivePower_kVArList, "fast_reactivepower_kvar")
Fast_RotorSpeed_RPMList := d.getAvg(ctx, startTime, "fast_rotorspeed_rpm")
Fast_RotorSpeed_RPMMap := d.getMap(Fast_RotorSpeed_RPMList, "fast_rotorspeed_rpm")
Fast_VoltageL1List := d.getAvg(ctx, startTime, "fast_voltagel1")
Fast_VoltageL1Map := d.getMap(Fast_VoltageL1List, "fast_voltagel1")
Fast_VoltageL2List := d.getAvg(ctx, startTime, "fast_voltagel2")
Fast_VoltageL2Map := d.getMap(Fast_VoltageL2List, "fast_voltagel2")
Fast_WindSpeed_msList := d.getAvg(ctx, startTime, "fast_windspeed_ms")
Fast_WindSpeed_msMap := d.getMap(Fast_WindSpeed_msList, "fast_windspeed_ms")
Slow_CapableCapacitiveReactPwr_kVArList := d.getAvg(ctx, startTime, "slow_capablecapacitivereactpwr_kvar")
Slow_CapableCapacitiveReactPwr_kVArMap := d.getMap(Slow_CapableCapacitiveReactPwr_kVArList, "slow_capablecapacitivereactpwr_kvar")
Slow_CapableInductiveReactPwr_kVArList := d.getAvg(ctx, startTime, "slow_capableinductivereactpwr_kvar")
Slow_CapableInductiveReactPwr_kVArMap := d.getMap(Slow_CapableInductiveReactPwr_kVArList, "slow_capableinductivereactpwr_kvar")
Slow_DateTime_SecList := d.getAvg(ctx, startTime, "slow_datetime_sec")
Slow_DateTime_SecMap := d.getMap(Slow_DateTime_SecList, "slow_datetime_sec")
Slow_NacellePosList := d.getAvg(ctx, startTime, "slow_nacellepos")
Slow_NacellePosMap := d.getMap(Slow_NacellePosList, "slow_nacellepos")
Fast_PitchAngle1List := d.getAvg(ctx, startTime, "fast_pitchangle1")
Fast_PitchAngle1Map := d.getMap(Fast_PitchAngle1List, "fast_pitchangle1")
Fast_VoltageL3List := d.getAvg(ctx, startTime, "fast_voltagel3")
Fast_VoltageL3Map := d.getMap(Fast_VoltageL3List, "fast_voltagel3")
Slow_CapableCapacitivePwrFactorList := d.getAvg(ctx, startTime, "slow_capablecapacitivepwrfactor")
Slow_CapableCapacitivePwrFactorMap := d.getMap(Slow_CapableCapacitivePwrFactorList, "slow_capablecapacitivepwrfactor")
Fast_Total_Production_kWhList := d.getAvg(ctx, startTime, "fast_total_production_kwh")
Fast_Total_Production_kWhMap := d.getMap(Fast_Total_Production_kWhList, "fast_total_production_kwh")
Fast_Total_Prod_Day_kWhList := d.getAvg(ctx, startTime, "fast_total_prod_day_kwh")
Fast_Total_Prod_Day_kWhMap := d.getMap(Fast_Total_Prod_Day_kWhList, "fast_total_prod_day_kwh")
Fast_Total_Prod_Month_kWhList := d.getAvg(ctx, startTime, "fast_total_prod_month_kwh")
Fast_Total_Prod_Month_kWhMap := d.getMap(Fast_Total_Prod_Month_kWhList, "fast_total_prod_month_kwh")
Fast_ActivePowerOutPWCSell_kWList := d.getAvg(ctx, startTime, "fast_activepoweroutpwcsell_kw")
Fast_ActivePowerOutPWCSell_kWMap := d.getMap(Fast_ActivePowerOutPWCSell_kWList, "fast_activepoweroutpwcsell_kw")
Fast_Frequency_HzList := d.getAvg(ctx, startTime, "fast_frequency_hz")
Fast_Frequency_HzMap := d.getMap(Fast_Frequency_HzList, "fast_frequency_hz")
Slow_TempG1L2List := d.getAvg(ctx, startTime, "slow_tempg1l2")
Slow_TempG1L2Map := d.getMap(Slow_TempG1L2List, "slow_tempg1l2")
Slow_TempG1L3List := d.getAvg(ctx, startTime, "slow_tempg1l3")
Slow_TempG1L3Map := d.getMap(Slow_TempG1L3List, "slow_tempg1l3")
Slow_TempGearBoxHSSDEList := d.getAvg(ctx, startTime, "slow_tempgearboxhssde")
Slow_TempGearBoxHSSDEMap := d.getMap(Slow_TempGearBoxHSSDEList, "slow_tempgearboxhssde")
Slow_TempGearBoxIMSNDEList := d.getAvg(ctx, startTime, "slow_tempgearboximsnde")
Slow_TempGearBoxIMSNDEMap := d.getMap(Slow_TempGearBoxIMSNDEList, "slow_tempgearboximsnde")
Slow_TempOutdoorList := d.getAvg(ctx, startTime, "slow_tempoutdoor")
Slow_TempOutdoorMap := d.getMap(Slow_TempOutdoorList, "slow_tempoutdoor")
Fast_PitchAccuV3List := d.getAvg(ctx, startTime, "fast_pitchaccuv3")
Fast_PitchAccuV3Map := d.getMap(Fast_PitchAccuV3List, "fast_pitchaccuv3")
Slow_TotalTurbineActiveHoursList := d.getAvg(ctx, startTime, "slow_totalturbineactivehours")
Slow_TotalTurbineActiveHoursMap := d.getMap(Slow_TotalTurbineActiveHoursList, "slow_totalturbineactivehours")
Slow_TotalTurbineOKHoursList := d.getAvg(ctx, startTime, "slow_totalturbineokhours")
Slow_TotalTurbineOKHoursMap := d.getMap(Slow_TotalTurbineOKHoursList, "slow_totalturbineokhours")
Slow_TotalTurbineTimeAllHoursList := d.getAvg(ctx, startTime, "slow_totalturbinetimeallhours")
Slow_TotalTurbineTimeAllHoursMap := d.getMap(Slow_TotalTurbineTimeAllHoursList, "slow_totalturbinetimeallhours")
Slow_TempG1L1List := d.getAvg(ctx, startTime, "slow_tempg1l1")
Slow_TempG1L1Map := d.getMap(Slow_TempG1L1List, "slow_tempg1l1")
Slow_TempGearBoxOilSumpList := d.getAvg(ctx, startTime, "slow_tempgearboxoilsump")
Slow_TempGearBoxOilSumpMap := d.getMap(Slow_TempGearBoxOilSumpList, "slow_tempgearboxoilsump")
Fast_PitchAccuV2List := d.getAvg(ctx, startTime, "fast_pitchaccuv2")
Fast_PitchAccuV2Map := d.getMap(Fast_PitchAccuV2List, "fast_pitchaccuv2")
Slow_TotalGridOkHoursList := d.getAvg(ctx, startTime, "slow_totalgridokhours")
Slow_TotalGridOkHoursMap := d.getMap(Slow_TotalGridOkHoursList, "slow_totalgridokhours")
Slow_TotalActPowerOut_kWhList := d.getAvg(ctx, startTime, "slow_totalactpowerout_kwh")
Slow_TotalActPowerOut_kWhMap := d.getMap(Slow_TotalActPowerOut_kWhList, "slow_totalactpowerout_kwh")
Fast_YawServiceList := d.getAvg(ctx, startTime, "fast_yawservice")
Fast_YawServiceMap := d.getMap(Fast_YawServiceList, "fast_yawservice")
Fast_YawAngleList := d.getAvg(ctx, startTime, "fast_yawangle")
Fast_YawAngleMap := d.getMap(Fast_YawAngleList, "fast_yawangle")
Slow_WindDirectionList := d.getAvg(ctx, startTime, "slow_winddirection")
Slow_WindDirectionMap := d.getMap(Slow_WindDirectionList, "slow_winddirection")
Slow_CapableInductivePwrFactorList := d.getAvg(ctx, startTime, "slow_capableinductivepwrfactor")
Slow_CapableInductivePwrFactorMap := d.getMap(Slow_CapableInductivePwrFactorList, "slow_capableinductivepwrfactor")
Slow_TempGearBoxHSSNDEList := d.getAvg(ctx, startTime, "slow_tempgearboxhssnde")
Slow_TempGearBoxHSSNDEMap := d.getMap(Slow_TempGearBoxHSSNDEList, "slow_tempgearboxhssnde")
Slow_TempHubBearingList := d.getAvg(ctx, startTime, "slow_temphubbearing")
Slow_TempHubBearingMap := d.getMap(Slow_TempHubBearingList, "slow_temphubbearing")
Slow_TotalG1ActiveHoursList := d.getAvg(ctx, startTime, "slow_totalg1activehours")
Slow_TotalG1ActiveHoursMap := d.getMap(Slow_TotalG1ActiveHoursList, "slow_totalg1activehours")
Slow_TotalActPowerOutG1_kWhList := d.getAvg(ctx, startTime, "slow_totalactpoweroutg1_kwh")
Slow_TotalActPowerOutG1_kWhMap := d.getMap(Slow_TotalActPowerOutG1_kWhList, "slow_totalactpoweroutg1_kwh")
Slow_TotalReactPowerInG1_kVArhList := d.getAvg(ctx, startTime, "slow_totalreactpowering1_kvarh")
Slow_TotalReactPowerInG1_kVArhMap := d.getMap(Slow_TotalReactPowerInG1_kVArhList, "slow_totalreactpowering1_kvarh")
Slow_NacelleDrillList := d.getAvg(ctx, startTime, "slow_nacelledrill")
Slow_NacelleDrillMap := d.getMap(Slow_NacelleDrillList, "slow_nacelledrill")
Slow_TempGearBoxIMSDEList := d.getAvg(ctx, startTime, "slow_tempgearboximsde")
Slow_TempGearBoxIMSDEMap := d.getMap(Slow_TempGearBoxIMSDEList, "slow_tempgearboximsde")
Fast_Total_Operating_hrsList := d.getAvg(ctx, startTime, "fast_total_operating_hrs")
Fast_Total_Operating_hrsMap := d.getMap(Fast_Total_Operating_hrsList, "fast_total_operating_hrs")
Slow_TempNacelleList := d.getAvg(ctx, startTime, "slow_tempnacelle")
Slow_TempNacelleMap := d.getMap(Slow_TempNacelleList, "slow_tempnacelle")
Fast_Total_Grid_OK_hrsList := d.getAvg(ctx, startTime, "fast_total_grid_ok_hrs")
Fast_Total_Grid_OK_hrsMap := d.getMap(Fast_Total_Grid_OK_hrsList, "fast_total_grid_ok_hrs")
Fast_Total_WTG_OK_hrsList := d.getAvg(ctx, startTime, "fast_total_wtg_ok_hrs")
Fast_Total_WTG_OK_hrsMap := d.getMap(Fast_Total_WTG_OK_hrsList, "fast_total_wtg_ok_hrs")
Slow_TempCabinetTopBoxList := d.getAvg(ctx, startTime, "slow_tempcabinettopbox")
Slow_TempCabinetTopBoxMap := d.getMap(Slow_TempCabinetTopBoxList, "slow_tempcabinettopbox")
Slow_TempGeneratorBearingNDEList := d.getAvg(ctx, startTime, "slow_tempgeneratorbearingnde")
Slow_TempGeneratorBearingNDEMap := d.getMap(Slow_TempGeneratorBearingNDEList, "slow_tempgeneratorbearingnde")
Fast_Total_Access_hrsList := d.getAvg(ctx, startTime, "fast_total_access_hrs")
Fast_Total_Access_hrsMap := d.getMap(Fast_Total_Access_hrsList, "fast_total_access_hrs")
Slow_TempBottomPowerSectionList := d.getAvg(ctx, startTime, "slow_tempbottompowersection")
Slow_TempBottomPowerSectionMap := d.getMap(Slow_TempBottomPowerSectionList, "slow_tempbottompowersection")
Slow_TempGeneratorBearingDEList := d.getAvg(ctx, startTime, "slow_tempgeneratorbearingde")
Slow_TempGeneratorBearingDEMap := d.getMap(Slow_TempGeneratorBearingDEList, "slow_tempgeneratorbearingde")
Slow_TotalReactPowerIn_kVArhList := d.getAvg(ctx, startTime, "slow_totalreactpowerin_kvarh")
Slow_TotalReactPowerIn_kVArhMap := d.getMap(Slow_TotalReactPowerIn_kVArhList, "slow_totalreactpowerin_kvarh")
Slow_TempBottomControlSectionList := d.getAvg(ctx, startTime, "slow_tempbottomcontrolsection")
Slow_TempBottomControlSectionMap := d.getMap(Slow_TempBottomControlSectionList, "slow_tempbottomcontrolsection")
Slow_TempConv1List := d.getAvg(ctx, startTime, "slow_tempconv1")
Slow_TempConv1Map := d.getMap(Slow_TempConv1List, "slow_tempconv1")
Fast_ActivePowerRated_kWList := d.getAvg(ctx, startTime, "fast_activepowerrated_kw")
Fast_ActivePowerRated_kWMap := d.getMap(Fast_ActivePowerRated_kWList, "fast_activepowerrated_kw")
Fast_NodeIPList := d.getAvg(ctx, startTime, "fast_nodeip")
Fast_NodeIPMap := d.getMap(Fast_NodeIPList, "fast_nodeip")
Fast_PitchSpeed1List := d.getAvg(ctx, startTime, "fast_pitchspeed1")
Fast_PitchSpeed1Map := d.getMap(Fast_PitchSpeed1List, "fast_pitchspeed1")
Slow_CFCardSizeList := d.getAvg(ctx, startTime, "slow_cfcardsize")
Slow_CFCardSizeMap := d.getMap(Slow_CFCardSizeList, "slow_cfcardsize")
Slow_CPU_NumberList := d.getAvg(ctx, startTime, "slow_cpu_number")
Slow_CPU_NumberMap := d.getMap(Slow_CPU_NumberList, "slow_cpu_number")
Slow_CFCardSpaceLeftList := d.getAvg(ctx, startTime, "slow_cfcardspaceleft")
Slow_CFCardSpaceLeftMap := d.getMap(Slow_CFCardSpaceLeftList, "slow_cfcardspaceleft")
Slow_TempBottomCapSectionList := d.getAvg(ctx, startTime, "slow_tempbottomcapsection")
Slow_TempBottomCapSectionMap := d.getMap(Slow_TempBottomCapSectionList, "slow_tempbottomcapsection")
Slow_RatedPowerList := d.getAvg(ctx, startTime, "slow_ratedpower")
Slow_RatedPowerMap := d.getMap(Slow_RatedPowerList, "slow_ratedpower")
Slow_TempConv3List := d.getAvg(ctx, startTime, "slow_tempconv3")
Slow_TempConv3Map := d.getMap(Slow_TempConv3List, "slow_tempconv3")
Slow_TempConv2List := d.getAvg(ctx, startTime, "slow_tempconv2")
Slow_TempConv2Map := d.getMap(Slow_TempConv2List, "slow_tempconv2")
Slow_TotalActPowerIn_kWhList := d.getAvg(ctx, startTime, "slow_totalactpowerin_kwh")
Slow_TotalActPowerIn_kWhMap := d.getMap(Slow_TotalActPowerIn_kWhList, "slow_totalactpowerin_kwh")
Slow_TotalActPowerInG1_kWhList := d.getAvg(ctx, startTime, "slow_totalactpowering1_kwh")
Slow_TotalActPowerInG1_kWhMap := d.getMap(Slow_TotalActPowerInG1_kWhList, "slow_totalactpowering1_kwh")
Slow_TotalActPowerInG2_kWhList := d.getAvg(ctx, startTime, "slow_totalactpowering2_kwh")
Slow_TotalActPowerInG2_kWhMap := d.getMap(Slow_TotalActPowerInG2_kWhList, "slow_totalactpowering2_kwh")
Slow_TotalActPowerOutG2_kWhList := d.getAvg(ctx, startTime, "slow_totalactpoweroutg2_kwh")
Slow_TotalActPowerOutG2_kWhMap := d.getMap(Slow_TotalActPowerOutG2_kWhList, "slow_totalactpoweroutg2_kwh")
Slow_TotalG2ActiveHoursList := d.getAvg(ctx, startTime, "slow_totalg2activehours")
Slow_TotalG2ActiveHoursMap := d.getMap(Slow_TotalG2ActiveHoursList, "slow_totalg2activehours")
Slow_TotalReactPowerInG2_kVArhList := d.getAvg(ctx, startTime, "slow_totalreactpowering2_kvarh")
Slow_TotalReactPowerInG2_kVArhMap := d.getMap(Slow_TotalReactPowerInG2_kVArhList, "slow_totalreactpowering2_kvarh")
Slow_TotalReactPowerOut_kVArhList := d.getAvg(ctx, startTime, "slow_totalreactpowerout_kvarh")
Slow_TotalReactPowerOut_kVArhMap := d.getMap(Slow_TotalReactPowerOut_kVArhList, "slow_totalreactpowerout_kvarh")
Slow_UTCoffset_intList := d.getAvg(ctx, startTime, "slow_utcoffset_int")
Slow_UTCoffset_intMap := d.getMap(Slow_UTCoffset_intList, "slow_utcoffset_int")
groupSub := tk.M{
"_id": tk.M{
"timestampsecondgroup": "$timestampsecondgroup",
"projectname": "$projectname",
"turbine": "$turbine",
},
"count": tk.M{"$sum": 1},
}
pipesSub := []tk.M{}
matchSub := tk.M{}
if file != "" {
matchSub.Set("file", file)
}
matchSub.Set("timestampconvertedint", startTimeInt)
pipesSub = append(pipesSub, tk.M{"$match": matchSub})
pipesSub = append(pipesSub, tk.M{"$group": groupSub})
// pipesSub = append(pipesSub, tk.M{"$sort": tk.M{"_id": 1}})
csr, e := ctx.Connection.NewQuery().
From(new(ScadaThreeSecs).TableName()).
Command("pipe", pipesSub).
Cursor(nil)
					listSub := []tk.M{}
					if e != nil {
						log.Printf("Error: %v \n", e.Error())
					} else {
						e = csr.Fetch(&listSub, 0, false)
						// Close the cursor explicitly; a defer here would pile up inside the loop.
						csr.Close()
					}
countData := len(listSub)
if countData > 0 {
// log.Printf("timestamp: %v | %v | %v \n", startTime.Format("20060102 15:04"), countData, listSub[0].Get("_id").(tk.M).Get("timestampint"))
countPerProcess := 1000
counter := 0
startIndex := counter * countPerProcess
endIndex := (counter+1)*countPerProcess - 1
isFinish := false
for !isFinish {
startIndex = counter * countPerProcess
endIndex = (counter+1)*countPerProcess - 1
if endIndex > countData {
endIndex = countData
}
data := listSub[startIndex:endIndex]
wg.Add(1)
go func(data []tk.M) {
for _, val := range data {
// log.Printf("val: %v \n", val)
ext := new(ScadaThreeSecsExt)
ext.File = file
idSub := val.Get("_id").(tk.M)
timeStampSub := idSub.Get("timestampsecondgroup").(time.Time)
projectNameSub := idSub.GetString("projectname")
turbineSub := idSub.GetString("turbine")
timeStampStr := timeStampSub.UTC().Format("060102_150405")
key := timeStampStr + "#" + projectNameSub + "#" + turbineSub
ext.ProjectName = projectNameSub
ext.Turbine = turbineSub
tenMinuteInfo := GenTenMinuteInfo(timeStampSub)
ext.THour = tenMinuteInfo.THour
ext.TMinute = tenMinuteInfo.TMinute
ext.TSecond = tenMinuteInfo.TSecond
ext.TMinuteValue = tenMinuteInfo.TMinuteValue
ext.TMinuteCategory = tenMinuteInfo.TMinuteCategory
ext.TimeStampConverted = startTime
ext.TimeStampConvertedInt, _ = strconv.ParseInt(ext.TimeStampConverted.Format("200601021504"), 10, 64)
ext.TimeStampSecondGroup = timeStampSub
ext = ext.New()
Fast_CurrentL3 := Fast_CurrentL3Map[key]
if Fast_CurrentL3 != nil {
ext.Fast_CurrentL3 = Fast_CurrentL3.GetFloat64("fast_currentl3")
ext.Fast_CurrentL3CountSecs = Fast_CurrentL3.GetInt("fast_currentl3countsecs")
} else {
ext.Fast_CurrentL3 = emptyValueBig
ext.Fast_CurrentL3CountSecs = 0
}
Fast_ActivePower_kW := Fast_ActivePower_kWMap[key]
if Fast_ActivePower_kW != nil {
ext.Fast_ActivePower_kW = Fast_ActivePower_kW.GetFloat64("fast_activepower_kw")
ext.Fast_ActivePower_kWCountSecs = Fast_ActivePower_kW.GetInt("fast_activepower_kwcountsecs")
} else {
ext.Fast_ActivePower_kW = emptyValueBig
ext.Fast_ActivePower_kWCountSecs = 0
}
Fast_CurrentL1 := Fast_CurrentL1Map[key]
if Fast_CurrentL1 != nil {
ext.Fast_CurrentL1 = Fast_CurrentL1.GetFloat64("fast_currentl1")
ext.Fast_CurrentL1CountSecs = Fast_CurrentL1.GetInt("fast_currentl1countsecs")
} else {
ext.Fast_CurrentL1 = emptyValueBig
ext.Fast_CurrentL1CountSecs = 0
}
Fast_ActivePowerSetpoint_kW := Fast_ActivePowerSetpoint_kWMap[key]
if Fast_ActivePowerSetpoint_kW != nil {
ext.Fast_ActivePowerSetpoint_kW = Fast_ActivePowerSetpoint_kW.GetFloat64("fast_activepowersetpoint_kw")
ext.Fast_ActivePowerSetpoint_kWCountSecs = Fast_ActivePowerSetpoint_kW.GetInt("fast_activepowersetpoint_kwcountsecs")
} else {
ext.Fast_ActivePowerSetpoint_kW = emptyValueBig
ext.Fast_ActivePowerSetpoint_kWCountSecs = 0
}
Fast_CurrentL2 := Fast_CurrentL2Map[key]
if Fast_CurrentL2 != nil {
ext.Fast_CurrentL2 = Fast_CurrentL2.GetFloat64("fast_currentl2")
ext.Fast_CurrentL2CountSecs = Fast_CurrentL2.GetInt("fast_currentl2countsecs")
} else {
ext.Fast_CurrentL2 = emptyValueBig
ext.Fast_CurrentL2CountSecs = 0
}
Fast_DrTrVibValue := Fast_DrTrVibValueMap[key]
if Fast_DrTrVibValue != nil {
ext.Fast_DrTrVibValue = Fast_DrTrVibValue.GetFloat64("fast_drtrvibvalue")
ext.Fast_DrTrVibValueCountSecs = Fast_DrTrVibValue.GetInt("fast_drtrvibvaluecountsecs")
} else {
ext.Fast_DrTrVibValue = emptyValueBig
ext.Fast_DrTrVibValueCountSecs = 0
}
Fast_GenSpeed_RPM := Fast_GenSpeed_RPMMap[key]
if Fast_GenSpeed_RPM != nil {
ext.Fast_GenSpeed_RPM = Fast_GenSpeed_RPM.GetFloat64("fast_genspeed_rpm")
ext.Fast_GenSpeed_RPMCountSecs = Fast_GenSpeed_RPM.GetInt("fast_genspeed_rpmcountsecs")
} else {
ext.Fast_GenSpeed_RPM = emptyValueBig
ext.Fast_GenSpeed_RPMCountSecs = 0
}
Fast_PitchAccuV1 := Fast_PitchAccuV1Map[key]
if Fast_PitchAccuV1 != nil {
ext.Fast_PitchAccuV1 = Fast_PitchAccuV1.GetFloat64("fast_pitchaccuv1")
ext.Fast_PitchAccuV1CountSecs = Fast_PitchAccuV1.GetInt("fast_pitchaccuv1countsecs")
} else {
ext.Fast_PitchAccuV1 = emptyValueBig
ext.Fast_PitchAccuV1CountSecs = 0
}
Fast_PitchAngle := Fast_PitchAngleMap[key]
if Fast_PitchAngle != nil {
ext.Fast_PitchAngle = Fast_PitchAngle.GetFloat64("fast_pitchangle")
ext.Fast_PitchAngleCountSecs = Fast_PitchAngle.GetInt("fast_pitchanglecountsecs")
} else {
ext.Fast_PitchAngle = emptyValueBig
ext.Fast_PitchAngleCountSecs = 0
}
Fast_PitchAngle3 := Fast_PitchAngle3Map[key]
if Fast_PitchAngle3 != nil {
ext.Fast_PitchAngle3 = Fast_PitchAngle3.GetFloat64("fast_pitchangle3")
ext.Fast_PitchAngle3CountSecs = Fast_PitchAngle3.GetInt("fast_pitchangle3countsecs")
} else {
ext.Fast_PitchAngle3 = emptyValueBig
ext.Fast_PitchAngle3CountSecs = 0
}
Fast_PitchAngle2 := Fast_PitchAngle2Map[key]
if Fast_PitchAngle2 != nil {
ext.Fast_PitchAngle2 = Fast_PitchAngle2.GetFloat64("fast_pitchangle2")
ext.Fast_PitchAngle2CountSecs = Fast_PitchAngle2.GetInt("fast_pitchangle2countsecs")
} else {
ext.Fast_PitchAngle2 = emptyValueBig
ext.Fast_PitchAngle2CountSecs = 0
}
Fast_PitchConvCurrent1 := Fast_PitchConvCurrent1Map[key]
if Fast_PitchConvCurrent1 != nil {
ext.Fast_PitchConvCurrent1 = Fast_PitchConvCurrent1.GetFloat64("fast_pitchconvcurrent1")
ext.Fast_PitchConvCurrent1CountSecs = Fast_PitchConvCurrent1.GetInt("fast_pitchconvcurrent1countsecs")
} else {
ext.Fast_PitchConvCurrent1 = emptyValueBig
ext.Fast_PitchConvCurrent1CountSecs = 0
}
Fast_PitchConvCurrent3 := Fast_PitchConvCurrent3Map[key]
if Fast_PitchConvCurrent3 != nil {
ext.Fast_PitchConvCurrent3 = Fast_PitchConvCurrent3.GetFloat64("fast_pitchconvcurrent3")
ext.Fast_PitchConvCurrent3CountSecs = Fast_PitchConvCurrent3.GetInt("fast_pitchconvcurrent3countsecs")
} else {
ext.Fast_PitchConvCurrent3 = emptyValueBig
ext.Fast_PitchConvCurrent3CountSecs = 0
}
Fast_PitchConvCurrent2 := Fast_PitchConvCurrent2Map[key]
if Fast_PitchConvCurrent2 != nil {
ext.Fast_PitchConvCurrent2 = Fast_PitchConvCurrent2.GetFloat64("fast_pitchconvcurrent2")
ext.Fast_PitchConvCurrent2CountSecs = Fast_PitchConvCurrent2.GetInt("fast_pitchconvcurrent2countsecs")
} else {
ext.Fast_PitchConvCurrent2 = emptyValueBig
ext.Fast_PitchConvCurrent2CountSecs = 0
}
Fast_PowerFactor := Fast_PowerFactorMap[key]
if Fast_PowerFactor != nil {
ext.Fast_PowerFactor = Fast_PowerFactor.GetFloat64("fast_powerfactor")
ext.Fast_PowerFactorCountSecs = Fast_PowerFactor.GetInt("fast_powerfactorcountsecs")
} else {
ext.Fast_PowerFactor = emptyValueBig
ext.Fast_PowerFactorCountSecs = 0
}
Fast_ReactivePowerSetpointPPC_kVAr := Fast_ReactivePowerSetpointPPC_kVArMap[key]
if Fast_ReactivePowerSetpointPPC_kVAr != nil {
ext.Fast_ReactivePowerSetpointPPC_kVAr = Fast_ReactivePowerSetpointPPC_kVAr.GetFloat64("fast_reactivepowersetpointppc_kvar")
ext.Fast_ReactivePowerSetpointPPC_kVArCountSecs = Fast_ReactivePowerSetpointPPC_kVAr.GetInt("fast_reactivepowersetpointppc_kvarcountsecs")
} else {
ext.Fast_ReactivePowerSetpointPPC_kVAr = emptyValueBig
ext.Fast_ReactivePowerSetpointPPC_kVArCountSecs = 0
}
Fast_ReactivePower_kVAr := Fast_ReactivePower_kVArMap[key]
if Fast_ReactivePower_kVAr != nil {
ext.Fast_ReactivePower_kVAr = Fast_ReactivePower_kVAr.GetFloat64("fast_reactivepower_kvar")
ext.Fast_ReactivePower_kVArCountSecs = Fast_ReactivePower_kVAr.GetInt("fast_reactivepower_kvarcountsecs")
} else {
ext.Fast_ReactivePower_kVAr = emptyValueBig
ext.Fast_ReactivePower_kVArCountSecs = 0
}
Fast_RotorSpeed_RPM := Fast_RotorSpeed_RPMMap[key]
if Fast_RotorSpeed_RPM != nil {
ext.Fast_RotorSpeed_RPM = Fast_RotorSpeed_RPM.GetFloat64("fast_rotorspeed_rpm")
ext.Fast_RotorSpeed_RPMCountSecs = Fast_RotorSpeed_RPM.GetInt("fast_rotorspeed_rpmcountsecs")
} else {
ext.Fast_RotorSpeed_RPM = emptyValueBig
ext.Fast_RotorSpeed_RPMCountSecs = 0
}
Fast_VoltageL1 := Fast_VoltageL1Map[key]
if Fast_VoltageL1 != nil {
ext.Fast_VoltageL1 = Fast_VoltageL1.GetFloat64("fast_voltagel1")
ext.Fast_VoltageL1CountSecs = Fast_VoltageL1.GetInt("fast_voltagel1countsecs")
} else {
ext.Fast_VoltageL1 = emptyValueBig
ext.Fast_VoltageL1CountSecs = 0
}
Fast_VoltageL2 := Fast_VoltageL2Map[key]
if Fast_VoltageL2 != nil {
ext.Fast_VoltageL2 = Fast_VoltageL2.GetFloat64("fast_voltagel2")
ext.Fast_VoltageL2CountSecs = Fast_VoltageL2.GetInt("fast_voltagel2countsecs")
} else {
ext.Fast_VoltageL2 = emptyValueBig
ext.Fast_VoltageL2CountSecs = 0
}
Fast_WindSpeed_ms := Fast_WindSpeed_msMap[key]
if Fast_WindSpeed_ms != nil {
ext.Fast_WindSpeed_ms = Fast_WindSpeed_ms.GetFloat64("fast_windspeed_ms")
ext.Fast_WindSpeed_msCountSecs = Fast_WindSpeed_ms.GetInt("fast_windspeed_mscountsecs")
} else {
ext.Fast_WindSpeed_ms = emptyValueBig
ext.Fast_WindSpeed_msCountSecs = 0
}
Slow_CapableCapacitiveReactPwr_kVAr := Slow_CapableCapacitiveReactPwr_kVArMap[key]
if Slow_CapableCapacitiveReactPwr_kVAr != nil {
ext.Slow_CapableCapacitiveReactPwr_kVAr = Slow_CapableCapacitiveReactPwr_kVAr.GetFloat64("slow_capablecapacitivereactpwr_kvar")
ext.Slow_CapableCapacitiveReactPwr_kVArCountSecs = Slow_CapableCapacitiveReactPwr_kVAr.GetInt("slow_capablecapacitivereactpwr_kvarcountsecs")
} else {
ext.Slow_CapableCapacitiveReactPwr_kVAr = emptyValueBig
ext.Slow_CapableCapacitiveReactPwr_kVArCountSecs = 0
}
Slow_CapableInductiveReactPwr_kVAr := Slow_CapableInductiveReactPwr_kVArMap[key]
if Slow_CapableInductiveReactPwr_kVAr != nil {
ext.Slow_CapableInductiveReactPwr_kVAr = Slow_CapableInductiveReactPwr_kVAr.GetFloat64("slow_capableinductivereactpwr_kvar")
ext.Slow_CapableInductiveReactPwr_kVArCountSecs = Slow_CapableInductiveReactPwr_kVAr.GetInt("slow_capableinductivereactpwr_kvarcountsecs")
} else {
ext.Slow_CapableInductiveReactPwr_kVAr = emptyValueBig
ext.Slow_CapableInductiveReactPwr_kVArCountSecs = 0
}
Slow_DateTime_Sec := Slow_DateTime_SecMap[key]
if Slow_DateTime_Sec != nil {
ext.Slow_DateTime_Sec = Slow_DateTime_Sec.GetFloat64("slow_datetime_sec")
ext.Slow_DateTime_SecCountSecs = Slow_DateTime_Sec.GetInt("slow_datetime_seccountsecs")
} else {
ext.Slow_DateTime_Sec = emptyValueBig
ext.Slow_DateTime_SecCountSecs = 0
}
Slow_NacellePos := Slow_NacellePosMap[key]
if Slow_NacellePos != nil {
ext.Slow_NacellePos = Slow_NacellePos.GetFloat64("slow_nacellepos")
ext.Slow_NacellePosCountSecs = Slow_NacellePos.GetInt("slow_nacelleposcountsecs")
} else {
ext.Slow_NacellePos = emptyValueBig
ext.Slow_NacellePosCountSecs = 0
}
Fast_PitchAngle1 := Fast_PitchAngle1Map[key]
if Fast_PitchAngle1 != nil {
ext.Fast_PitchAngle1 = Fast_PitchAngle1.GetFloat64("fast_pitchangle1")
ext.Fast_PitchAngle1CountSecs = Fast_PitchAngle1.GetInt("fast_pitchangle1countsecs")
} else {
ext.Fast_PitchAngle1 = emptyValueBig
ext.Fast_PitchAngle1CountSecs = 0
}
Fast_VoltageL3 := Fast_VoltageL3Map[key]
if Fast_VoltageL3 != nil {
ext.Fast_VoltageL3 = Fast_VoltageL3.GetFloat64("fast_voltagel3")
ext.Fast_VoltageL3CountSecs = Fast_VoltageL3.GetInt("fast_voltagel3countsecs")
} else {
ext.Fast_VoltageL3 = emptyValueBig
ext.Fast_VoltageL3CountSecs = 0
}
Slow_CapableCapacitivePwrFactor := Slow_CapableCapacitivePwrFactorMap[key]
if Slow_CapableCapacitivePwrFactor != nil {
ext.Slow_CapableCapacitivePwrFactor = Slow_CapableCapacitivePwrFactor.GetFloat64("slow_capablecapacitivepwrfactor")
ext.Slow_CapableCapacitivePwrFactorCountSecs = Slow_CapableCapacitivePwrFactor.GetInt("slow_capablecapacitivepwrfactorcountsecs")
} else {
ext.Slow_CapableCapacitivePwrFactor = emptyValueBig
ext.Slow_CapableCapacitivePwrFactorCountSecs = 0
}
Fast_Total_Production_kWh := Fast_Total_Production_kWhMap[key]
if Fast_Total_Production_kWh != nil {
ext.Fast_Total_Production_kWh = Fast_Total_Production_kWh.GetFloat64("fast_total_production_kwh")
ext.Fast_Total_Production_kWhCountSecs = Fast_Total_Production_kWh.GetInt("fast_total_production_kwhcountsecs")
} else {
ext.Fast_Total_Production_kWh = emptyValueBig
ext.Fast_Total_Production_kWhCountSecs = 0
}
Fast_Total_Prod_Day_kWh := Fast_Total_Prod_Day_kWhMap[key]
if Fast_Total_Prod_Day_kWh != nil {
ext.Fast_Total_Prod_Day_kWh = Fast_Total_Prod_Day_kWh.GetFloat64("fast_total_prod_day_kwh")
ext.Fast_Total_Prod_Day_kWhCountSecs = Fast_Total_Prod_Day_kWh.GetInt("fast_total_prod_day_kwhcountsecs")
} else {
ext.Fast_Total_Prod_Day_kWh = emptyValueBig
ext.Fast_Total_Prod_Day_kWhCountSecs = 0
}
Fast_Total_Prod_Month_kWh := Fast_Total_Prod_Month_kWhMap[key]
if Fast_Total_Prod_Month_kWh != nil {
ext.Fast_Total_Prod_Month_kWh = Fast_Total_Prod_Month_kWh.GetFloat64("fast_total_prod_month_kwh")
ext.Fast_Total_Prod_Month_kWhCountSecs = Fast_Total_Prod_Month_kWh.GetInt("fast_total_prod_month_kwhcountsecs")
} else {
ext.Fast_Total_Prod_Month_kWh = emptyValueBig
ext.Fast_Total_Prod_Month_kWhCountSecs = 0
}
Fast_ActivePowerOutPWCSell_kW := Fast_ActivePowerOutPWCSell_kWMap[key]
if Fast_ActivePowerOutPWCSell_kW != nil {
ext.Fast_ActivePowerOutPWCSell_kW = Fast_ActivePowerOutPWCSell_kW.GetFloat64("fast_activepoweroutpwcsell_kw")
ext.Fast_ActivePowerOutPWCSell_kWCountSecs = Fast_ActivePowerOutPWCSell_kW.GetInt("fast_activepoweroutpwcsell_kwcountsecs")
} else {
ext.Fast_ActivePowerOutPWCSell_kW = emptyValueBig
ext.Fast_ActivePowerOutPWCSell_kWCountSecs = 0
}
Fast_Frequency_Hz := Fast_Frequency_HzMap[key]
if Fast_Frequency_Hz != nil {
ext.Fast_Frequency_Hz = Fast_Frequency_Hz.GetFloat64("fast_frequency_hz")
ext.Fast_Frequency_HzCountSecs = Fast_Frequency_Hz.GetInt("fast_frequency_hzcountsecs")
} else {
ext.Fast_Frequency_Hz = emptyValueBig
ext.Fast_Frequency_HzCountSecs = 0
}
Slow_TempG1L2 := Slow_TempG1L2Map[key]
if Slow_TempG1L2 != nil {
ext.Slow_TempG1L2 = Slow_TempG1L2.GetFloat64("slow_tempg1l2")
ext.Slow_TempG1L2CountSecs = Slow_TempG1L2.GetInt("slow_tempg1l2countsecs")
} else {
ext.Slow_TempG1L2 = emptyValueBig
ext.Slow_TempG1L2CountSecs = 0
}
Slow_TempG1L3 := Slow_TempG1L3Map[key]
if Slow_TempG1L3 != nil {
ext.Slow_TempG1L3 = Slow_TempG1L3.GetFloat64("slow_tempg1l3")
ext.Slow_TempG1L3CountSecs = Slow_TempG1L3.GetInt("slow_tempg1l3countsecs")
} else {
ext.Slow_TempG1L3 = emptyValueBig
ext.Slow_TempG1L3CountSecs = 0
}
Slow_TempGearBoxHSSDE := Slow_TempGearBoxHSSDEMap[key]
if Slow_TempGearBoxHSSDE != nil {
ext.Slow_TempGearBoxHSSDE = Slow_TempGearBoxHSSDE.GetFloat64("slow_tempgearboxhssde")
ext.Slow_TempGearBoxHSSDECountSecs = Slow_TempGearBoxHSSDE.GetInt("slow_tempgearboxhssdecountsecs")
} else {
ext.Slow_TempGearBoxHSSDE = emptyValueBig
ext.Slow_TempGearBoxHSSDECountSecs = 0
}
Slow_TempGearBoxIMSNDE := Slow_TempGearBoxIMSNDEMap[key]
if Slow_TempGearBoxIMSNDE != nil {
ext.Slow_TempGearBoxIMSNDE = Slow_TempGearBoxIMSNDE.GetFloat64("slow_tempgearboximsnde")
ext.Slow_TempGearBoxIMSNDECountSecs = Slow_TempGearBoxIMSNDE.GetInt("slow_tempgearboximsndecountsecs")
} else {
ext.Slow_TempGearBoxIMSNDE = emptyValueBig
ext.Slow_TempGearBoxIMSNDECountSecs = 0
}
Slow_TempOutdoor := Slow_TempOutdoorMap[key]
if Slow_TempOutdoor != nil {
ext.Slow_TempOutdoor = Slow_TempOutdoor.GetFloat64("slow_tempoutdoor")
ext.Slow_TempOutdoorCountSecs = Slow_TempOutdoor.GetInt("slow_tempoutdoorcountsecs")
} else {
ext.Slow_TempOutdoor = emptyValueBig
ext.Slow_TempOutdoorCountSecs = 0
}
Fast_PitchAccuV3 := Fast_PitchAccuV3Map[key]
if Fast_PitchAccuV3 != nil {
ext.Fast_PitchAccuV3 = Fast_PitchAccuV3.GetFloat64("fast_pitchaccuv3")
ext.Fast_PitchAccuV3CountSecs = Fast_PitchAccuV3.GetInt("fast_pitchaccuv3countsecs")
} else {
ext.Fast_PitchAccuV3 = emptyValueBig
ext.Fast_PitchAccuV3CountSecs = 0
}
Slow_TotalTurbineActiveHours := Slow_TotalTurbineActiveHoursMap[key]
if Slow_TotalTurbineActiveHours != nil {
ext.Slow_TotalTurbineActiveHours = Slow_TotalTurbineActiveHours.GetFloat64("slow_totalturbineactivehours")
ext.Slow_TotalTurbineActiveHoursCountSecs = Slow_TotalTurbineActiveHours.GetInt("slow_totalturbineactivehourscountsecs")
} else {
ext.Slow_TotalTurbineActiveHours = emptyValueBig
ext.Slow_TotalTurbineActiveHoursCountSecs = 0
}
Slow_TotalTurbineOKHours := Slow_TotalTurbineOKHoursMap[key]
if Slow_TotalTurbineOKHours != nil {
ext.Slow_TotalTurbineOKHours = Slow_TotalTurbineOKHours.GetFloat64("slow_totalturbineokhours")
ext.Slow_TotalTurbineOKHoursCountSecs = Slow_TotalTurbineOKHours.GetInt("slow_totalturbineokhourscountsecs")
} else {
ext.Slow_TotalTurbineOKHours = emptyValueBig
ext.Slow_TotalTurbineOKHoursCountSecs = 0
}
Slow_TotalTurbineTimeAllHours := Slow_TotalTurbineTimeAllHoursMap[key]
if Slow_TotalTurbineTimeAllHours != nil {
ext.Slow_TotalTurbineTimeAllHours = Slow_TotalTurbineTimeAllHours.GetFloat64("slow_totalturbinetimeallhours")
ext.Slow_TotalTurbineTimeAllHoursCountSecs = Slow_TotalTurbineTimeAllHours.GetInt("slow_totalturbinetimeallhourscountsecs")
} else {
ext.Slow_TotalTurbineTimeAllHours = emptyValueBig
ext.Slow_TotalTurbineTimeAllHoursCountSecs = 0
}
Slow_TempG1L1 := Slow_TempG1L1Map[key]
if Slow_TempG1L1 != nil {
ext.Slow_TempG1L1 = Slow_TempG1L1.GetFloat64("slow_tempg1l1")
ext.Slow_TempG1L1CountSecs = Slow_TempG1L1.GetInt("slow_tempg1l1countsecs")
} else {
ext.Slow_TempG1L1 = emptyValueBig
ext.Slow_TempG1L1CountSecs = 0
}
Slow_TempGearBoxOilSump := Slow_TempGearBoxOilSumpMap[key]
if Slow_TempGearBoxOilSump != nil {
ext.Slow_TempGearBoxOilSump = Slow_TempGearBoxOilSump.GetFloat64("slow_tempgearboxoilsump")
ext.Slow_TempGearBoxOilSumpCountSecs = Slow_TempGearBoxOilSump.GetInt("slow_tempgearboxoilsumpcountsecs")
} else {
ext.Slow_TempGearBoxOilSump = emptyValueBig
ext.Slow_TempGearBoxOilSumpCountSecs = 0
}
Fast_PitchAccuV2 := Fast_PitchAccuV2Map[key]
if Fast_PitchAccuV2 != nil {
ext.Fast_PitchAccuV2 = Fast_PitchAccuV2.GetFloat64("fast_pitchaccuv2")
ext.Fast_PitchAccuV2CountSecs = Fast_PitchAccuV2.GetInt("fast_pitchaccuv2countsecs")
} else {
ext.Fast_PitchAccuV2 = emptyValueBig
ext.Fast_PitchAccuV2CountSecs = 0
}
Slow_TotalGridOkHours := Slow_TotalGridOkHoursMap[key]
if Slow_TotalGridOkHours != nil {
ext.Slow_TotalGridOkHours = Slow_TotalGridOkHours.GetFloat64("slow_totalgridokhours")
ext.Slow_TotalGridOkHoursCountSecs = Slow_TotalGridOkHours.GetInt("slow_totalgridokhourscountsecs")
} else {
ext.Slow_TotalGridOkHours = emptyValueBig
ext.Slow_TotalGridOkHoursCountSecs = 0
}
Slow_TotalActPowerOut_kWh := Slow_TotalActPowerOut_kWhMap[key]
if Slow_TotalActPowerOut_kWh != nil {
ext.Slow_TotalActPowerOut_kWh = Slow_TotalActPowerOut_kWh.GetFloat64("slow_totalactpowerout_kwh")
ext.Slow_TotalActPowerOut_kWhCountSecs = Slow_TotalActPowerOut_kWh.GetInt("slow_totalactpowerout_kwhcountsecs")
} else {
ext.Slow_TotalActPowerOut_kWh = emptyValueBig
ext.Slow_TotalActPowerOut_kWhCountSecs = 0
}
Fast_YawService := Fast_YawServiceMap[key]
if Fast_YawService != nil {
ext.Fast_YawService = Fast_YawService.GetFloat64("fast_yawservice")
ext.Fast_YawServiceCountSecs = Fast_YawService.GetInt("fast_yawservicecountsecs")
} else {
ext.Fast_YawService = emptyValueBig
ext.Fast_YawServiceCountSecs = 0
}
Fast_YawAngle := Fast_YawAngleMap[key]
if Fast_YawAngle != nil {
ext.Fast_YawAngle = Fast_YawAngle.GetFloat64("fast_yawangle")
ext.Fast_YawAngleCountSecs = Fast_YawAngle.GetInt("fast_yawanglecountsecs")
} else {
ext.Fast_YawAngle = emptyValueBig
ext.Fast_YawAngleCountSecs = 0
}
Slow_WindDirection := Slow_WindDirectionMap[key]
if Slow_WindDirection != nil {
ext.Slow_WindDirection = Slow_WindDirection.GetFloat64("slow_winddirection")
ext.Slow_WindDirectionCountSecs = Slow_WindDirection.GetInt("slow_winddirectioncountsecs")
} else {
ext.Slow_WindDirection = emptyValueBig
ext.Slow_WindDirectionCountSecs = 0
}
Slow_CapableInductivePwrFactor := Slow_CapableInductivePwrFactorMap[key]
if Slow_CapableInductivePwrFactor != nil {
ext.Slow_CapableInductivePwrFactor = Slow_CapableInductivePwrFactor.GetFloat64("slow_capableinductivepwrfactor")
ext.Slow_CapableInductivePwrFactorCountSecs = Slow_CapableInductivePwrFactor.GetInt("slow_capableinductivepwrfactorcountsecs")
} else {
ext.Slow_CapableInductivePwrFactor = emptyValueBig
ext.Slow_CapableInductivePwrFactorCountSecs = 0
}
Slow_TempGearBoxHSSNDE := Slow_TempGearBoxHSSNDEMap[key]
if Slow_TempGearBoxHSSNDE != nil {
ext.Slow_TempGearBoxHSSNDE = Slow_TempGearBoxHSSNDE.GetFloat64("slow_tempgearboxhssnde")
ext.Slow_TempGearBoxHSSNDECountSecs = Slow_TempGearBoxHSSNDE.GetInt("slow_tempgearboxhssndecountsecs")
} else {
ext.Slow_TempGearBoxHSSNDE = emptyValueBig
ext.Slow_TempGearBoxHSSNDECountSecs = 0
}
Slow_TempHubBearing := Slow_TempHubBearingMap[key]
if Slow_TempHubBearing != nil {
ext.Slow_TempHubBearing = Slow_TempHubBearing.GetFloat64("slow_temphubbearing")
ext.Slow_TempHubBearingCountSecs = Slow_TempHubBearing.GetInt("slow_temphubbearingcountsecs")
} else {
ext.Slow_TempHubBearing = emptyValueBig
ext.Slow_TempHubBearingCountSecs = 0
}
Slow_TotalG1ActiveHours := Slow_TotalG1ActiveHoursMap[key]
if Slow_TotalG1ActiveHours != nil {
ext.Slow_TotalG1ActiveHours = Slow_TotalG1ActiveHours.GetFloat64("slow_totalg1activehours")
ext.Slow_TotalG1ActiveHoursCountSecs = Slow_TotalG1ActiveHours.GetInt("slow_totalg1activehourscountsecs")
} else {
ext.Slow_TotalG1ActiveHours = emptyValueBig
ext.Slow_TotalG1ActiveHoursCountSecs = 0
}
Slow_TotalActPowerOutG1_kWh := Slow_TotalActPowerOutG1_kWhMap[key]
if Slow_TotalActPowerOutG1_kWh != nil {
ext.Slow_TotalActPowerOutG1_kWh = Slow_TotalActPowerOutG1_kWh.GetFloat64("slow_totalactpoweroutg1_kwh")
ext.Slow_TotalActPowerOutG1_kWhCountSecs = Slow_TotalActPowerOutG1_kWh.GetInt("slow_totalactpoweroutg1_kwhcountsecs")
} else {
ext.Slow_TotalActPowerOutG1_kWh = emptyValueBig
ext.Slow_TotalActPowerOutG1_kWhCountSecs = 0
}
Slow_TotalReactPowerInG1_kVArh := Slow_TotalReactPowerInG1_kVArhMap[key]
if Slow_TotalReactPowerInG1_kVArh != nil {
ext.Slow_TotalReactPowerInG1_kVArh = Slow_TotalReactPowerInG1_kVArh.GetFloat64("slow_totalreactpowering1_kvarh")
ext.Slow_TotalReactPowerInG1_kVArhCountSecs = Slow_TotalReactPowerInG1_kVArh.GetInt("slow_totalreactpowering1_kvarhcountsecs")
} else {
ext.Slow_TotalReactPowerInG1_kVArh = emptyValueBig
ext.Slow_TotalReactPowerInG1_kVArhCountSecs = 0
}
Slow_NacelleDrill := Slow_NacelleDrillMap[key]
if Slow_NacelleDrill != nil {
ext.Slow_NacelleDrill = Slow_NacelleDrill.GetFloat64("slow_nacelledrill")
ext.Slow_NacelleDrillCountSecs = Slow_NacelleDrill.GetInt("slow_nacelledrillcountsecs")
} else {
ext.Slow_NacelleDrill = emptyValueBig
ext.Slow_NacelleDrillCountSecs = 0
}
Slow_TempGearBoxIMSDE := Slow_TempGearBoxIMSDEMap[key]
if Slow_TempGearBoxIMSDE != nil {
ext.Slow_TempGearBoxIMSDE = Slow_TempGearBoxIMSDE.GetFloat64("slow_tempgearboximsde")
ext.Slow_TempGearBoxIMSDECountSecs = Slow_TempGearBoxIMSDE.GetInt("slow_tempgearboximsdecountsecs")
} else {
ext.Slow_TempGearBoxIMSDE = emptyValueBig
ext.Slow_TempGearBoxIMSDECountSecs = 0
}
Fast_Total_Operating_hrs := Fast_Total_Operating_hrsMap[key]
if Fast_Total_Operating_hrs != nil {
ext.Fast_Total_Operating_hrs = Fast_Total_Operating_hrs.GetFloat64("fast_total_operating_hrs")
ext.Fast_Total_Operating_hrsCountSecs = Fast_Total_Operating_hrs.GetInt("fast_total_operating_hrscountsecs")
} else {
ext.Fast_Total_Operating_hrs = emptyValueBig
ext.Fast_Total_Operating_hrsCountSecs = 0
}
Slow_TempNacelle := Slow_TempNacelleMap[key]
if Slow_TempNacelle != nil {
ext.Slow_TempNacelle = Slow_TempNacelle.GetFloat64("slow_tempnacelle")
ext.Slow_TempNacelleCountSecs = Slow_TempNacelle.GetInt("slow_tempnacellecountsecs")
} else {
ext.Slow_TempNacelle = emptyValueBig
ext.Slow_TempNacelleCountSecs = 0
}
Fast_Total_Grid_OK_hrs := Fast_Total_Grid_OK_hrsMap[key]
if Fast_Total_Grid_OK_hrs != nil {
ext.Fast_Total_Grid_OK_hrs = Fast_Total_Grid_OK_hrs.GetFloat64("fast_total_grid_ok_hrs")
ext.Fast_Total_Grid_OK_hrsCountSecs = Fast_Total_Grid_OK_hrs.GetInt("fast_total_grid_ok_hrscountsecs")
} else {
ext.Fast_Total_Grid_OK_hrs = emptyValueBig
ext.Fast_Total_Grid_OK_hrsCountSecs = 0
}
Fast_Total_WTG_OK_hrs := Fast_Total_WTG_OK_hrsMap[key]
if Fast_Total_WTG_OK_hrs != nil {
ext.Fast_Total_WTG_OK_hrs = Fast_Total_WTG_OK_hrs.GetFloat64("fast_total_wtg_ok_hrs")
ext.Fast_Total_WTG_OK_hrsCountSecs = Fast_Total_WTG_OK_hrs.GetInt("fast_total_wtg_ok_hrscountsecs")
} else {
ext.Fast_Total_WTG_OK_hrs = emptyValueBig
ext.Fast_Total_WTG_OK_hrsCountSecs = 0
}
Slow_TempCabinetTopBox := Slow_TempCabinetTopBoxMap[key]
if Slow_TempCabinetTopBox != nil {
ext.Slow_TempCabinetTopBox = Slow_TempCabinetTopBox.GetFloat64("slow_tempcabinettopbox")
ext.Slow_TempCabinetTopBoxCountSecs = Slow_TempCabinetTopBox.GetInt("slow_tempcabinettopboxcountsecs")
} else {
ext.Slow_TempCabinetTopBox = emptyValueBig
ext.Slow_TempCabinetTopBoxCountSecs = 0
}
Slow_TempGeneratorBearingNDE := Slow_TempGeneratorBearingNDEMap[key]
if Slow_TempGeneratorBearingNDE != nil {
ext.Slow_TempGeneratorBearingNDE = Slow_TempGeneratorBearingNDE.GetFloat64("slow_tempgeneratorbearingnde")
ext.Slow_TempGeneratorBearingNDECountSecs = Slow_TempGeneratorBearingNDE.GetInt("slow_tempgeneratorbearingndecountsecs")
} else {
ext.Slow_TempGeneratorBearingNDE = emptyValueBig
ext.Slow_TempGeneratorBearingNDECountSecs = 0
}
Fast_Total_Access_hrs := Fast_Total_Access_hrsMap[key]
if Fast_Total_Access_hrs != nil {
ext.Fast_Total_Access_hrs = Fast_Total_Access_hrs.GetFloat64("fast_total_access_hrs")
ext.Fast_Total_Access_hrsCountSecs = Fast_Total_Access_hrs.GetInt("fast_total_access_hrscountsecs")
} else {
ext.Fast_Total_Access_hrs = emptyValueBig
ext.Fast_Total_Access_hrsCountSecs = 0
}
Slow_TempBottomPowerSection := Slow_TempBottomPowerSectionMap[key]
if Slow_TempBottomPowerSection != nil {
ext.Slow_TempBottomPowerSection = Slow_TempBottomPowerSection.GetFloat64("slow_tempbottompowersection")
ext.Slow_TempBottomPowerSectionCountSecs = Slow_TempBottomPowerSection.GetInt("slow_tempbottompowersectioncountsecs")
} else {
ext.Slow_TempBottomPowerSection = emptyValueBig
ext.Slow_TempBottomPowerSectionCountSecs = 0
}
Slow_TempGeneratorBearingDE := Slow_TempGeneratorBearingDEMap[key]
if Slow_TempGeneratorBearingDE != nil {
ext.Slow_TempGeneratorBearingDE = Slow_TempGeneratorBearingDE.GetFloat64("slow_tempgeneratorbearingde")
ext.Slow_TempGeneratorBearingDECountSecs = Slow_TempGeneratorBearingDE.GetInt("slow_tempgeneratorbearingdecountsecs")
} else {
ext.Slow_TempGeneratorBearingDE = emptyValueBig
ext.Slow_TempGeneratorBearingDECountSecs = 0
}
Slow_TotalReactPowerIn_kVArh := Slow_TotalReactPowerIn_kVArhMap[key]
if Slow_TotalReactPowerIn_kVArh != nil {
ext.Slow_TotalReactPowerIn_kVArh = Slow_TotalReactPowerIn_kVArh.GetFloat64("slow_totalreactpowerin_kvarh")
ext.Slow_TotalReactPowerIn_kVArhCountSecs = Slow_TotalReactPowerIn_kVArh.GetInt("slow_totalreactpowerin_kvarhcountsecs")
} else {
ext.Slow_TotalReactPowerIn_kVArh = emptyValueBig
ext.Slow_TotalReactPowerIn_kVArhCountSecs = 0
}
Slow_TempBottomControlSection := Slow_TempBottomControlSectionMap[key]
if Slow_TempBottomControlSection != nil {
ext.Slow_TempBottomControlSection = Slow_TempBottomControlSection.GetFloat64("slow_tempbottomcontrolsection")
ext.Slow_TempBottomControlSectionCountSecs = Slow_TempBottomControlSection.GetInt("slow_tempbottomcontrolsectioncountsecs")
} else {
ext.Slow_TempBottomControlSection = emptyValueBig
ext.Slow_TempBottomControlSectionCountSecs = 0
}
Slow_TempConv1 := Slow_TempConv1Map[key]
if Slow_TempConv1 != nil {
ext.Slow_TempConv1 = Slow_TempConv1.GetFloat64("slow_tempconv1")
ext.Slow_TempConv1CountSecs = Slow_TempConv1.GetInt("slow_tempconv1countsecs")
} else {
ext.Slow_TempConv1 = emptyValueBig
ext.Slow_TempConv1CountSecs = 0
}
Fast_ActivePowerRated_kW := Fast_ActivePowerRated_kWMap[key]
if Fast_ActivePowerRated_kW != nil {
ext.Fast_ActivePowerRated_kW = Fast_ActivePowerRated_kW.GetFloat64("fast_activepowerrated_kw")
ext.Fast_ActivePowerRated_kWCountSecs = Fast_ActivePowerRated_kW.GetInt("fast_activepowerrated_kwcountsecs")
} else {
ext.Fast_ActivePowerRated_kW = emptyValueBig
ext.Fast_ActivePowerRated_kWCountSecs = 0
}
Fast_NodeIP := Fast_NodeIPMap[key]
if Fast_NodeIP != nil {
ext.Fast_NodeIP = Fast_NodeIP.GetFloat64("fast_nodeip")
ext.Fast_NodeIPCountSecs = Fast_NodeIP.GetInt("fast_nodeipcountsecs")
} else {
ext.Fast_NodeIP = emptyValueBig
ext.Fast_NodeIPCountSecs = 0
}
Fast_PitchSpeed1 := Fast_PitchSpeed1Map[key]
if Fast_PitchSpeed1 != nil {
ext.Fast_PitchSpeed1 = Fast_PitchSpeed1.GetFloat64("fast_pitchspeed1")
ext.Fast_PitchSpeed1CountSecs = Fast_PitchSpeed1.GetInt("fast_pitchspeed1countsecs")
} else {
ext.Fast_PitchSpeed1 = emptyValueBig
ext.Fast_PitchSpeed1CountSecs = 0
}
Slow_CFCardSize := Slow_CFCardSizeMap[key]
if Slow_CFCardSize != nil {
ext.Slow_CFCardSize = Slow_CFCardSize.GetFloat64("slow_cfcardsize")
ext.Slow_CFCardSizeCountSecs = Slow_CFCardSize.GetInt("slow_cfcardsizecountsecs")
} else {
ext.Slow_CFCardSize = emptyValueBig
ext.Slow_CFCardSizeCountSecs = 0
}
Slow_CPU_Number := Slow_CPU_NumberMap[key]
if Slow_CPU_Number != nil {
ext.Slow_CPU_Number = Slow_CPU_Number.GetFloat64("slow_cpu_number")
ext.Slow_CPU_NumberCountSecs = Slow_CPU_Number.GetInt("slow_cpu_numbercountsecs")
} else {
ext.Slow_CPU_Number = emptyValueBig
ext.Slow_CPU_NumberCountSecs = 0
}
Slow_CFCardSpaceLeft := Slow_CFCardSpaceLeftMap[key]
if Slow_CFCardSpaceLeft != nil {
ext.Slow_CFCardSpaceLeft = Slow_CFCardSpaceLeft.GetFloat64("slow_cfcardspaceleft")
ext.Slow_CFCardSpaceLeftCountSecs = Slow_CFCardSpaceLeft.GetInt("slow_cfcardspaceleftcountsecs")
} else {
ext.Slow_CFCardSpaceLeft = emptyValueBig
ext.Slow_CFCardSpaceLeftCountSecs = 0
}
Slow_TempBottomCapSection := Slow_TempBottomCapSectionMap[key]
if Slow_TempBottomCapSection != nil {
ext.Slow_TempBottomCapSection = Slow_TempBottomCapSection.GetFloat64("slow_tempbottomcapsection")
ext.Slow_TempBottomCapSectionCountSecs = Slow_TempBottomCapSection.GetInt("slow_tempbottomcapsectioncountsecs")
} else {
ext.Slow_TempBottomCapSection = emptyValueBig
ext.Slow_TempBottomCapSectionCountSecs = 0
}
Slow_RatedPower := Slow_RatedPowerMap[key]
if Slow_RatedPower != nil {
ext.Slow_RatedPower = Slow_RatedPower.GetFloat64("slow_ratedpower")
ext.Slow_RatedPowerCountSecs = Slow_RatedPower.GetInt("slow_ratedpowercountsecs")
} else {
ext.Slow_RatedPower = emptyValueBig
ext.Slow_RatedPowerCountSecs = 0
}
Slow_TempConv3 := Slow_TempConv3Map[key]
if Slow_TempConv3 != nil {
ext.Slow_TempConv3 = Slow_TempConv3.GetFloat64("slow_tempconv3")
ext.Slow_TempConv3CountSecs = Slow_TempConv3.GetInt("slow_tempconv3countsecs")
} else {
ext.Slow_TempConv3 = emptyValueBig
ext.Slow_TempConv3CountSecs = 0
}
Slow_TempConv2 := Slow_TempConv2Map[key]
if Slow_TempConv2 != nil {
ext.Slow_TempConv2 = Slow_TempConv2.GetFloat64("slow_tempconv2")
ext.Slow_TempConv2CountSecs = Slow_TempConv2.GetInt("slow_tempconv2countsecs")
} else {
ext.Slow_TempConv2 = emptyValueBig
ext.Slow_TempConv2CountSecs = 0
}
Slow_TotalActPowerIn_kWh := Slow_TotalActPowerIn_kWhMap[key]
if Slow_TotalActPowerIn_kWh != nil {
ext.Slow_TotalActPowerIn_kWh = Slow_TotalActPowerIn_kWh.GetFloat64("slow_totalactpowerin_kwh")
ext.Slow_TotalActPowerIn_kWhCountSecs = Slow_TotalActPowerIn_kWh.GetInt("slow_totalactpowerin_kwhcountsecs")
} else {
ext.Slow_TotalActPowerIn_kWh = emptyValueBig
ext.Slow_TotalActPowerIn_kWhCountSecs = 0
}
Slow_TotalActPowerInG1_kWh := Slow_TotalActPowerInG1_kWhMap[key]
if Slow_TotalActPowerInG1_kWh != nil {
ext.Slow_TotalActPowerInG1_kWh = Slow_TotalActPowerInG1_kWh.GetFloat64("slow_totalactpowering1_kwh")
ext.Slow_TotalActPowerInG1_kWhCountSecs = Slow_TotalActPowerInG1_kWh.GetInt("slow_totalactpowering1_kwhcountsecs")
} else {
ext.Slow_TotalActPowerInG1_kWh = emptyValueBig
ext.Slow_TotalActPowerInG1_kWhCountSecs = 0
}
Slow_TotalActPowerInG2_kWh := Slow_TotalActPowerInG2_kWhMap[key]
if Slow_TotalActPowerInG2_kWh != nil {
ext.Slow_TotalActPowerInG2_kWh = Slow_TotalActPowerInG2_kWh.GetFloat64("slow_totalactpowering2_kwh")
ext.Slow_TotalActPowerInG2_kWhCountSecs = Slow_TotalActPowerInG2_kWh.GetInt("slow_totalactpowering2_kwhcountsecs")
} else {
ext.Slow_TotalActPowerInG2_kWh = emptyValueBig
ext.Slow_TotalActPowerInG2_kWhCountSecs = 0
}
Slow_TotalActPowerOutG2_kWh := Slow_TotalActPowerOutG2_kWhMap[key]
if Slow_TotalActPowerOutG2_kWh != nil {
ext.Slow_TotalActPowerOutG2_kWh = Slow_TotalActPowerOutG2_kWh.GetFloat64("slow_totalactpoweroutg2_kwh")
ext.Slow_TotalActPowerOutG2_kWhCountSecs = Slow_TotalActPowerOutG2_kWh.GetInt("slow_totalactpoweroutg2_kwhcountsecs")
} else {
ext.Slow_TotalActPowerOutG2_kWh = emptyValueBig
ext.Slow_TotalActPowerOutG2_kWhCountSecs = 0
}
Slow_TotalG2ActiveHours := Slow_TotalG2ActiveHoursMap[key]
if Slow_TotalG2ActiveHours != nil {
ext.Slow_TotalG2ActiveHours = Slow_TotalG2ActiveHours.GetFloat64("slow_totalg2activehours")
ext.Slow_TotalG2ActiveHoursCountSecs = Slow_TotalG2ActiveHours.GetInt("slow_totalg2activehourscountsecs")
} else {
ext.Slow_TotalG2ActiveHours = emptyValueBig
ext.Slow_TotalG2ActiveHoursCountSecs = 0
}
Slow_TotalReactPowerInG2_kVArh := Slow_TotalReactPowerInG2_kVArhMap[key]
if Slow_TotalReactPowerInG2_kVArh != nil {
ext.Slow_TotalReactPowerInG2_kVArh = Slow_TotalReactPowerInG2_kVArh.GetFloat64("slow_totalreactpowering2_kvarh")
ext.Slow_TotalReactPowerInG2_kVArhCountSecs = Slow_TotalReactPowerInG2_kVArh.GetInt("slow_totalreactpowering2_kvarhcountsecs")
} else {
ext.Slow_TotalReactPowerInG2_kVArh = emptyValueBig
ext.Slow_TotalReactPowerInG2_kVArhCountSecs = 0
}
Slow_TotalReactPowerOut_kVArh := Slow_TotalReactPowerOut_kVArhMap[key]
if Slow_TotalReactPowerOut_kVArh != nil {
ext.Slow_TotalReactPowerOut_kVArh = Slow_TotalReactPowerOut_kVArh.GetFloat64("slow_totalreactpowerout_kvarh")
ext.Slow_TotalReactPowerOut_kVArhCountSecs = Slow_TotalReactPowerOut_kVArh.GetInt("slow_totalreactpowerout_kvarhcountsecs")
} else {
ext.Slow_TotalReactPowerOut_kVArh = emptyValueBig
ext.Slow_TotalReactPowerOut_kVArhCountSecs = 0
}
Slow_UTCoffset_int := Slow_UTCoffset_intMap[key]
if Slow_UTCoffset_int != nil {
ext.Slow_UTCoffset_int = Slow_UTCoffset_int.GetFloat64("slow_utcoffset_int")
ext.Slow_UTCoffset_intCountSecs = Slow_UTCoffset_int.GetInt("slow_utcoffset_intcountsecs")
} else {
ext.Slow_UTCoffset_int = emptyValueBig
ext.Slow_UTCoffset_intCountSecs = 0
}
// log.Printf("%#v \n", tenScada)
mutexX.Lock()
/*if ext.Turbine == "HBR004" {
log.Printf("tenScada: %v | %v | %v | %v \n", ext.ID, ext.TimeStamp.UTC().Format("20060102 15:04"), startTime.Format("20060102 15:04"), idSub.Get("timestampint").(int64))
}*/
err := ctx.Insert(ext)
ErrorHandler(err, "Saving")
mutexX.Unlock()
}
wg.Done()
}(data)
counter++
if endIndex >= countData {
isFinish = true
}
}
wg.Wait()
}
startTime = hpp.GenNext10Minutes(startTime)
}
}
}
}
csr.Close()
log.Println("End Conversion.")
return
}
func (d *ConvThreeExt) getAvg(ctx *DataContext, timestampconverted time.Time, field string) (result []tk.M) {
pipes := []tk.M{}
match := tk.M{
"timestampconverted": timestampconverted,
field: tk.M{"$gt": emptyValueBig},
}
group := tk.M{
"_id": tk.M{
"timestamp": "$timestampsecondgroup",
"projectname": "$projectname",
"turbine": "$turbine",
},
field: tk.M{"$avg": "$" + field},
field + "countsecs": tk.M{"$sum": 1},
}
pipes = append(pipes, tk.M{"$match": match})
pipes = append(pipes, tk.M{"$group": group})
csr, e := ctx.Connection.NewQuery().
From(new(ScadaThreeSecs).TableName()).
Command("pipe", pipes).
Cursor(nil)
if e != nil {
log.Printf("ERR: %#v \n", e.Error())
return
}
// only close the cursor once we know the query succeeded,
// so a failed query cannot leave csr nil when Close is called
defer csr.Close()
e = csr.Fetch(&result, 0, false)
if e != nil {
log.Printf("ERR: %#v \n", e.Error())
}
return
}
}
func (d *ConvThreeExt) getMap(list []tk.M, field string) (result map[string]tk.M) {
result = map[string]tk.M{}
for _, val := range list {
id := val.Get("_id").(tk.M)
timeStamp := id.Get("timestamp").(time.Time)
projectName := id.GetString("projectname")
turbine := id.GetString("turbine")
timeStampStr := timeStamp.UTC().Format("060102_150405")
key := timeStampStr + "#" + projectName + "#" + turbine
value := tk.M{}
var avg float64
var count int
count = val.GetInt(field + "countsecs")
// log.Printf("count: %v | %#v | %v \n", val.GetInt(field+"_count"), key, timeStamp.UTC().String())
if count == 0 {
avg = emptyValueBig
} else {
avg = val.GetFloat64(field)
}
value.Set(field, avg)
value.Set(field+"countsecs", count)
result[key] = value
}
/*if field == "fast_currentl3" {
log.Printf("list: \n%#v \n", result)
}*/
return
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 765
|
Q: CSqlDataProvider sort attributes in CListView I have the following $dataProvider in my controller:
$sql = 'select * from tbl_user order by id_user';
$count = 'select count(*) from tbl_user';
$dataProvider = new CSqlDataProvider($sql, array(
'keyField' => 'id_user',
'totalItemCount' => $count,
'sort' => array(
'attributes' => array(
'name',
),
),
'pagination' => array(
'pageSize' => 15,
),
));
and the CListView in my view:
$this->widget('zii.widgets.CListView', array(
'dataProvider' => $dataProvider,
'sortableAttributes' => array(
'name' => 'Name',
),
));
The "Sort by: Name" link appears, but when I click it nothing happens; that is, the AJAX request fires, but my records are not sorted. What am I missing? Thanks.
A: I have solved the problem with:
$sql = 'select * from tbl_user';
$count = 'select count(*) from tbl_user';
$dataProvider = new CSqlDataProvider($sql, array(
'keyField' => 'id_user',
'totalItemCount' => $count,
'sort' => array(
'attributes' => array(
'name',
),
'defaultOrder' => array('name' => false)
),
'pagination' => array(
'pageSize' => 15,
),
));
The idea is to remove any ORDER BY clause from the SQL statement itself and let CSqlDataProvider handle the sorting.
I own a 70s-vintage V15-3 with an original VN35HE stylus, now defunct. I am very fond of the noise it makes - can Shure supply a replacement stylus, and how much would it cost? Can Shure supply a stylus for the V15-3 or the M75 (ED) series for playing 78s?
At a workshop I led for DreamMaker Bath and Kitchen franchise owners, one franchisee shared how his team had exceeded a customer's expectations. The customer, who had bad remodeling experiences previously, was so pleased with DreamMaker's work they hosted a party for the entire crew, including all of the subcontractors!
This festive celebration was a great reminder of the importance of work that is well done.
In the workplace, celebrations for a job well done can be inspiring and encourage even greater achievements.
Originally published August 22, 2011.
Unit 2.5.2 Strassen's algorithm

So far, we have discussed algorithms for computing $C := AB + C$ via a triple-nested loop that then perform $2mnk$ flops. A question is whether this is the best we can do.

A classic result regarding this is Strassen's algorithm [26]. It shows that, for problems of size $m = n = k = 2^r$ for some integer $r$, matrix-matrix multiplication can be computed in time $O(n^{\log_2 7}) \approx O(n^{2.807})$. Since Strassen proposed this, a succession of results further improved upon the exponent. In this discussion, we stick to the original result.

How can this be? For simplicity, assume $m = n = k$ are all even. Partition

$$C = \begin{pmatrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{pmatrix}, \quad A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}, \quad B = \begin{pmatrix} B_{00} & B_{01} \\ B_{10} & B_{11} \end{pmatrix},$$

where $X_{ij}$ are $n/2 \times n/2$ for $X \in \{C, A, B\}$ and $ij \in \{00, 01, 10, 11\}$. Now that you understand how partitioned matrix-matrix multiplication works, you know that the following computations compute $C := AB + C$:

$$\begin{array}{lcl}
C_{00} &=& \alpha (A_{00} B_{00} + A_{01} B_{10}) + C_{00} \\
C_{01} &=& \alpha (A_{00} B_{01} + A_{01} B_{11}) + C_{01} \\
C_{10} &=& \alpha (A_{10} B_{00} + A_{11} B_{10}) + C_{10} \\
C_{11} &=& \alpha (A_{10} B_{01} + A_{11} B_{11}) + C_{11}.
\end{array}$$

Each of the eight matrix-matrix multiplications requires $2(n/2)^3 = \frac{1}{4} n^3$ flops and hence the total cost is our usual $2n^3$ flops.

Surprisingly, the following computations also compute $C := AB + C$:

$$\begin{array}{lcl}
M_0 &=& (A_{00} + A_{11})(B_{00} + B_{11}); \quad C_{00} \mathrel{+}= M_0; \; C_{11} \mathrel{+}= M_0; \\
M_1 &=& (A_{10} + A_{11}) B_{00}; \quad C_{10} \mathrel{+}= M_1; \; C_{11} \mathrel{-}= M_1; \\
M_2 &=& A_{00}(B_{01} - B_{11}); \quad C_{01} \mathrel{+}= M_2; \; C_{11} \mathrel{+}= M_2; \\
M_3 &=& A_{11}(B_{10} - B_{00}); \quad C_{00} \mathrel{+}= M_3; \; C_{10} \mathrel{+}= M_3; \\
M_4 &=& (A_{00} + A_{01}) B_{11}; \quad C_{01} \mathrel{+}= M_4; \; C_{00} \mathrel{-}= M_4; \\
M_5 &=& (A_{10} - A_{00})(B_{00} + B_{01}); \quad C_{11} \mathrel{+}= M_5; \\
M_6 &=& (A_{01} - A_{11})(B_{10} + B_{11}); \quad C_{00} \mathrel{+}= M_6.
\end{array}$$

If you count carefully, this requires 22 additions of $n/2 \times n/2$ matrices and 7 multiplications with $n/2 \times n/2$ matrices. Adding matrices together requires $O(n^2)$ flops, which are insignificant if $n$ is large. Ignoring this, the cost now becomes

$$7 \times 2(n/2)^3 = 2 \tfrac{7}{8} n^3$$

flops. The cost of the matrix-matrix multiplication is now $7/8$ of what it was before!

But it gets better! Each of the matrix-matrix multiplications can itself be computed via this scheme. If you do that, applying the idea at two levels, the cost is reduced to

$$7 \times ( 7 \times 2(n/4)^3 ) = 2 \left(\tfrac{7}{8}\right)^2 n^3$$

flops. How many times can we do that? If $n = 2^r$ we can halve the size of the matrices $r = \log_2(n)$ times. If you do that, the cost becomes

$$2 \left(\tfrac{7}{8}\right)^r n^3 {\rm ~flops}.$$

Now,

$$\left(\tfrac{7}{8}\right)^{\log_2(n)} 2n^3 = n^{\log_2(7/8)} 2n^3 = 2 n^{\log_2 7} \approx 2 n^{2.807}.$$

Here we ignored the cost of the additions. However, it can be analyzed that this recursive approach requires $O(n^{2.807})$ flops.
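The seven-product scheme and the recursion described above translate directly into code. Here is a minimal NumPy sketch; the leaf-size cutoff is an illustrative tuning choice, not part of the text:

```python
import numpy as np

def strassen(A, B, leaf=64):
    """One Strassen recursion level for square matrices; falls back to
    ordinary multiplication below `leaf` or for odd sizes."""
    n = A.shape[0]
    if n <= leaf or n % 2 != 0:
        return A @ B
    h = n // 2
    A00, A01, A10, A11 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B00, B01, B10, B11 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven products M_0 .. M_6 from the text.
    M0 = strassen(A00 + A11, B00 + B11, leaf)
    M1 = strassen(A10 + A11, B00, leaf)
    M2 = strassen(A00, B01 - B11, leaf)
    M3 = strassen(A11, B10 - B00, leaf)
    M4 = strassen(A00 + A01, B11, leaf)
    M5 = strassen(A10 - A00, B00 + B01, leaf)
    M6 = strassen(A01 - A11, B10 + B11, leaf)
    # Combine according to the update rules listed with each M_i.
    C = np.empty_like(A)
    C[:h, :h] = M0 + M3 - M4 + M6
    C[:h, h:] = M2 + M4
    C[h:, :h] = M1 + M3
    C[h:, h:] = M0 - M1 + M2 + M5
    return C
```

With `leaf=64`, a 128x128 product does one level of Strassen (7 multiplications of 64x64 blocks) and then falls back to ordinary multiplication, matching the two-level cost analysis above when applied recursively.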
package io.github.kszatan.gocd.phabricator.stagingmaterial.handlers.bodies;
public class RevisionData {
@Override
public boolean equals(Object o) {
if (this == o) { return true; }
if (o == null) { return false; }
if (getClass() != o.getClass()) { return false; }
return true;
}
@Override
public int hashCode() {
// Every instance of this field-less class compares equal, so the hash code
// must be constant; super.hashCode() (identity hash) would violate the
// equals/hashCode contract.
return getClass().hashCode();
}
}
Sylvandale is Alive with the Sound of Music
Mr. Richard Lyman, Sylvandale's music director for the last two years, has big plans for his program in the spring and the coming months. Right now his classes are working on a joint concert with Andrew Hill High School. The pop music and choir classes are preparing the popular Singing Valentines that will be presented to peers and staff. Sylvandale is lucky to have the musical chops of Mr. Lyman.
Sylvandale's music program offers instruction in choral singing, pop singing, ukulele, guitar, and traditional concert band instruments at both first- and second-year levels. Right now all of the classes are working toward a joint concert with Andrew Hill High School. The choirs and pop music class will sing a variety of pop, classical, and gospel tunes, while the bands will be performing arrangements of familiar folk songs such as When the Saints Go Marching In and Aura Lee. In addition, the pop music and choir classes are also preparing to deliver singing valentines to their peers and staff at Sylvandale to raise money for the music program. The program is off to a great start for the school year and has great things to come for the Sylvandale staff and students.
Q: echoing in a form after submission I have a form that sends information to a MySQL db. Error messages and a confirmation appear to the right of the form in a separate div, so the form action needs to send the user to the same page. I am echoing the fields the user typed in, in case they need to edit something based on the error message that appears. However, if the form is submitted correctly, how can I make it so the PHP does not echo in the form?
<?php
$submit = filter_input(INPUT_POST, 'submit');
//form data
$fullname = filter_input(INPUT_POST, 'fullname');
$email = filter_input(INPUT_POST, 'email');
$business = filter_input(INPUT_POST, 'business');
$date = date("Y-m-d");
?>
Then, after other HTML code:
<div class="wrap">
<div id="reg">
<form action='register.php' method='POST'>
<table>
<tr>
<td>
Your full name:
</td>
<td>
<input type='text' name='fullname' value='<?php echo $fullname;?>'>
</td>
</tr>
<tr>
<td>
Your email:
</td>
<td>
<input type='text' name='email' value='<?php echo $email;?>'>
</td>
</tr>
<tr>
<td>
Your business:
</td>
<td>
<input type='text' name='business' value='<?php echo $business;?>'>
</td>
</tr>
</table>
<br>
<input type='submit' name='submit' value='Register'>
</form>
<br>
</div>
<?php
if ($submit)
{
//open database
$connect=mysql_connect("localhost","root","Ryweb1994");
mysql_select_db("phapsy");
//check for existence
if($fullname && $email && $business)
{
$queryreg = mysql_query("INSERT INTO users VALUES ('','$fullname','$email','$business','$date')");
echo ("<div id='message'><p>You have been registered! We will send you an email with login information. Thank you for your interest in Phapsy!<p></div>");
}
else echo "Please fill in <b>all</b> fields!";
}
?>
</div>
</div>
A: Just wrap your form around with the $submit variable:
+ <?php if (!$submit) { ?>
<div class="wrap">
...
<form action='register.php' method='POST'>
...
</form>
</div>
+ <?php } ?>
<?php
if ($submit)
{
//open database
$connect=mysql_connect("localhost","root","Ryweb1994");
mysql_select_db("phapsy");
//check for existence
if($fullname && $email && $business)
{
$queryreg = mysql_query("INSERT INTO users VALUES ('','$fullname','$email','$business','$date')");
echo ("<div id='message'><p>You have been registered! We will send you an email with login information. Thank you for your interest in Phapsy!<p></div>");
}
else echo "Please fill in <b>all</b> fields!";
}
?>
</div>
</div>
A: Since you say you are echoing the confirmation, you are obviously testing somehow for successful input - but you don't show that code.
So something like <input type='text' name='fullname' value='<?php if(!$successful){echo $fullname;} ?>'> would do it
A: Put the processing logic before the display logic.
Set a variable that reflects whether the database interaction succeeded.
Update your echo statements throughout the form to only print a value if the interaction failed (or failed to validate):
<?php
$successfullyProcessed = false;
// ... processing logic ...
$successfullyProcessed = true;
?>
<input type='text' name='fullname' value='<?php if (!$successfullyProcessed) echo $fullname; ?>'>
Q: Pricing structure after quota exceeds youtube-v3-api I want to know and understand the pricing structure of the YouTube Data API v3 after the daily quota of 10,000 units is exceeded.
I understand how the quota calculation works.
Before requesting additional quota by filling out this form, I want to understand the pricing structure.
Thanks in advance
\section{Introduction}
Large-scale supervised learning has been the driving force behind advances in visual recognition. Recently, however, there has been a growing number of concerns about the disparate impact of these visual recognition systems. Face recognition systems trained from datasets with an underrepresentation of certain racial groups have exhibited lower accuracy for those groups~\cite{BG18IntersectionalDataset}. Activity recognition models trained on datasets with high correlations between the activity and the gender expression of the depicted person have over-amplified those correlations~\cite{ZWYOC17BiasAmp}. Computer vision systems are statistical models that are trained to maximize accuracy on the majority of examples, and they do so by exploiting the most discriminative cues in a dataset, potentially learning spurious correlations.
In this work, we introduce a new framework for training computer vision models that aims to mitigate such concerns, illustrated in Figure~\ref{fig:pullfig}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/PullFigure_large.png}
\caption{Training a visual classifier for an attribute (e.g., \texttt{hat}) can be complicated by correlations in the training data. For example, the presence of hats can be correlated with the presence of glasses. We propose a dataset augmentation strategy using Generative Adversarial Networks (GANs) that successfully removes this correlation by adding or removing glasses from existing images, creating a balanced dataset.
}
\label{fig:pullfig}
\end{figure}
One proposed path for building `fairer' computer vision systems is through a `fairer' data collection process. Works such as \cite{BG18IntersectionalDataset,YQFDR20BalancingImagenet} propose techniques for better sampling data to more accurately represent all people. Creating a perfectly balanced dataset, however, is infeasible in many cases.
With the advances in Generative Adversarial Networks (GANs)~\cite{GPMXWOCB14GANs}, several works propose using generated data to augment real-world datasets~\cite{GCSE19FairModeling,SHCV19FairnessGAN,XYZW18Fairgan}. These methods have been growing in computational and algorithmic complexity (e.g., \cite{SHCV19FairnessGAN,XYZW18Fairgan} adding multiple loss functions to GAN training), necessitating access to a sufficient number of inter-sectional real-world samples. In contrast, we demonstrate a simple and novel data augmentation technique that uses a single GAN trained on a biased real-world dataset.
\smallsec{Illustrative example}
Consider our example from Figure~\ref{fig:pullfig}. Our goal is to train a visual recognition model that recognizes the presence of an attribute, such as wearing a hat. Suppose in the real world wearing a hat is correlated with wearing glasses---for example, because people often wear both hats and sunglasses outside and take them off inside. This correlation may be reflected in the training data, and a classifier trained to recognize a hat may rely on the presence of glasses. Consequently, the classifier may fail to recognize a hat in the absence of glasses, and vice versa.
We propose using a GAN to generate more images with hats but not glasses and images with glasses but not hats, such that \texttt{WearingHat} is de-correlated from \texttt{Glasses} in the training data, by making perturbations in the latent space.
Building on work by Denton et al.~\cite{DHMG19}, which demonstrates a method for learning interpretable image manipulation directions, we propose an improved latent vector perturbation method that allows us to preserve the \texttt{WearingHat} attribute while changing the \texttt{Glasses} attribute (Figure~\ref{fig:changeGlasses}).
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{Images/glasses_baseline.png} \\
\includegraphics[width=0.9\linewidth]{Images/glasses_perp.png}
\caption{Consider a GAN trained on a biased real-world dataset of faces where the presence of hats is correlated with the presence of glasses. Naively moving in a direction that adds glasses also adds a hat (\emph{Top}). We learn a direction in the latent space that allows us to add glasses, while not adding a hat (\emph{Bottom}). Note that attributes apart from the target attribute can change.}
\label{fig:changeGlasses}
\end{figure}
\smallsec{Protected attributes} Our goal is to examine and mitigate biases of sensitive attributes such as gender expression, race, or age in visual classifiers. However, visual manipulations or explicit classifications along these dimensions have the potential to perpetuate harmful stereotypes (see ~\cite{google_VB}). Hence in our illustrations, we use \texttt{Glasses} as the protected attribute, as it has a clear visual signal.
In the quantitative experimental results, we report our findings on the more sensitive protected attributes of gender expression and age.
\smallsec{Contributions}
We propose a method for perturbing vectors in the GAN latent space that successfully de-correlates target and protected attributes and allows for generating a de-biased dataset, which we use to augment the real-world dataset.
Attribute classifiers trained with the augmented dataset achieve quantitative improvements in several fairness metrics over both baselines and prior work~\cite{SHCV19FairnessGAN,Sharmanska2020contrastive,WQKGNHR19DomainBiasMitigation}, while maintaining comparable average precision.
Furthermore, we analyze the CelebA~\cite{LLWT15CelebA} attributes with respect to label characteristics\footnote{We observe several discrepancies in the CelebA~\cite{LLWT15CelebA} attribute labels and categorize the attributes into three categories: inconsistently labeled, gender-dependent, and gender-independent.}, discriminability, and skew, and discuss how these factors influence our method's performance.
We also evaluate our design choices with ablation studies and the results demonstrate the effectiveness of our augmentation method.\footnote{Code for all our experiments can be found at \url{https://github.com/princetonvisualai/gan-debiasing}.}
\section{Related Work}
\smallsec{De-biasing models}
The effect of gender and racial bias on AI models has been well documented~\cite{BCZSK16DebiasWord,BG18IntersectionalDataset,HBSDR18BiasCaptioning,WZYCO19BalancedDatasets,WQKGNHR19DomainBiasMitigation}.
Models trained on biased data sometimes even amplify the existing biases~\cite{ZWYOC17BiasAmp}.
Tools such as AI Fairness 360~\cite{aif360-oct-2018} and REVISE~\cite{wang2020revise} surface such biases in large-scale datasets and enable preemptive analysis.
In parallel, various work propose methods for mitigating unwanted dataset biases from influencing the model.
Oversampling techniques~\cite{bickel09discriminative,elkan01CostSensitiveLearning} duplicate minority samples in imbalanced data to give them higher weight in training.
Some work propose to mitigate bias through adversarial learning~\cite{WZYCO19BalancedDatasets,ZLM18AdversarialLearning} or through learning separate classifiers for each protected attribute~\cite{RAM17inclusivefacenet,WQKGNHR19DomainBiasMitigation}.
Other work improve fairness by introducing constraints~\cite{lokh2020fairalm} or regularization terms~\cite{Baharlouei_ICLR2020} during training.
Contrary to these algorithmic approaches, our work aims to mitigate biases by training the model with a generated de-biased dataset.
\smallsec{Generating and perturbing images using GANs}
Generative Adversarial Network (GAN)~\cite{GPMXWOCB14GANs} is a popular class of generative models composed of a generator and a discriminator trained in an adversarial setting.
Over the past few years, a number of works \cite{GAADC17WassersteinTraining,KALL17PGAN,KLA19StyleGAN,liu2020selfconditioned,SGZCRC16TrainingGAN} improved GANs to generate more realistic images with better stability.
Shen et al.~\cite{SGTZ20LatentSpaceGANs} show that the latent space of GANs have semantic meaning and demonstrate facial attributes editing through latent space manipulation.
Denton et al.~\cite{DHMG19} propose a method to evaluate how sensitive a trained classifier is to such image manipulations, and find several attributes that affect a smiling classifier trained on CelebA.
Balakrishnan et al.~\cite{Balakrishnan2020transect} use GANs to generate synthetic images that differ along specific attributes while preserving other attributes, and use them to measure algorithmic bias of face analysis algorithms.
Unlike~\cite{Balakrishnan2020transect,DHMG19} who use the GAN-generated images to evaluate models, our work uses these generated images to train better attribute classification models.
\smallsec{Using GANs to augment datasets}
Several works use GANs to augment datasets for low-shot~\cite{HG17LowShotRecognition} and long-tail~\cite{ZLQL17EmotionCycleGAN} recognition tasks, whereas our work focuses specifically on de-biasing classifiers affected by dataset bias.
More related to our work are~\cite{GCSE19FairModeling,SHCV19FairnessGAN,Sharmanska2020contrastive} which leverage GANs to generate less biased data.
Choi et al.~\cite{GCSE19FairModeling}, given access to a small, unlabeled, and unbiased dataset, detect bias in a large and potentially biased dataset, and learn a generator that generates unbiased data at test time.
Sattigeri et al.~\cite{SHCV19FairnessGAN} train a GAN with a modified loss function to achieve demographic parity or equality of odds in the generated dataset.
Sharmanska et al.~\cite{Sharmanska2020contrastive} use an image-to-image translation GAN to generate more minority samples and create a balanced dataset.
While~\cite{GCSE19FairModeling,SHCV19FairnessGAN,Sharmanska2020contrastive} require training a new GAN for each bias they want to correct, our method uses a single GAN trained on a biased dataset to augment all attributes.
\section{Method} \label{sec:method}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{Images/MethodFigure_Latent.png}
\caption{\emph{(Top left)} Our latent vector perturbation method.
For each $\mathbf{z}$ sampled from the latent space of a trained GAN, we compute $\mathbf{z}'$ such that its target attribute score remains the same (according to $\mathbf{w_t}$) while its protected attribute score is negated (according to $\mathbf{w_g}$). \emph{(Top right)} We add images $G(\mathbf{z})$ and $G(\mathbf{z}')$ to our training set, and train a target attribute classifier on both the real-world data and the generated de-biased data.}
\label{fig:Method}
\end{figure}
We study a class of problems where a protected attribute is correlated with a target label in the data $\mathcal{X}$, influencing target label prediction.
Let $t$ be the target label (e.g., \texttt{WearingHat} in the running example from Figure~\ref{fig:pullfig}) and $g$ be the protected attribute (e.g., gender expression or \texttt{Glasses} from our running example) with $t,g\in\{-1,1\}$.
To mitigate the effect of unwanted dataset bias, we aim to generate a balanced set of synthetic images $\mathcal{X_\textit{syn}}$ where the protected attribute and target label are de-correlated.
Concretely, let $f_t$ be a function from images to binary labels that approximates the target label $t$, and $f_g$ be a function from images to binary labels that approximates the protected attribute $g$.
We learn these classifiers in a supervised fashion with the original data.\footnote{$f_t$ is equivalent to the baseline classifier in Section~\ref{sec:baseline}.}
We now want to generate synthetic data $\mathcal{X_\textit{syn}}$ with the property that for $\mathbf{x} \in \mathcal{X_\textit{syn}}$:
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
P\left[{f_t}(\mathbf{x}) = 1 | {f_g}(\mathbf{x}) = 1 \right] = P\left[{f_t}(\mathbf{x}) =1 \right],
\label{eq:indep}
\end{equation}
\endgroup
such that attributes $t$ and $g$ are de-correlated.
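As a toy numerical reading of this independence condition (values are illustrative, not from the paper), it can be checked empirically on binary predictions:

```python
import numpy as np

# Hypothetical binary classifier outputs on eight images (labels in {-1, 1}).
f_t = np.array([1, 1, -1, 1, -1, -1, 1, -1])   # target attribute predictions
f_g = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # protected attribute predictions

p_t = np.mean(f_t == 1)                     # P[f_t(x) = 1]
p_t_given_g = np.mean(f_t[f_g == 1] == 1)   # P[f_t(x) = 1 | f_g(x) = 1]
decorrelated = np.isclose(p_t, p_t_given_g)
```

Here both probabilities equal 0.5, so the toy set satisfies the condition; a skewed dataset would make the conditional probability drift away from the marginal.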
\smallsec{De-biased dataset creation}
To create $\mathcal{X_\textit{syn}}$, we use a GAN trained on real images $\mathcal{X}$ whose generator $G$ generates a synthetic image $\mathbf{x}$ from a random latent vector $\mathbf{z} \in \mathcal{Z}$.
We can assign semantic attribute labels to these images using the learned functions ${f_t}(\mathbf{x})$ and ${f_g}(\mathbf{x})$.
However, as the GAN inherits correlations from its training data, a random sampling of $\mathbf{z}$ will produce an $\mathcal{X_\textit{syn}}$ with similar correlations and biases as $\mathcal{X}$.
Hence, we propose a latent vector perturbation method that allows us to generate a de-biased $\mathcal{X_\textit{syn}}$.
We sample a random set of latent vectors $Z \subset \mathcal{Z}$ (inheriting the biases) and train classifiers $h_t, h_g \colon \mathcal{Z} \rightarrow [\num{-1}, 1]$ in the latent space that approximate ${f_t} \circ G$ and ${f_g} \circ G$, respectively.
That is, we train classifiers $h_t$ with input $\mathbf{z}$ and output ${f_t}(G(\mathbf{z}))$, and $h_g$ with input $\mathbf{z}$ and output ${f_g}(G(\mathbf{z}))$.
Given a vector $\mathbf{z}$, we generate a complementary vector $\mathbf{z}'$ with the same (predicted) target label but the opposite (predicted) protected attribute label, or
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\label{eq:pair_generation}
h_t(\mathbf{z}') = h_t(\mathbf{z}), \; \; \; h_g(\mathbf{z}') = -h_g(\mathbf{z}).
\end{equation}
\endgroup
We note that this data generation method is agnostic to the type of classifier used to compute $h$.
In our work, we assume that the latent space is approximately linearly separable in the semantic attributes, as observed and empirically validated by Denton et al.~\cite{DHMG19}. In this case, $h_t$ and $h_g$ can be represented as linear models (hyperplanes) $\mathbf{w_t}$ and $\mathbf{w_g}$ with intercepts $b_t$ and $b_g$ for the target and protected attributes respectively. We can derive a closed-form solution for $\mathbf{z}'$ as\footnote{Derivations are in the appendix (Section~\ref{sec:derivation}).
$\|\mathbf{w_t}\| = \|\mathbf{w_g}\| = 1$.}
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\mathbf{z}' = \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right).
\end{equation}
\endgroup
This latent vector perturbation method is illustrated in Figure~\ref{fig:Method} (\emph{Top left}). A similar idea of hyperplane projection was presented in Zhang et al.~\cite{ZLM18AdversarialLearning}, although for a different goal of adversarial training.
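As a sanity check of the closed form (ours, not part of the paper; dimensions and intercepts below are illustrative), the perturbation preserves the target score and negates the protected score:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                   # illustrative latent dimension

# Unit-norm hyperplane normals for target (w_t) and protected (w_g) attributes.
w_t = rng.normal(size=d); w_t /= np.linalg.norm(w_t)
w_g = rng.normal(size=d); w_g /= np.linalg.norm(w_g)
b_g = 0.3                                # hypothetical intercept for h_g

def perturb(z):
    """z' = z - 2 * (w_g.z + b_g) / (1 - (w_g.w_t)^2) * (w_g - (w_g.w_t) w_t)."""
    c = w_g @ w_t
    alpha = (w_g @ z + b_g) / (1 - c**2)
    return z - 2 * alpha * (w_g - c * w_t)

z = rng.normal(size=d)
z_prime = perturb(z)

# h_t(z') = h_t(z): the intercept b_t cancels, so compare the linear parts.
assert np.isclose(w_t @ z_prime, w_t @ z)
# h_g(z') = -h_g(z).
assert np.isclose(w_g @ z_prime + b_g, -(w_g @ z + b_g))
```

The first assertion holds because the perturbation direction $\mathbf{w_g} - (\mathbf{w_g}^T\mathbf{w_t})\mathbf{w_t}$ is orthogonal to $\mathbf{w_t}$; the second because its component along $\mathbf{w_g}$ is scaled to exactly flip the signed distance to the protected hyperplane.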
The sampling process results in a complementary image pair:
\begin{itemize}[topsep=1pt, itemsep=1pt, leftmargin=*]
\item $\mathbf{x}=G(\mathbf{z})$ with target label ${f_t}(G(\mathbf{z}))$ and protected attribute label ${f_g}(G(\mathbf{z}))$
\item $\mathbf{x}'=G(\mathbf{z}')$ with target label ${f_t}(G(\mathbf{z}))$ and protected attribute label $-{f_g}(G(\mathbf{z}))$,
\end{itemize}
creating de-biased data $\mathcal{X}_{syn}$. We train our target attribute classifier with $\mathcal{X}$ and $\mathcal{X}_{syn}$, as shown in Figure~\ref{fig:Method}.
We label the generated images $\mathbf{x}$ and $\mathbf{x}'$ both with ${f_t}(\mathbf{x})$ because it allows us to capture the target attribute labels better than using ${f_t}(\mathbf{x})$ and ${f_t}(\mathbf{x}')$.
It is likely that the accuracy of ${f_t}$ is higher for the overrepresented group, and $\mathbf{x}$ will more often belong to the overrepresented group and $\mathbf{x}'$ to the underrepresented group.
However, other design choices are possible in our approach---for example, we could use $h_t(\mathbf{z})$ and $h_t(\mathbf{z}')$ instead (after thresholding appropriately) or only use $\mathbf{z}$ for which ${f_t}(\mathbf{x}) = {f_t}(\mathbf{x}')$.
We compare these different design choices experimentally in Section~\ref{sec:designchoices}.
\smallsec{Advantages}
Our data augmentation method has several attractive properties:
\begin{enumerate}[topsep=1pt, itemsep=1pt, leftmargin=*]
\item We use a single GAN trained on the biased real-world dataset to augment multiple target labels and protected attributes. This is in contrast to prior works like \cite{SHCV19FairnessGAN,GCSE19FairModeling} that require training a GAN for every pair of target and protected attributes.
\item By augmenting samples $\mathbf{z}$ generated from (approximately) the original data distribution the GAN was trained on and maintaining their target attribute scores, our method preserves the intra-class variation of the images.
\item The samples $\mathbf{z}$ and $\mathbf{z}'$ are generated to simulate the independence goal of Equation~\ref{eq:indep}. By construction, $\mathbf{z}'$ maintains $\mathbf{z}$'s target label ${f_t}(G(\mathbf{z}))$ and takes on the opposite protected attribute label $-{f_g}(G(\mathbf{z}))$.
\item Our method generalizes to multiple protected attributes $g$. We demonstrate how our method can simultaneously augment two protected attributes in Section~\ref{sec:comparisons_recent} when we compare our work to Sharmanska et al.~\cite{Sharmanska2020contrastive}.
\end{enumerate}
\section{Experiments} \label{sec:exp}
In this section, we study the effectiveness of our data augmentation method on training fairer attribute classifiers. We first describe our experiment setup and compare our results to those of a baseline classifier. We then discuss how different factors influence our method's performance, and finally compare our work to several prior works.
\smallsec{Dataset and attributes categorization}
Given the task of training attribute classifiers that are not dependent on gender expression, we require a dataset that has target labels, as well as gender expression labels.
CelebA~\cite{LLWT15CelebA} is a dataset with 202,599 images of celebrity faces, each with 40 binary attribute labels.
We assume the \texttt{Male} attribute corresponds to gender expression.\footnote{Consistent with the dataset annotation and with the literature, we adopt the convention of using \texttt{Male} as our protected attribute. It is not clear if this label denotes assigned sex at birth, gender identity, or gender expression (socially perceived gender). Since the images were labeled by a professional labeling company~\cite{LLWT15CelebA}, we assume that the annotation refers to the perceived gender, or gender expression. Moreover, this attribute is annotated in a binary fashion. We would like to point out that none of these attributes (assigned sex at birth, gender identity, nor gender expression) are binary, however, we use these labels as is for our goal of de-biasing classifiers.}
Among the other 39 attributes, we use 26 of them that have between 1\% and 99\% fraction of positive images for each gender expression.\footnote{We don't use \texttt{Blurry} as it has very few positive images ($\approx 5\%$).
We don't use \texttt{WearingNecklace} as the cropped images used in the GAN from \cite{PytorchPGAN} don't display the neck.} However, we noticed several discrepancies among the attribute labels, and decided to categorize the attributes into three categories: \textit{inconsistently labeled}, \textit{gender-dependent}, and \textit{gender-independent}.
We categorized attributes as \textit{inconsistently labeled} when we visually examined sets of examples and found that we often disagreed with the labeling and could not distinguish between positive and negative examples.
This category includes \texttt{StraightHair} shown in Figure \ref{fig:celeba_straighthair}, as well as \texttt{BigLips}, \texttt{BigNose}, \texttt{OvalFace}, \texttt{PaleSkin}, and \texttt{WavyHair}.\footnote{We note that for \texttt{BigNose}, we found that while there were some images that were easy to classify as having a big nose, or not having a big nose, most images were between these two extremes, and we believe that different annotators marked these `in-between' images differently. The same is true for the attribute \texttt{BigLips}.} While we report results on these attributes for completeness in Section~\ref{sec:baseline}, classifiers trained on these attributes may behave erratically.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Images/straighthair_shuffled.png}
\caption{Examples of CelebA \texttt{StraightHair} labels.
Some of these are labeled as having \texttt{StraightHair} (1st, 3rd, 5th) and some as not (2nd, 4th, 6th).
We deemed this attribute as \emph{inconsistently labeled}.}
\label{fig:celeba_straighthair}
\end{figure}
Of the remaining attributes with more consistent labeling, we found that some attribute labels are \emph{gender-dependent}. That is, images are labeled to have (or not have) these attributes based on the perceived gender.
For example in Figure~\ref{fig:celeba_young}, we observe that the images labeled as \texttt{Young} and \texttt{Male} appear much older than the images labeled as \texttt{Young} and \texttt{not Male}. Other attributes in this category are \texttt{ArchedBrows}, \texttt{Attractive}, \texttt{BushyBrows}, \texttt{PointyNose} and \texttt{RecedingHair}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Images/young_contrast.png}
\caption{Examples of CelebA \texttt{Young} labels.
The first three images are labeled \texttt{Male}, \texttt{Young} while the last three images are labeled \texttt{not Male}, \texttt{not Young}, even though the first three appear older than the last three.
We deemed this attribute as \emph{gender-dependent}.}
\label{fig:celeba_young}
\end{figure}
The \textit{gender-independent} attribute labels appear to be reasonably consistent among annotators, and do not appear to depend on the gender expression. We classified 14 attributes into this category: \texttt{Bangs}, \texttt{BlackHair}, \texttt{BlondHair}, \texttt{BrownHair}, \texttt{Chubby}, \texttt{Earrings}, \texttt{EyeBags}, \texttt{Glasses}, \texttt{GrayHair}, \texttt{HighCheeks}, \texttt{MouthOpen}, \texttt{NarrowEyes}, \texttt{Smiling}, and \texttt{WearingHat}. While we use the label `gender-independent' we note that these attributes can still be correlated with gender expression---for example \texttt{Earrings} are much more common among images labeled as \texttt{not Male} than those labeled as \texttt{Male}.
\smallsec{Implementation details}
To generate images, we use a Progressive GAN~\cite{KALL17PGAN} with a 512-D latent space trained on the CelebA~\cite{LLWT15CelebA} training set from the PyTorch GAN Zoo~\cite{PytorchPGAN}.
We use 10,000 synthetic images, labeled with baseline attribute classifiers, and learn hyperplanes ($h_t$, $h_g$) in the latent space with scikit-learn's~\cite{scikit-learn} linear SVM implementation.
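As a sketch, the hyperplane-fitting step might look as follows (synthetic stand-in data; in the actual pipeline, the inputs would be the 10,000 latent vectors and the attribute labels hallucinated by the baseline classifiers):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-ins for the real pipeline: 512-D latent vectors and binary
# attribute labels assigned by a baseline classifier to G(z).
latents = rng.standard_normal((1000, 512))
true_w = rng.standard_normal(512)
labels = (latents @ true_w > 0).astype(int)

svm = LinearSVC(C=1.0, max_iter=5000)
svm.fit(latents, labels)

w_t = svm.coef_[0]        # hyperplane normal (w_t in the text)
b_t = svm.intercept_[0]   # offset            (b_t in the text)
train_acc = svm.score(latents, labels)
```

The same fit, with protected-attribute labels, yields $(\mathbf{w}_g, b_g)$.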
For all attribute classifiers, we use ResNet-50~\cite{HZRS16ResNet} pre-trained on ImageNet~\cite{Russakovskyplus15Imagenet} as the base architecture. We replace the linear layer in ResNet with two linear layers with the hidden layer of size 2,048. Dropout and ReLU are applied between these. The inputs are $64{\times}64$ images and their target attribute labels. We train all models with the binary cross entropy loss for 20 epochs with a batch size of 32.
We use the Adam~\cite{KB14Adam} optimizer with a learning rate of 1e-4. We save the model with the smallest loss on a validation set that has the same distribution as the training set.
The baseline model is trained on the CelebA training set $\mathcal{X}$ with 162,770 images. Our model is trained on $\mathcal{X}$ and the balanced synthetic dataset $\mathcal{X}_{syn}$ (160,000 pairs of images).\footnote{We trained classifiers using different numbers of synthetic pairs for 4 different attributes, and found that AP stabilizes after 160,000 pairs, which is what we used to train our classifiers.} Results are reported on the CelebA test set unless noted otherwise. Error bars are 95\% confidence intervals estimated through bootstrapping. We note that we use a single GAN to construct the de-biased dataset for each target attribute, and then train separate classifiers for each target attribute. We also emphasize that protected attribute labels are only used in learning $h_g$ and in evaluation.
\smallsec{Evaluation Metrics}
We use \emph{average precision (AP)} to measure the accuracy of the classifiers. AP is a threshold-invariant accuracy metric that summarizes the precision and recall curve. We use this metric to ensure that our models learn a reasonable classification rule. AP, however, does not capture a classifier's behavior on different protected classes, and in fact, we expect to see a slight dip in overall AP when our model improves on some of the fairness metrics.
Multiple metrics have been proposed to measure fairness of a model~\cite{HPS16EqualityOdds,ZVGG17EqualityOpportunity,ZWYOC17BiasAmp,CKP09DemographicParity,chen2020riskdistribution} and each of these measures a different notion of fairness.
In our work, we use three metrics for comprehensive understanding.
First, we measure the \emph{difference in equality of opportunity (DEO)}, i.e. the absolute difference between the false negative rates for both gender expressions, as in Lokhande et al.~\cite{lokh2020fairalm}\footnote{In our experiments, we choose a calibrated threshold on the validation set, i.e., a threshold that ensures that we make the same number of positive predictions as the ground truth, to compute both DEO and BA. We tried other ways of choosing the threshold, such as choosing the one that gives the best $F_1$ score on a validation set, and while the values varied, they did not change our findings.}.
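A sketch of the calibrated threshold and DEO computation (illustrative; the scores, labels, and group labels below are random stand-ins):

```python
import numpy as np

def calibrated_threshold(scores, y_true):
    # Threshold at which the number of positive predictions equals
    # the number of ground-truth positives (assumes at least one positive).
    k = int(y_true.sum())
    return np.sort(scores)[::-1][k - 1]

def deo(scores, y_true, g, thresh):
    # Absolute difference in false negative rates between the two
    # gender expression groups (g in {-1, +1}).
    fnrs = []
    for grp in (-1, 1):
        pos = (g == grp) & (y_true == 1)
        fnrs.append(np.mean(scores[pos] < thresh))
    return abs(fnrs[0] - fnrs[1])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
g = rng.choice([-1, 1], 1000)
scores = y + 0.5 * rng.standard_normal(1000)  # noisy classifier scores
t = calibrated_threshold(scores, y)
gap = deo(scores, y, g, t)
```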
As our second fairness metric, we use the \emph{bias amplification (BA)} metric proposed by Wang and Russakovsky~\cite{wang2021directional}.
Intuitively, BA measures how much more often a target attribute is predicted with a protected attribute than the ground truth value. Let $P_{t|g}$ be the fraction of images with protected attribute $g$ that have target attribute $t$, ${P}_{\hat{t}| g}$ be the fraction of images with protected attribute $g$ that are predicted to have target attribute $t$, $P_{t,g}$ be the fraction of images with target $t$ and protected attribute $g$, and $P_{t}$ and $P_g$ be the fraction of images with attribute $t$ and $g$ respectively. For each pair of target and protected attribute values, we add $(P_{t|g} - P_{\hat{t}|g})$ if $P_{t,g}>P_{t}P_{g}$ and $-(P_{t|g} - P_{\hat{t}|g})$ otherwise. A negative value implies that bias now exists in a different direction than in the training data.
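The BA computation just described can be sketched as follows (a sketch for a single binary target attribute, following the sign convention above):

```python
import numpy as np

def bias_amplification(y_true, y_pred, g):
    # Sum over target/protected value pairs of +/-(P_{t|g} - P_{t_hat|g}),
    # with the sign set by whether t and g are positively correlated.
    total = 0.0
    for t_val in (0, 1):
        for g_val in (-1, 1):
            grp = (g == g_val)
            P_t_given_g = np.mean(y_true[grp] == t_val)      # P_{t|g}
            P_that_given_g = np.mean(y_pred[grp] == t_val)   # P_{t_hat|g}
            P_tg = np.mean((y_true == t_val) & grp)          # P_{t,g}
            P_t = np.mean(y_true == t_val)
            P_g = np.mean(grp)
            sign = 1.0 if P_tg > P_t * P_g else -1.0
            total += sign * (P_t_given_g - P_that_given_g)
    return total

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
g = rng.choice([-1, 1], 500)
# A perfect predictor amplifies no bias.
assert bias_amplification(y, y, g) == 0.0
```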
Both DEO and BA fluctuate based on the chosen classification threshold.
Hence, as our final fairness metric, we use a threshold-invariant metric that measures the \emph{divergence between score distributions (KL)}~\cite{chen2020riskdistribution} defined as follows: Suppose $s_{g,t}$ represents a smoothed histogram of classifier scores of a certain protected attribute label and a target label,
appropriately normalized as a probability distribution of the scores. For each target attribute label $t$,
we measure $KL\big[s_{g=\num{-1},t}\|s_{g=1,t}\big] + KL\big[s_{g=1,t}\|s_{g=\num{-1},t}\big]$. That is, we measure the divergence of $g{=}\num{-1}$ and $g{=}1$ score distributions, separately for positive and negative attribute samples. This is a stricter notion of \emph{equalized odds}~\cite{HPS16EqualityOdds}.
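A sketch of this divergence metric, where a simple Gaussian histogram smoother stands in for the kernel smoothing (binning choices are illustrative):

```python
import numpy as np

def smoothed_hist(scores, bins=50, sigma=1.5, eps=1e-6):
    # Histogram of scores in [0, 1], Gaussian-smoothed and normalized
    # into a probability distribution (the s_{g,t} of the text).
    h, _ = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    xs = np.arange(-3, 4)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    h = np.convolve(h.astype(float), kernel / kernel.sum(), mode="same")
    return (h + eps) / (h + eps).sum()

def sym_kl(p, q):
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def score_divergence(scores, y_true, g, t_val):
    # Divergence between g=-1 and g=+1 score distributions,
    # restricted to samples with target label t_val.
    s_neg = smoothed_hist(scores[(g == -1) & (y_true == t_val)])
    s_pos = smoothed_hist(scores[(g == 1) & (y_true == t_val)])
    return sym_kl(s_neg, s_pos)

rng = np.random.default_rng(0)
y = np.ones(2000, dtype=int)
g = rng.choice([-1, 1], 2000)
fair = rng.uniform(0, 1, 2000)  # scores independent of g: low divergence
biased = np.clip(0.5 + 0.2 * g + 0.1 * rng.standard_normal(2000), 0, 1)
```

The reported KL aggregates this quantity over the target labels.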
\subsection{Comparison with the baseline} \label{sec:baseline}
To start, we compare our model (i.e. target classifiers trained using both the balanced synthetic datasets $\mathcal{X}_{syn}$ and the real dataset $\mathcal{X}$) with a baseline model trained using just $\mathcal{X}$.
In Table~\ref{tab:baseline}, we show results on the four metrics, averaged for each of the three attribute categories.
As expected, our model performs better on all three fairness metrics, DEO, BA and KL, while maintaining comparable AP. For gender-independent attributes, AP drops from 83.9 to 83.0, while DEO improves from 16.7 to 13.9, BA improves from 0.3 to 0.0 and KL improves from 1.1 to 0.9.
For gender-dependent attributes, the fairness metrics improve over the baseline, but the improvements are smaller compared to those of gender-independent attributes.
Later in Section~\ref{sec:extensions}, we demonstrate an extension of our augmentation method with an improved performance on the gender-dependent attributes.
Additionally, we conduct score change evaluations suggested by Denton et al.~\cite{DHMG19} and measure the change in target attribute score as we perturb the protected attribute in images. Specifically, we measure the classifier score difference between $G(\mathbf{z})$ and $G(\mathbf{z}')$. This evaluation helps understand how the protected attribute influences a trained classifier's output.
We find that the model trained with our augmentation method consistently has a smaller change in score than the baseline: 0.09 vs. 0.12 for inconsistently labeled, 0.07 vs. 0.11 for gender-dependent, and 0.06 vs. 0.09 for gender-independent attributes.
We also observe that the baseline score changes are higher when we try to construct underrepresented samples.
Consider the attribute \texttt{ArchedBrows} where only 2.3\% of the training set images are labeled to have \texttt{ArchedBrows}, and appear masculine.
When we construct a $\mathbf{z}'$ with this target and protected value, the baseline classifier's score changes by 0.41.
On the other hand, when we try to construct an image that is without \texttt{ArchedBrows} and appears feminine, which comprises 33.7\% of the training set, the baseline classifier score only changes by 0.094. This could be due to the errors that the baseline classifier makes on underrepresented images during synthetic image labeling, or could imply that underrepresented attributes are harder to maintain during image manipulations.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{\input{baseline_comp}}
\caption{
Comparison of our model (i.e. attribute classifier trained with our data augmentation method) to the baseline model. Arrows indicate which direction is better. Numbers are averages over all attributes within the specific category. As expected, we have slightly lower AP than the baseline, but perform better on the three fairness metrics, DEO, BA, and KL.}
\label{tab:baseline}
\end{table}
We next examine several factors that could influence our method, including how easy the protected attribute is to learn compared to the target attribute and how data skew affects our method. We discuss the former here and provide more information about the latter in the appendix (Section~\ref{sec:factors}).
\smallsec{Discriminability of attributes}
Nam et al.~\cite{nam2020learning} recently observed that correlations among attributes affect a classifier only if the protected attribute is `easier' to learn than the target attribute.
Inspired by their observation, we conduct a two-step experiment to understand how the relative discriminability of attributes affects our method's effectiveness.
First, we put a pair of CelebA attributes in competition to assess their relative discriminability. Experiment details are in the appendix. We find that gender expression is one of the easiest attributes to learn (\texttt{Gender} is easier than all but \texttt{Glasses} and \texttt{WearingHat}), which may be why gender bias is prevalent in many models. On the other hand, \texttt{Young} is relatively hard for a model to learn (\texttt{Young} is harder to learn than all but 4 other attributes), so its correlation with other attributes may not be as influential.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc|cc|cc|}
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Protected\\ Attribute\end{tabular}} & \multicolumn{6}{c|}{Improvement over baseline $\uparrow$} \\
\cline{2-7}
& \multicolumn{2}{c|}{DEO} & \multicolumn{2}{c|}{BA} & \multicolumn{2}{c|}{KL} \\
\cline{2-7}
& Easy & Hard & Easy & Hard & Easy & Hard \\
\hline
{\tt Glasses} (0, 19) & -- & \textbf{4.1} & -- & \textbf{0.9} & -- & \textbf{0.0} \\
{\tt Gender} (2, 17) & 0.8 & \textbf{3.2} & 0.0 & \textbf{0.4} & -0.2 & \textbf{0.2} \\
{\tt Young} (15, 4) & -0.2 & \textbf{2.1} & 0.2 & \textbf{1.0} & -0.2 & \textbf{0.0} \\ \hline
\end{tabular}}
\caption{Improvement over baseline for different fairness metrics when using different protected attributes. Next to the protected attribute are numbers of attributes that are `easier' and `harder' to learn, compared to the protected attribute. Columns `Easy' (`Hard') show the averages of all non-inconsistent target attributes that are easier (harder) for a classifier to learn. We note that our method works better when the target attribute is `harder' to learn.}
\label{tab:other_prot_attributes}
\end{table}
Next, to understand how the relative discriminability of attributes affects our method's performance, we train target attribute classifiers for gender-dependent and gender-independent attributes, using \texttt{Young} and \texttt{Glasses} as protected attributes.
In Table~\ref{tab:other_prot_attributes}, we report our method's improvement over baseline in the three fairness metrics.
For each protected attribute, we report the average improvement separately for `easier' and `harder' target attributes. While training with our augmentation method generally outperforms the baseline on the three fairness metrics, as expected, the improvement is greater for target attributes that are harder to learn than the protected attribute, for example, for \texttt{Young}, the improvement in DEO over baseline is -0.2 for easy target attributes, and 2.1 for hard target attributes.
\smallsec{Skew of the dataset} The \emph{skew} of a target attribute $t$ is measured following the literature~\cite{WQKGNHR19DomainBiasMitigation} as $\frac{\max (P_{-1}, P_1)}{P_{-1}+P_1}$ where $P_{-1}$ is the number of images with $t{=}1$ and protected attribute label $g{=}-1$, and $P_1$ is the number of images with $t{=}1$ and protected attribute label $g{=}1$. We find that our augmentation method is most effective on attributes with low to moderate skew.
Full details are in the appendix.
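For reference, the skew computation is (a sketch):

```python
import numpy as np

def skew(t_labels, g_labels):
    # Fraction of positive examples belonging to the majority
    # protected group: max(P_-1, P_1) / (P_-1 + P_1).
    P_neg = np.sum((t_labels == 1) & (g_labels == -1))
    P_pos = np.sum((t_labels == 1) & (g_labels == 1))
    return max(P_neg, P_pos) / (P_neg + P_pos)

t = np.array([1, 1, 1, 1, 0, 0])
g = np.array([1, 1, 1, -1, -1, 1])
assert skew(t, g) == 0.75  # 3 of the 4 positives are in the g=1 group
```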
\subsection{Ablation studies}\label{sec:designchoices}
We now examine the design choices made in our method.
\smallsec{Removal of $\mathbf{z}'$ samples}
First, we evaluate the effect of $G(\mathbf{z}')$ on the classifier. We train a classifier with just $G(\mathbf{z})$ and the real dataset $\mathcal{X}$, and compare its performance against the performance of our model, trained with $G(\mathbf{z})$, $G(\mathbf{z}')$, and $\mathcal{X}$ on the gender-dependent and gender-independent attributes.
While the new classifier's AP is higher than that of our model (82.9 vs. 82.6), all fairness metrics are worse: DEO is higher (19.7 vs. 16.1), BA is higher (1.1 vs. 0.5) and KL is higher (1.6 vs. 1.3). All numbers were calculated on the validation set. In fact, it performs worse on the fairness metrics than the baseline model trained on $\mathcal{X}$. This result suggests that simply synthesizing more images with a GAN and adding them to the training data does not improve the model but rather hurts performance. Possible reasons include the image and label noise of $G(\mathbf{z})$ and the skew of $G(\mathbf{z})$ being worse than the original data the GAN was trained on. The fairness metrics improve only when we add $G(\mathbf{z}')$ and make the training data more balanced.
\smallsec{Choice of $\mathbf{z}'$}
Next, we evaluate our choice of $\mathbf{z}'$ through examining a number of alternative perturbation choices visualized in Figure~\ref{fig:AvgPrecFakeOnly}. We train classifiers on just the generated data for gender-dependent and gender-independent attributes and compare the overall AP on the validation set.
As expected, training with $\mathbf{z}'$ (our choice)
has the highest AP.
\begin{figure}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c p{7pt}|c|cc|}
\cline{3-5}
\multirow{6}{*}{\includegraphics[width=0.45\linewidth]{Images/Ablation.png}} && \multirow{2}{*}{Perturbation} & \multicolumn{2}{c|}{AP $\uparrow$} \\
\cline{4-5}
& & & G-dep & G-indep \\
\cline{3-5}
& & $\textbf{z}'_{g,0}$ & 74.0 & 79.9\\
& & $\textbf{z}'_{g}$ & 69.6 & 77.3\\
& & $\textbf{z}'_{0}$ & 74.4 & 79.8\\
& & $\textbf{z}'$ (ours) & \textbf{76.0} & \textbf{81.4}\\
\cline{3-5}\\
\end{tabular}
}
\caption{Comparison of different perturbation choices. We train attribute classifiers using only synthetic images generated from the perturbations, and measure the mean AP over all target attributes on the validation set. The classifier trained with $\mathbf{z}'$ (our choice) has the highest AP.}
\label{fig:AvgPrecFakeOnly}
\end{figure}
\smallsec{Filtering $\mathbf{z}$'s and using different labels for synthetic images}
Since we hallucinate labels for the synthetic images, some of these labels may be incorrect and harm our classifier.
We try three different ways of addressing this issue:
First, we try learning hyperplanes with different fractions of positive and negative samples. We find that while this improves the hyperplane accuracy, the downstream classifiers trained with samples generated using different hyperplanes have similar performances.
For the second and third methods, we keep the original hyperplanes learned in our method, but vary the latent vectors/labels used. For the second method, we remove points whose baseline prediction changes after perturbing the latent vector from $\mathbf{z}$ to $\mathbf{z}'$, i.e., all points for which $f_t(G(\mathbf{z})) \not= f_t(G(\mathbf{z}'))$, and use the remaining synthetic images and the real dataset to train the classifiers.
Third, we label the synthetic images $G(\mathbf{z})$ and $G(\mathbf{z}')$ with $h_t(\mathbf{z})$, and use these labels to train the classifiers.
We compare their performance to our method on the validation set.
We find that these two methods result in a slight drop in AP (79.8 when using $h_t$ scores, 82.1 when removing incorrectly classified points, and 82.6 for our method), as well as a small drop in the fairness metrics (the average DEO is 18.1 when using $h_t$ scores, 17.4 when removing incorrectly classified points, and 16.1 for our method), suggesting that our current labeling of the synthetic images works well. {Full results are in the appendix (Section~\ref{sec:extra_ablations})}.
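The pair-filtering variant above can be sketched as follows (illustrative: the real $f_t$ scores decoded images $G(\mathbf{z})$, while here a toy classifier acts on latents directly):

```python
import numpy as np

def filter_consistent_pairs(z_batch, zp_batch, f_t):
    # Keep only pairs where the baseline classifier f_t assigns the
    # same target label before and after the perturbation z -> z'.
    keep = f_t(z_batch) == f_t(zp_batch)
    return z_batch[keep], zp_batch[keep]

rng = np.random.default_rng(0)
z = rng.standard_normal((100, 8))
zp = z + 0.1 * rng.standard_normal((100, 8))   # stand-in perturbed latents
w = rng.standard_normal(8)
f_t = lambda x: (x @ w > 0).astype(int)        # toy baseline classifier
z_kept, zp_kept = filter_consistent_pairs(z, zp, f_t)
```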
\subsection{Comparison with prior work}
\label{sec:comparisons_recent}
In this section, we compare our method to a few recent works~\cite{SHCV19FairnessGAN,Sharmanska2020contrastive,WQKGNHR19DomainBiasMitigation}.
One of the current challenges in the space of AI fairness is the lack of standardized benchmarks and metrics.
While some of this stems from the complexity of the problem at hand (where it is difficult and even counter-productive to use a single fairness definition), in the computer vision community, we believe that more effort should be made to provide thorough comparison between methods. Each work we consider here uses slightly different evaluation protocols and benchmarks. We made comparisons to the best of our ability, and hope that our work helps enable more standardization and empirical comparisons.
\smallsec{Fairness GAN} Sattigeri et al.~\cite{SHCV19FairnessGAN} use GANs to create datasets that achieve either demographic parity (Dem. Par.) or equality of opportunity (Eq. Opp.). They train classifiers for the \texttt{Attractive} attribute on just the generated data, using gender expression as the protected attribute. We train classifiers with our pair-augmented synthetic data to mimic the conditions of Fairness GAN, and evaluate both on the CelebA test data. Comparison results are in Table~\ref{tab:FairnessGANcomp}. Our model performs better on most metrics, even though we use a single GAN to augment all attributes.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular} {c|cc|cc|cc|}
\cline{2-7}
& \multicolumn{4}{c|}{Fairness GAN \cite{SHCV19FairnessGAN}} & \multicolumn{2}{c|}{Ours} \\ \cline{2-5}
& \multicolumn{2}{c|}{Dem. Par.} & \multicolumn{2}{c|}{Eq. Opp.} &\multicolumn{2}{c|}{(Synthetic only)} \\
\hline
\multicolumn{1}{|c|}{Gender exp. $g$} & $g{=}\num{-1}$ & $g{=}1$ & $g{=}\num{-1}$ & $g{=}1$ & $g{=}\num{-1}$ & $g{=}1$\\ \hline
\multicolumn{1}{|c|}{FPR $\downarrow$} & 0.52 & 0.26 & 0.42 & \textbf{0.17} & \textbf{0.22} & {0.39} \\
\multicolumn{1}{|c|}{FNR $\downarrow$} & 0.18 & 0.41 & 0.21 & 0.44 & \textbf{0.06} & \textbf{0.27} \\
\multicolumn{1}{|c|}{Error $\downarrow$} & 0.30 & 0.28 & 0.29 & 0.23 & \textbf{0.21} & \textbf{0.18} \\ \hline
\multicolumn{1}{|c|}{Error Rate $\downarrow$} & \multicolumn{2}{c|}{0.22} & \multicolumn{2}{c|}{0.29} & \multicolumn{2}{c|}{\textbf{0.20}}\\ \hline
\end{tabular}
}
\caption{Comparison of the \texttt{Attractive} classifier trained using synthetic data from Fairness GAN~\cite{SHCV19FairnessGAN} and the classifier trained using our pair-augmented synthetic data. The latter (ours) outperforms on most metrics.}
\label{tab:FairnessGANcomp}
\end{table}
\smallsec{Contrastive examples generated by image-to-image translation GANs}
Sharmanska et al.~\cite{Sharmanska2020contrastive} propose a different method for balancing a biased dataset using StarGAN~\cite{choi2018stargan}, a class of image-to-image translation GANs. They use two protected attributes, age and gender expression, and create a balanced dataset by creating contrastive examples, i.e. images of different ages and gender, for each image in the training set. They train a \texttt{Smiling} classifier with the augmented dataset, and propose making a prediction at test time only when the classifier makes the same prediction on the image and their contrastive examples. We extend our method to incorporate multiple protected attributes, and use gradient descent to find three points $\{z'_i\}_{i \in \{1, 2, 3\}}$ in the latent space that preserve the target attribute score and flip either the gender expression score, the age score, or both. This process gives us three synthetic images per training image, with which we train a \texttt{Smiling} classifier. To ensure that the error rates are similar across all four protected groups---(\texttt{Young}, \texttt{Male}), (\texttt{Young}, \texttt{not Male}), (\texttt{not Young}, \texttt{Male}), (\texttt{not Young}, \texttt{not Male})---they measure the mean difference in the false positive and false negative rates between all pairs of protected groups.
We reproduce their method to ensure that the results are reported on the same test set. We find that our model performs better in terms of the mean difference in FNR (0.34 versus their 0.54) and FPR (0.23 compared to their 0.46).
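The four-group evaluation can be sketched as follows (a sketch; the group ids 0--3 stand for the four (age, gender expression) combinations):

```python
import numpy as np
from itertools import combinations

def mean_pairwise_rate_diff(y_true, y_pred, groups):
    # Mean absolute difference in FNR and FPR between all pairs of
    # protected groups.
    fnr, fpr = {}, {}
    for grp in np.unique(groups):
        m = groups == grp
        pos, neg = m & (y_true == 1), m & (y_true == 0)
        fnr[grp] = np.mean(y_pred[pos] == 0)  # miss rate within the group
        fpr[grp] = np.mean(y_pred[neg] == 1)  # false alarm rate within the group
    pairs = list(combinations(sorted(fnr), 2))
    mean_fnr_diff = float(np.mean([abs(fnr[a] - fnr[b]) for a, b in pairs]))
    mean_fpr_diff = float(np.mean([abs(fpr[a] - fpr[b]) for a, b in pairs]))
    return mean_fnr_diff, mean_fpr_diff

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
groups = rng.integers(0, 4, 400)  # four protected groups
d_fnr, d_fpr = mean_pairwise_rate_diff(y, y, groups)
```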
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Skew & Method & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
{Low/} & Dom. Ind. & \textbf{83.4 $\pm$ 1.3} & 7.0 $\pm$ 3.1 & \textbf{-0.1 $\pm$ 0.5} & 0.8 $\pm$ 0.7 \\
Mod. & Ours & 81.4 $\pm$ 1.5 & \textbf{6.0 $\pm$ 3.0} & \textbf{-0.1 $\pm$ 0.5} & \textbf{0.3 $\pm$ 0.1}\\\hline
\multirow{2}{*}{High} & Dom. Ind. & \textbf{80.7 $\pm$ 1.6} & \textbf{14.9 $\pm$ 5.6} & \textbf{-0.4 $\pm$ 0.5} & \textbf{0.8 $\pm$ 1.0} \\
& Ours & 80.4 $\pm$ 1.5 & {23.9 $\pm$ 5.5} & {0.9 $\pm$ 0.4} & 1.5 $\pm$ 0.6\\
\hline
\end{tabular}}
\caption{Comparison of our method with domain independent training~\cite{WQKGNHR19DomainBiasMitigation}. Numbers reported are the mean over all gender-dependent and gender-independent attributes on the test set. We note that we perform better than domain-independent training for attributes with low to moderate skew.}
\label{tab:dom_ind}
\end{table}
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
Weighted & {79.6 $\pm$ 1.6} & \textbf{5.7 $\pm$ 4.2} & \textbf{-2.8 $\pm$ 0.5} & \textbf{0.5 $\pm$ 0.4}\\
Adversarial & 81.3 $\pm$ 1.6 & 23.9 $\pm$ 4.4 & 1.5 $\pm$ 0.5 & 0.6 $\pm$ 0.5 \\
Ours & \textbf{81.5 $\pm$ 1.5} & {16.7 $\pm$ 4.7} & {0.5 $\pm$ 0.5} & {1.0 $\pm$ 0.5}\\
\hline
\end{tabular}}
\caption{Comparison of our method with weighted and adversarial training from~\cite{WQKGNHR19DomainBiasMitigation}. Numbers reported are the mean over all gender-dependent and gender-independent attributes on the test set. We note that the weighted model overall performs better on the fairness metrics, however, the large negative BA suggests that the model now has bias in the opposite direction, to the extent that the AP drops. The adversarial model performs significantly worse than ours on DEO and BA, and marginally better on KL.}
\label{tab:weighted}
\end{table}
\smallsec{Effective training strategies for bias mitigation}
Wang et al.~\cite{WQKGNHR19DomainBiasMitigation} quantitatively compare different techniques for bias mitigation, including weighted training~\cite{bickel09discriminative,elkan01CostSensitiveLearning}, adversarial training with losses inspired by~\cite{AZN18FairnessBlindness,ZLM18AdversarialLearning}, and their proposed \emph{domain discriminative} and \emph{domain independent} training.
We compare our method to their best performing domain independent training method where they learn separate classifiers for each protected attribute class and combine them to leverage any shared information. We report results for all gender-dependent and gender-independent attributes in Table~\ref{tab:dom_ind}. We find that our method performs better for attributes with low to moderate skew ($<$0.7)---DEO is 6.0 compared to 7.0, KL is 0.3 compared to 0.8---whereas domain independent training performs better for attributes with high skew---DEO is 23.9 compared to 14.9, KL is 1.5 compared to 0.8.
This result is consistent with our earlier observation that our method works well for low to moderately skewed datasets.
Wang et al.\ also use a simpler weighted training method that reweights samples such that the protected attribute classes have equal weight and an adversarial training method that uses a minimax objective to maximize the classifier's accuracy on the objective while minimizing an adversary's ability to predict the protected attribute from the learned features. For weighted and adversarial training methods, we report results in Table~\ref{tab:weighted}. We find that while the weighted model overall performs well on the fairness metrics, it has a strongly negative BA (-2.8 versus our 0.5) indicating that bias is now in the opposite direction,
and a low AP (79.6 versus our 81.5) suggesting that it makes incorrect predictions to reduce bias. For adversarial training, our method does better overall, with lower DEO (16.7 versus 23.9) and lower BA (0.5 versus 1.5).
\section{Extensions of our method}
\label{sec:extensions}
In this final section, we study two natural extensions of our method: using domain-dependent hyperplanes in place of the current domain-independent hyperplanes, and directly augmenting a real image dataset with GAN-inversion.
\smallsec{Domain-dependent hyperplanes}
Our method implicitly assumes the learned hyperplane $\mathbf{w_t}$ behaves equally well for all $\mathbf{z}$, irrespective of the value of ${f_g}(G(\mathbf{z}))$.
However, for gender-dependent attributes, the hyperplane learned using samples with ${f_g}(G(\mathbf{z})) {=}1$ may be very different from that learned using samples with ${f_g}(G(\mathbf{z})){=} \num{-1}$.
For these attributes, we extend our method to learn per-domain target attribute hyperplanes:
$\mathbf{w}_{t_1}, b_{t_1}$ for points with ${f_g}(G(\mathbf{z})){=}1$ and $\mathbf{w}_{t_{\num{-1}}}, b_{t_{\num{-1}}}$ for points with ${f_g}(G(\mathbf{z})){=}\num{-1}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Images/per_gender_generation.png}
\caption{Computing $\mathbf{z}'$
when the target attribute hyperplanes for each protected attribute class are very different.}
\label{fig:per_gender}
\end{figure}
For $\mathbf{z}$ with $f_g(G(\mathbf{z})){=}1$, we find $\mathbf{z}'$ such that
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\label{eq:per_gender}
\begin{split}
\mathbf{w}_{t_{\num{-1}}}^T(\mathbf{z}')+b_{t_{\num{-1}}} & = \mathbf{w}_{t_1}^T(\mathbf{z})+b_{t_1}, \mbox{ and}\\
\wg^T\mathbf{z}'+b_g & = -\wg^T(\mathbf{z})-b_g
\end{split}
\end{equation}
\endgroup
as shown in Figure \ref{fig:per_gender}. In order to compute $\mathbf{z}'$ that satisfies the above constraints, while minimizing $||\mathbf{z} - \mathbf{z}'||_2$, we note that all constraints are linear, hence the feasible region is the intersection of several hyperplanes. Starting from a point in this region, in each iteration, we find a new location of the point using gradient descent, then project it back onto the feasible region to maintain the constraints.
If $\mathbf{w}_{t_1}$ and $\mathbf{w}_{t_{\num{-1}}}$ are similar, these constraints are the same as Equation~\ref{eq:pair_generation} and this method of computing $\mathbf{z}'$ collapses to the first.
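Since the constraints of Equation~\ref{eq:per_gender} are linear, the minimum-distance feasible point can also be obtained in one step by orthogonally projecting $\mathbf{z}$ onto the feasible affine subspace, which is the point the iterative procedure converges to. A sketch (illustrative names):

```python
import numpy as np

def project_to_affine(z, A, c):
    # Orthogonal projection of z onto {x : A x = c}: the point of the
    # feasible region closest to z.
    r = A @ z - c
    return z - A.T @ np.linalg.solve(A @ A.T, r)

def z_prime_domain_dependent(z, w_t1, b_t1, w_tm1, b_tm1, w_g, b_g):
    # Constraints of Eq. (per_gender): match the target score under the
    # opposite-domain hyperplane, and flip the protected score.
    A = np.stack([w_tm1, w_g])
    c = np.array([w_t1 @ z + b_t1 - b_tm1,
                  -(w_g @ z + b_g) - b_g])
    return project_to_affine(z, A, c)

rng = np.random.default_rng(0)
d = 64
z = rng.standard_normal(d)
w_t1, w_tm1, w_g = (rng.standard_normal(d) for _ in range(3))
b_t1, b_tm1, b_g = 0.3, -0.1, 0.2
zp = z_prime_domain_dependent(z, w_t1, b_t1, w_tm1, b_tm1, w_g, b_g)
assert np.isclose(w_tm1 @ zp + b_tm1, w_t1 @ z + b_t1)  # target constraint
assert np.isclose(w_g @ zp + b_g, -(w_g @ z + b_g))     # protected constraint
```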
We compare results of training a classifier that is augmented with images computed with domain-independent hyperplanes and with that using images computed with domain-dependent hyperplanes for all gender-dependent and gender-independent attributes over the validation set. We find that for gender-dependent attributes, using domain-dependent hyperplanes improves the fairness metrics considerably (DEO reduces from 21.4 to 17.2, BA reduces from 1.5 to 0.4, KL reduces from 1.2 to 1.0), without losing accuracy. However, for gender-independent attributes, we do not see significant improvement, suggesting that $\mathbf{w_t}$ is similar to both $\mathbf{w}_{t_1}$ and $\mathbf{w}_{t_{\num{-1}}}$. Full results are in Table~\ref{tab:per_gender}.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc||cc|}
\hline
\multirow{2}{*}{Attr. type} & \multicolumn{2}{c||}{AP $\uparrow$} & \multicolumn{2}{c|}{DEO $\downarrow$}\\
\cline{2-5}
& Dom-indep & Dom-dep & Dom-indep & Dom-dep \\ \hline
\multicolumn{1}{|c|}{G-dep} & \textbf{78.1 $\pm$ 1.5} & \textbf{78.1 $\pm$ 1.4} & {21.4 $\pm$ 4.0} & \textbf{17.2 $\pm$ 4.0}\\
\multicolumn{1}{|c|}{G-indep} & 84.5 $\pm$ 1.5 & \textbf{84.6 $\pm$ 1.6} & 13.9 $\pm$ 4.3 & \textbf{13.1 $\pm$ 4.6} \\ \hline
\multirow{2}{*}{Attr. type} & \multicolumn{2}{c||}{BA $\downarrow$} & \multicolumn{2}{c|}{KL $\downarrow$} \\
\cline{2-5}
& Dom-indep & Dom-dep & Dom-indep & Dom-dep \\ \hline
\multicolumn{1}{|c|}{G-dep} & 1.5 $\pm$ 0.5 & \textbf{0.4 $\pm$ 0.5} & 1.2 $\pm$ 0.2 & \textbf{1.0 $\pm$ 0.3} \\
\multicolumn{1}{|c|}{G-indep} & \textbf{0.1 $\pm$ 0.4} & 0.2 $\pm$ 0.4 & \textbf{0.9 $\pm$ 0.5} & \textbf{0.9 $\pm$ 0.6}\\ \hline
\end{tabular}}
\caption{Comparison of classifiers that use domain-dependent hyperplanes vs. domain-independent hyperplanes to compute $\mathbf{z}'$. We see a significant improvement among Gender-dependent attributes when we use Domain-dependent hyperplanes. Numbers are reported on the validation set. }
\label{tab:per_gender}
\end{table}
\smallsec{Augmenting real images with GAN-inversion} Our method operates in the GAN latent space, and so can only augment images generated from latent vectors, i.e., GAN-generated images. Recently, several GAN-inversion methods have been proposed~\cite{Abdal_2019_ICCV,bau2019seeing,zhu2020indomain}. These methods invert a real image $\mathbf{x}_{real}\in\mathcal{X}$ to a vector $\mathbf{z}_{inv}$ in the latent space of a trained GAN.
Using Zhu et al.~\cite{zhu2020indomain}, we tried directly augmenting the original dataset by perturbing $\mathbf{z}_{inv}$ to $\mathbf{z}'_{inv}$ with our method, creating $\mathbf{x}_{real}'{=}G(\mathbf{z}'_{inv})$ with the same target label and the opposite protected label of $\mathbf{x}_{real}$.
When we trained classifiers with datasets augmented in this way, however, we did not see an appreciable improvement, despite the more complex procedure (Table~\ref{tab:inverse}).
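The inversion-based pipeline above follows three steps: invert, perturb, regenerate. A toy NumPy sketch, assuming a linear stand-in generator and a least-squares pseudo-inverse in place of a real GAN and inversion method (all names here are illustrative, and for brevity the perturbation is a simple reflection across the protected hyperplane rather than the full target-preserving update):

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_x = 8, 32
W = rng.standard_normal((d_x, d_z))      # toy linear "generator": G(z) = W z

def G(z):
    return W @ z

def invert(x):
    # Stand-in for a GAN-inversion method: least-squares inverse of the toy G.
    return np.linalg.pinv(W) @ x

# Hypothetical protected-attribute hyperplane (unit normal) in latent space.
w_g = rng.standard_normal(d_z); w_g /= np.linalg.norm(w_g); b_g = 0.3

x_real = G(rng.standard_normal(d_z)) + 0.01 * rng.standard_normal(d_x)
z_inv = invert(x_real)                   # invert the "real" image
# Reflect across the hyperplane w_g.z + b_g = 0 to flip the protected score.
z_aug = z_inv - 2.0 * (w_g @ z_inv + b_g) * w_g
x_aug = G(z_aug)                         # augmented image, opposite protected label
```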
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|}
\cline{2-5}
& AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
\multicolumn{1}{|l|}{Without} & \textbf{82.6 $\pm$ 1.5} & 1.5 $\pm$ 2.3 & \textbf{1.3 $\pm$ 0.4} & \textbf{1.0 $\pm$ 0.5} \\
\multicolumn{1}{|l|}{With inv.} & 82.4 $\pm$ 1.5 & \textbf{1.4 $\pm$ 2.3} & \textbf{1.3 $\pm$ 0.4} & \textbf{1.0 $\pm$ 0.5} \\ \hline
\end{tabular}}
\caption{Comparison of our classifiers (without) to classifiers trained using data augmented with a GAN-inversion module (with inv.). Numbers reported are the mean over all gender-dependent and gender-independent attributes on the validation set. We do not see an appreciable improvement.}
\label{tab:inverse}
\end{table}
\section{Conclusions}
We introduced a GAN-based data augmentation method for training fairer attribute classifiers when correlations between the target label and the protected attribute (such as gender expression) might skew the results. We report results across a large number of attributes and metrics, including comparisons with existing techniques. We also analyze in detail when our method is the most effective. Our findings show the promise of augmenting data in the GAN latent space in a variety of settings. We hope our detailed analyses and publicly available code serve as a stepping stone for future explorations in this very important space.
\smallsec{Acknowledgements} This work is supported by the National Science Foundation under Grant No. 1763642 and the Princeton First Year Fellowship to SK. We also thank Arvind Narayanan, Deniz Oktay, Angelina Wang, Zeyu Wang, Felix Yu, Sharon Zhang, as well as the Bias in AI reading group for helpful comments and suggestions.
{\small
\bibliographystyle{ieee_fullname}
\section*{Fair Attribute Classification through Latent Space De-biasing (Appendix)}
\section*{Appendix}
In this supplementary document, we provide additional details on certain sections of the main paper.
\begin{itemize}[leftmargin=0pt, itemsep=1pt, topsep=1pt]
\item[] \textbf{Section \ref{sec:derivation}:} We derive a closed form solution for $\mathbf{z}'$ which allows us to easily manipulate latent vectors in the latent space (Section~\ref{sec:method}).
\item[] \textbf{Section~\ref{sec:dataset}:}
We provide attribute-level results and further analysis of our main experiments (Section~\ref{sec:baseline}).
\item[] \textbf{Section~\ref{sec:factors}:}
We discuss some factors that influence (or not) our method's effectiveness.
\item[] \textbf{Section~\ref{sec:extra_ablations}:}
We provide more details on the ablation studies (Section~\ref{sec:designchoices}).
\item[] \textbf{Section \ref{sec:choi}:}
We investigate how many images with protected attribute labels our method requires to achieve the desired performance.
\end{itemize}
\subsection{Derivation}
\label{sec:derivation}
In Section 3 of the main paper, we describe a method to compute perturbations within the latent vector space, such that the protected attribute score changes, while the target attribute score remains the same. More formally, if $h_t$ is a function that approximates the target attribute score, and $h_g$ is a function that approximates the protected attribute score, for every latent vector $\mathbf{z}$, we want to compute $\mathbf{z}'$ such that
\begin{equation}
\label{eq:target}
h_t(\mathbf{z}') = h_t(\mathbf{z}), \; \; \; h_g(\mathbf{z}') = -h_g(\mathbf{z}).
\end{equation}
We assume that the latent space $\mathcal{Z}$ is approximately linearly separable in the semantic attributes. $h_t$ and $h_g$ thus can be represented as linear models $\mathbf{w_t}$ and $\mathbf{w_g}$, normalized as $||\mathbf{w_t}|| = 1, ||\mathbf{w_g}||=1$, for the target and protected attribute respectively, with intercepts $b_t$ and $b_g$.
Equation \ref{eq:target} thus reduces to
\begin{equation}
\wa^T \mathbf{z} + b_t = \wa^T \mathbf{z}' + b_t, \; \; \; \wg^T \mathbf{z}' + b_g = -\wg^T \mathbf{z} - b_g.
\end{equation}
Simplifying, we get
\begin{equation}
\wa^T (\mathbf{z}'-\mathbf{z}) = 0, \; \; \; \wg^T (\mathbf{z}'+\mathbf{z}) + 2b_g = 0.
\end{equation}
These equations have infinitely many solutions; among them, we choose the one that minimizes the distance between $\mathbf{z}$ and $\mathbf{z}'$, which holds exactly when $\mathbf{z}'-\mathbf{z}$ lies in the span of $\{\mathbf{w_g}, \mathbf{w_t}\}$. Hence, we can write $\mathbf{z}' - \mathbf{z} = \alpha \mathbf{w_t} + \beta \mathbf{w_g}$, and we get:
\begin{align}
\wa^T (\mathbf{z}'-\mathbf{z}) & = 0 \\
\wa^T (\alpha \mathbf{w_t} + \beta \mathbf{w_g}) & = 0\\
\Rightarrow \alpha = - \beta \wa^T \mathbf{w_g}\\
\wg^T ((\mathbf{z}'-\mathbf{z}) +2\mathbf{z}) + 2b_g & = 0\\
\wg^T (\alpha\mathbf{w_t} + \beta \mathbf{w_g} +2\mathbf{z}) + 2b_g & = 0\\
-\beta (\wa^T \mathbf{w_g})^2 + \beta + 2\wg^T\mathbf{z} + 2b_g & = 0\\
\Rightarrow (1-(\wa^T \mathbf{w_g})^2)\beta = -2(\wg^T\mathbf{z} + b_g) \\
\Rightarrow \beta = -2\frac{(\wg^T \mathbf{z} + b_g)}{(1-(\wa^T\mathbf{w_g})^2)}\\
\Rightarrow \alpha = 2\frac{(\wg^T \mathbf{z} + b_g)(\wa^T\mathbf{w_g})}{(1-(\wa^T\mathbf{w_g})^2)}
\end{align}
This gives us a closed form solution for $\mathbf{z}'$:
\begin{equation}
\mathbf{z}' = \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right).
\end{equation}
As a quick verification, we confirm that this value of $\mathbf{z}'$ changes the protected attribute score and maintains the target attribute score:
\begin{align*}
&h_g(\mathbf{z}') \\
&= \wg^T \mathbf{z}' + b_g \\
&= \wg^T \left[\mathbf{z} -2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right)\right] + b_g \\
&= \wg^T \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(1 - (\wg^T\mathbf{w_t})\wg^T\mathbf{w_t} \right) +b_g \\
&= \wg^T \mathbf{z} - 2(\wg^T \mathbf{z}+b_g) +b_g = -\wg^T \mathbf{z} - b_g = -h_g(\mathbf{z})
\end{align*}
\begin{align*}
&h_t(\mathbf{z}') \\
&= \wa^T \mathbf{z}' + b_t\\
&= \wa^T\left[\mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} + b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right) \right] +b_t\\
&= \wa^T \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} + b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\wa^T\mathbf{w_g} - (\wg^T\mathbf{w_t})\right) + b_t\\
&= \wa^T \mathbf{z} + b_t = h_t(\mathbf{z})
\end{align*}
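The closed-form solution derived above is easy to check numerically. A minimal NumPy sketch (our own variable names), assuming unit-norm, non-parallel hyperplane normals as in the derivation:

```python
import numpy as np

def debias(z, w_t, w_g, b_g):
    """Closed-form z' from the derivation: negates the protected score
    h_g(z) = w_g.z + b_g while preserving the target score h_t(z) = w_t.z + b_t.
    Assumes ||w_t|| = ||w_g|| = 1 and w_t not parallel to w_g."""
    c = w_g @ w_t
    coef = 2.0 * (w_g @ z + b_g) / (1.0 - c ** 2)
    return z - coef * (w_g - c * w_t)
```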
\subsection{Attribute-level results}
\label{sec:dataset}
We provide attribute-level results and further analysis of our main experiments (Section 4.1 of the main paper).
\subsubsection{Linear separability of latent space}
Our paired augmentation method assumes that the latent space is approximately linearly separable in the semantic attributes. Here we investigate to what extent this assumption holds for different attributes.
As described in the main paper, the attribute hyperplanes were estimated with 10,000 samples using a linear SVM.
In Table~\ref{tab:hyperplane-performance}, we report hyperplane accuracy and AP, measured on 160,000 synthetic samples, as well as the percentage of positive samples and the skew of the CelebA training set. The skew is calculated as $\frac{\max(N_{g=-1,a=1}, N_{g=1,a=1})}{N_{g=-1,a=1}+N_{g=1,a=1}}$ where $N_{g=-1,a=1}$ is the number of samples with protected attribute label $g{=}{-1}$ (perceived as not male) and target label 1 (positive), and $N_{g=1,a=1}$ is defined likewise. The protected attribute class with more positive samples is noted in the skew column.
We observe that most attributes are well separated with the estimated hyperplanes, except for those with high skew that have too few examples from underrepresented subgroups.
For completeness, we also report our model's improvement over the baseline model on the four evaluation metrics. We did not find immediate correlations between the hyperplane quality and the downstream model performance.
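The skew statistic defined above is straightforward to compute from binary label arrays; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def skew(g, a):
    """Skew of target attribute a w.r.t. protected label g, per the definition
    above: max(N_{g=-1,a=1}, N_{g=1,a=1}) / (N_{g=-1,a=1} + N_{g=1,a=1}).
    Labels are in {-1, 1}."""
    n_neg = int(np.sum((g == -1) & (a == 1)))  # positives with g = -1
    n_pos = int(np.sum((g == 1) & (a == 1)))   # positives with g = +1
    return max(n_neg, n_pos) / (n_neg + n_pos)
```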
\begin{table*}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|cc|cc|cc|rrrr|}
\hline
Attribute type & \multicolumn{3}{|c|}{Attribute statistics} & \multicolumn{2}{|c|}{Hyperplane acc.} & \multicolumn{2}{|c|}{Hyperplane AP} & \multicolumn{4}{|c|}{Improvement over baseline}\\
\hline
Inconsistently labeled & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{BigLips} & 24.1\% & 0.73 & $g{=}{-1}$ & 80.3 & 92.0 & 49.7 & 28.9 & -0.35 & -0.79 & 1.23 & -0.03 \\
\texttt{BigNose} & 23.6\% & 0.75 & $g{=}1$ & 91.7 & 74.5 & 51.1 & 82.4 & -0.66 & 11.03 & 2.52 & 1.04 \\
\texttt{OvalFace} & 28.3\% & 0.68 & $g{=}{-1}$ & 75.4 & 74.2 & 85.3 & 63.1 & -1.82 & 7.53 & 3.33 & 0.77 \\
\texttt{PaleSkin} & 4.3\% & 0.76 & $g{=}{-1}$ & 94.4 & 96.9 & 48.4 & 30.9 & -1.90 & 4.26 & 0.31 & 0.26 \\
\texttt{StraightHair} & 20.9\% & 0.52 & $g{=}{-1}$ & 87.7 & 69.8 & 25.0 & 58.8 & -1.76 & 0.94 & 0.53 & -0.08 \\
\texttt{WavyHair} & 31.9\% & 0.81 & $g{=}{-1}$ & 73.0 & 92.1 & 79.4 & 23.5 & -0.65 & 7.59 & 1.33 & 0.26 \\ \hline
Gender-dependent & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{ArchedBrows} & 26.6\% & 0.92 & $g{=}{-1}$ & 72.3 & 92.1 & 82.6 & 25.5 & -0.69 & -3.31 & -0.09 & 0.02 \\
\texttt{Attractive} & 51.4\% & 0.77 & $g{=}{-1}$ & 88.4 & 81.0 & 97.9 & 81.9 & -0.33 & 3.25 & 0.98 & 0.41 \\
\texttt{BushyBrows} & 14.4\% & 0.71 & $g{=}1$ & 94.5 & 79.6 & 37.6 & 62.0 & -1.20 & 8.49 & 1.14 & 0.25 \\
\texttt{PointyNose} & 27.6\% & 0.75 & $g{=}{-1}$ & 73.6 & 82.9 & 84.4 & 59.9 & -1.32 & 3.25 & 0.99 & -0.40 \\
\texttt{RecedingHair} & 8.0\% & 0.62 & $g{=}1$ & 94.5 & 88.3 & 41.8 & 57.7 & -1.44 & 2.32 & 0.40 & 0.17 \\
\texttt{Young} & 77.9\% & 0.66 & $g{=}{-1}$ & 96.2 & 84.1 & 99.7 & 95.3 & -0.24 & 0.78 & 0.49 & 0.31 \\ \hline
Gender-independent & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{Bangs} & 15.2\% & 0.77 & $g{=}{-1}$ & 90.3 & 94.9 & 81.5 & 58.9 & -0.50 & 0.62 & 0.38 & 0.09 \\
\texttt{BlackHair} & 23.9\% & 0.52 & $g{=}1$ & 89.3 & 83.2 & 78.9 & 79.2 & -1.00 & 2.25 & 0.44 & 0.00 \\
\texttt{BlondHair} & 14.9\% & 0.94 & $g{=}{-1}$ & 88.9 & 97.1 & 82.7 & 19.8 & -0.77 & 1.04 & 0.23 & -0.12 \\
\texttt{BrownHair} & 20.3\% & 0.69 & $g{=}{-1}$ & 66.4 & 80.4 & 45.5 & 38.8 & -0.51 & -0.57 & -0.01 & 0.01 \\
\texttt{Chubby} & 5.8\% & 0.88 & $g{=}1$ & 99.1 & 89.9 & 7.6 & 33.8 & -1.95 & 4.08 & 0.01 & 0.13 \\
\texttt{EyeBags} & 20.4\% & 0.71 & $g{=}1$ & 90.7 & 74.4 & 64.1 & 74.4 & -1.74 & 8.30 & 1.91 & 0.58 \\
\texttt{Glasses} & 6.5\% & 0.80 & $g{=}1$ & 97.8 & 92.5 & 60.3 & 77.8 & -0.24 & -0.07 & 0.05 & -0.27 \\
\texttt{GrayHair} & 4.2\% & 0.86 & $g{=}1$ & 98.4 & 92.6 & 10.4 & 32.9 & -2.60 & 7.02 & 0.32 & 0.54 \\
\texttt{HighCheeks} & 45.2\% & 0.72 & $g{=}{-1}$ & 86.3 & 86.3 & 95.2 & 83.5 & -0.33 & -1.06 & 0.24 & 0.04 \\
\texttt{MouthOpen} & 48.2\% & 0.63 & $g{=}{-1}$ & 88.6 & 87.0 & 96.4 & 93.1 & -0.08 & 0.69 & 0.34 & -0.03 \\
\texttt{NarrowEyes} & 11.6\% & 0.56 & $g{=}{-1}$ & 93.8 & 92.1 & 29.6 & 26.4 & -0.97 & 3.10 & -0.53 & 0.12 \\
\texttt{Smiling} & 48.0\% & 0.65 & $g{=}{-1}$ & 91.5 & 90.7 & 98.0 & 96.5 & -0.09 & 1.01 & 0.67 & 0.03 \\
\texttt{Earrings} & 18.7\% & 0.97 & $g{=}{-1}$ & 71.8 & 96.3 & 56.9 & 3.0 & -0.63 & 8.18 & 0.64 & 1.40 \\
\texttt{WearingHat} & 4.9\% & 0.70 & $g{=}1$ & 97.4 & 94.0 & 45.0 & 60.6 & -0.95 & 2.67 & 0.14 & -0.06 \\ \hline
\textbf{Average} & 24.1\% & 0.73 & & 87.4 & 86.9 & 62.9 & 55.7 & -0.95 & 3.18 & 0.69 & 0.21 \\ \hline
\end{tabular}
}
\caption{Attribute-level information. The columns are (from left to right) target attribute name, percentage of positive samples, skew, hyperplane accuracy, hyperplane AP, and our model's improvement over the baseline model on the four evaluation metrics.}
\label{tab:hyperplane-performance}
\end{table*}
\subsubsection{Changes in baseline score}
We next evaluate how well we are able to maintain the target attribute score when perturbing the latent vector. We use the change in the baseline classifier score as a proxy for the target attribute score. We note that this measurement is imperfect, because the baseline classifier is known to perform worse on minority examples; however, we believe it still leads to some valuable insights.
For each attribute, we measure the absolute change in baseline score $|f_t(G(\mathbf{z})) - f_t(G(\mathbf{z}'))|$ over 5000 images, and compute averages based on what we expect the target and protected attribute values of $G(\mathbf{z}')$ to be. We plot this against the fraction of images in the real-world dataset that have these target and protected values (Figure~\ref{fig:score_change}) and find a strong negative correlation. This could be because the target attribute is harder to maintain in this case, or because the baseline classifier has a tendency to misclassify minority samples.
Another question of interest was the interaction between different attributes as we create balanced synthetic datasets. We measured the change in baseline classifier score for different targets $t'$ when trying to maintain target attribute $t$, and found that some attributes changed drastically when creating a balanced dataset for any attribute (Table~\ref{tab:score_change_all}). For example, the attribute \texttt{Attractive} changed by a large amount irrespective of which target attribute we were trying to preserve. This suggests that some of these attributes are more sensitive to latent space manipulations.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{ImagesAppendix/baseline_score_change.png}
\caption{We plot average absolute change in the baseline classifier score versus the fraction of images in the dataset that have the corresponding ground truth labels. We separate them based on what the new ground truth values should be, for each attribute. We find that the score change is larger when creating an image with minority labels. This could be because we are unable to maintain the target attribute in this case or because the baseline classifier performs worse on minority images.}
\label{fig:score_change}
\end{figure}
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
Attribute & Change & Attribute & Change \\ \hline
\texttt{ArchedBrows} & \textbf{0.314} & \texttt{Glasses} & 0.109 \\
\texttt{Attractive} & \textbf{0.336} & \texttt{GrayHair} & 0.056 \\
\texttt{Bangs} & 0.120 & \texttt{HighCheeks} & \textbf{0.233} \\
\texttt{BlackHair} & 0.153 & \texttt{MouthOpen} & 0.187 \\
\texttt{BlondHair} & 0.180 & \texttt{NarrowEyes} & 0.066 \\
\texttt{BrownHair} & 0.158 & \texttt{PointyNose} & 0.152 \\
\texttt{BushyBrows} & 0.136 & \texttt{RecedingHair} & 0.069 \\
\texttt{Chubby} & 0.067 & \texttt{Smiling} & 0.176 \\
\texttt{Earrings} & 0.176 & \texttt{WearingHat} & 0.065 \\
\texttt{Eyebags} & \textbf{0.212} & \texttt{Young} & \textbf{0.268} \\ \hline
\end{tabular}}
\caption{We report the average classifier score change in an attribute when trying to create balanced datasets for other attributes. Classifier scores are between 0 and 1, and changes above 0.2 are bolded. We find that some attributes (e.g. \texttt{Attractive}, \texttt{Young}) change by a lot, whereas others (e.g. \texttt{GrayHair}, \texttt{WearingHat}) do not change much.}
\label{tab:score_change_all}
\end{table}
\subsection{Factors of influence}
\label{sec:factors}
In this section, we discuss in more detail how some factors influence (or not) our method's effectiveness (Section 4.1 of the main paper).
\subsubsection{Skew of attributes}
For some attributes, the majority of the positive samples come from one gender expression.
For example, \texttt{ArchedBrows} has a skew of 0.92 towards $g{=}\num{-1}$, that is, 92\% of positive \texttt{ArchedBrows} samples have gender expression label $g{=}\num{-1}$.
To understand the effect of data skew on our method's performance, we ran experiments with differently skewed data. From the 162,770 CelebA training set images, we created slightly smaller training sets where the attribute of interest (e.g. \texttt{HighCheeks}) has different values of skew. Specifically, we created three versions of training data with skew 0.5, 0.7, and 0.9, while keeping the total number of images fixed. We trained a GAN on each training set, created a synthetic de-biased dataset with our method, and trained an attribute classifier with the training set and 160,000 pairs of synthetic images. For comparison, we also trained baseline models on just the differently skewed training sets. The classifiers were evaluated on the CelebA validation set. Table~\ref{tab:skewdata} summarizes the results. Compared to the baseline, our model has lower AP as expected, better DEO for skew 0.5 and 0.7 (but worse at 0.9), better or on-par BA, and mixed KL. Overall, classifiers trained on more imbalanced data with higher skew perform worse on all metrics.
\begin{table}[ht!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc|cc|}
\hline
\multirow{2}{*}{Skew} & \multicolumn{2}{c|}{AP $\uparrow$} & \multicolumn{2}{c|}{DEO $\downarrow$} \\ \cline{2-5}
& Base & Ours & Base & Ours \\ \hline
\multicolumn{1}{|c|}{0.5} & \textbf{95.1 $\pm$ 0.3} & 93.6 $\pm$ 0.4 & 7.0 $\pm$ 1.7 & \textbf{6.6 $\pm$ 1.8} \\
\multicolumn{1}{|c|}{0.7} & \textbf{94.8 $\pm$ 0.3} & 94.1 $\pm$ 0.3 & 19.6 $\pm$ 1.9 & \textbf{19.4 $\pm$ 1.9} \\
\multicolumn{1}{|c|}{0.9} & \textbf{94.1 $\pm$ 1.7} & 93.1 $\pm$ 0.4 & \textbf{31.3 $\pm$ 2.0} & 32.9 $\pm$ 1.9 \\ \hline
\multirow{2}{*}{Skew}& \multicolumn{2}{c|}{BA $\downarrow$} & \multicolumn{2}{c|}{KL $\downarrow$} \\
\cline{2-5}
& Base & Ours & Base & Ours \\ \hline
\multicolumn{1}{|c|}{0.5} & -1.9 $\pm$ 0.5 & \textbf{-3.0 $\pm$ 0.5} & 0.4 $\pm$ 0.1 & \textbf{0.3 $\pm$ 0.1} \\
\multicolumn{1}{|c|}{0.7} & \textbf{3.4 $\pm$ 0.5} & \textbf{3.4 $\pm$ 0.5} & \textbf{0.9 $\pm$ 0.1} & \textbf{0.9 $\pm$ 0.1} \\
\multicolumn{1}{|c|}{0.9} & 7.1 $\pm$ 0.5 & \textbf{7.0 $\pm$ 0.5} & \textbf{1.7 $\pm$ 0.1} & 1.9 $\pm$ 0.1 \\ \hline
\end{tabular}
}
\caption{Comparison of \texttt{HighCheeks} attribute classifiers trained on differently skewed data.}
\label{tab:skewdata}
\end{table}
\subsubsection{Discriminability of attributes}
Nam et al.~\cite{nam2020learning} recently observed that correlations among attributes affect a classifier only if the protected attribute is `easier' to learn than the target attribute.
Inspired by their observation, we design an experiment where we put a pair of CelebA attributes in competition to assess their relative discriminability.
We create a fully skewed dataset in which half of the images have both attributes and the other half have neither. With this dataset, we train a classifier to predict if an image has both attributes or neither. At test time, we evaluate the classifier on a perfectly balanced subset of the CelebA validation set (where each of the four possible attribute combinations occupies a quarter of the dataset), and compute AP for each attribute. If one attribute has a higher AP than the other, it suggests that this attribute is `easier' to learn than the other. We repeat this experiment with a second dataset skewed in a different way (i.e. half of the images have one attribute but not the other).
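Constructing the fully skewed training subset described above amounts to selecting images where the two binary attributes agree, balanced between the "both" and "neither" halves. A small sketch under that reading (names are ours):

```python
import numpy as np

def fully_skewed_subset(a1, a2):
    """Indices of a fully skewed subset: half the images have both attributes
    (a1 = a2 = 1), half have neither (a1 = a2 = -1), balanced in count.
    A classifier trained on this subset predicts 'both' vs. 'neither'."""
    both = np.flatnonzero((a1 == 1) & (a2 == 1))
    neither = np.flatnonzero((a1 == -1) & (a2 == -1))
    n = min(len(both), len(neither))   # balance the two halves
    return np.concatenate([both[:n], neither[:n]])
```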
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\input{full_skew_results}}
\caption{Discriminability of attributes. We compare attributes on the row to those in the columns. \texttt{y} indicates that the attribute in the row is easier to learn than that in the column and \texttt{n} indicates the opposite. We find that gender expression is one of the easiest attributes to learn, while \texttt{Young} is relatively hard.}
\label{tab:full_skew}
\end{table}
The results for gender-dependent and gender-independent attributes are in Table~\ref{tab:full_skew}. We report that an attribute is `easier' to learn than another if it has a higher AP on both created datasets. We find that gender expression is one of the easiest attributes to learn (it has a higher AP than every attribute we tested except \texttt{WearingHat} and \texttt{Glasses}), which may be why gender bias is prevalent in many models. On the other hand, \texttt{Young} is relatively hard for a model to learn (it is harder to learn than all but 4 other attributes), so its correlation with other attributes may not be as influential.
\subsection{Ablation studies}
\label{sec:extra_ablations}
In this section, we describe in more detail the ablation studies we have conducted to investigate how improved hyperplanes and use of different labels for synthetic images impact (or not) our method's performance (Section 4.2 of the main paper).
We first investigate if hyperplanes estimated with better balanced samples improve the performance of downstream attribute classifiers.
We test this hypothesis by training models using hyperplanes that are estimated with different fractions of positive or negative samples.
For the attribute \texttt{HighCheeks}, we estimate hyperplanes with different fractions of positive and negative samples, while keeping the total number of samples constant at 12,000 and the number of positive samples the same for each gender expression.
We then train attribute classifiers with the CelebA training set and synthetic pair images augmented with these different hyperplanes. In Table~\ref{tab:underrep}, we report results evaluated on the CelebA validation set.
We find that although the fairness metrics deteriorate as the target attribute hyperplanes are estimated with less balanced samples, the deterioration is relatively slow, and the downstream classifier still performs reasonably well.
\begin{table}[ht]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Fraction & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\ \hline
50.0\% & 95.1 $\pm$ 0.3 & 13.2 $\pm$ 1.7 & {0.5 $\pm$ 0.5} & 0.7 $\pm$ 0.1 \\
12.5\% & {95.1 $\pm$ 0.3} & 14.0 $\pm$ 1.7 & {0.8 $\pm$ 0.5} & \textbf{0.6 $\pm$ 0.1}\\
6.3\% & {95.1 $\pm$ 0.3} & {15.1 $\pm$ 1.8} & 1.3 $\pm$ 0.5 & 0.8 $\pm$ 0.2 \\
3.1\% & {95.1 $\pm$ 0.3} & {14.2 $\pm$ 1.7} & 1.0 $\pm$ 0.5 & 0.7 $\pm$ 0.1\\
1.6\% & 95.1 $\pm$ 0.3 & \textbf{12.9 $\pm$ 1.8} & \textbf{0.3 $\pm$ 0.5} & 0.7 $\pm$ 0.1 \\ \hline
\end{tabular}
}
\caption{The amount of underrepresentation in the samples used for hyperplane estimation does not appear to substantially affect the performance of the downstream classification model.}
\label{tab:underrep}
\end{table}
Next, we tried two ablations: (1) training models using only synthetic pairs whose hallucinated target labels agree, i.e. $G(\mathbf{z})$ and $G(\mathbf{z}')$ such that $f_t(G(\mathbf{z})){=}f_t(G(\mathbf{z}'))$, and (2) labeling synthetic images with $h_t(\mathbf{z})$ in place of $f_t(G(\mathbf{z}))$.
Table~\ref{tab:h_scores_incorrect} contains all results, averaged over all gender-dependent and gender-independent attributes. We find that both ablations are comparable to ours, albeit with a slight loss in AP (79.8 and 82.1 versus 82.6) and generally worse fairness metrics (average DEO is 17.4 and 18.1 vs. 16.1; BA is 0.9 and 0.7 vs. 0.5).
\begin{table}[t!]
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|}
\cline{2-5}
& AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}$f_t(G(\mathbf{z})) =$\\ $\;\; f_t(G(\mathbf{z}'))$\end{tabular}} & 79.8 $\pm$ 1.6 & 17.4 $\pm$ 4.5 & 0.9 $\pm$ 0.4 & \textbf{1.0 $\pm$ 0.3} \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Labels \\ computed \\ using $h_t$\end{tabular}} & 82.1 $\pm$ 1.5 & 18.1 $\pm$ 4.2 & 0.7 $\pm$ 0.4 & 1.4 $\pm$ 0.8 \\ \hline
\multicolumn{1}{|c|}{Ours} & \textbf{82.6 $\pm$ 1.5} & \textbf{16.1 $\pm$ 4.2} & \textbf{0.5 $\pm$ 0.4} & 1.3 $\pm$ 0.7 \\ \hline
\end{tabular}}
\caption{Mean performances over all gender-dependent and gender-independent attributes on the validation set when using different methods to pick and label synthetic images. We find that most performances are comparable, with our method having a slightly higher AP, and slightly better DEO and KL.}
\label{tab:h_scores_incorrect}
\end{table}
\begin{comment}
\subsection{Gender expression manipulation}
\label{sec:gender_manip}
In this section, we provide some examples of images we generated by manipulating gender expression. Figure~\ref{fig:gender_exp_manipulation} shows three such examples. For each, we contrast our method with the method proposed by Denton et al.~\cite{DHMG19}. These images are chosen among the many generated in order to demonstrate the strength of our approach.
\begin{figure}[ht!]
\centering
\scalebox{.5}{\input{gender_exp_manipulation}}
\caption{Manipulation of gender expression from one gender expression through another by manipulating vectors in the latent vector space. The top two rows change from a masculine gender expression to a feminine gender expression, while the bottom 4 change from a feminine gender expression to a masculine one. For each attribute, the top row shows the naive manipulation, which preserves correlations of real-world data, and the bottom row shows our method. For example, for the attribute \texttt{Glasses}, we see that the naive manipulation removes eyeglasses while changing the gender expression from masculine to feminine, whereas our method preserves glasses.}
\label{fig:gender_exp_manipulation}
\end{figure}
\end{comment}
\subsection{Number of required labeled images}
\label{sec:choi}
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Metric} & \multicolumn{5}{|c|}{Num. of samples used to compute $f_g$} \\
\cline{2-6}
& 10 & 100 & 1000 & 10000 & 162,770 \\
\hline
AP $\uparrow$ & 78.8 $\pm$ 1.5 & 78.8 $\pm$ 1.5 & 78.8 $\pm$ 1.5 & \textbf{78.9 $\pm$ 1.6} & 78.7 $\pm$ 1.6 \\
DEO $\downarrow$ & 11.1 $\pm$ 3.4 & 11.3 $\pm$ 3.0 & 10.5 $\pm$ 3.7 & 10.8 $\pm$ 3.7 & \textbf{9.6 $\pm$ 3.1} \\
BA $\downarrow$ & 0.6 $\pm$ 0.5 & 1.0 $\pm$ 0.5 & 0.5 $\pm$ 0.5 & 0.7 $\pm$ 0.5 & \textbf{0.4 $\pm$ 0.5} \\
KL $\downarrow$ & 0.6 $\pm$ 0.2 & 0.8 $\pm$ 0.3 & 0.7 $\pm$ 0.3 & 0.7 $\pm$ 0.3 & \textbf{0.5 $\pm$ 0.6} \\ \hline
\end{tabular}}
\caption{Average over 4 attributes when using different numbers of labeled examples to compute gender expression. Results are reported on the validation set. We find that while the fairness metrics improve slightly by using more labelled examples, this is gradual, and within the error bars, in all cases.}
\label{tab:choi_comp}
\end{table}
Choi et al.~\cite{GCSE19FairModeling} use a method that is unsupervised. Assuming access to a small unbiased dataset, as well as a large (possibly biased) dataset, they estimate the bias in the larger dataset, and learn a generative model that generates unbiased data at test time. Using these generated images, as well as real images, they train a downstream classifier for the attribute \texttt{Attractive}, and achieve an accuracy of 75\%. Since most of the protected attributes that we care about are sensitive (for example gender or race), not requiring protected attribute labels prevents perpetuation of harmful stereotypes. To understand how much our model depends on protected attribute labels, we note that we use them only to compute the linear separator in the latent space ($\mathbf{w_g}$ and $b_g$ from Section~\ref{sec:derivation} of this document). We therefore train classifiers for gender expression using different numbers of labeled images, and use these classifiers to train target attribute classifiers for 4 different attributes (\texttt{EyeBags}, \texttt{BrownHair}, \texttt{GrayHair} and \texttt{HighCheeks}).
Most of the fairness metrics improve slightly when using more labeled examples (DEO improves from 11.1 when using just 10 samples to 9.6 when using all 162k samples in the CelebA training set, BA improves from 0.6 to 0.4, and KL improves from 0.6 to 0.5), however, these are all gradual, and within the error bars. Full results are in Table~\ref{tab:choi_comp}.
\section*{Fair Attribute Classification through Latent Space De-biasing (Appendix)}
\section*{Appendix}
In this supplementary document, we provide additional details on certain sections of the main paper.
\begin{itemize}[leftmargin=0pt, itemsep=1pt, topsep=1pt]
\item[] \textbf{Section \ref{sec:derivation}:} We derive a closed form solution for $\mathbf{z}'$ which allows us to easily manipulate latent vectors in the latent space (Section~\ref{sec:method}).
\item[] \textbf{Section~\ref{sec:dataset}}
We provide attribute-level results and further analysis of our main experiments (Section~\ref{sec:baseline}).
\item[] \textbf{Section~\ref{sec:factors}:}
We discuss some factors that influence (or not) our method's effectiveness.
\item[] \textbf{Section~\ref{sec:extra_ablations}:}
We provide more details on the ablation studies (Section~\ref{sec:designchoices}).
\item[] \textbf{Section \ref{sec:choi}:}
We investigate how many images with protected attribute labels our method requires to achieve the desired performance.
\end{itemize}
\subsection{Derivation}
\label{sec:derivation}
In Section 3 of the main paper, we describe a method to compute perturbations within the latent vector space, such that the protected attribute score changes, while the target attribute score remains the same. More formally, if $h_t$ is a function that approximates the target attribute score, and $h_g$ is a function that approximates the protected attribute score, for every latent vector $\mathbf{z}$, we want to compute $\mathbf{z}'$ such that
\begin{equation}
\label{eq:target}
h_t(\mathbf{z}') = h_t(\mathbf{z}), \; \; \; h_g(\mathbf{z}') = -h_g(\mathbf{z}).
\end{equation}
We assume that the latent space $\mathcal{Z}$ is approximately linearly separable in the semantic attributes. $h_t$ and $h_g$ thus can be represented as linear models $\mathbf{w_t}$ and $\mathbf{w_g}$, normalized as $||\mathbf{w_t}|| = 1, ||\mathbf{w_g}||=1$, for the target and protected attribute respectively, with intercepts $b_t$ and $b_g$.
Equation \ref{eq:target} thus reduces to
\begin{equation}
\wa^T \mathbf{z} + b_t = \wa^T \mathbf{z}' + b_t, \; \; \; \wg^T \mathbf{z}' + b_g = -\wg^T \mathbf{z} - b_g.
\end{equation}
Simplifying, we get
\begin{equation}
\wa^T (\mathbf{z}'-\mathbf{z}) = 0, \; \; \; \wg^T (\mathbf{z}'+\mathbf{z}) + 2b_g = 0.
\end{equation}
These equations have infinitely many solutions; we choose the one that minimizes the distance between $\mathbf{z}$ and $\mathbf{z}'$, which holds when $\mathbf{z}'-\mathbf{z}$ lies in the span of $\{\mathbf{w_g}, \mathbf{w_t}\}$. Hence, we can represent $\mathbf{z}' - \mathbf{z} = \alpha \mathbf{w_t} + \beta \mathbf{w_g}$, and we get:
\begin{align}
\wa^T (\mathbf{z}'-\mathbf{z}) & = 0 \\
\wa^T (\alpha \mathbf{w_t} + \beta \mathbf{w_g}) & = 0\\
\Rightarrow \alpha & = - \beta \wa^T \mathbf{w_g}\\
\wg^T ((\mathbf{z}'-\mathbf{z}) +2\mathbf{z}) + 2b_g & = 0\\
\wg^T (\alpha\mathbf{w_t} + \beta \mathbf{w_g} +2\mathbf{z}) + 2b_g & = 0\\
-\beta (\wa^T \mathbf{w_g})^2 + \beta + 2\wg^T\mathbf{z} + 2b_g & = 0\\
\Rightarrow (1-(\wa^T \mathbf{w_g})^2)\beta & = -2(\wg^T\mathbf{z} + b_g) \\
\Rightarrow \beta & = -2\frac{\wg^T \mathbf{z} + b_g}{1-(\wa^T\mathbf{w_g})^2}\\
\Rightarrow \alpha & = 2\frac{(\wg^T \mathbf{z} + b_g)(\wa^T\mathbf{w_g})}{1-(\wa^T\mathbf{w_g})^2}
\end{align}
This gives us a closed-form solution for $\mathbf{z}'$:
\begin{equation}
\mathbf{z}' = \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right).
\end{equation}
As a quick verification, we confirm that this value of $\mathbf{z}'$ negates the protected attribute score and maintains the target attribute score:
\begin{align*}
&h_g(\mathbf{z}') \\
&= \wg^T \mathbf{z}' + b_g \\
&= \wg^T \left[\mathbf{z} -2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right)\right] + b_g \\
&= \wg^T \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(1 - (\wg^T\mathbf{w_t})\wg^T\mathbf{w_t} \right) +b_g \\
&= \wg^T \mathbf{z} - 2(\wg^T \mathbf{z}+b_g) +b_g = -\wg^T \mathbf{z} - b_g = -h_g(\mathbf{z})
\end{align*}
\begin{align*}
&h_t(\mathbf{z}') \\
&= \wa^T \mathbf{z}' + b_t\\
&= \wa^T\left[\mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} + b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right) \right] +b_t\\
&= \wa^T \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} + b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\wa^T\mathbf{w_g} - (\wg^T\mathbf{w_t})\right) + b_t\\
&= \wa^T \mathbf{z} + b_t = h_t(\mathbf{z})
\end{align*}
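The closed-form solution and both verification identities can also be checked numerically. The sketch below (assuming a 512-dimensional latent space and randomly drawn unit-norm hyperplanes, purely for illustration) confirms that the perturbation preserves $h_t$ and negates $h_g$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # assumed latent dimension, for illustration only

# Unit-norm hyperplane normals for the target (w_t) and protected (w_g)
# attributes, with intercepts b_t and b_g.
w_t = rng.normal(size=d); w_t /= np.linalg.norm(w_t)
w_g = rng.normal(size=d); w_g /= np.linalg.norm(w_g)
b_t, b_g = 0.3, -0.1

def perturb(z, w_t, w_g, b_g):
    """Closed-form z' that negates h_g(z) while preserving h_t(z)."""
    cos = w_g @ w_t
    coef = 2.0 * (w_g @ z + b_g) / (1.0 - cos ** 2)
    return z - coef * (w_g - cos * w_t)

z = rng.normal(size=d)
z_prime = perturb(z, w_t, w_g, b_g)

# h_t(z') = h_t(z)  and  h_g(z') = -h_g(z)
assert np.isclose(w_t @ z_prime + b_t, w_t @ z + b_t)
assert np.isclose(w_g @ z_prime + b_g, -(w_g @ z + b_g))
```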
\subsection{Attribute-level results}
\label{sec:dataset}
We provide attribute-level results and further analysis of our main experiments (Section 4.1 of the main paper).
\subsubsection{Linear separability of latent space}
Our paired augmentation method assumes that the latent space is approximately linearly separable in the semantic attributes. Here we investigate to what extent this assumption holds for different attributes.
As described in the main paper, the attribute hyperplanes were estimated with 10,000 samples using linear SVM.
In Table~\ref{tab:hyperplane-performance}, we report hyperplane accuracy and AP, measured on 160,000 synthetic samples, as well as the percentage of positive samples and the skew of the CelebA training set. The skew is calculated as $\frac{\max(N_{g=-1,a=1}, N_{g=1,a=1})}{N_{g=-1,a=1}+N_{g=1,a=1}}$, where $N_{g=-1,a=1}$ is the number of samples with protected attribute label $g{=}{-1}$ (perceived as not male) and positive target label $a{=}1$, and $N_{g=1,a=1}$ is defined likewise. The protected attribute class with more positive samples is noted in the skew column.
We observe that most attributes are well separated with the estimated hyperplanes, except for those with high skew that have too few examples from underrepresented subgroups.
For completeness, we also report our model's improvement over the baseline model on the four evaluation metrics. We did not find immediate correlations between the hyperplane quality and the downstream model performance.
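For reference, the skew statistic defined above is straightforward to compute from attribute labels; the sketch below (with toy labels, not CelebA data) mirrors the definition:

```python
import numpy as np

def attribute_skew(g, a):
    """Skew of a target attribute per the definition above:
    max(N_neg, N_pos) / (N_neg + N_pos), where N_neg and N_pos count
    positive samples (a = 1) with g = -1 and g = 1 respectively.
    Returns the skew and the majority protected class."""
    g, a = np.asarray(g), np.asarray(a)
    n_neg = int(np.sum((g == -1) & (a == 1)))
    n_pos = int(np.sum((g == 1) & (a == 1)))
    skew = max(n_neg, n_pos) / (n_neg + n_pos)
    return skew, (-1 if n_neg >= n_pos else 1)

# Toy labels: 9 of 10 positives have g = -1, so skew 0.9 toward g = -1.
g = [-1] * 9 + [1] + [-1] * 5 + [1] * 5
a = [1] * 10 + [0] * 10
print(attribute_skew(g, a))  # (0.9, -1)
```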
\begin{table*}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|cc|cc|cc|rrrr|}
\hline
Attribute type & \multicolumn{3}{|c|}{Attribute statistics} & \multicolumn{2}{|c|}{Hyperplane acc.} & \multicolumn{2}{|c|}{Hyperplane AP} & \multicolumn{4}{|c|}{Improvement over baseline}\\
\hline
Inconsistently labeled & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{BigLips} & 24.1\% & 0.73 & $g{=}{-1}$ & 80.3 & 92.0 & 49.7 & 28.9 & -0.35 & -0.79 & 1.23 & -0.03 \\
\texttt{BigNose} & 23.6\% & 0.75 & $g{=}1$ & 91.7 & 74.5 & 51.1 & 82.4 & -0.66 & 11.03 & 2.52 & 1.04 \\
\texttt{OvalFace} & 28.3\% & 0.68 & $g{=}{-1}$ & 75.4 & 74.2 & 85.3 & 63.1 & -1.82 & 7.53 & 3.33 & 0.77 \\
\texttt{PaleSkin} & 4.3\% & 0.76 & $g{=}{-1}$ & 94.4 & 96.9 & 48.4 & 30.9 & -1.90 & 4.26 & 0.31 & 0.26 \\
\texttt{StraightHair} & 20.9\% & 0.52 & $g{=}{-1}$ & 87.7 & 69.8 & 25.0 & 58.8 & -1.76 & 0.94 & 0.53 & -0.08 \\
\texttt{WavyHair} & 31.9\% & 0.81 & $g{=}{-1}$ & 73.0 & 92.1 & 79.4 & 23.5 & -0.65 & 7.59 & 1.33 & 0.26 \\ \hline
Gender-dependent & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{ArchedBrows} & 26.6\% & 0.92 & $g{=}{-1}$ & 72.3 & 92.1 & 82.6 & 25.5 & -0.69 & -3.31 & -0.09 & 0.02 \\
\texttt{Attractive} & 51.4\% & 0.77 & $g{=}{-1}$ & 88.4 & 81.0 & 97.9 & 81.9 & -0.33 & 3.25 & 0.98 & 0.41 \\
\texttt{BushyBrows} & 14.4\% & 0.71 & $g{=}1$ & 94.5 & 79.6 & 37.6 & 62.0 & -1.20 & 8.49 & 1.14 & 0.25 \\
\texttt{PointyNose} & 27.6\% & 0.75 & $g{=}{-1}$ & 73.6 & 82.9 & 84.4 & 59.9 & -1.32 & 3.25 & 0.99 & -0.40 \\
\texttt{RecedingHair} & 8.0\% & 0.62 & $g{=}1$ & 94.5 & 88.3 & 41.8 & 57.7 & -1.44 & 2.32 & 0.40 & 0.17 \\
\texttt{Young} & 77.9\% & 0.66 & $g{=}{-1}$ & 96.2 & 84.1 & 99.7 & 95.3 & -0.24 & 0.78 & 0.49 & 0.31 \\ \hline
Gender-independent & Positive & \multicolumn{2}{|c|}{Skew} & $g{=}{-1}$ & $g{=}1$ & $g{=}{-1}$ & $g{=}1$ & AP & DEO & BA & KL\\ \hline
\texttt{Bangs} & 15.2\% & 0.77 & $g{=}{-1}$ & 90.3 & 94.9 & 81.5 & 58.9 & -0.50 & 0.62 & 0.38 & 0.09 \\
\texttt{BlackHair} & 23.9\% & 0.52 & $g{=}1$ & 89.3 & 83.2 & 78.9 & 79.2 & -1.00 & 2.25 & 0.44 & 0.00 \\
\texttt{BlondHair} & 14.9\% & 0.94 & $g{=}{-1}$ & 88.9 & 97.1 & 82.7 & 19.8 & -0.77 & 1.04 & 0.23 & -0.12 \\
\texttt{BrownHair} & 20.3\% & 0.69 & $g{=}{-1}$ & 66.4 & 80.4 & 45.5 & 38.8 & -0.51 & -0.57 & -0.01 & 0.01 \\
\texttt{Chubby} & 5.8\% & 0.88 & $g{=}1$ & 99.1 & 89.9 & 7.6 & 33.8 & -1.95 & 4.08 & 0.01 & 0.13 \\
\texttt{EyeBags} & 20.4\% & 0.71 & $g{=}1$ & 90.7 & 74.4 & 64.1 & 74.4 & -1.74 & 8.30 & 1.91 & 0.58 \\
\texttt{Glasses} & 6.5\% & 0.80 & $g{=}1$ & 97.8 & 92.5 & 60.3 & 77.8 & -0.24 & -0.07 & 0.05 & -0.27 \\
\texttt{GrayHair} & 4.2\% & 0.86 & $g{=}1$ & 98.4 & 92.6 & 10.4 & 32.9 & -2.60 & 7.02 & 0.32 & 0.54 \\
\texttt{HighCheeks} & 45.2\% & 0.72 & $g{=}{-1}$ & 86.3 & 86.3 & 95.2 & 83.5 & -0.33 & -1.06 & 0.24 & 0.04 \\
\texttt{MouthOpen} & 48.2\% & 0.63 & $g{=}{-1}$ & 88.6 & 87.0 & 96.4 & 93.1 & -0.08 & 0.69 & 0.34 & -0.03 \\
\texttt{NarrowEyes} & 11.6\% & 0.56 & $g{=}{-1}$ & 93.8 & 92.1 & 29.6 & 26.4 & -0.97 & 3.10 & -0.53 & 0.12 \\
\texttt{Smiling} & 48.0\% & 0.65 & $g{=}{-1}$ & 91.5 & 90.7 & 98.0 & 96.5 & -0.09 & 1.01 & 0.67 & 0.03 \\
\texttt{Earrings} & 18.7\% & 0.97 & $g{=}{-1}$ & 71.8 & 96.3 & 56.9 & 3.0 & -0.63 & 8.18 & 0.64 & 1.40 \\
\texttt{WearingHat} & 4.9\% & 0.70 & $g{=}1$ & 97.4 & 94.0 & 45.0 & 60.6 & -0.95 & 2.67 & 0.14 & -0.06 \\ \hline
\textbf{Average} & 24.1\% & 0.73 & & 87.4 & 86.9 & 62.9 & 55.7 & -0.95 & 3.18 & 0.69 & 0.21 \\ \hline
\end{tabular}
}
\caption{Attribute-level information. The columns are (from left to right) target attribute name, percentage of positive samples, skew, hyperplane accuracy, hyperplane AP, and our model's improvement over the baseline model on the four evaluation metrics.}
\label{tab:hyperplane-performance}
\end{table*}
\subsubsection{Changes in baseline score}
We next evaluate how well we are able to maintain the target attribute score when perturbing the latent vector. We use the change in the baseline classifier score as a proxy for the target attribute score. We note that this measurement is imperfect, as the baseline classifier is known to perform worse on minority examples; however, we believe it still yields valuable insights.
For each attribute, we measure the absolute change in baseline score $|f_t(G(\mathbf{z})) - f_t(G(\mathbf{z}'))|$ over 5000 images, and compute averages based on what we expect the target and protected attribute values of $G(\mathbf{z}')$ to be. We plot this versus the fraction of images in the real world dataset that have these target and protected values (Figure~\ref{fig:score_change}). We find that there is a strong negative correlation. This could be because the target attribute is harder to maintain in this case, or because the baseline classifier has a tendency to misclassify minority samples.
Another question of interest was the interaction between attributes as we create balanced synthetic datasets for different target attributes. We measured the change in baseline classifier score for other attributes $t'$ when trying to maintain target attribute $t$, and found that some attributes changed drastically when creating a balanced dataset for any target (Table~\ref{tab:score_change_all}). For example, the attribute \texttt{Attractive} changed by a large amount irrespective of which target attribute we were trying to preserve. This suggests that some attributes are more sensitive to latent space manipulations than others.
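A minimal sketch of this measurement, with hypothetical stand-ins for the trained generator $G$ and baseline classifier $f_t$ (here an identity map and a linear sigmoid scorer), might look as follows:

```python
import numpy as np

def score_change_by_group(zs, zps, G, f_t, h_t, h_g):
    """Average |f_t(G(z)) - f_t(G(z'))|, bucketed by the expected
    (target, protected) labels of the perturbed sample G(z')."""
    buckets = {}
    for z, zp in zip(zs, zps):
        change = abs(f_t(G(z)) - f_t(G(zp)))
        key = (int(np.sign(h_t(zp))), int(np.sign(h_g(zp))))
        buckets.setdefault(key, []).append(change)
    return {k: float(np.mean(v)) for k, v in buckets.items()}

# Toy stand-ins: the real G is the trained GAN and f_t the baseline CNN.
rng = np.random.default_rng(1)
w_t, w_g = rng.normal(size=8), rng.normal(size=8)
G = lambda z: z                                   # identity "generator"
f_t = lambda x: 1.0 / (1.0 + np.exp(-(w_t @ x)))  # sigmoid attribute score
h_t = lambda z: w_t @ z
h_g = lambda z: w_g @ z
zs = rng.normal(size=(100, 8))
zps = zs + 0.1 * rng.normal(size=(100, 8))
stats = score_change_by_group(zs, zps, G, f_t, h_t, h_g)
```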
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{ImagesAppendix/baseline_score_change.png}
\caption{We plot average absolute change in the baseline classifier score versus the fraction of images in the dataset that have the corresponding ground truth labels. We separate them based on what the new ground truth values should be, for each attribute. We find that the score change is larger when creating an image with minority labels. This could be because we are unable to maintain the target attribute in this case or because the baseline classifier performs worse on minority images.}
\label{fig:score_change}
\end{figure}
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
Attribute & Change & Attribute & Change \\ \hline
\texttt{ArchedBrows} & \textbf{0.314} & \texttt{Glasses} & 0.109 \\
\texttt{Attractive} & \textbf{0.336} & \texttt{GrayHair} & 0.056 \\
\texttt{Bangs} & 0.120 & \texttt{HighCheeks} & \textbf{0.233} \\
\texttt{BlackHair} & 0.153 & \texttt{MouthOpen} & 0.187 \\
\texttt{BlondHair} & 0.180 & \texttt{NarrowEyes} & 0.066 \\
\texttt{BrownHair} & 0.158 & \texttt{PointyNose} & 0.152 \\
\texttt{BushyBrows} & 0.136 & \texttt{RecedingHair} & 0.069 \\
\texttt{Chubby} & 0.067 & \texttt{Smiling} & 0.176 \\
\texttt{Earrings} & 0.176 & \texttt{WearingHat} & 0.065 \\
\texttt{Eyebags} & \textbf{0.212} & \texttt{Young} & \textbf{0.268} \\ \hline
\end{tabular}}
\caption{We report the average classifier score change in an attribute when trying to create balanced datasets for other attributes. Classifier scores are between 0 and 1, and changes above 0.2 are bolded. We find that some attributes (e.g. \texttt{Attractive}, \texttt{Young}) change substantially, whereas others (e.g. \texttt{GrayHair}, \texttt{WearingHat}) change little.}
\label{tab:score_change_all}
\end{table}
\subsection{Factors of influence}
\label{sec:factors}
In this section, we discuss in more detail how some factors influence (or not) our method's effectiveness (Section 4.1 of the main paper).
\subsubsection{Skew of attributes}
For some attributes, the majority of the positive samples come from one gender expression.
For example, \texttt{ArchedBrows} has a skew of 0.92 towards $g{=}\num{-1}$, that is, 92\% of positive \texttt{ArchedBrows} samples have gender expression label $g{=}\num{-1}$.
To understand the effect of data skew on our method's performance, we ran experiments with differently skewed data. From the 162,770 CelebA training set images, we created slightly smaller training sets in which the attribute of interest (e.g. \texttt{HighCheeks}) has different values of skew. Specifically, we created three versions of the training data with skew 0.5, 0.7, and 0.9, while keeping the total number of images fixed. We trained a GAN on each training set, created a synthetic de-biased dataset with our method, and trained an attribute classifier with the training set and 160,000 pairs of synthetic images. For comparison, we also trained baseline models on just the differently skewed training sets. The classifiers were evaluated on the CelebA validation set. Table~\ref{tab:skewdata} summarizes the results. Compared to the baseline, our model has lower AP as expected, better DEO for skew 0.5 and 0.7, better or on par BA, and mixed KL (better at skew 0.5, on par at 0.7, slightly worse at 0.9). Overall, classifiers trained on more imbalanced data with higher skew perform worse on all metrics.
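The resampling used to build the differently skewed training sets can be sketched as below; `make_skewed_subset` and the toy labels are illustrative names of our own, not released code:

```python
import numpy as np

def make_skewed_subset(g, a, skew, n_total, rng):
    """Sample n_total indices so that a `skew` fraction of the target
    attribute's positives has g = -1, keeping the original positive
    rate. Illustrative helper, not the paper's released code."""
    g, a = np.asarray(g), np.asarray(a)
    n_pos = int(round(n_total * a.mean()))
    n_neg_g = int(round(n_pos * skew))  # positives with g = -1
    idx = np.concatenate([
        rng.choice(np.flatnonzero((a == 1) & (g == -1)), n_neg_g, replace=False),
        rng.choice(np.flatnonzero((a == 1) & (g == 1)), n_pos - n_neg_g, replace=False),
        rng.choice(np.flatnonzero(a == 0), n_total - n_pos, replace=False),
    ])
    rng.shuffle(idx)
    return idx

# Toy labels: 10,000 samples, half positive, protected classes alternating.
a = np.array([1] * 5000 + [0] * 5000)
g = np.tile([-1, 1], 5000)
idx = make_skewed_subset(g, a, skew=0.7, n_total=2000, rng=np.random.default_rng(0))
```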
\begin{table}[ht!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc|cc|}
\hline
\multirow{2}{*}{Skew} & \multicolumn{2}{c|}{AP $\uparrow$} & \multicolumn{2}{c|}{DEO $\downarrow$} \\ \cline{2-5}
& Base & Ours & Base & Ours \\ \hline
\multicolumn{1}{|c|}{0.5} & \textbf{95.1 $\pm$ 0.3} & 93.6 $\pm$ 0.4 & 7.0 $\pm$ 1.7 & \textbf{6.6 $\pm$ 1.8} \\
\multicolumn{1}{|c|}{0.7} & \textbf{94.8 $\pm$ 0.3} & 94.1 $\pm$ 0.3 & 19.6 $\pm$ 1.9 & \textbf{19.4 $\pm$ 1.9} \\
\multicolumn{1}{|c|}{0.9} & \textbf{94.1 $\pm$ 1.7} & 93.1 $\pm$ 0.4 & \textbf{31.3 $\pm$ 2.0} & 32.9 $\pm$ 1.9 \\ \hline
\multirow{2}{*}{Skew}& \multicolumn{2}{c|}{BA $\downarrow$} & \multicolumn{2}{c|}{KL $\downarrow$} \\
\cline{2-5}
& Base & Ours & Base & Ours \\ \hline
\multicolumn{1}{|c|}{0.5} & -1.9 $\pm$ 0.5 & \textbf{-3.0 $\pm$ 0.5} & 0.4 $\pm$ 0.1 & \textbf{0.3 $\pm$ 0.1} \\
\multicolumn{1}{|c|}{0.7} & \textbf{3.4 $\pm$ 0.5} & \textbf{3.4 $\pm$ 0.5} & \textbf{0.9 $\pm$ 0.1} & \textbf{0.9 $\pm$ 0.1} \\
\multicolumn{1}{|c|}{0.9} & 7.1 $\pm$ 0.5 & \textbf{7.0 $\pm$ 0.5} & \textbf{1.7 $\pm$ 0.1} & 1.9 $\pm$ 0.1 \\ \hline
\end{tabular}
}
\caption{Comparison of \texttt{HighCheeks} attribute classifiers trained on differently skewed data.}
\label{tab:skewdata}
\end{table}
\subsubsection{Discriminability of attributes}
Nam et al.~\cite{nam2020learning} recently observed that correlations among attributes affect a classifier only if the protected attribute is `easier' to learn than the target attribute.
Inspired by their observation, we design an experiment where we put a pair of CelebA attributes in competition to assess their relative discriminability.
We create a fully skewed dataset in which half of the images have both attributes and the other half have neither. With this dataset, we train a classifier to predict whether an image has both attributes or neither. At test time, we evaluate the classifier on a perfectly balanced subset of the CelebA validation set (where each of the four possible attribute combinations occupies a quarter of the dataset), and compute AP for each attribute. If one attribute has a higher AP than the other, it suggests that this attribute is `easier' to learn. We repeat this experiment with a second dataset skewed in a different way (i.e. half of the images have one attribute but not the other).
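A toy version of this competition experiment can be sketched with a least-squares linear scorer standing in for the trained classifier (an assumption for illustration): train on the fully skewed set, then compare per-attribute AP on the balanced test set:

```python
import numpy as np

def average_precision(y, s):
    """AP = mean precision at each positive, ranked by descending score."""
    order = np.argsort(-s)
    y = np.asarray(y)[order]
    prec = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float(prec[y == 1].mean())

def relative_discriminability(X_tr, y_both, X_te, y1, y2):
    """Fit a linear scorer on the fully skewed set (1 = both attributes,
    0 = neither) and compare per-attribute AP on a balanced test set;
    the attribute with the higher AP is the 'easier' one to learn."""
    A = np.hstack([X_tr, np.ones((len(X_tr), 1))])  # bias column
    w, *_ = np.linalg.lstsq(A, y_both, rcond=None)
    s = np.hstack([X_te, np.ones((len(X_te), 1))]) @ w
    return average_precision(y1, s), average_precision(y2, s)

# Toy features: attribute 1 is cleanly encoded, attribute 2 is noisy.
rng = np.random.default_rng(0)
def features(a1, a2):
    return np.stack([a1 + 0.1 * rng.normal(size=len(a1)),
                     a2 + 2.0 * rng.normal(size=len(a2))], axis=1)

y_tr = np.repeat([1, 0], 200)            # both present vs. neither
X_tr = features(y_tr, y_tr)
y1_te = np.tile([1, 1, 0, 0], 100)       # balanced four combinations
y2_te = np.tile([1, 0, 1, 0], 100)
X_te = features(y1_te, y2_te)
ap1, ap2 = relative_discriminability(X_tr, y_tr, X_te, y1_te, y2_te)
assert ap1 > ap2  # the cleanly encoded attribute is 'easier'
```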
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\input{full_skew_results}}
\caption{Discriminability of attributes. We compare attributes on the row to those in the columns. \texttt{y} indicates that the attribute in the row is easier to learn than that in the column and \texttt{n} indicates the opposite. We find that gender expression is one of the easiest attributes to learn, while \texttt{Young} is relatively hard.}
\label{tab:full_skew}
\end{table}
The results for gender-dependent and gender-independent attributes are in Table~\ref{tab:full_skew}. We report that an attribute is `easier' to learn than another if it has a higher AP for both created datasets. We find that gender expression is one of the easiest attributes to learn (it has a higher AP than every attribute we tested except \texttt{WearingHat} and \texttt{Glasses}), which may be why gender bias is prevalent in many models. On the other hand, \texttt{Young} is relatively hard for a model to learn (it is harder to learn than all but 4 other attributes), so its correlation with other attributes may not be as influential.
\subsection{Ablation studies}
\label{sec:extra_ablations}
In this section, we describe in more detail the ablation studies we have conducted to investigate how improved hyperplanes and use of different labels for synthetic images impact (or not) our method's performance (Section 4.2 of the main paper).
We first investigate if hyperplanes estimated with better balanced samples improve the performance of downstream attribute classifiers.
We test this hypothesis for the attribute \texttt{HighCheeks}: we estimate hyperplanes with different fractions of positive samples, while keeping the total number of samples constant at 12,000 and the number of positive samples the same for each gender expression.
We then train attribute classifiers with the CelebA training set and synthetic pair images augmented with these different hyperplanes. In Table~\ref{tab:underrep}, we report results evaluated on the CelebA validation set.
We find that although the fairness metrics deteriorate as the target attribute hyperplanes were estimated with less balanced samples, this rate is relatively slow, and the downstream classifier still performs reasonably well.
\begin{table}[ht]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Fraction & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\ \hline
50.0\% & 95.1 $\pm$ 0.3 & 13.2 $\pm$ 1.7 & {0.5 $\pm$ 0.5} & 0.7 $\pm$ 0.1 \\
12.5\% & {95.1 $\pm$ 0.3} & 14.0 $\pm$ 1.7 & {0.8 $\pm$ 0.5} & \textbf{0.6 $\pm$ 0.1}\\
6.3\% & {95.1 $\pm$ 0.3} & {15.1 $\pm$ 1.8} & 1.3 $\pm$ 0.5 & 0.8 $\pm$ 0.2 \\
3.1\% & {95.1 $\pm$ 0.3} & {14.2 $\pm$ 1.7} & 1.0 $\pm$ 0.5 & 0.7 $\pm$ 0.1\\
1.6\% & 95.1 $\pm$ 0.3 & \textbf{12.9 $\pm$ 1.8} & \textbf{0.3 $\pm$ 0.5} & 0.7 $\pm$ 0.1 \\ \hline
\end{tabular}
}
\caption{The amount of underrepresentation in the samples used for hyperplane estimation does not appear to substantially affect the performance of the downstream classification model.}
\label{tab:underrep}
\end{table}
Next, we trained models with synthetic images in two ablated settings: (1) using only pairs $G(\mathbf{z})$ and $G(\mathbf{z}')$ with consistent hallucinated target labels, i.e. $f_t(G(\mathbf{z})){=}f_t(G(\mathbf{z}'))$, and (2) labeling synthetic images with $h_t(\mathbf{z})$ in place of $f_t(G(\mathbf{z}))$.
Table~\ref{tab:h_scores_incorrect} contains all results, averaged over all gender-dependent and gender-independent attributes. We find that both ablations are comparable to ours, with a slight loss in AP (79.8 and 82.1 versus 82.6) and generally worse fairness metrics (average DEO is 17.4 and 18.1 vs. 16.1; BA is 0.9 and 0.7 vs. 0.5).
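The pair selection in the first ablation reduces to a simple filter over generated pairs; the sketch below uses hypothetical stand-ins for the generator $G$ and baseline classifier $f_t$:

```python
import numpy as np

def consistent_pairs(zs, zps, G, f_t, thresh=0.5):
    """Keep only pairs whose hallucinated target labels agree, i.e.
    f_t(G(z)) and f_t(G(z')) fall on the same side of the threshold."""
    return [(z, zp) for z, zp in zip(zs, zps)
            if (f_t(G(z)) > thresh) == (f_t(G(zp)) > thresh)]

# Toy stand-ins for the trained generator and baseline classifier.
rng = np.random.default_rng(2)
w = rng.normal(size=4)
G = lambda z: z
f_t = lambda x: 1.0 / (1.0 + np.exp(-(w @ x)))
zs = rng.normal(size=(50, 4))
zps = zs + 0.5 * rng.normal(size=(50, 4))
pairs = consistent_pairs(zs, zps, G, f_t)
```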
\begin{table}[t!]
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|}
\cline{2-5}
& AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}$f_t(G(\mathbf{z})) =$\\ $\;\; f_t(G(\mathbf{z}'))$\end{tabular}} & 79.8 $\pm$ 1.6 & 17.4 $\pm$ 4.5 & 0.9 $\pm$ 0.4 & \textbf{1.0 $\pm$ 0.3} \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Labels \\ computed \\ using $h_t$\end{tabular}} & 82.1 $\pm$ 1.5 & 18.1 $\pm$ 4.2 & 0.7 $\pm$ 0.4 & 1.4 $\pm$ 0.8 \\ \hline
\multicolumn{1}{|c|}{Ours} & \textbf{82.6 $\pm$ 1.5} & \textbf{16.1 $\pm$ 4.2} & \textbf{0.5 $\pm$ 0.4} & 1.3 $\pm$ 0.7 \\ \hline
\end{tabular}}
\caption{Mean performances over all gender-dependent and gender-independent attributes on the validation set when using different methods to pick and label synthetic images. We find that most performances are comparable, with our method having a slightly higher AP, and slightly better DEO and BA.}
\label{tab:h_scores_incorrect}
\end{table}
\begin{comment}
\subsection{Gender expression manipulation}
\label{sec:gender_manip}
In this section, we provide some examples of images we generated by manipulating gender expression. Figure~\ref{fig:gender_exp_manipulation} shows three such examples. For each, we contrast our method with the method proposed by Denton et al.~\cite{DHMG19}. These images are chosen among the many generated in order to demonstrate the strength of our approach.
\begin{figure}[ht!]
\centering
\scalebox{.5}{\input{gender_exp_manipulation}}
\caption{Manipulation of gender expression from one gender expression through another by manipulating vectors in the latent vector space. The top two rows change from a masculine gender expression to a feminine gender expression, while the bottom 4 change from a feminine gender expression to a masculine one. For each attribute, the top row shows the naive manipulation, which preserves correlations of real-world data, and the bottom row shows our method. For example, for the attribute \texttt{Glasses}, we see that the naive manipulation removes eyeglasses while changing the gender expression from masculine to feminine, whereas our method preserves glasses.}
\label{fig:gender_exp_manipulation}
\end{figure}
\end{comment}
\subsection{Number of required labeled images}
\label{sec:choi}
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Metric} & \multicolumn{5}{|c|}{Num. of samples used to compute $f_g$} \\
\cline{2-6}
& 10 & 100 & 1000 & 10000 & 162,770 \\
\hline
AP $\uparrow$ & 78.8 $\pm$ 1.5 & 78.8 $\pm$ 1.5 & 78.8 $\pm$ 1.5 & \textbf{78.9 $\pm$ 1.6} & 78.7 $\pm$ 1.6 \\
DEO $\downarrow$ & 11.1 $\pm$ 3.4 & 11.3 $\pm$ 3.0 & 10.5 $\pm$ 3.7 & 10.8 $\pm$ 3.7 & \textbf{9.6 $\pm$ 3.1} \\
BA $\downarrow$ & 0.6 $\pm$ 0.5 & 1.0 $\pm$ 0.5 & 0.5 $\pm$ 0.5 & 0.7 $\pm$ 0.5 & \textbf{0.4 $\pm$ 0.5} \\
KL $\downarrow$ & 0.6 $\pm$ 0.2 & 0.8 $\pm$ 0.3 & 0.7 $\pm$ 0.3 & 0.7 $\pm$ 0.3 & \textbf{0.5 $\pm$ 0.6} \\ \hline
\end{tabular}}
\caption{Average over 4 attributes when using different numbers of labeled examples to compute gender expression. Results are reported on the validation set. We find that while the fairness metrics improve slightly with more labeled examples, the improvement is gradual and within the error bars in all cases.}
\label{tab:choi_comp}
\end{table}
Choi et al.~\cite{GCSE19FairModeling} propose an unsupervised method. Assuming access to a small unbiased dataset as well as a large (possibly biased) dataset, they estimate the bias in the larger dataset and learn a generative model that generates unbiased data at test time. Using these generated images, as well as real images, they train a downstream classifier for the attribute \texttt{Attractive}, and achieve an accuracy of 75\%. Since most of the protected attributes we care about are sensitive (for example, gender or race), not requiring protected attribute labels helps prevent the perpetuation of harmful stereotypes. To understand how much our model depends on protected attribute labels, we note that we use them only to compute the linear separator in the latent space ($\mathbf{w_g}$ and $b_g$ from Section~\ref{sec:derivation} in this document). We therefore train classifiers for gender expression using different numbers of labeled images, and use these classifiers to train target attribute classifiers for 4 different attributes (\texttt{EyeBags}, \texttt{BrownHair}, \texttt{GrayHair} and \texttt{HighCheeks}).
Most of the fairness metrics improve slightly when using more labeled examples (DEO improves from 11.1 when using just 10 samples to 9.6 when using all 162k samples in the CelebA training set, BA improves from 0.6 to 0.4, and KL improves from 0.6 to 0.5), however, these are all gradual, and within the error bars. Full results are in Table~\ref{tab:choi_comp}.
\section{Introduction}
Large-scale supervised learning has been the driving force behind advances in visual recognition. Recently, however, there has been a growing number of concerns about the disparate impact of these visual recognition systems. Face recognition systems trained from datasets with an underrepresentation of certain racial groups have exhibited lower accuracy for those groups~\cite{BG18IntersectionalDataset}. Activity recognition models trained on datasets with high correlations between the activity and the gender expression of the depicted person have over-amplified those correlations~\cite{ZWYOC17BiasAmp}. Computer vision systems are statistical models that are trained to maximize accuracy on the majority of examples, and they do so by exploiting the most discriminative cues in a dataset, potentially learning spurious correlations.
In this work, we introduce a new framework for training computer vision models that aims to mitigate such concerns, illustrated in Figure~\ref{fig:pullfig}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/PullFigure_large.png}
\caption{Training a visual classifier for an attribute (e.g., \texttt{hat}) can be complicated by correlations in the training data. For example, the presence of hats can be correlated with the presence of glasses. We propose a dataset augmentation strategy using Generative Adversarial Networks (GANs) that successfully removes this correlation by adding or removing glasses from existing images, creating a balanced dataset.
}
\label{fig:pullfig}
\end{figure}
One proposed path for building `fairer' computer vision systems is through a `fairer' data collection process. Works such as \cite{BG18IntersectionalDataset,YQFDR20BalancingImagenet} propose techniques for better sampling data to more accurately represent all people. Creating a perfectly balanced dataset, however, is infeasible in many cases.
With the advances in Generative Adversarial Networks (GANs)~\cite{GPMXWOCB14GANs}, several works propose using generated data to augment real-world datasets~\cite{GCSE19FairModeling,SHCV19FairnessGAN,XYZW18Fairgan}. These methods have been growing in computational and algorithmic complexity (e.g., \cite{SHCV19FairnessGAN,XYZW18Fairgan} adding multiple loss functions to GAN training), necessitating access to a sufficient number of intersectional real-world samples. In contrast, we demonstrate a simple and novel data augmentation technique that uses a single GAN trained on a biased real-world dataset.
\smallsec{Illustrative example}
Consider our example from Figure~\ref{fig:pullfig}. Our goal is to train a visual recognition model that recognizes the presence of an attribute, such as wearing a hat. Suppose in the real world wearing a hat is correlated with wearing glasses---for example, because people often wear both hats and sunglasses outside and take them off inside. This correlation may be reflected in the training data, and a classifier trained to recognize a hat may rely on the presence of glasses. Consequently, the classifier may fail to recognize a hat in the absence of glasses, and vice versa.
We propose using a GAN to generate more images with hats but not glasses and images with glasses but not hats, such that \texttt{WearingHat} is de-correlated from \texttt{Glasses} in the training data, by making perturbations in the latent space.
Building on work by Denton et al.~\cite{DHMG19}, which demonstrates a method for learning interpretable image manipulation directions, we propose an improved latent vector perturbation method that allows us to preserve the \texttt{WearingHat} attribute while changing the \texttt{Glasses} attribute (Figure~\ref{fig:changeGlasses}).
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{Images/glasses_baseline.png} \\
\includegraphics[width=0.9\linewidth]{Images/glasses_perp.png}
\caption{Consider a GAN trained on a biased real-world dataset of faces where the presence of hats is correlated with the presence of glasses. Naively moving in a direction that adds glasses also adds a hat (\emph{Top}). We learn a direction in the latent space that allows us to add glasses, while not adding a hat (\emph{Bottom}). Note that attributes apart from the target attribute can change.}
\label{fig:changeGlasses}
\end{figure}
\smallsec{Protected attributes} Our goal is to examine and mitigate biases of sensitive attributes such as gender expression, race, or age in visual classifiers. However, visual manipulations or explicit classifications along these dimensions have the potential to perpetuate harmful stereotypes (see~\cite{google_VB}). Hence in our illustrations, we use \texttt{Glasses} as the protected attribute, as it has a clear visual signal.
In the quantitative experimental results, we report our findings on the more sensitive protected attributes of gender expression and age.
\smallsec{Contributions}
We propose a method for perturbing vectors in the GAN latent space that successfully de-correlates target and protected attributes and allows for generating a de-biased dataset, which we use to augment the real-world dataset.
Attribute classifiers trained with the augmented dataset achieve quantitative improvements in several fairness metrics over both baselines and prior work~\cite{SHCV19FairnessGAN,Sharmanska2020contrastive,WQKGNHR19DomainBiasMitigation}, while maintaining comparable average precision.
Furthermore, we analyze the CelebA~\cite{LLWT15CelebA} attributes with respect to label characteristics\footnote{We observe several discrepancies in the CelebA~\cite{LLWT15CelebA} attribute labels and categorize the attributes into three categories: inconsistently labeled, gender-dependent, and gender-independent.}, discriminability, and skew, and discuss how these factors influence our method's performance.
We also evaluate our design choices with ablation studies and the results demonstrate the effectiveness of our augmentation method.\footnote{Code for all our experiments can be found at \url{https://github.com/princetonvisualai/gan-debiasing}.}
\section{Related Work}
\smallsec{De-biasing models}
The effect of gender and racial bias on AI models has been well documented~\cite{BCZSK16DebiasWord,BG18IntersectionalDataset,HBSDR18BiasCaptioning,WZYCO19BalancedDatasets,WQKGNHR19DomainBiasMitigation}.
Models trained on biased data sometimes even amplify the existing biases~\cite{ZWYOC17BiasAmp}.
Tools such as AI Fairness 360~\cite{aif360-oct-2018} and REVISE~\cite{wang2020revise} surface such biases in large-scale datasets and enable preemptive analysis.
In parallel, various works propose methods for preventing unwanted dataset biases from influencing the model.
Oversampling techniques~\cite{bickel09discriminative,elkan01CostSensitiveLearning} duplicate minority samples in imbalanced data to give them higher weight in training.
Some works propose to mitigate bias through adversarial learning~\cite{WZYCO19BalancedDatasets,ZLM18AdversarialLearning} or through learning separate classifiers for each protected attribute~\cite{RAM17inclusivefacenet,WQKGNHR19DomainBiasMitigation}.
Other works improve fairness by introducing constraints~\cite{lokh2020fairalm} or regularization terms~\cite{Baharlouei_ICLR2020} during training.
Contrary to these algorithmic approaches, our work aims to mitigate biases by training the model with a generated de-biased dataset.
\smallsec{Generating and perturbing images using GANs}
Generative Adversarial Networks (GANs)~\cite{GPMXWOCB14GANs} are a popular class of generative models composed of a generator and a discriminator trained in an adversarial setting.
Over the past few years, a number of works \cite{GAADC17WassersteinTraining,KALL17PGAN,KLA19StyleGAN,liu2020selfconditioned,SGZCRC16TrainingGAN} improved GANs to generate more realistic images with better stability.
Shen et al.~\cite{SGTZ20LatentSpaceGANs} show that the latent space of GANs have semantic meaning and demonstrate facial attributes editing through latent space manipulation.
Denton et al.~\cite{DHMG19} propose a method to evaluate how sensitive a trained classifier is to such image manipulations, and find several attributes that affect a smiling classifier trained on CelebA.
Balakrishnan et al.~\cite{Balakrishnan2020transect} use GANs to generate synthetic images that differ along specific attributes while preserving other attributes, and use them to measure algorithmic bias of face analysis algorithms.
Unlike~\cite{Balakrishnan2020transect,DHMG19} who use the GAN-generated images to evaluate models, our work uses these generated images to train better attribute classification models.
\smallsec{Using GANs to augment datasets}
Several works use GANs to augment datasets for low-shot~\cite{HG17LowShotRecognition} and long-tail~\cite{ZLQL17EmotionCycleGAN} recognition tasks, whereas our work focuses specifically on de-biasing classifiers affected by dataset bias.
More related to our work are~\cite{GCSE19FairModeling,SHCV19FairnessGAN,Sharmanska2020contrastive} which leverage GANs to generate less biased data.
Choi et al.~\cite{GCSE19FairModeling}, given access to a small, unlabeled, and unbiased dataset, detect bias in a large and potentially biased dataset, and learn a generator that generates unbiased data at test time.
Sattigeri et al.~\cite{SHCV19FairnessGAN} train a GAN with a modified loss function to achieve demographic parity or equality of odds in the generated dataset.
Sharmanska et al.~\cite{Sharmanska2020contrastive} use an image-to-image translation GAN to generate more minority samples and create a balanced dataset.
While~\cite{GCSE19FairModeling,SHCV19FairnessGAN,Sharmanska2020contrastive} require training a new GAN for each bias they want to correct, our method uses a single GAN trained on a biased dataset to augment all attributes.
\section{Method} \label{sec:method}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{Images/MethodFigure_Latent.png}
\caption{\emph{(Top left)} Our latent vector perturbation method.
For each $\mathbf{z}$ sampled from the latent space of a trained GAN, we compute $\mathbf{z}'$ such that its target attribute score remains the same (according to $\mathbf{w_t}$) while its protected attribute score is negated (according to $\mathbf{w_g}$). \emph{(Top right)} We add images $G(\mathbf{z})$ and $G(\mathbf{z}')$ to our training set, and train a target attribute classifier on both the real-world data and the generated de-biased data.}
\label{fig:Method}
\end{figure}
We study a class of problems where a protected attribute is correlated with a target label in the data $\mathcal{X}$, influencing target label prediction.
Let $t$ be the target label (e.g., \texttt{WearingHat} in the running example from Figure~\ref{fig:pullfig}) and $g$ be the protected attribute (e.g., gender expression or \texttt{Glasses} from our running example) with $t,g\in\{-1,1\}$.
To mitigate the effect of unwanted dataset bias, we aim to generate a balanced set of synthetic images $\mathcal{X_\textit{syn}}$ where the protected attribute and target label are de-correlated.
Concretely, let $f_t$ be a function from images to binary labels that approximates the target label $t$, and $f_g$ be a function from images to binary labels that approximates the protected attribute $g$.
We learn these classifiers in a supervised fashion with the original data.\footnote{$f_t$ is equivalent to the baseline classifier in Section~\ref{sec:baseline}.}
We now want to generate synthetic data $\mathcal{X_\textit{syn}}$ with the property that for $\mathbf{x} \in \mathcal{X_\textit{syn}}$:
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
P\left[{f_t}(\mathbf{x}) = 1 | {f_g}(\mathbf{x}) = 1 \right] = P\left[{f_t}(\mathbf{x}) =1 \right],
\label{eq:indep}
\end{equation}
\endgroup
such that attributes $t$ and $g$ are de-correlated.
\smallsec{De-biased dataset creation}
To create $\mathcal{X_\textit{syn}}$, we use a GAN trained on real images $\mathcal{X}$ whose generator $G$ generates a synthetic image $\mathbf{x}$ from a random latent vector $\mathbf{z} \in \mathcal{Z}$.
We can assign semantic attribute labels to these images using the learned functions ${f_t}(\mathbf{x})$ and ${f_g}(\mathbf{x})$.
However, as the GAN inherits correlations from its training data, a random sampling of $\mathbf{z}$ will produce an $\mathcal{X_\textit{syn}}$ with similar correlations and biases as $\mathcal{X}$.
Hence, we propose a latent vector perturbation method that allows us to generate a de-biased $\mathcal{X_\textit{syn}}$.
We sample a random set of latent vectors $Z \subset \mathcal{Z}$ (inheriting the biases) and train classifiers $h_t, h_g \colon \mathcal{Z} \rightarrow [\num{-1}, 1]$ in the latent space that approximate ${f_t} \circ G$ and ${f_g} \circ G$, respectively.
That is, we train classifiers $h_t$ with input $\mathbf{z}$ and output ${f_t}(G(\mathbf{z}))$, and $h_g$ with input $\mathbf{z}$ and output ${f_g}(G(\mathbf{z}))$.
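As a minimal sketch of this fitting step (the helper name is ours; we use scikit-learn's linear SVM, and normalize the weight vector so scores behave as signed distances, per our unit-norm assumption):

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_latent_hyperplane(Z, labels):
    """Fit a linear model h(z) = w.z + b in the GAN latent space.

    Z: (n, d) array of sampled latent vectors.
    labels: hallucinated attribute labels f(G(z)) in {-1, 1}.
    Returns a unit-norm normal w and a correspondingly rescaled intercept b.
    """
    svm = LinearSVC(C=1.0, max_iter=10000).fit(Z, labels)
    w = svm.coef_[0]
    scale = np.linalg.norm(w)
    return w / scale, float(svm.intercept_[0]) / scale
```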
Given a vector $\mathbf{z}$, we generate a complementary vector $\mathbf{z}'$ with the same (predicted) target label but the opposite (predicted) protected attribute label, or
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\label{eq:pair_generation}
h_t(\mathbf{z}') = h_t(\mathbf{z}), \; \; \; h_g(\mathbf{z}') = -h_g(\mathbf{z}).
\end{equation}
\endgroup
We note that this data generation method is agnostic to the type of classifier used to compute $h$.
In our work, we assume that the latent space is approximately linearly separable in the semantic attributes, as observed and empirically validated by Denton et al.~\cite{DHMG19}. In this case, $h_t$ and $h_g$ can be represented as linear models (hyperplanes) $\mathbf{w_t}$ and $\mathbf{w_g}$ with intercepts $b_t$ and $b_g$ for the target and protected attributes respectively. We can derive a closed-form solution for $\mathbf{z}'$ as\footnote{Derivations are in the appendix (Section~\ref{sec:derivation}). We assume $\|\mathbf{w_t}\| = \|\mathbf{w_g}\| = 1$.}
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\mathbf{z}' = \mathbf{z} - 2\left(\frac{\wg^T \mathbf{z} +b_g}{1 - (\wg^T\mathbf{w_t})^2}\right) \left(\mathbf{w_g} - (\wg^T\mathbf{w_t})\mathbf{w_t} \right).
\end{equation}
\endgroup
This latent vector perturbation method is illustrated in Figure~\ref{fig:Method} (\emph{Top left}). A similar idea of hyperplane projection was presented in Zhang et al.~\cite{ZLM18AdversarialLearning}, although for a different goal of adversarial training.
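A minimal numpy sketch of this closed-form perturbation (the function name is ours; as in the footnote, $\mathbf{w_t}$ and $\mathbf{w_g}$ are assumed unit-norm):

```python
import numpy as np

def perturb_latent(z, w_t, w_g, b_g):
    """Compute z' with the same target score (w_t.z + b_t) but negated
    protected score: w_g.z' + b_g = -(w_g.z + b_g).
    w_t and w_g must be unit-norm hyperplane normals."""
    score_g = w_g @ z + b_g   # signed protected-attribute score of z
    cos = w_g @ w_t           # cosine between the two hyperplane normals
    return z - 2 * (score_g / (1 - cos**2)) * (w_g - cos * w_t)
```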
The sampling process results in a complementary image pair:
\begin{itemize}[topsep=1pt, itemsep=1pt, leftmargin=*]
\item $\mathbf{x}=G(\mathbf{z})$ with target label ${f_t}(G(\mathbf{z}))$ and protected attribute label ${f_g}(G(\mathbf{z}))$
\item $\mathbf{x}'=G(\mathbf{z}')$ with target label ${f_t}(G(\mathbf{z}))$ and protected attribute label $-{f_g}(G(\mathbf{z}))$,
\end{itemize}
creating de-biased data $\mathcal{X}_{syn}$. We train our target attribute classifier with $\mathcal{X}$ and $\mathcal{X}_{syn}$, as shown in Figure~\ref{fig:Method}.
We label the generated images $\mathbf{x}$ and $\mathbf{x}'$ both with ${f_t}(\mathbf{x})$ because it allows us to capture the target attribute labels better than using ${f_t}(\mathbf{x})$ and ${f_t}(\mathbf{x}')$.
It is likely that the accuracy of ${f_t}$ is higher for the overrepresented group, and $\mathbf{x}$ will more often belong to the overrepresented group and $\mathbf{x}'$ to the underrepresented group.
However, other design choices are possible in our approach---for example, we could use $h_t(\mathbf{z})$ and $h_t(\mathbf{z}')$ instead (after thresholding appropriately) or only use $\mathbf{z}$ for which ${f_t}(\mathbf{x}) = {f_t}(\mathbf{x}')$.
We compare these different design choices experimentally in Section~\ref{sec:designchoices}.
\smallsec{Advantages}
Our data augmentation method has several attractive properties:
\begin{enumerate}[topsep=1pt, itemsep=1pt, leftmargin=*]
\item We use a single GAN trained on the biased real-world dataset to augment multiple target labels and protected attributes. This is in contrast to prior works like \cite{SHCV19FairnessGAN,GCSE19FairModeling} that require training a GAN for every pair of target and protected attributes.
\item By augmenting samples $\mathbf{z}$ generated from (approximately) the original data distribution the GAN was trained on and maintaining their target attribute scores, our method preserves the intra-class variation of the images.
\item The samples $\mathbf{z}$ and $\mathbf{z}'$ are generated to simulate the independence goal of Equation~\ref{eq:indep}. By construction, $\mathbf{z}'$ maintains $\mathbf{z}$'s target label ${f_t}(G(\mathbf{z}))$ and takes on the opposite protected attribute label $-{f_g}(G(\mathbf{z}))$.
\item Our method generalizes to multiple protected attributes $g$. We demonstrate how our method can simultaneously augment two protected attributes in Section~\ref{sec:comparisons_recent} when we compare our work to Sharmanska et al.~\cite{Sharmanska2020contrastive}.
\end{enumerate}
\section{Experiments} \label{sec:exp}
In this section, we study the effectiveness of our data augmentation method on training fairer attribute classifiers. We first describe our experiment setup and compare our results to those of a baseline classifier. We then discuss how different factors influence our method's performance, and finally compare our work to several prior works.
\smallsec{Dataset and attributes categorization}
Given the task of training attribute classifiers that are not dependent on gender expression, we require a dataset that has target labels, as well as gender expression labels.
CelebA~\cite{LLWT15CelebA} is a dataset with 202,599 images of celebrity faces, each with 40 binary attribute labels.
We assume the \texttt{Male} attribute corresponds to gender expression.\footnote{Consistent with the dataset annotation and with the literature, we adopt the convention of using \texttt{Male} as our protected attribute. It is not clear if this label denotes assigned sex at birth, gender identity, or gender expression (socially perceived gender). Since the images were labeled by a professional labeling company~\cite{LLWT15CelebA}, we assume that the annotation refers to the perceived gender, or gender expression. Moreover, this attribute is annotated in a binary fashion. We would like to point out that none of these attributes (assigned sex at birth, gender identity, nor gender expression) are binary, however, we use these labels as is for our goal of de-biasing classifiers.}
Among the other 39 attributes, we use 26 of them that have between 1\% and 99\% fraction of positive images for each gender expression.\footnote{We don't use \texttt{Blurry} as it has very few positive images ($\approx 5\%$).
We don't use \texttt{WearingNecklace} as the cropped images used in the GAN from \cite{PytorchPGAN} don't display the neck.} However, we noticed several discrepancies among the attribute labels, and decided to categorize the attributes into three categories: \textit{inconsistently labeled}, \textit{gender-dependent}, and \textit{gender-independent}.
We categorized attributes as \textit{inconsistently labeled} when we visually examined sets of examples and found that we often disagreed with the labeling and could not distinguish between positive and negative examples.
This category includes \texttt{StraightHair} shown in Figure \ref{fig:celeba_straighthair}, as well as \texttt{BigLips}, \texttt{BigNose}, \texttt{OvalFace}, \texttt{PaleSkin}, and \texttt{WavyHair}.\footnote{We note that for \texttt{BigNose}, we found that while there were some images that were easy to classify as having a big nose, or not having a big nose, most images were between these two extremes, and we believe that different annotators marked these `in-between' images differently. The same is true for the attribute \texttt{BigLips}.} While we report results on these attributes for completeness in Section~\ref{sec:baseline}, classifiers trained on these attributes may behave erratically.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Images/straighthair_shuffled.png}
\caption{Examples of CelebA \texttt{StraightHair} labels.
Some of these are labeled as having \texttt{StraightHair} (1st, 3rd, 5th) and some as not (2nd, 4th, 6th).
We deemed this attribute as \emph{inconsistently labeled}.}
\label{fig:celeba_straighthair}
\end{figure}
Of the remaining attributes with more consistent labeling, we found that some attribute labels are \emph{gender-dependent}. That is, images are labeled to have (or not have) these attributes based on the perceived gender.
For example in Figure~\ref{fig:celeba_young}, we observe that the images labeled as \texttt{Young} and \texttt{Male} appear much older than the images labeled as \texttt{Young} and \texttt{not Male}. Other attributes in this category are \texttt{ArchedBrows}, \texttt{Attractive}, \texttt{BushyBrows}, \texttt{PointyNose} and \texttt{RecedingHair}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Images/young_contrast.png}
\caption{Examples of CelebA \texttt{Young} labels.
The first three images are labeled \texttt{Male}, \texttt{Young} while the last three images are labeled \texttt{not Male}, \texttt{not Young}, even though the first three appear older than the last three.
We deemed this attribute as \emph{gender-dependent}.}
\label{fig:celeba_young}
\end{figure}
The \textit{gender-independent} attribute labels appear to be reasonably consistent among annotators, and do not appear to depend on the gender expression. We classified 14 attributes into this category: \texttt{Bangs}, \texttt{BlackHair}, \texttt{BlondHair}, \texttt{BrownHair}, \texttt{Chubby}, \texttt{Earrings}, \texttt{EyeBags}, \texttt{Glasses}, \texttt{GrayHair}, \texttt{HighCheeks}, \texttt{MouthOpen}, \texttt{NarrowEyes}, \texttt{Smiling}, and \texttt{WearingHat}. While we use the label `gender-independent' we note that these attributes can still be correlated with gender expression---for example \texttt{Earrings} are much more common among images labeled as \texttt{not Male} than those labeled as \texttt{Male}.
\smallsec{Implementation details}
To generate images, we use a Progressive GAN~\cite{KALL17PGAN} with a 512-D latent space trained on the CelebA~\cite{LLWT15CelebA} training set from the PyTorch GAN Zoo~\cite{PytorchPGAN}.
We use 10,000 synthetic images, labeled with baseline attribute classifiers, and learn hyperplanes ($h_t$, $h_g$) in the latent space with scikit-learn's~\cite{scikit-learn} linear SVM implementation.
For all attribute classifiers, we use ResNet-50~\cite{HZRS16ResNet} pre-trained on ImageNet~\cite{Russakovskyplus15Imagenet} as the base architecture. We replace the linear layer in ResNet with two linear layers with the hidden layer of size 2,048. Dropout and ReLU are applied between these. The inputs are $64{\times}64$ images and their target attribute labels. We train all models with the binary cross entropy loss for 20 epochs with a batch size of 32.
We use the Adam~\cite{KB14Adam} optimizer with a learning rate of 1e-4. We save the model with the smallest loss on a validation set that has the same distribution as the training set.
The baseline model is trained on the CelebA training set $\mathcal{X}$ with 162,770 images. Our model is trained on $\mathcal{X}$ and the balanced synthetic dataset $\mathcal{X}_{syn}$ (160,000 pairs of images).\footnote{We trained classifiers using different numbers of synthetic pairs for 4 different attributes, and found that AP stabilizes after 160,000 pairs, which is what we used to train our classifiers.} Results are reported on the CelebA test set unless noted otherwise. Error bars are 95\% confidence intervals estimated through bootstrapping. We note that we use a single GAN to construct the de-biased dataset for each target attribute, and then train separate classifiers for each target attribute. We also emphasize that protected attribute labels are only used in learning $h_g$ and in evaluation.
\smallsec{Evaluation Metrics}
We use \emph{average precision (AP)} to measure the accuracy of the classifiers. AP is a threshold-invariant accuracy metric that summarizes the precision and recall curve. We use this metric to ensure that our models learn a reasonable classification rule. AP, however, does not capture a classifier's behavior on different protected classes, and in fact, we expect to see a slight dip in overall AP when our model improves on some of the fairness metrics.
Multiple metrics have been proposed to measure fairness of a model~\cite{HPS16EqualityOdds,ZVGG17EqualityOpportunity,ZWYOC17BiasAmp,CKP09DemographicParity,chen2020riskdistribution} and each of these measures a different notion of fairness.
In our work, we use three metrics for comprehensive understanding.
First, we measure the \emph{difference in equality of opportunity (DEO)}, i.e. the absolute difference between the false negative rates for the two gender expressions, as in Lokhande et al.~\cite{lokh2020fairalm}\footnote{In our experiments, we choose a calibrated threshold on the validation set, i.e., a threshold that ensures that we make the same number of positive predictions as the ground truth, to compute both DEO and BA. We tried other ways of choosing the threshold, such as choosing the one that gives the best $F_1$ score on a validation set, and while the values varied, they did not change our findings.}.
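A sketch of the DEO computation on already-thresholded predictions (the helper name is ours; all labels take values in $\{-1, 1\}$):

```python
import numpy as np

def deo(y_true, y_pred, g):
    """Absolute difference in false negative rates across the two
    protected groups. All arrays take values in {-1, 1}."""
    fnr = []
    for group in (-1, 1):
        positives = (y_true == 1) & (g == group)
        fnr.append(np.mean(y_pred[positives] == -1))
    return abs(fnr[0] - fnr[1])
```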
As our second fairness metric, we use the \emph{bias amplification (BA)} metric proposed by Wang and Russakovsky~\cite{wang2021directional}.
Intuitively, BA measures how much more often a target attribute is predicted with a protected attribute than the ground truth value. Let $P_{t|g}$ be the fraction of images with protected attribute $g$ that have target attribute $t$, ${P}_{\hat{t}| g}$ be the fraction of images with protected attribute $g$ that are predicted to have target attribute $t$, $P_{t,g}$ be the fraction of images with target $t$ and protected attribute $g$, and $P_{t}$ and $P_g$ be the fraction of images with attribute $t$ and $g$ respectively. For each pair of target and protected attribute values, we add $(P_{\hat{t}|g} - P_{t|g})$ if $P_{t,g}>P_{t}P_{g}$ and $-(P_{\hat{t}|g} - P_{t|g})$ otherwise. A negative value implies that bias now exists in a different direction than in the training data.
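A sketch of this computation (the helper name is ours; taking the gap as predicted minus ground-truth conditional rate and averaging over the four $(t, g)$ value pairs, per the directional metric, are our assumptions):

```python
import numpy as np

def bias_amplification(y_true, y_pred, g):
    """Directional bias amplification sketch. All arrays in {-1, 1}.
    delta = predicted minus ground-truth conditional rate; its sign is
    kept when (t, g) are positively correlated and flipped otherwise."""
    total = 0.0
    for t in (-1, 1):
        for a in (-1, 1):
            grp = (g == a)
            p_t_given_g = np.mean(y_true[grp] == t)
            p_hat_given_g = np.mean(y_pred[grp] == t)
            p_tg = np.mean((y_true == t) & grp)
            p_t, p_g = np.mean(y_true == t), np.mean(grp)
            delta = p_hat_given_g - p_t_given_g
            total += delta if p_tg > p_t * p_g else -delta
    return total / 4  # average over the four (t, g) value pairs
```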
Both DEO and BA fluctuate based on the chosen classification threshold.
Hence, as our final fairness metric, we use a threshold-invariant metric that measures the \emph{divergence between score distributions (KL)}~\cite{chen2020riskdistribution} defined as follows: Suppose $s_{g,t}$ represents a smoothed histogram of classifier scores of a certain protected attribute label and a target label,
appropriately normalized as a probability distribution of the scores. For each target attribute label $t$,
we measure $KL\big[s_{g=\num{-1},t}\|s_{g=1,t}\big] + KL\big[s_{g=1,t}\|s_{g=\num{-1},t}\big]$. That is, we measure the divergence of $g{=}\num{-1}$ and $g{=}1$ score distributions, separately for positive and negative attribute samples. This is a stricter notion of \emph{equalized odds}~\cite{HPS16EqualityOdds}.
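A sketch of this metric with simple additive histogram smoothing (the helper name, bin count, and smoothing constant are our assumptions):

```python
import numpy as np

def score_divergence(scores, y, g, bins=20, eps=1e-6):
    """Symmetric KL between smoothed score histograms of the two protected
    groups, computed separately per target label and summed. Scores in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    total = 0.0
    for t in (-1, 1):
        dists = []
        for a in (-1, 1):
            hist, _ = np.histogram(scores[(y == t) & (g == a)], bins=edges)
            hist = hist + eps          # smoothing keeps the KL finite
            dists.append(hist / hist.sum())
        p, q = dists
        total += np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
    return total
```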
\subsection{Comparison with the baseline} \label{sec:baseline}
To start, we compare our model (i.e. target classifiers trained using both the balanced synthetic datasets $\mathcal{X}_{syn}$ and the real dataset $\mathcal{X}$) with a baseline model trained using just $\mathcal{X}$.
In Table~\ref{tab:baseline}, we show results on the four metrics, averaged for each of the three attribute categories.
As expected, our model performs better on all three fairness metrics, DEO, BA and KL, while maintaining comparable AP. For gender-independent attributes, AP drops from 83.9 to 83.0, while DEO improves from 16.7 to 13.9, BA improves from 0.3 to 0.0 and KL improves from 1.1 to 0.9.
For gender-dependent attributes, the fairness metrics improve over the baseline, but the improvements are smaller compared to those of gender-independent attributes.
Later in Section~\ref{sec:extensions}, we demonstrate an extension of our augmentation method with an improved performance on the gender-dependent attributes.
Additionally, we conduct score change evaluations suggested by Denton et al.~\cite{DHMG19} and measure the change in target attribute score as we perturb the protected attribute in images. Specifically, we measure the classifier score difference between $G(\mathbf{z})$ and $G(\mathbf{z}')$. This evaluation helps understand how the protected attribute influences a trained classifier's output.
We find that the model trained with our augmentation method consistently has a smaller change in score than the baseline: 0.09 vs. 0.12 for inconsistently labeled, 0.07 vs. 0.11 for gender-dependent, and 0.06 vs. 0.09 for gender-independent attributes.
We also observe that the baseline score changes are higher when we try to construct underrepresented samples.
Consider the attribute \texttt{ArchedBrows} where only 2.3\% of the training set images are labeled to have \texttt{ArchedBrows}, and appear masculine.
When we construct a $\mathbf{z}'$ with this target and protected value, the baseline classifier's score changes by 0.41.
On the other hand, when we try to construct an image that is without \texttt{ArchedBrows} and appears feminine, which comprises 33.7\% of the training set, the baseline classifier score only changes by 0.094. This could be due to the errors that the baseline classifier makes on underrepresented images during synthetic image labeling, or could imply that underrepresented attributes are harder to maintain during image manipulations.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{\input{baseline_comp}}
\caption{
Comparison of our model (i.e. attribute classifier trained with our data augmentation method) to the baseline model. Arrows indicate which direction is better. Numbers are averages over all attributes within the specific category. As expected, we have slightly lower AP than the baseline, but perform better on the three fairness metrics, DEO, BA, and KL.}
\label{tab:baseline}
\end{table}
We next examine several factors that could influence our method, including how easy the protected attribute is to learn compared to the target attribute and how data skew affects our method. We discuss the former here and provide more information about the latter in the appendix (Section~\ref{sec:factors}).
\smallsec{Discriminability of attributes}
Nam et al.~\cite{nam2020learning} recently observed that correlations among attributes affect a classifier only if the protected attribute is `easier' to learn than the target attribute.
Inspired by their observation, we conduct a two-step experiment to understand how the relative discriminability of attributes affects our method's effectiveness.
First, we put a pair of CelebA attributes in competition to assess their relative discriminability. Experiment details are in the appendix. We find that gender expression is one of the easiest attributes to learn (\texttt{Gender} is easier than all but \texttt{Glasses} and \texttt{WearingHat}), which may be why gender bias is prevalent in many models. On the other hand, \texttt{Young} is relatively hard for a model to learn (\texttt{Young} is harder to learn than all but 4 other attributes), so its correlation with other attributes may not be as influential.
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc|cc|cc|}
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Protected\\ Attribute\end{tabular}} & \multicolumn{6}{c|}{Improvement over baseline $\uparrow$} \\
\cline{2-7}
& \multicolumn{2}{c|}{DEO} & \multicolumn{2}{c|}{BA} & \multicolumn{2}{c|}{KL} \\
\cline{2-7}
& Easy & Hard & Easy & Hard & Easy & Hard \\
\hline
{\tt Glasses} (0,19) & -- & \textbf{4.1} & -- & \textbf{0.9} & -- & \textbf{0.0} \\
{\tt Gender} (2, 17) & 0.8 & \textbf{3.2} & 0.0 & \textbf{0.4} & -0.2 & \textbf{0.2} \\
{\tt Young} (15, 4) & -0.2 & \textbf{2.1} & 0.2 & \textbf{1.0} & -0.2 & \textbf{0.0} \\ \hline
\end{tabular}}
\caption{Improvement over baseline for different fairness metrics when using different protected attributes. Next to the protected attribute are numbers of attributes that are `easier' and `harder' to learn, compared to the protected attribute. Columns `Easy' (`Hard') show the averages of all non-inconsistent target attributes that are easier (harder) for a classifier to learn. We note that our method works better when the target attribute is `harder' to learn.}
\label{tab:other_prot_attributes}
\end{table}
Next, to understand how the relative discriminability of attributes affects our method's performance, we train target attribute classifiers for gender-dependent and gender-independent attributes, using \texttt{Young} and \texttt{Glasses} as protected attributes.
In Table~\ref{tab:other_prot_attributes}, we report our method's improvement over baseline in the three fairness metrics.
For each protected attribute, we report the average improvement separately for `easier' and `harder' target attributes. While training with our augmentation method generally outperforms the baseline on the three fairness metrics, as expected, the improvement is greater for target attributes that are harder to learn than the protected attribute: for example, with \texttt{Young} as the protected attribute, the improvement in DEO over the baseline is -0.2 for easy target attributes and 2.1 for hard ones.
\smallsec{Skew of the dataset} The \emph{skew} of a target attribute $t$ is measured following the literature~\cite{WQKGNHR19DomainBiasMitigation} as $\frac{\max (P_{-1}, P_1)}{P_{-1}+P_1}$ where $P_{-1}$ is the number of images with $t{=}1$ and protected attribute label $g{=}-1$, and $P_1$ is the number of images with $t{=}1$ and protected attribute label $g{=}1$. We find that our augmentation method is most effective on attributes with low to moderate skew.
Full details are in the appendix.
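For reference, the skew measure can be sketched as follows (the helper name is ours):

```python
import numpy as np

def skew(y, g):
    """Skew of target attribute y w.r.t. protected attribute g
    (both arrays in {-1, 1}): ranges from 0.5 (balanced) to 1.0."""
    p_neg = np.sum((y == 1) & (g == -1))  # positives in group g = -1
    p_pos = np.sum((y == 1) & (g == 1))   # positives in group g = 1
    return max(p_neg, p_pos) / (p_neg + p_pos)
```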
\subsection{Ablation studies}\label{sec:designchoices}
We now examine the design choices made in our method.
\smallsec{Removal of $\mathbf{z}'$ samples}
First, we evaluate the effect of $G(\mathbf{z}')$ on the classifier. We train a classifier with just $G(\mathbf{z})$ and the real dataset $\mathcal{X}$, and compare its performance against the performance of our model, trained with $G(\mathbf{z})$, $G(\mathbf{z}')$, and $\mathcal{X}$ on the gender-dependent and gender-independent attributes.
While the new classifier's AP is higher than that of our model (82.9 vs. 82.6), all fairness metrics are worse: DEO is higher (19.7 vs. 16.1), BA is higher (1.1 vs. 0.5), and KL is higher (1.6 vs. 1.3). All numbers were calculated on the validation set. In fact, it performs worse on the fairness metrics than the baseline model trained on $\mathcal{X}$. This result suggests that simply synthesizing more images with a GAN and adding them to the training data does not improve the model but rather hurts performance. Possible reasons include the image and label noise of $G(\mathbf{z})$ and the skew of $G(\mathbf{z})$ being worse than that of the original data the GAN was trained on. The fairness metrics improve only when we add $G(\mathbf{z}')$ and make the training data more balanced.
\smallsec{Choice of $\mathbf{z}'$}
Next, we evaluate our choice of $\mathbf{z}'$ through examining a number of alternative perturbation choices visualized in Figure~\ref{fig:AvgPrecFakeOnly}. We train classifiers on just the generated data for gender-dependent and gender-independent attributes and compare the overall AP on the validation set.
As expected, training with $\mathbf{z}'$ (our choice)
has the highest AP.
\begin{figure}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c p{7pt}|c|cc|}
\cline{3-5}
\multirow{6}{*}{\includegraphics[width=0.45\linewidth]{Images/Ablation.png}} && \multirow{2}{*}{Perturbation} & \multicolumn{2}{c|}{AP $\uparrow$} \\
\cline{4-5}
& & & G-dep & G-indep \\
\cline{3-5}
& & $\textbf{z}'_{g,0}$ & 74.0 & 79.9\\
& & $\textbf{z}'_{g}$ & 69.6 & 77.3\\
& & $\textbf{z}'_{0}$ & 74.4 & 79.8\\
& & $\textbf{z}'$ (ours) & \textbf{76.0} & \textbf{81.4}\\
\cline{3-5}\\
\end{tabular}
}
\caption{Comparison of different perturbation choices. We train attribute classifiers using only synthetic images generated from the perturbations, and measure the mean AP over all target attributes on the validation set. The classifier trained with $\mathbf{z}'$ (our choice) has the highest AP.}
\label{fig:AvgPrecFakeOnly}
\end{figure}
\smallsec{Filtering $\mathbf{z}$'s and using different labels for synthetic images}
Since we hallucinate labels for the synthetic images, some of these labels may be incorrect and harm our classifier.
We try three different ways of addressing this issue:
First, we try learning hyperplanes with different fractions of positive and negative samples. We find that while this improves the hyperplane accuracy, the downstream classifiers trained with samples generated using different hyperplanes have similar performances.
For the second and third methods, we use the original hyperplanes learned in our method but vary the vectors/labeling used. Second, we remove points that are incorrectly classified by the baseline classifier after perturbing the latent vector from $\mathbf{z}$ to $\mathbf{z}'$, i.e., we remove all points wherein $f_t(G(\mathbf{z})) \not= f_t(G(\mathbf{z}'))$, and use the remaining synthetic images and the real dataset to train the classifiers.
Third, we label the synthetic images $G(\mathbf{z})$ and $G(\mathbf{z}')$ with $h_t(\mathbf{z})$, and use these labels to train the classifiers.
We compare their performance to our method on the validation set.
We find that these two methods result in a slight drop in AP (79.8 when using $h_t$ scores, 82.1 when removing incorrectly classified points, and 82.6 for our method), as well as a small drop in the fairness metrics (the average DEO is 18.1 when using $h_t$ scores, 17.4 when removing incorrectly classified points, and 16.1 for our method), suggesting that our current labeling of the synthetic images works well. {Full results are in the appendix (Section~\ref{sec:extra_ablations})}.
\subsection{Comparison with prior work}
\label{sec:comparisons_recent}
In this section, we compare our method to a few recent works~\cite{SHCV19FairnessGAN,Sharmanska2020contrastive,WQKGNHR19DomainBiasMitigation}.
One of the current challenges in the space of AI fairness is the lack of standardized benchmarks and metrics.
While some of this stems from the complexity of the problem at hand (where it is difficult and even counter-productive to use a single fairness definition), in the computer vision community, we believe that more effort should be made to provide thorough comparison between methods. Each work we consider here uses slightly different evaluation protocols and benchmarks. We made comparisons to the best of our ability, and hope that our work helps enable more standardization and empirical comparisons.
\smallsec{Fairness GAN} Sattigeri et al.~\cite{SHCV19FairnessGAN} use GANs to create datasets that achieve either demographic parity (Dem. Par.) or equality of opportunity (Eq. Opp.). They train classifiers for the \texttt{Attractive} attribute on just the generated data, using gender expression as the protected attribute. We train classifiers with our pair-augmented synthetic data to mimic the conditions of Fairness GAN, and evaluate both on the CelebA test data. Comparison results are in Table~\ref{tab:FairnessGANcomp}. Our model performs better on most metrics, even though we use a single GAN to augment all attributes.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular} {c|cc|cc|cc|}
\cline{2-7}
& \multicolumn{4}{c|}{Fairness GAN \cite{SHCV19FairnessGAN}} & \multicolumn{2}{c|}{Ours} \\ \cline{2-5}
& \multicolumn{2}{c|}{Dem. Par.} & \multicolumn{2}{c|}{Eq. Opp.} &\multicolumn{2}{c|}{(Synthetic only)} \\
\hline
\multicolumn{1}{|c|}{Gender exp. $g$} & $g{=}\num{-1}$ & $g{=}1$ & $g{=}\num{-1}$ & $g{=}1$ & $g{=}\num{-1}$ & $g{=}1$\\ \hline
\multicolumn{1}{|c|}{FPR $\downarrow$} & 0.52 & 0.26 & 0.42 & \textbf{0.17} & \textbf{0.22} & {0.39} \\
\multicolumn{1}{|c|}{FNR $\downarrow$} & 0.18 & 0.41 & 0.21 & 0.44 & \textbf{0.06} & \textbf{0.27} \\
\multicolumn{1}{|c|}{Error $\downarrow$} & 0.30 & 0.28 & 0.29 & 0.23 & \textbf{0.21} & \textbf{0.18} \\ \hline
\multicolumn{1}{|c|}{Error Rate $\downarrow$} & \multicolumn{2}{c|}{0.22} & \multicolumn{2}{c|}{0.29} & \multicolumn{2}{c|}{\textbf{0.20}}\\ \hline
\end{tabular}
}
\caption{Comparison of the \texttt{Attractive} classifier trained using synthetic data from Fairness GAN~\cite{SHCV19FairnessGAN} and the classifier trained using our pair-augmented synthetic data. The latter (ours) outperforms on most metrics.}
\label{tab:FairnessGANcomp}
\end{table}
\smallsec{Contrastive examples generated by image-to-image translation GANs}
Sharmanska et al.~\cite{Sharmanska2020contrastive} propose a different method for balancing a biased dataset using StarGAN~\cite{choi2018stargan}, a class of image-to-image translation GANs. They use two protected attributes, age and gender expression, and create a balanced dataset by generating contrastive examples, i.e.\ images of different ages and gender expressions, for each image in the training set. They train a \texttt{Smiling} classifier with the augmented dataset, and propose making a prediction at test time only when the classifier makes the same prediction on the image and its contrastive examples. We extend our method to incorporate multiple protected attributes, and use gradient descent to find three points $\{z'_i\}_{i \in \{1, 2, 3\}}$ in the latent space that preserve the target attribute score and flip either the gender expression score, the age score, or both. This process gives us three synthetic images per training image, with which we train a \texttt{Smiling} classifier. To ensure that the error rates are similar across all four protected groups---(\texttt{Young}, \texttt{Male}), (\texttt{Young}, \texttt{not Male}), (\texttt{not Young}, \texttt{Male}), (\texttt{not Young}, \texttt{not Male})---they measure the mean difference in the false positive and false negative rates between all pairs of protected groups.
We reproduce their method to ensure that the results are reported on the same test set. We find that our model performs better in terms of the mean difference in FNR (0.34 versus their 0.54) and FPR (0.23 compared to their 0.46).
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Skew & Method & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
{Low/} & Dom. Ind. & \textbf{83.4 $\pm$ 1.3} & 7.0 $\pm$ 3.1 & \textbf{-0.1 $\pm$ 0.5} & 0.8 $\pm$ 0.7 \\
Mod. & Ours & 81.4 $\pm$ 1.5 & \textbf{6.0 $\pm$ 3.0} & \textbf{-0.1 $\pm$ 0.5} & \textbf{0.3 $\pm$ 0.1}\\\hline
\multirow{2}{*}{High} & Dom. Ind. & \textbf{80.7 $\pm$ 1.6} & \textbf{14.9 $\pm$ 5.6} & \textbf{-0.4 $\pm$ 0.5} & \textbf{0.8 $\pm$ 1.0} \\
& Ours & 80.4 $\pm$ 1.5 & {23.9 $\pm$ 5.5} & {0.9 $\pm$ 0.4} & 1.5 $\pm$ 0.6\\
\hline
\end{tabular}}
\caption{Comparison of our method with domain independent training~\cite{WQKGNHR19DomainBiasMitigation}. Numbers reported are the mean over all gender-dependent and gender-independent attributes on the test set. We note that we perform better than domain-independent training for attributes with low to moderate skew.}
\label{tab:dom_ind}
\end{table}
\begin{table}[t!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
Weighted & {79.6 $\pm$ 1.6} & \textbf{5.7 $\pm$ 4.2} & \textbf{-2.8 $\pm$ 0.5} & \textbf{0.5 $\pm$ 0.4}\\
Adversarial & 81.3 $\pm$ 1.6 & 23.9 $\pm$ 4.4 & 1.5 $\pm$ 0.5 & 0.6 $\pm$ 0.5 \\
Ours & \textbf{81.5 $\pm$ 1.5} & {16.7 $\pm$ 4.7} & {0.5 $\pm$ 0.5} & {1.0 $\pm$ 0.5}\\
\hline
\end{tabular}}
\caption{Comparison of our method with weighted and adversarial training from~\cite{WQKGNHR19DomainBiasMitigation}. Numbers reported are the mean over all gender-dependent and gender-independent attributes on the test set. We note that the weighted model overall performs better on the fairness metrics, however, the large negative BA suggests that the model now has bias in the opposite direction, to the extent that the AP drops. The adversarial model performs significantly worse than ours on DEO and BA, and marginally better on KL.}
\label{tab:weighted}
\end{table}
\smallsec{Effective training strategies for bias mitigation}
Wang et al.~\cite{WQKGNHR19DomainBiasMitigation} quantitatively compare different techniques for bias mitigation, including weighted training~\cite{bickel09discriminative,elkan01CostSensitiveLearning}, adversarial training with losses inspired by~\cite{AZN18FairnessBlindness,ZLM18AdversarialLearning}, and their proposed \emph{domain discriminative} and \emph{domain independent} training.
We compare our method to their best performing domain independent training method where they learn separate classifiers for each protected attribute class and combine them to leverage any shared information. We report results for all gender-dependent and gender-independent attributes in Table~\ref{tab:dom_ind}. We find that our method performs better for attributes with low to moderate skew ($<$0.7)---DEO is 6.0 compared to 7.0, KL is 0.3 compared to 0.8---whereas domain independent training performs better for attributes with high skew---DEO is 23.9 compared to 14.9, KL is 1.5 compared to 0.8.
This result is consistent with our earlier observation that our method works well for low to moderately skewed datasets.
Wang et al.\ also use a simpler weighted training method that reweights samples such that the protected attribute classes have equal weight, and an adversarial training method that uses a minimax objective to maximize the classifier's accuracy on the target task while minimizing an adversary's ability to predict the protected attribute from the learned features. We report results for both methods in Table~\ref{tab:weighted}. We find that while the weighted model overall performs well on the fairness metrics, it has a strongly negative BA (-2.8 versus our 0.5), indicating that bias is now in the opposite direction,
and a lower AP (79.6 versus our 81.5), suggesting that it makes incorrect predictions to reduce bias. Compared with adversarial training, our method does better overall, with lower DEO (16.7 versus 23.9) and lower BA (0.5 versus 1.5).
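As a concrete illustration of the reweighting baseline, the sketch below computes per-sample weights that give each protected-attribute class equal total weight. The function name and the toy setup are ours, not from~\cite{WQKGNHR19DomainBiasMitigation}.

```python
import numpy as np

def balanced_weights(protected):
    """Per-sample weights so that each protected-attribute class
    contributes equal total weight to the training loss."""
    protected = np.asarray(protected)
    classes, counts = np.unique(protected, return_counts=True)
    per_class = {c: len(protected) / (len(classes) * n)
                 for c, n in zip(classes, counts)}
    return np.array([per_class[p] for p in protected])

# A protected attribute skewed 3:1 between the two classes:
w = balanced_weights([1, 1, 1, -1])
# Each class now carries the same total weight (2.0 each here).
```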
\section{Extensions of our method}
\label{sec:extensions}
In this final section, we study two natural extensions of our method: using domain-dependent hyperplanes in place of the current domain-independent hyperplanes, and directly augmenting a real image dataset with GAN-inversion.
\smallsec{Domain-dependent hyperplanes}
Our method implicitly assumes the learned hyperplane $\mathbf{w_t}$ behaves equally well for all $\mathbf{z}$, irrespective of the value of ${f_g}(G(\mathbf{z}))$.
However, for gender-dependent attributes, the hyperplane learned using samples with ${f_g}(G(\mathbf{z})) {=}1$ may be very different from that learned using samples with ${f_g}(G(\mathbf{z})){=} \num{-1}$.
For these attributes, we extend our method to learn per-domain target attribute hyperplanes:
$\mathbf{w}_{t_1}, b_{t_1}$ for points with ${f_g}(G(\mathbf{z})){=}1$ and $\mathbf{w}_{t_{\num{-1}}}, b_{t_{\num{-1}}}$ for points with ${f_g}(G(\mathbf{z})){=}-1$.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Images/per_gender_generation.png}
\caption{Computing $\mathbf{z}'$
when the target attribute hyperplanes for each protected attribute class are very different.}
\label{fig:per_gender}
\end{figure}
For $\mathbf{z}$ with $f_g(G(\mathbf{z})){=}1$, we find $\mathbf{z}'$ such that
\begingroup
\setlength\abovedisplayskip{3pt}
\setlength\belowdisplayskip{3pt}
\begin{equation}
\label{eq:per_gender}
\begin{split}
\mathbf{w}_{t_{\num{-1}}}^T(\mathbf{z}')+b_{t_{\num{-1}}} & = \mathbf{w}_{t_1}^T(\mathbf{z})+b_{t_1}, \mbox{ and}\\
\wg^T\mathbf{z}'+b_g & = -\wg^T(\mathbf{z})-b_g
\end{split}
\end{equation}
\endgroup
as shown in Figure \ref{fig:per_gender}. In order to compute $\mathbf{z}'$ that satisfies the above constraints, while minimizing $||\mathbf{z} - \mathbf{z}'||_2$, we note that all constraints are linear, hence the feasible region is the intersection of several hyperplanes. Starting from a point in this region, in each iteration, we find a new location of the point using gradient descent, then project it back onto the feasible region to maintain the constraints.
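Because the constraints are linear equalities, the minimum-norm feasible point also admits a closed form; the sketch below (our illustration, with randomly drawn vectors standing in for the learned hyperplanes) projects $\mathbf{z}$ directly onto the affine feasible set rather than iterating gradient steps.

```python
import numpy as np

def min_norm_feasible(z, A, c):
    # Closest point to z (in L2) satisfying the linear equalities A @ z' = c:
    # z' = z + A^T (A A^T)^{-1} (c - A z)
    return z + A.T @ np.linalg.solve(A @ A.T, c - A @ z)

rng = np.random.default_rng(0)
d = 8
w_t1, b_t1 = rng.normal(size=d), 0.2     # stand-in target hyperplane, domain f_g = 1
w_tm1, b_tm1 = rng.normal(size=d), -0.1  # stand-in target hyperplane, domain f_g = -1
w_g, b_g = rng.normal(size=d), 0.05      # stand-in protected-attribute hyperplane
z = rng.normal(size=d)

# The two equality constraints: preserve the target score across domains,
# and flip the sign of the protected-attribute score.
A = np.stack([w_tm1, w_g])
c = np.array([w_t1 @ z + b_t1 - b_tm1,
              -(w_g @ z) - 2.0 * b_g])
z_prime = min_norm_feasible(z, A, c)
```

When the two target hyperplanes coincide, this reduces to the pair-generation constraints of the single-hyperplane case.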
If $\mathbf{w}_{t_1}$ and $\mathbf{w}_{t_{\num{-1}}}$ are similar, these constraints are the same as Equation~\ref{eq:pair_generation} and this method of computing $\mathbf{z}'$ collapses to the first.
We compare results of training a classifier that is augmented with images computed with domain-independent hyperplanes and with that using images computed with domain-dependent hyperplanes for all gender-dependent and gender-independent attributes over the validation set. We find that for gender-dependent attributes, using domain-dependent hyperplanes improves the fairness metrics considerably (DEO reduces from 21.4 to 17.2, BA reduces from 1.5 to 0.4, KL reduces from 1.2 to 1.0), without losing accuracy. However, for gender-independent attributes, we do not see significant improvement, suggesting that $\mathbf{w_t}$ is similar to both $\mathbf{w}_{t_1}$ and $\mathbf{w}_{t_{\num{-1}}}$. Full results are in Table~\ref{tab:per_gender}.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|cc||cc|}
\hline
\multirow{2}{*}{Attr. type} & \multicolumn{2}{c||}{AP $\uparrow$} & \multicolumn{2}{c|}{DEO $\downarrow$}\\
\cline{2-5}
& Dom-indep & Dom-dep & Dom-indep & Dom-dep \\ \hline
\multicolumn{1}{|c|}{G-dep} & \textbf{78.1 $\pm$ 1.5} & \textbf{78.1 $\pm$ 1.4} & {21.4 $\pm$ 4.0} & \textbf{17.2 $\pm$ 4.0}\\
\multicolumn{1}{|c|}{G-indep} & 84.5 $\pm$ 1.5 & \textbf{84.6 $\pm$ 1.6} & 13.9 $\pm$ 4.3 & \textbf{13.1 $\pm$ 4.6} \\ \hline
\multirow{2}{*}{Attr. type} & \multicolumn{2}{c||}{BA $\downarrow$} & \multicolumn{2}{c|}{KL $\downarrow$} \\
\cline{2-5}
& Dom-indep & Dom-dep & Dom-indep & Dom-dep \\ \hline
\multicolumn{1}{|c|}{G-dep} & 1.5 $\pm$ 0.5 & \textbf{0.4 $\pm$ 0.5} & 1.2 $\pm$ 0.2 & \textbf{1.0 $\pm$ 0.3} \\
\multicolumn{1}{|c|}{G-indep} & \textbf{0.1 $\pm$ 0.4} & 0.2 $\pm$ 0.4 & \textbf{0.9 $\pm$ 0.5} & \textbf{0.9 $\pm$ 0.6}\\ \hline
\end{tabular}}
\caption{Comparison of classifiers that use domain-dependent vs.\ domain-independent hyperplanes to compute $\mathbf{z}'$. We see a significant improvement on gender-dependent attributes when domain-dependent hyperplanes are used. Numbers are reported on the validation set.}
\label{tab:per_gender}
\end{table}
\smallsec{Augmenting real images with GAN-inversion} Our method operates in the GAN latent space and can only augment images that are generated from latent vectors, and so, only the GAN-generated images. Recently, several GAN-inversion methods have been proposed~\cite{Abdal_2019_ICCV,bau2019seeing,zhu2020indomain}. These methods invert a real image $\mathbf{x}_{real}\in\mathcal{X}$ to a vector $\mathbf{z}_{inv}$ in the latent space of a trained GAN.
Using Zhu et al.~\cite{zhu2020indomain}, we tried directly augmenting the original dataset by perturbing $\mathbf{z}_{inv}$ to $\mathbf{z}'_{inv}$ with our method, creating $\mathbf{x}_{real}'{=}G(\mathbf{z}'_{inv})$ with the same target label and the opposite protected label of $\mathbf{x}_{real}$.
When we trained classifiers with datasets augmented in this way, however, we did not see an appreciable improvement, despite the more complex procedure (Table~\ref{tab:inverse}).
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|}
\cline{2-5}
& AP $\uparrow$ & DEO $\downarrow$ & BA $\downarrow$ & KL $\downarrow$ \\
\hline
\multicolumn{1}{|l|}{Without} & \textbf{82.6 $\pm$ 1.5} & 1.5 $\pm$ 2.3 & \textbf{1.3 $\pm$ 0.4} & \textbf{1.0 $\pm$ 0.5} \\
\multicolumn{1}{|l|}{With inv.} & 82.4 $\pm$ 1.5 & \textbf{1.4 $\pm$ 2.3} & \textbf{1.3 $\pm$ 0.4} & \textbf{1.0 $\pm$ 0.5} \\ \hline
\end{tabular}}
\caption{Comparison of our classifiers (without) to classifiers trained using data augmented with a GAN-inversion module (with inv.). Numbers reported are the mean over all gender-dependent and gender-independent attributes on the validation set. We do not see an appreciable improvement.}
\label{tab:inverse}
\end{table}
\section{Conclusions}
We introduced a GAN-based data augmentation method for training fairer attribute classifiers when correlations between the target label and the protected attribute (such as gender expression) might skew the results. We report results across a large number of attributes and metrics, including comparisons with existing techniques. We also analyze in detail when our method is the most effective. Our findings show the promise of augmenting data in the GAN latent space in a variety of settings. We hope our detailed analyses and publicly available code serve as a stepping stone for future explorations in this very important space.
\smallsec{Acknowledgements} This work is supported by the National Science Foundation under Grant No. 1763642 and the Princeton First Year Fellowship to SK. We also thank Arvind Narayanan, Deniz Oktay, Angelina Wang, Zeyu Wang, Felix Yu, Sharon Zhang, as well as the Bias in AI reading group for helpful comments and suggestions.
{\small
\bibliographystyle{ieee_fullname}
## Quadratic stochastic operators, Lyapunov functions and tournaments. (Russian) Zbl 0766.47037

The discrete dynamic system of the form
$$x_k' = x_k\Bigl(1+\sum_{i=1}^{m} a_{ki}x_i\Bigr),\quad k=\overline{1,m},\quad a_{ki}=-a_{ik},\quad |a_{ki}|\leq 1, \tag{1}$$
acting on the simplex $S^{m-1}$ is studied. The existence of a Lyapunov function of the form $\varphi(x)=x_1^{p_1}\cdots x_m^{p_m}$ is proved. An algorithm is offered for finding isolated fixed points of the mapping $V:S^{m-1}\to S^{m-1}$, where $Vx=x'=(x_1',\dots,x_m')$. The connection between fixed points of $V$ and Lyapunov functions for (1) is investigated. It is proved that $V:S^{m-1}\to S^{m-1}$ is a homeomorphism. Convergence of "negative" trajectories $\{V^{-n}x^0\}_{n\in\mathbb{N}}$ and, as a rule, non-regular behaviour of positive trajectories $\{V^n x^0\}$ are typical for the dynamic system (1). For estimating the set of limit points of trajectories, elements of tournament theory are used. The absence of periodic orbits for dynamic systems of the form (1) is proved.
A biological interpretation of the obtained results is given.

Reviewer: R. N. Ganikhodzhaev

### MSC:

47J05 Equations involving nonlinear operators (general)
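The simplex-preserving character of (1) is easy to check numerically. The short sketch below (ours, with a randomly drawn skew-symmetric matrix) iterates the quadratic operator $V$ and confirms that the trajectory stays on $S^{m-1}$: skew-symmetry makes the quadratic form vanish, so the coordinate sum stays 1, and $|a_{ki}|\leq 1$ keeps every coordinate nonnegative.

```python
import numpy as np

def volterra_qso(x, a):
    # One step of (1): x_k' = x_k * (1 + sum_i a_ki x_i)
    return x * (1.0 + a @ x)

rng = np.random.default_rng(0)
m = 4
b = rng.uniform(-1.0, 1.0, size=(m, m))
a = np.triu(b, 1) - np.triu(b, 1).T   # skew-symmetric, entries in (-1, 1)
x = rng.dirichlet(np.ones(m))         # starting point on the simplex
for _ in range(100):
    x = volterra_qso(x, a)
# x remains a probability vector: nonnegative coordinates summing to 1.
```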
De Oosteindse polder is a polder east of the old village centre of Bergschenhoek, in the Dutch province of Zuid-Holland. To the south the polder borders on the Boterdorpse polder, to the north on the Oosthoekeindse polder. Drainage of these areas began in 1778. The area falls under the water board Hoogheemraadschap van Schieland en de Krimpenerwaard.
The Bleiswijkse Droogmakerij polder area contained several mill gangs (molengangen) that pumped the water into the Rotte.
Greenhouse horticulture
In 2001, the greenhouse horticulture area "Oosteindse Polder", with ten modern greenhouse horticulture businesses, was officially taken into use.
Polder in Zuid-Holland
Geography of Lansingerland
Forrestfield station will lie east of Perth Airport, adjacent to Dundas Road and south of Maida Vale Road.
TRANSPORT-driven urban design is on track around the proposed Forrestfield airport and city-rail link.
Public submissions on the transit hub plan are under review, with a report due shortly to the Shire of Kalamunda.
The Shire confirmed a small number of properties impacted by the train station development were subject to compulsory land acquisition.
"The Shire of Kalamunda is not proposing to undertake any land acquisitions. However, we understand that the State Government has progressed negotiations with some landowners," acting chief executive Gary Ticehurst said.
The WA Planning Commission will then receive the district structure plan (DSP) with the Shire's recommendation.
An approved DSP gives high-level guidance for development and permits work to start on local structure plans, which provide more detail at individual lot level.
A town centre bordered by high to medium-density housing and light industry features in the DSP, under development with the Department of Planning and Public Transport Authority.
The Forrestfield station precinct covers about 250ha and includes several areas of park and recreational space.
"Whilst it is premature to speculate about how the development will look, the Shire expects similar outcomes to contemporary train station developments such as Cockburn Central, which offer a wide range of residential, office, retail and entertainment options," Mr Ticehurst said.
Light industrial development will service businesses with easy access to the airport, city and Roe and Tonkin highways.
Forrestfield MLA Nathan Morton said Forrestfield and High Wycombe were suburbs "on the move".
"This development will truly transform the foothills with a first-class activity centre, bringing a wide range of amenity and, importantly, local jobs as well," Mr Morton said.
The planning phase for the rail link will take two to three years, with the railway station scheduled for completion in 2020.
Property developers also anticipate the airport and city link will be a huge drawcard to Forrestfield and the surrounding Hills suburbs.
"Projects of this magnitude and scale of impact only happen every 10 years in the Perth market, so it presents a truly unique and rare opportunity for all surrounding residents," said Gavin Hegney, of Hegney Property Group.
var leafletDirective = angular.module("leaflet-directive", []);
leafletDirective.directive("leaflet", ["$http", "$log", "$parse", function ($http, $log, $parse) {
var defaults = {
maxZoom: 14,
minZoom: 1,
tileLayer: 'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
tileLayerOptions: {
attribution: 'Tiles © Open Street Maps'
},
icon: {
url: 'http://cdn.leafletjs.com/leaflet-0.5.1/images/marker-icon.png',
retinaUrl: 'http://cdn.leafletjs.com/leaflet-0.5.1/images/marker-icon@2x.png',
size: [25, 41],
anchor: [12, 40],
popup: [0, -40],
shadow: {
url: 'http://cdn.leafletjs.com/leaflet-0.5.1/images/marker-shadow.png',
retinaUrl: 'http://cdn.leafletjs.com/leaflet-0.5.1/images/marker-shadow.png',
size: [41, 41],
anchor: [12, 40]
}
},
path: {
weight: 10,
opacity: 1,
color: '#0000ff'
}
};
var str_inspect_hint = 'Add testing="testing" to <leaflet> tag to inspect this object';
return {
restrict: "E",
replace: true,
transclude: true,
scope: {
center: '=center',
maxBounds: '=maxbounds',
bounds: '=bounds',
marker: '=marker',
markers: '=markers',
defaults: '=defaults',
paths: '=paths',
tiles: '=tiles'
},
template: '<div class="angular-leaflet-map"></div>',
link: function ($scope, element, attrs /*, ctrl */) {
var centerModel = {
lat:$parse("center.lat"),
lng:$parse("center.lng"),
zoom:$parse("center.zoom")
};
if (attrs.width) {
element.css('width', attrs.width);
}
if (attrs.height) {
element.css('height', attrs.height);
}
$scope.leaflet = {};
$scope.leaflet.maxZoom = !!(attrs.defaults && $scope.defaults && $scope.defaults.maxZoom) ?
parseInt($scope.defaults.maxZoom, 10) : defaults.maxZoom;
$scope.leaflet.minZoom = !!(attrs.defaults && $scope.defaults && $scope.defaults.minZoom) ?
parseInt($scope.defaults.minZoom, 10) : defaults.minZoom;
var map = new L.Map(element[0], { maxZoom: $scope.leaflet.maxZoom, minZoom: $scope.leaflet.minZoom });
map.setView([0, 0], 1);
$scope.leaflet.tileLayer = !!(attrs.defaults && $scope.defaults && $scope.defaults.tileLayer) ?
$scope.defaults.tileLayer : defaults.tileLayer;
$scope.leaflet.map = !!attrs.testing ? map : str_inspect_hint;
setupTiles();
setupCenter();
setupMaxBounds();
setupBounds();
setupMainMaerker();
setupMarkers();
setupPaths();
// use of leafletDirectiveSetMap event is not encouraged. only use
// it when there is no easy way to bind data to the directive
$scope.$on('leafletDirectiveSetMap', function(event, message) {
var meth = message.shift();
map[meth].apply(map, message);
});
$scope.safeApply = function(fn) {
var phase = this.$root.$$phase;
if (phase == '$apply' || phase == '$digest') {
$scope.$eval(fn);
} else {
$scope.$apply(fn);
}
};
function setupTiles(){
// TODO build custom object for tiles, actually only the tile string
if ($scope.defaults && $scope.defaults.tileLayerOptions) {
for (var key in $scope.defaults.tileLayerOptions) {
defaults.tileLayerOptions[key] = $scope.defaults.tileLayerOptions[key];
}
}
if ($scope.tiles) {
if ($scope.tiles.tileLayer) {
$scope.leaflet.tileLayer = $scope.tiles.tileLayer;
}
if ($scope.tiles.tileLayerOptions && $scope.tiles.tileLayerOptions.attribution) {
defaults.tileLayerOptions.attribution = $scope.tiles.tileLayerOptions.attribution;
}
}
var tileLayerObj = L.tileLayer(
$scope.leaflet.tileLayer, defaults.tileLayerOptions);
tileLayerObj.addTo(map);
$scope.leaflet.tileLayerObj = !!attrs.testing ? tileLayerObj : str_inspect_hint;
}
function setupMaxBounds() {
if (!$scope.maxBounds) {
return;
}
if ($scope.maxBounds.southWest && $scope.maxBounds.southWest.lat && $scope.maxBounds.southWest.lng && $scope.maxBounds.northEast && $scope.maxBounds.northEast.lat && $scope.maxBounds.northEast.lng) {
map.setMaxBounds(
new L.LatLngBounds(
new L.LatLng($scope.maxBounds.southWest.lat, $scope.maxBounds.southWest.lng),
new L.LatLng($scope.maxBounds.northEast.lat, $scope.maxBounds.northEast.lng)
)
);
$scope.$watch("maxBounds", function (maxBounds) {
if (maxBounds.southWest && maxBounds.northEast && maxBounds.southWest.lat && maxBounds.southWest.lng && maxBounds.northEast.lat && maxBounds.northEast.lng) {
map.setMaxBounds(
new L.LatLngBounds(
new L.LatLng(maxBounds.southWest.lat, maxBounds.southWest.lng),
new L.LatLng(maxBounds.northEast.lat, maxBounds.northEast.lng)
)
);
}
});
}
}
function tryFitBounds(bounds) {
if (bounds) {
var southWest = bounds.southWest;
var northEast = bounds.northEast;
if (southWest && northEast && southWest.lat && southWest.lng && northEast.lat && northEast.lng) {
var sw_latlng = new L.LatLng(southWest.lat, southWest.lng);
var ne_latlng = new L.LatLng(northEast.lat, northEast.lng);
map.fitBounds(new L.LatLngBounds(sw_latlng, ne_latlng));
}
}
}
function setupBounds() {
if (!$scope.bounds) {
return;
}
$scope.$watch('bounds', function (new_bounds) {
tryFitBounds(new_bounds);
});
}
function setupCenter() {
$scope.$watch("center", function (center) {
if (!center) {
$log.warn("[AngularJS - Leaflet] 'center' is undefined in the current scope, did you forget to initialize it?");
return;
}
if (center.lat && center.lng && center.zoom) {
map.setView([center.lat, center.lng], center.zoom);
} else if (center.autoDiscover === true) {
map.locate({ setView: true, maxZoom: $scope.leaflet.maxZoom });
}
}, true);
map.on("dragend", function (/* event */) {
$scope.safeApply(function (scope) {
centerModel.lat.assign(scope, map.getCenter().lat);
centerModel.lng.assign(scope, map.getCenter().lng);
});
});
map.on("zoomend", function (/* event */) {
if(angular.isUndefined($scope.center)){
$log.warn("[AngularJS - Leaflet] 'center' is undefined in the current scope, did you forget to initialize it?");
}
if (angular.isUndefined($scope.center) || $scope.center.zoom !== map.getZoom()) {
$scope.safeApply(function (s) {
centerModel.zoom.assign(s, map.getZoom());
centerModel.lat.assign(s, map.getCenter().lat);
centerModel.lng.assign(s, map.getCenter().lng);
});
}
});
}
function setupMainMaerker() {
var main_marker;
if (!$scope.marker) {
return;
}
main_marker = createMarker('marker', $scope.marker, map);
$scope.leaflet.marker = !!attrs.testing ? main_marker : str_inspect_hint;
}
function setupMarkers() {
var markers = {};
$scope.leaflet.markers = !!attrs.testing ? markers : str_inspect_hint;
if (!$scope.markers) {
return;
}
for (var name in $scope.markers) {
markers[name] = createMarker(
'markers.'+name, $scope.markers[name], map);
}
$scope.$watch('markers', function(newMarkers) {
// Delete markers from the array
for (var name in markers) {
if (newMarkers[name] === undefined) {
delete markers[name];
}
}
// add new markers
for (var new_name in newMarkers) {
if (markers[new_name] === undefined) {
markers[new_name] = createMarker(
'markers.'+new_name, newMarkers[new_name], map);
}
}
}, true);
}
function createMarker(scope_watch_name, marker_data, map) {
var marker = buildMarker(marker_data);
map.addLayer(marker);
if (marker_data.focus === true) {
marker.openPopup();
}
marker.on("dragend", function () {
$scope.safeApply(function (scope) {
marker_data.lat = marker.getLatLng().lat;
marker_data.lng = marker.getLatLng().lng;
});
if (marker_data.message) {
marker.openPopup();
}
});
var clearWatch = $scope.$watch(scope_watch_name, function (data, old_data) {
if (!data) {
map.removeLayer(marker);
clearWatch();
return;
}
if (old_data) {
if (data.draggable !== undefined && data.draggable !== old_data.draggable) {
if (data.draggable === true) {
marker.dragging.enable();
} else {
marker.dragging.disable();
}
}
if (data.focus !== undefined && data.focus !== old_data.focus) {
if (data.focus === true) {
marker.openPopup();
} else {
marker.closePopup();
}
}
if (data.message !== undefined && data.message !== old_data.message) {
marker.bindPopup(data.message);
}
if (data.lat !== old_data.lat || data.lng !== old_data.lng) {
marker.setLatLng(new L.LatLng(data.lat, data.lng));
}
if (data.icon && data.icon !== old_data.icon) {
marker.setIcon(data.icon);
}
}
}, true);
return marker;
}
function buildMarker(data) {
var micon = null;
if (data.icon) {
micon = data.icon;
} else {
micon = buildIcon();
}
var marker = new L.marker(data,
{
icon: micon,
draggable: data.draggable ? true : false
}
);
if (data.message) {
marker.bindPopup(data.message);
}
return marker;
}
function buildIcon() {
return L.icon({
iconUrl: defaults.icon.url,
iconRetinaUrl: defaults.icon.retinaUrl,
iconSize: defaults.icon.size,
iconAnchor: defaults.icon.anchor,
popupAnchor: defaults.icon.popup,
shadowUrl: defaults.icon.shadow.url,
shadowRetinaUrl: defaults.icon.shadow.retinaUrl,
shadowSize: defaults.icon.shadow.size,
shadowAnchor: defaults.icon.shadow.anchor
});
}
function setupPaths() {
var paths = {};
$scope.leaflet.paths = !!attrs.testing ? paths : str_inspect_hint;
if (!$scope.paths) {
return;
}
$log.warn("[AngularJS - Leaflet] Creating polylines and adding them to the map will break the directive's scope's inspection in AngularJS Batarang");
for (var name in $scope.paths) {
paths[name] = createPath(name, $scope.paths[name], map);
}
$scope.$watch("paths", function (newPaths) {
for (var new_name in newPaths) {
if (paths[new_name] === undefined) {
paths[new_name] = createPath(new_name, newPaths[new_name], map);
}
}
// Delete paths from the array
for (var name in paths) {
if (newPaths[name] === undefined) {
delete paths[name];
}
}
}, true);
}
function createPath(name, scopePath, map) {
var polyline = new L.Polyline([], {
weight: defaults.path.weight,
color: defaults.path.color,
opacity: defaults.path.opacity
});
if (scopePath.latlngs !== undefined) {
var latlngs = convertToLeafletLatLngs(scopePath.latlngs);
polyline.setLatLngs(latlngs);
}
if (scopePath.weight !== undefined) {
polyline.setStyle({ weight: scopePath.weight });
}
if (scopePath.color !== undefined) {
polyline.setStyle({ color: scopePath.color });
}
if (scopePath.opacity !== undefined) {
polyline.setStyle({ opacity: scopePath.opacity });
}
map.addLayer(polyline);
var clearWatch = $scope.$watch('paths.' + name, function (data, oldData) {
if (!data) {
map.removeLayer(polyline);
clearWatch();
return;
}
if (oldData) {
if (data.latlngs !== undefined && data.latlngs !== oldData.latlngs) {
var latlngs = convertToLeafletLatLngs(data.latlngs);
polyline.setLatLngs(latlngs);
}
if (data.weight !== undefined && data.weight !== oldData.weight) {
polyline.setStyle({ weight: data.weight });
}
if (data.color !== undefined && data.color !== oldData.color) {
polyline.setStyle({ color: data.color });
}
if (data.opacity !== undefined && data.opacity !== oldData.opacity) {
polyline.setStyle({ opacity: data.opacity });
}
}
}, true);
return polyline;
}
function convertToLeafletLatLngs(latlngs) {
var leafletLatLngs = latlngs.filter(function (latlng) {
return !!latlng.lat && !!latlng.lng;
}).map(function (latlng) {
return new L.LatLng(latlng.lat, latlng.lng);
});
return leafletLatLngs;
}
}
};
}]);
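The `markers` watch above adds and removes markers by diffing the old and new bound objects. Stripped of Angular and Leaflet, the core diff logic can be sketched as follows (the function name and the create/remove stubs are ours, not part of the directive):

```javascript
// Standalone sketch of the marker-diffing logic: entries absent from the
// new object are removed, newly added names are created in place.
function diffMarkers(current, next, create, remove) {
  // Delete markers that disappeared from the bound object
  for (var name in current) {
    if (next[name] === undefined) {
      remove(current[name]);
      delete current[name];
    }
  }
  // Create markers for newly added names
  for (var newName in next) {
    if (current[newName] === undefined) {
      current[newName] = create(newName, next[newName]);
    }
  }
  return current;
}

// Example with stubbed create/remove:
var markers = { a: 'marker-a', b: 'marker-b' };
var removed = [];
diffMarkers(
  markers,
  { b: { lat: 1, lng: 2 }, c: { lat: 3, lng: 4 } },
  function (name) { return 'marker-' + name; },
  function (m) { removed.push(m); }
);
// markers is now { b: 'marker-b', c: 'marker-c' }; removed is ['marker-a']
```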
Wake Up! The LIV-Approved Morning Routine for the Best Day Ever
Imagine starting your day feeling amazing—your mind alert, your mood high, your body in a Captain Marvel power pose. You emerge from your morning routine rested—calm, but energetic and excited for the day ahead. Now imagine feeling that every day. How might your capabilities be affected—at work and at home? How might it alter the quality of your relationships? Over time, how might it transform your life?
Consistently waking up on the right side of the bed is possible—it just takes a little discipline. Here at Liquid I.V., we're committed to helping people live better lives through wellness, so our products are specially designed to enhance your day for optimal adventuring. That's why we've created an ideal morning routine to get you powered up when you need it most. See below for a step-by-step guide that will propel you forward into the best day ever (until tomorrow)!
A great morning starts with a great night. As we discuss in The Stages of Sleep, your body goes through different phases while you're snoozing. Stage 1 is the lightest stage, Stage 2 brings you a bit deeper into slumberland, Stage 3 is the deepest sleep, and Stage 4, or REM sleep, is the dreaming stage.
The optimal time to wake up is right after REM sleep, just as Stage 1 is beginning again. Waking up at this stage reduces that morning grogginess and disorientation we all love so much (JK). Each sleep cycle lasts about 90 minutes, so if you have an alarm set for the morning, it's a good idea to time it so that you wake up at the end of a cycle. If you need help falling asleep on time, read Zonk Out: How to Fall Asleep Quickly and Naturally, or simply pick up some Liquid I.V. Sleep. One stick in 8 ounces of water can help you ease into sleep faster!
Now, your alarm is key. Choose an alarm that works for you—one that's loud enough to wake you up, but pleasant enough that it won't scare the pajama pants off you before you've even started your day. We naturally wake up with high levels of cortisol, the stress hormone, and waking up to an alarm that sounds like a jackhammer won't help matters.
Exercise, exercise, exercise. While it's not necessary to break a sweat first thing in the morning, it is important to activate your body in order to signal to your brain that it's time to rise and shine. Exercise also gives the face a flushed, wide-awake appearance, which is much less frightening than that "living dead" look most of us are rockin' in the morning. Take 5 minutes to stretch, or do some jumping jacks or light yoga.
Most people shower in the morning, but if you're not one of them, consider trying it out! Hot water gets the circulation system going, increasing your energy levels and heightening your alertness. You'll be amazed at how refreshed you feel once you've washed the night away.
When you're done in the bathroom—your teeth brushed, your skin cleansed, and your hair combed—it's time to chug. Your body gets super dehydrated while you sleep, so re-up on H2O before you eat. To maximize hydration, pour a stick of Liquid I.V. Hydration Multiplier into 16 ounces of water and drink up. Even more than coffee, hydrating well is THE BEST way to wake up your body.
Aside from providing you with more energy, hydration also elevates mood and increases mental clarity, so don't skip this step!
You are what you eat—especially in the morning. Don't load up on sugary cereals and pancakes unless you want a pretty nasty mid-morning crash. Fruit, oatmeal and low-sugar cereals are great options for fueling up first thing.
Tons of successful people, from Benjamin Franklin to Oprah Winfrey, have endorsed the powerful practice of setting an intention for the day. Simply run through your planned activities in your head before you walk out the door.
What do you want to accomplish? How do you want to feel? How do you intend to act toward those with whom you come into contact?
Visualizing events going well before they happen makes an easy, fluid transition into your day, and often produces some serendipitous surprises!
Caring for your mind and body before you head out on your daily adventures can drastically improve your sense of wellbeing—and your daily performance! Follow these steps and watch your day elevate. Let us know how it works in the comments below!
Q: How can I construct the Contacts application title bar? In the Android 2.0 Contacts application, under the 'Favorites' tab, there is a 'Frequently contacted' title bar. Can you please tell me how I can construct that myself?
I would like to know the font size, font color, and background color that they use.
A: You can grab the source of the Contacts apk from here and see exactly how the Android devs constructed that UI.
[learn_more caption="Book Description"]MAGIC IS A DRUG.
After the grisly murder of a dirty magic coven leader, Kate Prospero and The Magical Enforcement Agency team up with the local police to find the killer. When a tenacious reporter sticks her nose in both the investigation and Prospero's past in the covens, old ghosts resurface.
[learn_more caption="Book Description"]Finn Gramaraye was framed for the crime of dark necromancy at the age of 15, and exiled to the Other Realm for twenty five years. But now that he's free, someone—probably the same someone—is trying to get him sent back. Finn has only a few days to discover who is so desperate to keep him out of the mortal world, and find evidence to prove it to the Arcane Enforcers. They are going to be very hard to convince, since he's already been convicted of trying to kill someone with dark magic.
[learn_more caption="Book Description"]Relic hunter and archaeology expert Kendall Morgan has a lot on her mind. After finding the Fountain of Youth—and discovering that Nathan, her handsome billionaire boss, might actually be her long-lost childhood love—she could really use some time to think. Except a two-thousand-year-old Protettori guardian has just teleported into her bathroom, desperate for help.
[learn_more caption="Book Description"]Eve Dallas has solved a lot of high-profile murders for the NYPSD and gotten a lot of media. She—and her billionaire husband—are getting accustomed to being objects of attention, of gossip, of speculation.
[learn_more caption="Book Description"]Forensic science, magic, mystery, and romance mix in this edgy steampunk fantasy—a retelling of the horror classic, in which Dr. Eliza Jekyll, daughter of the infamous Dr. Henry Jekyll—pursues a dangerous murderer in an alternate Victorian London.
[learn_more caption="Book Description"]IN A RICH, DISTINCTIVE WORLD THAT MIXES MAGIC WITH TECHNOLOGY, WHO COULD STAND AGAINST MAGES THAT CONTROL GUNPOWDER AND BULLETS?
[learn_more caption="Book Description"]Strong heroines and riveting storytelling are the hallmark of groundbreaking fantasy author Kate Elliott (Crown of Stars, Crossroads). Elliott is a highly-compelling voice in genre fiction, an innovative author of historically-based narratives set in imaginary worlds. This first, retrospective collection of her short fiction is the essential guide to Elliott's shorter works. Here her bold adventuresses, complex quests, noble sacrifices, and hard-won victories shine in classic, compact legends.
[learn_more caption="Book Description"]When up-and-coming chef Michael "Blue" Whitley returns with three friends to the remote Canadian community of his birth, it appears to be the perfect getaway from New York. He soon discovers, however, that everything he thought he knew about himself is a carefully orchestrated lie. Though he had no recollection of the event, as a young boy, Blue and another child went missing for weeks in the idyllic, mysterious woods of Starling Cove. Soon thereafter, his mother suddenly fled with him to America, their homeland left behind.
[learn_more caption="Book Description"]Mare Barrow's world is divided by blood—those with common, Red blood serve the Silver- blooded elite, who are gifted with superhuman abilities. Mare is a Red, scraping by as a thief in a poor, rural village, until a twist of fate throws her in front of the Silver court. Before the king, princes, and all the nobles, she discovers she has an ability of her own.
[learn_more caption="Book Description"]The night Quin Kincaid takes her Oath, she will become what she has trained to be her entire life. She will become a Seeker. This is her legacy, and it is an honor.
[learn_more caption="Book Description"]All That Glows author Ryan Graudin returns with the fantasy novel's sequel, rife with intense romance and riveting action. As this alluring mortal-prince-meets-immortal-fairy love story continues, this urban London tale serves up irresistible chemistry and adventure. In this second book, Emrys, the Faery guard to the British royal family, has sacrificed her powers to be with her soul mate, King Richard, choosing love over immortality. But as a strange, dark magic threatens all, Emrys must make the most difficult decision of her life.
The worlds of magic and mortal are colliding as London celebrates its new king, marking an era of unity between the Faery realm and the human one. As Emrys struggles to navigate her place between the Faery queen's court and London's lavish galas, danger looms beyond the Thames.
[learn_more caption="Book Description"]Military legacy Ari Alexander has survived alien spies, WWIV, and a changing world order. But when the new leader of Earth uses Jackson—the only boy she's ever let herself care about—to get to her, Ari has no choice but to surrender.
[learn_more caption="Book Description"]Lucas and Juliet couldn't be more different from each other. But from the moment Lucas sees Juliet, he swears he remembers their first kiss. Their first dance. Their first fight. He even knows what's going to happen between them—not because he can predict the future, but because he claims to have already lived it.
[learn_more caption="Book Description"] London, 1882: Queen Victoria appoints Harold Spire of the Metropolitan Police to Special Branch Division Omega. Omega is to secretly investigate paranormal and supernatural events and persons. Spire, a skeptic driven to protect the helpless and see justice done, is the perfect man to lead the department, which employs scholars and scientists, assassins and con men, and a traveling circus. Spire's chief researcher is Rose Everhart, who believes fervently that there is more to the world than can be seen by mortal eyes.
[learn_more caption="Book Description"]In a world where females are scarce and are hunted, then bought and sold at market for breeding, 15-year old Aya has learned how to hide. With a ragtag bunch of other women and girls, she has successfully avoided capture and eked out a nomadic but free existence in the mountains. But when Aya's luck runs out and she's caught by a group of businessmen on a hunting expedition, fighting to survive takes on a whole new meaning.
[learn_more caption="Book Description"]Rory and her friends are reeling from a series of sudden and tragic events. While racked with grief, Rory tries to determine if she acted in time to save a member of the squad. If she did, how do you find a ghost? Also, Rory's classmate Charlotte has been kidnapped by Jane and her nefarious organization. Evidence is uncovered of a forty-year-old cult, ten missing teenagers, and a likely mass murder. Everything indicates that Charlotte's in danger, and it seems that something much bigger and much more terrible is coming.
[learn_more caption="Book Description"]The city of Bryre suffers under the magic of an evil wizard. Because of his curse, girls sicken and disappear without a trace, and Bryre's inhabitants live in fear. No one is allowed outside after dark.
Holy shizzballs. #40 in In Death series by JD Robb?! Ay caramba.
Don't forget to feed your Kindle!
I'm with Felicia. I should seriously avert my eyes from your Fresh Meat posts.
Hi, my name is E.J. and I am a book addict.
I want ALL THE BOOKS, but I'm so far behind. Gah! There are at least three books on this list that I'm buying, even though I don't know when I'll be able to read them. I'm not the only one who does this, right?
Nooooo! Then I'd have to take full responsibility for my book buying binges.
LOL! Well, at least take some of it cuz I'm starting to feel guilty.
I hope for your sake that you're not too far behind, otherwise you have your work cut out for ya!
The only one I'm really dying to grab this week is The Shadow Cabinet, so I'm hoping to get it as soon as I can lift my book buying ban!
Red Queen & Glass Arrow are still on the maybes list!
I'm behaving myself this week which is saying something considering I have two $10 Amazon GCs burning a hole in my pocket.
The Fantasy genre does have a thing for its epic page counts. Eeessh!
So many great new releases this week! I got a copy of Seeker, which was pretty good. I am really looking forward to Red Queen. Despite all of my efforts I have not be able to get my hands on it…yet. I will though. I will probably just buy it for myself. Glass Arrow, Dead Spots, and The Shadow Cabinet look really good. Thanks for the list!
Hmm… I thought I saw Red Queen up for review on one of the galley sites… Edelweiss maybe? I'm sure you'll get it one way or another!
Aah, yes, it's a bitter sweet week for Bloodlines fans.
From these ones the only one I wanted to read was Red Queen, but after reading some reviews… I decided not to.
I don't blame you. They haven't been very promising.
Wishlists are a great way to keep track of titles you need to buy once pay day rolls around. I'd be lost without mine!
\section{Introduction}
{\tt 4DAO} is a FORTRAN code designed to automatically launch {\tt DAOSPEC} \citep{stetson}
for a large sample of spectra.
The main aims of {\tt 4DAO} are: (1)~to analyse in cascade a list of spectra
provided as input, by automatically writing the {\tt DAOSPEC} input
files and managing its output files;
(2)~to automatically optimize some spectral parameters used by {\tt DAOSPEC} in the
process of equivalent width (EW) measurement, most notably the Full Width at Half Maximum (FWHM);
(3)~to mask spectral regions (telluric lines, interstellar features, photospheric
lines with prominent Lorentzian wings) that can bias the EW measurement;
(4)~to provide suitable graphical tools to evaluate the quality
of the solution, especially of the Gaussian fit to each
individual spectral line;
(5)~to provide the final normalized, zero radial velocity spectra.
\section{About {\tt DAOSPEC}}
Here the basic use of {\tt DAOSPEC} is outlined, but we refer the user to the official documentation
of this code
\citep[][and the {\tt DAOSPEC} Cookbook\footnote{http://www.bo.astro.it/$\sim$pancino/docs/daospec.pdf}]{stetson}.
{\tt DAOSPEC} is a code that automatically identifies absorption spectral lines, estimates the
continuum with a Legendre polynomial, measures the EWs and the radial velocities (RVs)
of all the detected lines, and identifies
among them the lines provided in an input line list. The EW measurement is performed by fitting
a saturated Gaussian function and adopting the same FWHM for all the lines.
One of the most interesting features of {\tt DAOSPEC} is the computation of a global continuum
that takes into account the effects of weak lines.
Among the different input parameters, the most important are the value of the FWHM ({\tt FW}), the order
of Legendre polynomial used to fit the continuum ({\tt ORD}), the residual core flux ({\tt RE}, useful
to refine the EW measurement for strong lines) and the possible scaling of the FWHM with
the wavelength ({\tt SC}).
{\tt DAOSPEC} can run with a fixed value of the FWHM, chosen by the user; alternatively, the
FWHM is refined according to the residuals of the spectrum.
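To fix ideas, the following Python sketch (purely illustrative, and not {\tt DAOSPEC}'s actual saturated-Gaussian implementation) builds a Gaussian absorption line of given depth and fixed FWHM and integrates its equivalent width, $EW=\int(1-F/F_c)\,d\lambda$:

```python
import numpy as np

def gaussian_absorption(wl, center, depth, fwhm):
    """Unsaturated Gaussian absorption profile on a unit continuum."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, continuum=1.0):
    """EW = integral of (1 - F/Fc) over wavelength (simple Riemann sum)."""
    dw = wl[1] - wl[0]
    return float(np.sum(1.0 - flux / continuum) * dw)

# A line of depth 0.4 and FWHM = 0.25 A: analytically, EW = depth * sigma * sqrt(2*pi)
wl = np.linspace(6550.0, 6580.0, 30001)
flux = gaussian_absorption(wl, 6565.0, 0.4, 0.25)
ew = equivalent_width(wl, flux)
```

For a pure Gaussian the numerical EW matches the analytic value $d\,\sigma\sqrt{2\pi}$; a saturated profile (as used by {\tt DAOSPEC} for strong lines) would deviate from this in the core.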
\section{Basic layout of {\tt 4DAO}}
At the first run, the analysis starts by adopting as FWHM
the input value specified by the user and investigating
a range of RVs specified in input.
{\tt DAOSPEC} runs, finding new values of FWHM and RV.
If the difference between the input and output FWHM is larger than
a threshold value (chosen by the user, see Section 5), a new
run of {\tt DAOSPEC} is called, starting with the output FWHM of the previous
run as new input value, and moving in a range between RV-5$\sigma_{RV}$ and
RV+5$\sigma_{RV}$, where $\sigma_{RV}$ is the dispersion of the mean RV as computed
by {\tt DAOSPEC} by using the matched lines. During the process of RV determination,
the {\tt DAOSPEC} parameter {\tt VE} (that is the number of
standard deviations from the mean radial velocity used to discard the discrepant lines)
is set to 3 by default.
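The iterative scheme just described can be sketched as follows (a minimal Python mock-up, in which {\tt run\_daospec} is a hypothetical stand-in for a call to the actual {\tt DAOSPEC} executable; the parameter names {\tt tol}, {\tt nimax} and {\tt fmax} follow Section 5):

```python
def converge_fwhm(run_daospec, fwhm_in, rv1, rv2, tol=0.05, nimax=20, fmax=15.0):
    """Iterate the (mock) DAOSPEC call until the input and output FWHM
    (in pixels) agree within `tol`; return (FWHM, RV, convergence flag)."""
    for _ in range(nimax):
        fwhm_out, rv, sigma_rv = run_daospec(fwhm_in, rv1, rv2)
        if fwhm_out > fmax or fwhm_out < 0.0:
            return fwhm_out, rv, -2        # FWHM outside the allowed range
        if abs(fwhm_out - fwhm_in) < tol:
            # final run, with the FWHM held fixed at the optimized value
            run_daospec(fwhm_out, rv - 5.0 * sigma_rv, rv + 5.0 * sigma_rv)
            return fwhm_out, rv, 1         # converged
        # next iteration: new input FWHM, RV window of +/- 5 sigma_RV
        fwhm_in = fwhm_out
        rv1, rv2 = rv - 5.0 * sigma_rv, rv + 5.0 * sigma_rv
    return fwhm_in, rv, 0                  # nimax exceeded

# Toy stand-in: the "measured" FWHM relaxes toward 4.2 px, RV ~ -30 km/s
def fake_daospec(fwhm_in, rv1, rv2):
    return fwhm_in + 0.5 * (4.2 - fwhm_in), -30.0, 0.5

best_fwhm, best_rv, conv = converge_fwhm(fake_daospec, 6.0, -300.0, 300.0)
```

The returned flags mirror the CONV codes of the {\tt daospec.log} file described in Section 6.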
During the first iteration, the {\tt RE} parameter can be tuned
according to different recipes, chosen by the user (see Section 5).
Basically, {\tt RE} can be:
{\sl (i)}~chosen by the user as a fixed parameter;
{\sl (ii)}~determined by using the central depth of the strongest line
available along the spectrum. In this case {\tt 4DAO} will read the
{\tt .daospec} file, looking for the line with the highest EW among all the
measured spectral lines (and not only the matched ones).
In identifying the strongest line available among those measured by
{\tt DAOSPEC}, {\tt 4DAO} automatically excludes
lines lying in spectral regions affected by non-photospheric transitions
(like telluric and interstellar features).
These regions are stored in the code and listed in Table 1;
{\sl (iii)}~determined by using the central depth of a specific spectral line
provided by the user.
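Recipe {\sl (ii)} can be sketched as follows (a simplified Python illustration, assuming that {\tt RE} is simply the residual flux at the line core expressed in per cent and that the ``strongest'' line is the one with the lowest core flux; the exact definitions are those of the {\tt DAOSPEC} documentation, and the excluded regions below are hypothetical examples):

```python
def tune_re(lines, excluded_regions, re_default=5.0):
    """Choose the residual core flux RE (per cent) from the deepest
    measured line, skipping lines falling inside excluded regions
    (telluric/interstellar features).  `lines` is a list of
    (wavelength, core_flux) pairs, with core_flux normalized to 1."""
    def is_clean(wl):
        return not any(lo <= wl <= hi for lo, hi in excluded_regions)
    clean = [flux for wl, flux in lines if is_clean(wl)]
    if not clean:
        return re_default
    re = 100.0 * min(clean)       # deepest line = lowest core flux
    return re if 0.0 <= re <= 30.0 else re_default

# Hypothetical example: Na D and the 6300 A sky line are excluded,
# so RE is taken from the H-alpha core (residual flux 0.12 -> RE = 12)
excluded = [(5885.0, 5900.0), (6295.0, 6305.0)]
lines = [(5890.0, 0.02), (6300.5, 0.01), (6562.8, 0.12)]
re = tune_re(lines, excluded)
```

As in {\tt 4DAO}, a value outside the allowed 0--30 range falls back to the {\tt restart} default.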
The optimization of the FWHM can be affected by the
presence in the observed spectrum of non-photospheric lines (like telluric
or interstellar features, whose FWHM differs from
that of the photospheric lines), ruined spectral regions, or zones
dominated by wide damped wings (such as the Balmer lines or the Calcium~II triplet lines).
{\tt 4DAO} allows the user to mask spectral regions to be excluded from the analysis.
When this option is enabled, {\tt 4DAO} performs a first run of {\tt DAOSPEC}
on the entire spectrum; then the regions to be masked are replaced with the
continuum spectrum estimated by {\tt DAOSPEC}.
Because flat spectral regions can create problems
for {\tt DAOSPEC}, Gaussian noise (scaled to the average
residuals) is added to these regions,
following the prescriptions by \citet{press}.
Fig.~\ref{out5} shows an example of this process, where the regions corresponding
to the Na D interstellar lines have been masked in a UVES-FLAMES spectrum
of a giant star in the globular cluster NGC~5694 \citep{m56}.
This temporary file (called {\tt masked.fits})
is used in the following runs of {\tt 4DAO} and when the FWHM converges,
the last run is performed on the original spectrum.
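The masking step can be sketched as follows (a minimal Python illustration of the idea, not the actual {\tt 4DAO} implementation): inside each masked interval the flux is replaced by the fitted continuum plus Gaussian noise matching the typical residual dispersion, so that no perfectly flat stretch is left in the spectrum:

```python
import numpy as np

def mask_spectrum(wl, flux, continuum, regions, resid_sigma, seed=0):
    """Return a copy of `flux` where each (lo, hi) interval in `regions`
    is replaced by the fitted continuum plus Gaussian noise of
    dispersion `resid_sigma` (the average flux residual)."""
    rng = np.random.default_rng(seed)
    out = np.array(flux, dtype=float, copy=True)
    for lo, hi in regions:
        sel = (wl >= lo) & (wl <= hi)
        out[sel] = continuum[sel] + rng.normal(0.0, resid_sigma, int(sel.sum()))
    return out

# Toy spectrum: unit continuum with a deep interstellar feature near 5890 A
wl = np.linspace(5880.0, 5910.0, 3001)
flux = np.ones_like(wl)
flux[(wl > 5888.0) & (wl < 5892.0)] = 0.2
continuum = np.ones_like(wl)
masked = mask_spectrum(wl, flux, continuum, [(5886.0, 5894.0)], 0.01)
```

Outside the masked interval the spectrum is untouched; inside it, the deep feature is replaced by noisy continuum, mimicking the {\tt masked.fits} file described above.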
When the difference between the input and output FWHM reaches the threshold value,
a final run of {\tt DAOSPEC} is performed, but keeping the FWHM fixed at the
value optimized in the previous iteration.
The output on the terminal summarises, for each iteration, the input FWHM (expressed
in pixels), the {\tt RE} value, and the adopted range of radial velocities (RV1 and RV2) together with
the average RV derived in the previous iteration. Additionally, if the spectral mask
is enabled, a label signals the use of this option.
\begin{figure}[h]
\epsscale{0.6}
\plotone{terminal.ps}
\end{figure}
During the execution, the basic input files of {\tt DAOSPEC}, namely
{\tt laboratory.dat} and {\tt daospec.opt}, are written and managed
by {\tt 4DAO}, while the output messages that {\tt DAOSPEC} usually
writes on the terminal are written in a temporary file named {\tt log.daospec}.
Also, the graphical output on the monitor usually provided by {\tt DAOSPEC} is
automatically disabled.
\section{Installation}
Once you have downloaded the archive file {\tt 4DAO\_v*.*.tar.gz} from the website
\begin{center}
\url{www.cosmic-lab.eu/Cosmic-Lab/Products.html},
\end{center}
type the commands\\
\\
{\tt gunzip 4DAO\_v*.*.tar.gz}\\
{\tt tar -xvf 4DAO\_v*.*.tar}\\
These commands unpack the archive file creating a directory named {\tt 4DAO/},
including the source files and the Makefile needed to compile the code.
Additionally, a sub-directory named {\tt tutorial} includes some examples
of the configuration files to check quickly if the code is well installed.
To compile the code you need to have installed on your machine the same
libraries used to install {\tt DAOSPEC}:
(1) the SuperMongo\footnote{http://www.astro.princeton.edu/$\sim$rhl/sm/} (SM) libraries
(namely {\tt libplotsub.a}, {\tt libdevices.a} and {\tt libutils.a}, compiled in single precision),
(2) the X11 libraries, and (3) the {\tt libcfitsio.a} library
\footnote{http://heasarc.gsfc.nasa.gov/fitsio/fitsio.html}.
{\tt 4DAO} can be compiled with the Intel Fortran Compiler, that you can download from
the Intel website\footnote{http://software.intel.com/en-us/non-commercial-software-development},
after the registration.
The {\tt Makefile} can be easily updated, by properly setting the paths of the
requested libraries.
Before starting the {\tt 4DAO} installation, check that all the
required libraries are correctly installed (basically, if you have already installed {\tt DAOSPEC}, you should
have already solved all the problems concerning the installation of these libraries). However,
we refer the reader to Section 4 of the {\tt DAOSPEC} {\sl Cookbook}
and to Section 3 of the GALA {\sl Cookbook}\footnote{http://www.cosmic-lab.eu/gala/gala.php}
for the description of some common installation problems and their possible solutions.
Also, {\tt 4DAO} makes use of the {\tt GPL Ghostscript} software (executable {\tt gs}), freely available at
{\tt http://www.ghostscript.com}; please, check if {\tt gs} is installed on your machine.\\
The installation procedure assumes that you have already installed {\tt DAOSPEC} on your machine
(if you have not yet done so, do it!), that the executable file is named {\tt daospec} (in
lower-case letters), and that its path is already stored in your login file.
If your executable has a different name, before installing {\tt 4DAO} you need to
properly set the variable {\sl nexe} in the {\tt 4dao.f} source file.
Now you can install {\tt 4DAO}, typing the command\\
\\
{\tt make all}\\
\\
and the executable {\tt 4dao} will be saved in the current directory.
Finally, put the path of this directory in your login file, according to the shell environment of your
machine (for instance, in the configuration file {\tt .bashrc} or {\tt .tcshrc}).
In order to check the installation, go to the {\tt tutorial} subdirectory, hold your
breath, cross the fingers and type {\tt 4dao}.
\section{Input files}
Only two specific input files (besides the input spectra and the line lists) are necessary
to run {\tt 4DAO}: a configuration file named {\tt 4dao.param} and a file with the list of
the spectra to be analysed, named {\tt 4dao.list}.
(1)~\underline{\tt 4dao.param}\\
The input file {\tt 4dao.param} includes the main configuration parameters,
adopted in the analysis of all the stars listed in {\tt 4dao.list}.
The layout of the file is as follows:
\begin{figure}[h]
\epsscale{0.6}
\plotone{param2.ps}
\end{figure}
\begin{itemize}
\item {\bf tol} is the minimum difference (expressed in pixels) between
the values of the FWHM derived by {\tt DAOSPEC} in two consecutive iterations.
When this difference is smaller than this parameter, {\tt 4DAO} assumes that
the convergence is reached and the FWHM obtained in the last iteration is
taken as the final value.
If the FWHM is fixed to the input value (see below for the {\tt 4dao.list} file),
this parameter is ignored.
\item {\bf nimax} is the maximum number of allowed iterations. Typically,
{\tt 4DAO} needs fewer than 5 iterations to converge to a stable FWHM value, but
this parameter is useful to avoid infinite loops due to unforeseen problems
with your spectra.
\item {\bf fmax} is the maximum value allowed for the FWHM (expressed in
pixels). If the FWHM exceeds this value, the convergence process
is stopped at the last iteration and the star is flagged, so that the occurrence
of this problem can be easily identified.
\item {\bf plot} specifies the kind of output plot created for each spectrum.
The allowed values are 0 (the line plots will be sorted according to their
wavelengths) and 1 (the line plots will be sorted according to corresponding
element). If different values are provided, {\tt 4DAO} sets automatically this
parameter to 0.
\item {\bf verbose} specifies the verbosity level on the terminal.
Accepted values are 0 (no message at all), 1 (only the sequence of the
analysed spectra is shown), 2 (all the information about the procedure is shown).
\item {\bf restart} is the initial value of the residual core flux {\tt RE} parameter.
{\tt 4DAO} includes different ways to estimate this parameter, but if the code
fails to derive a reliable parameter (we assumed that {\tt RE} ranges from 0 to 30),
{\tt RE} will be set to the value specified by {\tt restart}.
\item {\bf output} specifies the format of the output (normalized and
radial velocity-corrected) spectra: with {\tt F} (or {\tt f})
the output spectra will be created in standard FITS format,
while with {\tt A} (or {\tt a}) they will be written in
ASCII format (the wavelength in the first column and the normalized flux in the second column).
If different values are provided, the output will be
created in ASCII format.
\item {\bf clean} deletes some files used by {\tt DAOSPEC}, like
{\sl laboratory.dat}, {\sl daospec.opt}, {\sl log.daospec}
and all the files related to the mask of some spectral regions.
The allowed options are {\sl Y} (or {\sl y}) and {\sl N} (or {\sl n}).
Because in the execution of a sequence of spectra these files are over-written,
{\tt 4DAO} will save only the files related to the last spectrum. The option {\tt clean}={\sl n}
can be useful to check individually all the temporary files for some problematic spectra.
\item {\bf gala} specifies if the output format is that needed for the input files
of {\tt GALA} \citep{mgala} or not.
If the parameter is {\sl Y}/{\sl y} (the default value), the output file will be written
for {\tt GALA}; for all other character values, the output file will include all
the information contained in the input line list, read as a string.
\item {\bf conter} enables the measurement of the EWs by varying the continuum level.
This is a crude way to provide a conservative estimate of the impact of the continuum location
on the measured EWs. During this procedure, the normalized spectrum is lowered and raised by the relative
flux dispersion in the residual spectrum (as listed in the {\tt .daospec} output file).
Then, {\tt 4DAO} repeats the same procedure used for the original spectrum, but assuming {\tt ORD}=--1,
thus fixing the continuum level at 1.
The allowed values are -1 (to disable this option), 0
(to re-calculate the EWs after a new optimization of the FWHM starting from the best value
obtained in the main procedure) and 1
(to re-calculate the EWs by fixing the FWHM at the best value found in the main procedure).
\end{itemize}
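As a purely hypothetical example (the keyword values below are illustrative only, and the exact file layout is the one distributed in the {\tt tutorial} subdirectory), a {\tt 4dao.param} file could look like:
\begin{verbatim}
tol      0.05
nimax    20
fmax     15.0
plot     0
verbose  2
restart  5.0
output   F
clean    y
gala     y
conter   -1
\end{verbatim}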
(2)~\underline{\tt 4dao.list}\\
This file lists the sequence of spectra that you plan to analyse with
{\tt DAOSPEC}.
The layout of this file is as follows:
\begin{figure}[h]
\epsscale{0.6}
\plotone{list.ps}
\end{figure}
\begin{itemize}
\item The first column indicates the name of the spectrum in FITS format
(the extension {\sl .fits} can be omitted).
\item The second and third columns are the initial FWHM (in pixels) and the order of
the Legendre polynomial, respectively.
\item The fourth and fifth columns are the initial range of RV used by
{\tt DAOSPEC}.
\item The sixth value is the {\tt DAOSPEC} parameter {\tt SC}, which enables
the line-fitting procedure assuming that the FWHM is proportional
to the wavelength (see Section 2.3.12 in the {\tt DAOSPEC} Cookbook). The
allowed values are 0 (to use a single FWHM for all the lines) and
1 (to scale the FWHM according to the wavelength of the lines).
\item The seventh value enables the optimization of the FWHM (0) or
launches {\tt DAOSPEC} keeping the FWHM fixed to the value specified in the second column
of the file (1).
\item The eighth column specifies the line list used for that spectrum.
No specific format is required, except that the first column must be
the wavelength in $\mathring{A}$ and the second the code of the element.
For the latter, {\tt 4DAO} accepts both the {\tt GALA} format (i.e. 26.00 for
Fe~I and 26.01 for Fe~II) and the {\tt MOOG} format (i.e. 26.0 for Fe~I and
26.1 for Fe~II). Note that if the keyword {\tt gala} in {\tt 4dao.param}
is {\sl Y}, the input file needs to include all the information requested by {\sl GALA}
\citep[wavelength, element code, log~gf, excitation potential, damping constants and
$\alpha$ velocity parameter, see][]{mgala}:
\begin{figure}[h]
\epsscale{0.6}
\plotone{linelist_ex.ps}
\end{figure}
\item The ninth and tenth values are the wavelengths that define
the spectral range where {\tt DAOSPEC} performs the spectral line measurements.
You can specify any spectral range you prefer; if one (or both) of the
values is set to 0, {\tt 4DAO} tries to readjust the corresponding spectral
edge in order to avoid regions with negative flux, excessive noise, or
dramatic flux variations.
\item The eleventh column specifies the way to set the {\tt RE} parameter.
If a value between 0 and 30 is provided, {\tt RE} will be fixed at this value.
For negative values, {\tt RE} is fixed to the value specified in {\tt 4dao.param}
by the keyword {\tt restart}
for the first iteration, and then it is refined by using the central depth
of the strongest line measured by {\tt DAOSPEC}.
Alternatively, you can provide the wavelength of a given strong line
(for instance a Balmer line, if available) and {\tt 4DAO} will try to match its position
with the closest transition measured by {\tt DAOSPEC}.
In both cases, if the derived {\tt RE} value is negative or larger
than 30, the value specified by {\tt restart} will be adopted.
\item The last column is the name of the ASCII file including the wavelengths
of the spectral regions that you want to mask during the analysis.
The file does not need any specific format, but it
must include only two columns (each row corresponding to a given spectral region
to be masked; the wavelengths refer to the observed spectrum and thus do not
include the RV shift).
If you do not need this option, you can simply specify a name or a symbol
(like * in some rows of the example shown) not corresponding to an existing file.
\end{itemize}
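As a purely hypothetical example (file names and values are illustrative only; see the {\tt tutorial} subdirectory for a working template), a row of {\tt 4dao.list} could look like:
\begin{verbatim}
star0012.fits  4.0  30  -300.  300.  0  0  linelist.dat  4800.  6800.  -1.  mask.dat
\end{verbatim}
following the column order described above: spectrum name, initial FWHM, polynomial order, RV range, {\tt SC} flag, FWHM-fixing flag, line list, spectral range, {\tt RE} setting and mask file.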
\section{Output files}
Together with the standard output files produced by {\tt DAOSPEC}
(the FITS files including the fitted continuum and the
residual spectrum, and the {\tt .daospec} file with all the
measured lines), {\tt 4DAO} produces some output files to check
the quality of the solution and manage the derived information.
For the spectrum named {\tt rootname} (as specified in the first column
of {\tt 4dao.list}), the following files
are created:
\begin{itemize}
\item {\tt rootname}\_4DAO.pdf includes some plots concerning the
continuum derived by {\tt DAOSPEC}, the fit of each individual line, and
information about RV and EW uncertainties.
The first panel shows the entire spectrum with the continuum level
computed by {\tt DAOSPEC} superimposed (as a red line). If you have masked some spectral
regions, these will be shown in this plot as yellow-shaded regions
(see Fig.~\ref{out1}). The RV shift of the star is not applied
in this plot.
The following panels display all the lines listed in the
input file, sorted by wavelength or by element code
(according to the keyword {\tt plot} in {\tt 4dao.param}), with the
best fit (red line) calculated by {\tt DAOSPEC} superimposed. Lines in the input file
that are rejected or not recognized by {\tt DAOSPEC} are plotted in
blue, in order to allow an easy identification of the
lost features. In each panel the main information is labelled:
wavelength, ion code, EW, radial velocity, uncertainty in EW
(expressed in percentage) and the Q parameter. An example of these plots is shown
in Fig.~\ref{out2}.
The panel shown in Fig.~\ref{outc} is created only if the keyword {\tt conter} is
0 or 1. It shows the variation of the measured EWs with respect to the original values
when an increase or a decrease of the continuum level is assumed (black and red points,
respectively). The variation of EWs is shown as a function of the wavelength (upper panel)
and of the EW (lower panel).
The second to last panel (see Fig.~\ref{out3}) shows the RV of all the lines as a function
of the wavelength (upper panel) and the EW (lower panel), and with the
$\pm$1$\sigma$, $\pm$2$\sigma$ and $\pm$3$\sigma$ levels marked as dotted lines.
The last panel (Fig.~\ref{out4}) shows the behavior of the EW error (upper panels)
and of the Q parameter (lower panels) as a function of EW and wavelength.
\item the file named {\tt rootname.in} includes the main information about the EW measurements.
If the {\tt GALA} output is enabled, this file will have the same format described
in the {\tt GALA} Cookbook, with the addition of the Q-parameter (not requested by {\tt GALA})
in the eleventh column.
Alternatively, it will contain wavelength, EW, error in EW, Q-parameter and then
all the other information provided in the input line list.
Note that if {\tt conter} is 0 or 1, two additional columns will be added (at the end of the file),
including the EWs measured by raising and lowering the normalized spectrum, respectively.
\item {\tt rootname}\_ZVN (.fits or .dat according
to your choice in the keyword {\tt output} in {\tt 4dao.param})
is the original input spectrum, normalized using the continuum
calculated by {\tt DAOSPEC}
and corrected for radial velocity using the average RV derived
by {\tt DAOSPEC}. This file is especially useful for creating scientific plots
or for performing additional chemical analysis based on spectral
synthesis.
\end{itemize}
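Users who post-process these outputs may find a short reader convenient. The sketch below is only an illustration (not part of {\tt 4DAO}), assuming the non-{\tt GALA} column layout described above and purely numeric rows:

```python
# Sketch (not part of 4DAO): read back a rootname.in file written
# without the GALA-format option.  Assumes the column order described
# above (wavelength, EW, EW error, Q-parameter) and numeric rows.
def read_ew_file(path):
    rows = []
    with open(path) as fh:
        for raw in fh:
            fields = raw.split()
            if len(fields) < 4:
                continue  # skip blank or malformed lines
            wl, ew, err, q = map(float, fields[:4])
            rows.append({"wavelength": wl, "ew": ew, "ew_err": err, "q": q})
    return rows
```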
Additionally, the file {\tt daospec.log} summarises the main parameters derived
by {\tt 4DAO} for all the spectra listed in {\tt 4dao.list}.
For each spectrum the final FWHM (in pixels), the average radial
velocity (in km/s) with its dispersion, the number of matched lines, the
flux residuals (in percentage), a convergence flag related to the FWHM, the adopted starting and ending wavelengths,
the {\tt RE} value and the wavelength of the line used to derive {\tt RE} (only in case this parameter
is tuned by using the strongest available line) are provided.
The file header explains
the meaning of the convergence flag. Briefly:\\
CONV=1 the FWHM does converge (or the FWHM has been fixed to the initial value without optimization);\\
CONV=0 the number of iterations exceeds the maximum number of allowed iterations (specified by {\tt nimax}
in {\tt 4dao.param}). In this case the results are referred to the last iteration;\\
CONV=--1 if the code calculates the same value of FWHM in two different iterations,
to avoid the risk of an infinite loop, the code exits, writing the results of the last iteration,
and passing to the next spectrum;\\
CONV=--2 the FWHM exceeds its maximum allowed value specified by {\tt fmax} in {\tt 4dao.param}.
Also in this case, the values written in the output refer to the last iteration.
This flag also identifies the cases where a negative FWHM is provided or found;\\
CONV=--3 the median value of the entire spectrum is negative, pointing out some problems in the
spectral reduction (or the spectrum is missing).
{\tt DAOSPEC} would crash with similar spectra, thus the analysis is stopped, all the
output values are 0.0 and {\tt 4DAO} moves to the next spectrum of the list;\\
CONV=--4 format problems in the {\tt daospec.opt} or {\tt .daospec} files are found;\\
CONV=--5 no line is found in the wavelength range of the observed spectrum.
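These flags can also be mapped to short summaries when post-processing {\tt daospec.log}; the dictionary below simply paraphrases the list above (parsing of the log itself is left out, since its exact column layout is not reproduced here):

```python
# Paraphrased meanings of the 4DAO convergence flags listed above.
CONV_MEANING = {
    1: "FWHM converged (or was fixed to the initial value)",
    0: "maximum number of iterations (nimax) exceeded; last iteration kept",
    -1: "same FWHM found in two iterations; exited to avoid an infinite loop",
    -2: "FWHM exceeded fmax (or was negative); last iteration kept",
    -3: "negative median flux or missing spectrum; analysis stopped",
    -4: "format problems in daospec.opt or .daospec",
    -5: "no line found in the wavelength range of the spectrum",
}

def describe_conv(flag):
    """Return a one-line summary for a CONV flag from daospec.log."""
    return CONV_MEANING.get(flag, "unknown flag")
```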
\begin{deluxetable}{lll}
\tablecolumns{3}
\tablewidth{0pc}
\tablehead{\colhead{Feature} & \colhead{$\lambda_{start}$}& \colhead{$\lambda_{end}$} \\
\colhead{} & \colhead{($\mathring{A}$)} & \colhead{($\mathring{A}$)}}
\startdata
\hline
Na~D first component & 5889.0 & 5890.5 \\
Na~D second component & 5895.0 & 5896.5 \\
$O_2$ X0-b2 Band & 6275.0 & 6320.0 \\
B Band & 6860.0 & 6930.0 \\
$H_2$O Band & 7160.0 & 7330.0 \\
A Band & 7590.0 & 7700.0 \\
$H_2$O Band & 8125.0 & 8340.0 \\
$H_2$O Band & 9100.0 & 9800.0 \\
& & \\
\hline
\enddata
\tablecomments{Spectral regions masked by {\tt 4DAO} during the determination
of the {\tt RE} parameter.}
\end{deluxetable}
\begin{figure}[h]
\epsscale{1.0}
\plotone{plot_masked.ps}
\caption{Spectral region of the UVES spectrum of the star NGC~5694-37
around the Na D interstellar lines before (upper panel) and after (lower panel)
the application of the mask.}
\label{out5}
\end{figure}
\begin{figure}[h]
\epsscale{0.9}
\plotone{5694_out1.ps}
\caption{The UVES Red Arm 580 spectrum of the star NGC~5694-37 \citep{m56}
with the continuum computed by {\tt DAOSPEC} superimposed (red curve). The
yellow-shaded areas mark the spectral regions masked by {\tt 4DAO}, namely
the Na D photospheric lines, the region contaminated by telluric lines between
6280 and 6320 $\AA$, and the region around the $H_{\alpha}$ Balmer line.
}
\label{out1}
\end{figure}
\begin{figure}[h]
\epsscale{1.0}
\plotone{5694_out2.ps}
\caption{Example of the {\tt 4DAO} plots showing the fit (red lines)
of each individual line; blue lines mark the lines not matched by {\tt DAOSPEC}.
In each sub-panel the main information about the line (wavelength, ion code,
EW and its error, radial velocity and Q parameter) is listed.
}
\label{out2}
\end{figure}
\begin{figure}[h]
\epsscale{1.0}
\plotone{continuum_fig.ps}
\caption{Variations of the measured EWs calculated by increasing (black points)
or decreasing (red points) the continuum level according to the flux residuals.
These variations are shown as a function of the wavelength (upper panel)
and of the EW (lower panel).
}
\label{outc}
\end{figure}
\begin{figure}[h]
\epsscale{1.0}
\plotone{5694_out3.ps}
\caption{Behavior of the radial velocity of each individual
spectral line as a function of the EW (upper panel) and of the
wavelength (lower panel). In both panels the dashed horizontal
lines mark the $\pm$1$\sigma$, $\pm$2$\sigma$ and $\pm$3$\sigma$ levels.
}
\label{out3}
\end{figure}
\begin{figure}[h]
\epsscale{1.0}
\plotone{5694_out4.ps}
\caption{Upper panels: behavior of the EW error (expressed in percentage)
as a function of the EW and of the wavelength. Lower panels:
behavior of the Q parameter as a function of the EW and of the wavelength.
}
\label{out4}
\end{figure}
The problem of compiling omp in Rosetta 3.4
(forum thread from https://www.rosettacommons.org/comment/5216)

lihowe (Sat, 2012-09-15):

When I execute the command:

./scons.py -j? extras=omp bin mode=dubug

I get the following error:

build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `boost::this_thread::disable_interruption::disable_interruption()'
build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `boost::thread::~thread()'
build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `typeinfo for boost::detail::thread_data_base'
build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `boost::thread::start_thread()'
build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `vtable for boost::detail::thread_data_base'
build/src/release/linux/2.6/64/x86/gcc/4.3/omp/libprotocols_a.2.so: undefined reference to `boost::detail::thread_data_base::~thread_data_base()'
collect2: ld returned 1 exit status
scons: *** [build/src/release/linux/2.6/64/x86/gcc/4.3/omp/FloppyTail.omp.linuxgccrelease] Error 1

Therefore I made the following efforts:
1) I compiled a new libboost (boost_1_51_0), which produced libboost_thread-mt.o/a.
2) I added the following to the tools/build/basic.settings file:

import os
settings = {
    "base": {
        "overrides": {
            "CCFLAGS": "",
            "CXXFLAGS": "",
            "program_path": ["/bin", "/usr/bin", "/usr/local/bin"],
            "include_path": ["external/boost_1_51_0/include/boost", "#external/dbio", "/usr/include", "/usr/local/include"],
            "library_path": ["/usr/lib", "/usr/local/lib", "external/boost_1_51_0/lib"],
        },
    },

and, in the same linux and gcc sections:

"CCFLAGS": '-isystem external/boost_1_51_0/include/boost',
"CXXFLAGS": '-isystem external/boost_1_51_0/include/boost',

Any guidance on resolving this issue would be greatly appreciated. Thanks!

smlewis (Mon, 2012-09-17 10:01):

I'm seeing if I can duplicate this locally. I wasn't aware we even had an openMP build at all, and it's not one that is regularly tested. I'm pretty sure it's going to turn out to be that some file needs a header added, #ifdef USE_OPENMP, or headers reordered or something. The only external libraries that need to be linked in separately are libz and C++'s STL; boost has all its bits supplied by Rosetta.

smlewis (Mon, 2012-09-17 10:07):

"./scons.py -j? extras=omp bin mode=dubug"
I assume dubug is supposed to be release, not debug?

smlewis (Mon, 2012-09-17 13:29):

I was able to duplicate this in 3.4, but not in developer trunk. Using that fact, and the hints that it was related to boost and to a file somewhere in library protocols_a.2, I tracked it to SVN 48527 (which is after 3.4 released). It seems src/protocols/frag_picker/FragmentPicker.cc misuses MULTI_THREADED, which is also used by the extras=omp keyword (which #defines MULTI_THREADED and USE_OPENMP). Change all instances of MULTI_THREADED in that file to be some other keyword; USE_BOOST_THREAD is what developer trunk shows. I tested this against 3.4 and it compiles (no idea if OpenMP works or not). I would also get rid of your boost installation (use the one Rosetta provides).

lihowe (Mon, 2012-09-17 22:27):

Following your advice, I have solved the compilation problem. Thank you very much!
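A minimal sketch of the guard-macro rename smlewis describes, applied here to a scratch copy of the file rather than the real Rosetta tree (the real file lives at src/protocols/frag_picker/FragmentPicker.cc):

```python
# Demo of the fix: extras=omp #defines MULTI_THREADED, so FragmentPicker.cc's
# misuse of that macro pulled boost::thread code into the omp build.
# Renaming the guard to USE_BOOST_THREAD (as in developer trunk) avoids it.
from pathlib import Path

demo = Path("FragmentPicker_demo.cc")
demo.write_text(
    "#ifdef MULTI_THREADED\n"
    "#include <boost/thread.hpp>\n"
    "#endif\n"
)
demo.write_text(demo.read_text().replace("MULTI_THREADED", "USE_BOOST_THREAD"))
print(demo.read_text().splitlines()[0])  # -> #ifdef USE_BOOST_THREAD
```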
Paul Chan "Drawings for Word Book by Ludwig Wittgenstein" at Greene Naftali, New York, 2020
Courtesy: the artist and Greene Naftali, New York
Paul Chan, Spekulieren (to speculate), 2020
Paul Chan, die Auskunft, Auskünfte (information), 2020
Paul Chan, die Essenz (essence), 2020
Paul Chan, der Rechtsanwalt (lawyer), 2020
Paul Chan, abspenstig (alienating), 2020
Paul Chan, Kurieren (or to cure / to heal), 2020
Paul Chan, Primitiv, 2020
Paul Chan, der Satan (Satan), 2020
Paul Chan, der Geist, geistig (spirit, spiritual), 2020
More than ten years ago, I heard a rumor about how Ludwig Wittgenstein wrote a textbook for children. I have never felt close to Wittgenstein's work: too axiomatic, too bloodless. But the more I learned about the circumstances that led to him writing this textbook, which was originally titled Wörterbuch für Volksschulen (or Dictionary for Elementary Schools), the more curious I became. In 2018, I located a copy of Wörterbuch. And I was bewitched. It has never been translated into English.

He wrote it in 1925, during a time when he was just beginning to change, as a person and a philosopher. By the early 1920s, Wittgenstein was already famous for his first book, Tractatus Logico-Philosophicus. He was teaching at Cambridge, and was expected to be the heir apparent to Bertrand Russell. But Wittgenstein was deeply unhappy there. He once said he was "tired of prostituting his mind for smart people." So in 1921 he abandoned Cambridge and his privileged philosophical career, and became an elementary school teacher in rural Austria. For six years he taught poor kids in grades four through six.

During this time, it occurred to him that a good "dictionary" would help his young students learn. There were only two dictionaries available then. But one was too expensive, and the other was too small and badly put together. So Wittgenstein decided to write one.

Word Book is the first English translation of Wörterbuch. It consists of words and concepts (5,968 terms in all) chosen by Wittgenstein as part of his curriculum. But it is also a revealing document of Wittgenstein's own mind changing as a result of teaching (and learning from) children. He reconsidered his entire philosophy after his experience. Word Book is an utterly unique work from a formidable philosopher who ended up learning from his students about what it means to understand the world, and a testament to how a mind is changed, if one is willing to let go of a certain idea of who one happens to be.
My left hand is my non-dominant hand. I typically draw with my right hand. I've done what I call "left-handed path" drawings before. And it struck me as the way to go in making drawings for Word Book. The spirit of authority is not what motivates the drawings and the book. It's rather the notion that one's strength is really one's weakness, which makes possible the idea that one's weakness—given the right circumstances or frame of mind—may be one's real strength. I also like how the concept of the "left-handed path" is synonymous with alternative forms of belief, like mysticism and "black" magic. Or reason today. I like drawing with my left hand because it feels as if different stakes about what matters on paper became visible to me. Maybe that's all we're ever looking for in making any work: new ways to see the stakes that matter.
At Greene Naftali, New York
until 19 December 2020
\section{Introduction}
A number of real-world systems are mathematically represented by graphs, where nodes represent agents in the system and edges represent connections between agents. A common feature of interest in such systems comes from finding groups of nodes that can be considered as communities within the network. Although community detection is often referred to as though it is a single problem, a more accurate description is that it is a body of related problems, owing to the fact that the notion of what it means to be a "community" involves different concepts in different contexts.
When adopting the view that communities are not simply defined by nodes but the connections between nodes, a sensible approach to detecting communities is to cluster the edges present in the network. The first paper to explicitly acknowledge this point of view was \citep{EvLa09}, where communities are formed by partitioning the line graph of the network. However, edge clustering methods predate this by several years. Although it was not viewed in terms of clustering edges at the time of its publication, clique percolation \citep{Pal05} is perhaps one of the earliest examples of such methods, as cliques can be equivalently described as sets of completely interconnected nodes or edges.
In the clique percolation method, each node is described by the cliques it is a member of, and these cliques serve as the "atoms" from which to build community "molecules". The cliques provide a mechanism for representing a localized feature of a community structure. These localized features are then agglomerated to carve out communities by what the authors of \citep{Pal05} call a "percolation" process. This process consists of repeatedly adding all other cliques to the community that differ by (at most) one node from a clique already present in the community.
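To make the percolation step concrete, here is a brute-force sketch of $k$-clique percolation, an illustration of the idea in \citep{Pal05} rather than the original implementation (and practical only for tiny graphs):

```python
from itertools import combinations

def k_clique_communities(edges, k):
    """Toy k-clique percolation for tiny graphs (illustration only).

    Two k-cliques are 'adjacent' if they share k-1 nodes; communities are
    the unions of cliques in each connected component of that relation.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    cliques = [frozenset(c) for c in combinations(nodes, k)
               if all(b in adj[a] for a, b in combinations(c, 2))]
    # Percolate: union-find over cliques sharing k-1 nodes.
    parent = list(range(len(cliques)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) == k - 1:
            parent[find(i)] = find(j)
    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return sorted(map(sorted, comms.values()))
```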
A natural generalization of the clique percolation methodology is to derive a node's local affinity for one, or multiple communities from its local perspective of the network itself. In such an approach, it is useful to represent the local neighborhood of each node by an "egonet". An \textit{egonet} is the network restricted to just the set of nodes a given node is connected to, along with all of the edges between the nodes in that set, where the node this net is built around is called the \textit{egocentric node}.
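Egonet extraction itself is straightforward; a minimal sketch over a plain dict-of-sets adjacency structure (the names and the include/exclude switch are our own):

```python
def egonet(adj, ego, include_ego=True):
    """Return the egonet of `ego` as a dict-of-sets adjacency structure.

    `adj` maps each node to the set of its neighbours.  The egonet keeps
    the ego's neighbours and every edge among them; pass include_ego=False
    to drop the egocentric node itself (as done when checking friendship
    circles against the community definition).
    """
    keep = set(adj[ego])
    if include_ego:
        keep.add(ego)
    return {u: adj[u] & keep for u in keep}
```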
This generalization serves as the motivation for the collective friendship group inference method \citep{FriendshipGroup10}. With this method, instead of describing each node by the cliques it is a member of, each node is described by what are called friendship circles. A \textit{friendship circle} is any set of edges in an egonet that satisfy the definition of community on the network as a whole when the egocentric node is removed. These friendship circles then play the role of cliques in the aforementioned method, and communities are formed via an equivalent percolation process.
Although our methodology is most closely related to the preceding edge clustering methods, we also incorporate elements based on quality function optimization and spectral clustering \citep{SpectralClusteringOverview, SpectralClusteringComparativeAnalysis}. The interested reader can find a comprehensive overview of these approaches (and many others) in \citep{Fortunato10}, and an analysis of overlapping community detection methods in \citep{Xi13}.
\section{Proposed Approach}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig1.png}
\end{center}
\caption{ Example network with community structure.}
\label{fig:AlgorithmExplainedBaseNetwork}
\end{figure}
Our work focuses on community detection with respect to social networks representable by undirected / unweighted graphs. We assume communities are fundamentally defined in terms of the properties of the edges comprising it, so we approach community detection as an edge clustering problem. As clustering edges naturally allows a node to belong to multiple communities based on how the edges it is connected to are clustered, this conforms well with a common feature of social networks where communities can overlap with one another.
Using the graph depicted in Figure \ref{fig:AlgorithmExplainedBaseNetwork} as a model network, several questions naturally arise on how to describe the community structure present. Firstly, what differentiates edges that connect a node to members of the same community from those that connect them to non-community members; what properties does a community possess at the scale of individual nodes? Secondly, what differentiates the sets of black and gray edges from the white ones; what properties does a community possess at the scale of individual communities? Lastly, what should be done with nodes that do not decisively belong to any particular community; what properties does the set of all communities possess at the scale of the entire network?
These questions confront the multiple scales that intrinsically define communities: the scale of individual nodes, the scale of individual communities, and the scale of the entire network. In this work, we attempt to provide logical answers to these foundational questions in the context of identifying overlapping communities in social networks. Because our approach is highly modular, one can easily modify the specific quantitative model for community structure at each scale as appropriate for a given community detection problem. This opens the door for creating community detection algorithms capable of searching for targeted notions of community that respect the context of the problem.
Let $N$ be the number of nodes in the network, $\overline{N_{com}^2}$ be the average squared community size, and $\overline{k^2}$ be the expected value of a node's degree squared. An overview of our algorithm and the computational cost of each step is presented in Figure \ref{alg:CD}.
\begin{figure}[h]
\makebox{\bfseries \sffamily Community Detection Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item Adjacency matrix for the network
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{enumerate}
\item Detect the sets of edges that will be used to describe each node in the network as described in Section \ref{sec:NodeLevelPerspective}.
\begin{itemize}
\item Cost: $O(N~\overline{k^2})$
\end{itemize}
\item Cluster the edge sets to form communities as described in Section \ref{sec:CommunityLevelPerspective}.
\begin{itemize}
\item Cost: $O(\overline{N_{com}^2})$
\end{itemize}
\item If any nodes in the network remain unclustered, attach them to the community they share the most connections with.
\begin{itemize}
\item Cost: Negligible
\end{itemize}
\item Prune out the smaller communities detected subject to the constraint that all nodes are represented in at least one community, as described in Section \ref{sec:NetworkLevelPerspective}.
\begin{itemize}
\item Cost: Negligible
\end{itemize}
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Sets of nodes comprising communities detected on network.
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Community detection algorithm
\label{alg:CD}}
\end{figure}
\subsection{Node Scale Features of Community Structure: Edge Descriptor Sets}
\label{sec:NodeLevelPerspective}
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig3.png}
\end{center}
\caption{ A depiction of the egonet for the starred node in the network; all of the red nodes/edges are included in the egonet.}
\label{fig:AlgorithmExplainedNodeLevel}
\end{figure}
Because we are interested in identifying communities from the arrangement of edges, we introduce the notion of \textit{edge descriptor sets} as a general term for using sets of edges to describe community structure local to each node. The cliques (or friendship circles) a node belongs to are specific examples of such edge descriptor sets.
In our approach, we assume that a node is more likely to belong to a community if it has many mutual friends within that community. This suggests that edges linking members of the same community should be densely inter-connected. The prototypical example of such a cluster of edges would be a clique. Furthermore, we note that if a node belongs to several relatively large cliques that are mutually disjoint, then this node may be at the intersection of multiple communities. Guided by these simple heuristic principles, we propose to extract sets of edges that are both densely connected and largely disjoint from each other from each node's egonet to serve as the edge descriptor sets for that node. Each set can be thought of as encoding the fine-scale features of community structures present in the network.
We approach the task of extracting edge descriptor sets by first using a simple spectral clustering method to sparsify the local egonet defined around each node. The goal of the sparsification is to reveal the largest disjoint cliques that are present in the egonet. Finally, a more advanced spectral clustering method is used to construct the edge descriptor sets associated with the node in question. We begin our discussion with the latter of these two processes, because it is more involved and provides a natural introduction to the former.
\subsubsection{ICM-Matrices}
\label{sec:ICM}
In this section, we examine the spectral properties of idealized egonets formed by cliques that are only connected through a single (egocentric) vertex. While this situation may appear unrealistic, we explain in the next section how to extract such subgraphs from the original graph. Our present goal is simply to identify and extract the corresponding cliques; we propose a spectral approach to solve this problem.
Let us consider the sub-matrix of the network adjacency matrix that describes the local egonet. A trivial re-indexing of the nodes allows us to represent this submatrix as a block-diagonal matrix, where each block is a clique. The blocks do not overlap, but there is a row (and corresponding column) of ones to describe the connection of the egocentric node to all the cliques. We can assume that the row and corresponding column associated with the egocentric node are the first row and column, respectively. Instead of working directly with this matrix, we propose to make some slight modifications that will boost the spectral approach. The modifications are as follows: we add self connections to all the nodes if not already present, and we scale all connections to the egocentric node by a small parameter, $\delta$. We will call these modified matrices \textit{ideal community member matrices} for the sake of discussion, or \textit{ICM-matrices} for short.
Let $A$ be an ICM-matrix with $m$ blocks (each corresponding to a clique) along its diagonal, where the size of the $i^{th}$ block is $k_i \times k_i$. Without loss of generality, we assume the indices are arranged such that $k_j \le k_i$ whenever $j>i$ so that larger indices correspond to smaller blocks. Also assume that the first index corresponds to the egocentric node. We will represent the set of indices for the $k^{th}$ block by $V_k$. With this notation in place, $A$ is defined by Equation \eqref{eq:ADef}, and an example of the structure of such a matrix is given in Figure \ref{fig:IdealCMmatrix}.
\begin{equation}
A(i,j) =
\begin{cases}
& \delta, ~~~~~ ~~~ \text{if}~ i=1~\text{or}~j=1 , \\
& 1, ~~~~~ ~~~ \forall i,j \in V_k , ~~ k=1,...,m , \\
& 0, ~~~~~ ~~~ \text{otherwise} .
\end{cases}
\label{eq:ADef}
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig4.png}
\end{center}
\caption{The non-zero values of an ICM-matrix for a node belonging to two cliques with 6 members, and three cliques of four members. }
\label{fig:IdealCMmatrix}
\end{figure}
Our focus will be on the eigenvectors of these matrices with corresponding positive eigenvalues, as they will be the ones we will use for identifying the cliques via spectral clustering. Let $P = \{i | \lambda_i > 0\}$ be the set of indices of the positive eigenvalues, and let $\{\mathbf{x}_p | p \in P\}$ be the associated eigenvectors. Assume that the eigenvalues are ordered so that $\lambda_i \le \lambda_j$ if $i>j$ (e.g. $\mathbf{x}_1$ is the dominant eigenvector). Let $\mathbf{x}_p(1)$ be the value of the first entry in such an eigenvector. Using a simple symmetry argument, one can show that each $\mathbf{x_p}$ is constant over each clique $i$; $\mathbf{x_p}(l \in V_i) = r_i$, for each vertex, $l$, in the set of vertices, $V_i$, in block $i$.
We now examine the properties of the entries of $\mathbf{x_p}$ by explicitly looking at the system of equations resulting from the constraint $\lambda_p ~ \mathbf{x}_p = A~\mathbf{x}_p $. For any $j \in V_i$, this constraint takes the following form,
\begin{equation}
\lambda_p~\mathbf{x}_p(j \in V_i) = \delta ~ \mathbf{x}_p(1) + \sum_{l ~ \in ~ V_i} \mathbf{x}_p(l), \label{eq:OtherRowa}
\end{equation}
\noindent and since $\mathbf{x_p}$ is constant and equal to $r_i$ over each clique, we obtain
\begin{equation}
\lambda_p~r_i = \delta ~ \mathbf{x}_p(1) + k_i ~ r_i,
\end{equation}
\noindent or
\begin{equation}
(\lambda_p - k_i) r_i = \delta ~ \mathbf{x}_p(1) . \label{eq:G2}
\end{equation}
As Equation \eqref{eq:G2} holds for any row indices, $i$ and $j$, this leads us to Equation \eqref{eq:G3},
\begin{equation}
r_i = \frac{\lambda_p - k_j }{\lambda_p - k_i } r_j .
\label{eq:G3}
\end{equation}
The importance of Equation \eqref{eq:G3} is that it shows that $|r_i| > |r_j|$ if $|\lambda_p - k_i| < |\lambda_p - k_j| $. In other words, the closer the number of members in the block corresponding to the $i^{th}$ clique is to $\lambda_p$, the larger the magnitude of $r_i$ in $\mathbf{x}_p$ and the easier it will be to pull out members of distinct cliques via spectral clustering. Now note that by setting $\delta$ to a small value, the ICM-matrix is a small perturbation of a block diagonal matrix. As the positive eigenvalues of a block diagonal matrix correspond directly to the sizes of the blocks present within the matrix, this will cause the eigenvalues of the ICM-matrix to be small perturbations of $k_i$. This implies that for each block of size $k_i$ there exists an eigenvalue, $\lambda$, such that $\lambda \approx k_i$. For the results presented in this paper, we use $\delta = 1/N_{ego}$, where $N_{ego}$ is the number of nodes in the egonet.
For illustrative purposes, the first three dominant eigenvectors of the matrix depicted in Figure \ref{fig:IdealCMmatrix} are shown in Figure \ref{fig:IdealCMmatrixEigVecs}. Note that the components of the eigenvectors clearly reveal each clique. However, this property does not extend to eigenvectors without positive eigenvalues. Indeed, the nullspace vectors are quite noisy and are detrimental to use for spectral clustering of the egonet.
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig5.png}
\end{center}
\caption{First three dominant eigenvectors for the ICM-matrix depicted in Figure \ref{fig:IdealCMmatrix}, where $\delta = 1/N_{ego}$ and $N_{ego}$ is the number of nodes involved in the egonet. }
\label{fig:IdealCMmatrixEigVecs}
\end{figure}
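These spectral properties are easy to verify numerically. The sketch below (an illustration, not the authors' code) builds the ICM-matrix of Fig.~\ref{fig:IdealCMmatrix}, i.e. blocks of sizes 6, 6, 4, 4, 4 with $\delta = 1/N_{ego}$, and confirms that its five largest eigenvalues are small perturbations of the block sizes:

```python
import numpy as np

def icm_matrix(block_sizes, delta):
    """Build the ICM-matrix of Equation (1): disjoint all-ones blocks
    (self connections included) plus a first row/column of delta for
    the egocentric node."""
    n = 1 + sum(block_sizes)
    a = np.zeros((n, n))
    start = 1
    for k in block_sizes:
        a[start:start + k, start:start + k] = 1.0
        start += k
    a[0, :] = delta
    a[:, 0] = delta
    return a

blocks = [6, 6, 4, 4, 4]
a = icm_matrix(blocks, delta=1.0 / (1 + sum(blocks)))
eigvals = np.sort(np.linalg.eigvalsh(a))[::-1]
print(np.round(eigvals[:5], 2))   # close to the block sizes 6, 6, 4, 4, 4
```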
For the case of $\lambda_1$, we can also prove that $r_i \ge r_j$ when $j>i$. This follows from the fact that all of the entries of $A$ are non-negative and the matrix is irreducible (every block is linked through the egocentric node), so the power method converges to the dominant eigenvector. As the power method converges regardless of the initial starting vector, we can take a vector with all non-negative entries as our initial guess, and the power method iteration will preserve this property. This implies that the magnitude of $\mathbf{x}_1(i)$ increases with the size of the clique to which the vertex $i$ belongs. This is important to the egonet sparsification algorithm we discuss in the next section.
We are now in a position to define the edge descriptor sets associated with each node in our approach. Each edge descriptor set is comprised of densely connected subnetworks of the egonet. Formally, we define the densely connected subnetworks as follows. We begin by extracting the node's egonet from the network, and form its corresponding adjacency matrix. Using the methods described in the next section, we sparsify this adjacency matrix so that the remaining connections closely resemble the structure of an ICM-matrix. Next, we compute the eigenvectors associated with the largest positive eigenvalues of the sparsified matrix. We then use these eigenvectors to embed the egonet into a metric space, treating the value of each eigenvector at a given index as providing a spatial coordinate for the corresponding vertex \citep{Br03}. With each vertex having spatial coordinates, we then apply k-means clustering in order to find clusters of vertices. As each vertex is also connected to the egocentric node, each cluster of vertices represents a cluster of edges to use as a potential edge descriptor set. Finally, before accepting a cluster as an edge descriptor set, we additionally check that the set of nodes involved has 90\% or greater edge density between them to ensure they approximately form a clique. Although these edge descriptor sets will often be referred to as "cliques" for simplicity throughout this paper, we only require them to be very densely connected rather than fully connected.
To determine both the number of eigenvectors to use for the spectral embedding and the number of clusters, we use the following set of heuristics. Presumably, the largest cliques a node belongs to are the most important ones to accurately capture for describing that node. As larger cliques correspond to larger eigenvalues in an ICM-matrix, we estimate the number of coordinates for the spectral embedding and the number of clusters as the number of eigenvalues that are greater than one tenth of the largest eigenvalue. This allows us to recover all the largest cliques an egocentric node belongs to, while guarding against involving near nullspace vectors for the clustering process.
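Putting the pieces of this section together, the extraction step can be sketched compactly. This is our reading of the procedure, not the authors' code: the tiny Lloyd-style k-means with farthest-point initialization stands in for any standard implementation, and the eigenvalue cut and 90\% density threshold follow the heuristics above.

```python
import numpy as np

def edge_descriptor_sets(a, density_min=0.9, n_iter=20):
    """Sketch: embed a (sparsified) egonet adjacency matrix with the
    eigenvectors whose eigenvalues exceed one tenth of the largest one,
    k-means the vertices in that space, and keep clusters whose internal
    edge density is at least density_min.  Index 0 is the egocentric node."""
    vals, vecs = np.linalg.eigh(a)
    keep = vals > 0.1 * vals.max()
    coords = vecs[:, keep]              # spectral embedding of the vertices
    k = int(keep.sum())                 # heuristic number of clusters
    # Farthest-point initialization, then plain Lloyd iterations.
    centers = coords[[0]]
    for _ in range(1, k):
        d = np.linalg.norm(coords[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, coords[d.argmax()]])
    for _ in range(n_iter):
        d = np.linalg.norm(coords[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = coords[labels == j].mean(axis=0)
    sets = []
    for j in range(k):
        members = [i for i in np.flatnonzero(labels == j) if i != 0]
        if len(members) < 2:
            continue
        sub = a[np.ix_(members, members)]
        density = (sub.sum() - np.trace(sub)) / (len(members) * (len(members) - 1))
        if density >= density_min:
            sets.append([int(i) for i in members])
    return sorted(sets)

# Ideal two-clique egonet: blocks {1..4} and {5..8} around egocentric node 0.
demo = np.zeros((9, 9))
demo[1:5, 1:5] = 1.0
demo[5:9, 5:9] = 1.0
demo[0, :] = 0.01
demo[:, 0] = 0.01
print(edge_descriptor_sets(demo))
```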
\subsubsection{Matrix Sparsification Algorithm}
\begin{figure}[h!]
\begin{center}
\subfigure [ Egonet for node A] { \label{fig:SparsificationAlgorithmExplained} \includegraphics [width=0.23\textwidth] {Fig6a.png} }%
\subfigure [Sub-egonet of A for node B] { \label{fig:SparsificationAlgorithmExplained_SubEgo} \includegraphics [width=0.23\textwidth] {Fig6b.png} }%
\end{center}
\caption{ a) The egonet of the node A is formed by the union of the gray and black nodes and corresponding edges. However, the cliques lying within each community are not completely disjoint, as the red edge connects node B in the black community to node C in the gray community. b) An example of a sub-egonet, where the black nodes and edges represent the sub-egonet of A for sub-egocentric node B.}
\end{figure}
In practice, it is unrealistic to expect to encounter egonets corresponding directly to the ideal community member matrices described in the previous section. A more realistic assumption is that the cliques falling in different communities have some random edges connecting them, as in Figure \ref{fig:SparsificationAlgorithmExplained}. However, adding such random connections to ICM-matrices can drastically alter their eigenspace properties, so this problem must be addressed.
To this end, we develop a method capable of removing such connections with high accuracy. This is accomplished by considering each node present in the egonet, and removing any edges that are not components of the largest clique(s) the node belongs to. To achieve this goal we propose a spectral technique that exploits the important observation made in the previous section: the magnitude of each entry of the dominant eigenvectors increases with the size of the clique that the corresponding node belongs to.
We will call a node that is not the original egocentric node a sub-egocentric node, and define a sub-egonet to be the egonet for a sub-egocentric node restricted to the set of nodes and edges represented in the egonet under consideration; see Figure \ref{fig:SparsificationAlgorithmExplained_SubEgo} for an example. To determine the largest clique(s) each sub-egocentric node belongs to, we extract its sub-egonet and approximate the dominant eigenvector of the corresponding adjacency matrix. As larger values in this eigenvector correspond to nodes that are members of larger cliques, we remove all edges connecting the sub-egocentric node to a node whose entry in the dominant eigenvector is below half of the maximum for the eigenvector. Figure \ref{alg:SparsifyEgo} describes the egonet sparsification algorithm.
\begin{figure}[h]
\makebox{\bfseries \sffamily Egonet Sparsification Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item node $v$
\item EGO($v$), the egonet of $v$
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{itemize}
\item Repeat steps 1 to 4 until the egonet's adjacency matrix no longer changes or a maximum number of iterations has been reached.
\item Repeat steps 1 to 3 for each node $u$ in EGO($v$).
\end{itemize}
\begin{enumerate}
\item For each node, $u$, in EGO($v$) such that $u \neq v$, extract its sub-egonet, SEGO($u$).
\item Scale the entries of the adjacency matrix for SEGO($u$) by $\delta = 1/N_{sego}$, where $N_{sego}$ is the number of nodes in SEGO($u$). Call this matrix $A(u)$.
\item Approximate the dominant eigenvector of $A(u)$ using a small number of iterations, $O(10)$, of the power method. If a node's entry in the eigenvector has a magnitude below $1/2$ of the maximum, remove the edge connecting it to $u$ from EGO($v$).
\item Symmetrize EGO($v$) by removing any edges that became directed through the sparsification process.
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Sparsified EGO($v$).
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Sparsification of egonet links
\label{alg:SparsifyEgo}}
\end{figure}
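As a rough illustration of the egonet sparsification algorithm, the following Python sketch follows steps 1--4 above. The function name, the 0/1 numpy representation, and the iteration caps are assumptions on our part:

```python
import numpy as np

def sparsify_egonet(A, ego=0, max_iter=5):
    """Sketch of the egonet sparsification algorithm.  A is the
    symmetric 0/1 adjacency matrix of EGO(v), with the egocentric
    node at index `ego`.  Names are illustrative."""
    A = A.astype(float).copy()
    for _ in range(max_iter):
        prev = A.copy()
        for u in range(A.shape[0]):
            if u == ego:
                continue
            # Step 1: sub-egonet of u restricted to EGO(v).
            members = np.union1d(np.flatnonzero(A[u] > 0), [u])
            if len(members) < 2:
                continue
            # Step 2: scale by 1/N_sego (direction of the dominant
            # eigenvector is unchanged by this scaling).
            sub = A[np.ix_(members, members)] / len(members)
            # Step 3: O(10) power iterations for the dominant eigenvector.
            x = np.ones(len(members))
            for _ in range(10):
                x = sub @ x
                x /= np.linalg.norm(x)
            # Remove u's edges to nodes whose entry is below half the max.
            weak = members[x < 0.5 * x.max()]
            A[u, weak] = 0.0
        # Step 4: symmetrize, dropping edges that became directed.
        A = np.minimum(A, A.T)
        if np.array_equal(A, prev):
            break
    return A
```

For example, with an egonet containing a planted 4-clique and a planted 6-clique joined by one spurious edge, the spurious edge is pruned from the smaller clique member's side (its eigenvector entry falls below half the maximum in the 6-clique's sub-egonet) and the symmetrization step then removes it entirely.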
Once the matrix has been sparsified, it can be analyzed using the spectral algorithm described in the previous section to generate the edge descriptor sets for the egocentric node. The computational cost of this process is dominated by the cost of implementing the spectral algorithm, which is driven by the cost of calculating the eigenvectors of the sparsified ICM-matrix with positive eigenvalues. Determining these eigenvectors costs $O(k^2)$ for a node of degree $k$ and thus $O(N~\overline{k^2})$ to generate them for an entire network involving $N$ nodes.
\subsubsection{Experimental Validation of the Sparsification Algorithm}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig8.png}
\end{center}
\caption{Example of an ICM-matrix with varying clique sizes and random connections added. Each dot represents non-zero components of the matrix.}
\label{fig:CMatrixPreSparse}
\end{figure}
Our initial set of tests involve ICM-matrices that are perturbed by adding in random connections between nodes with probability $p$. We use cliques composed of 4, 6, and 8 members, and plant two cliques of each size in the ICM-matrix. The value of $p$ is set according to Equation \eqref{eq:ChoiceOfP}, so that members of the smallest cliques have as many expected outlinks as they have inlinks.
\begin{equation}
p = \frac{\text{Number~of~Nodes~in~Smallest~Clique}}{\text{Total~Number~of~Nodes~Outside~Smallest~Clique}}
\label{eq:ChoiceOfP}
\end{equation}
\noindent An example of such matrices is shown in Figure \ref{fig:CMatrixPreSparse}.
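To make the choice of $p$ concrete, the worked arithmetic for the setup above (two planted cliques each of sizes 4, 6, and 8) is:

```python
# Worked instance of the choice of p: two planted cliques each of
# sizes 4, 6, and 8 give 36 nodes in total.
clique_sizes = [4, 4, 6, 6, 8, 8]
n_total = sum(clique_sizes)               # 36 nodes in the ICM-matrix
n_smallest = min(clique_sizes)            # 4 nodes in the smallest clique
p = n_smallest / (n_total - n_smallest)   # 4 / 32 = 0.125
```

A smallest-clique member then has $3$ inlinks and $p \times 32 = 4$ expected outlinks, so its expected outlinks roughly match its inlinks.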
Our findings show that the sparsification algorithm yields either an ICM-matrix or a small perturbation thereof when the random connections do not alter the planted disjoint clique structure. Even when the planted clique structure is altered through relatively large cliques developing via random connections, we find that the algorithm still reliably recovers the largest planted cliques.
Figure \ref{fig:ICMatrixWin} shows a typical example of a successful result: the algorithm correctly removes all of the random connections. Figure \ref{fig:ICMatrixFail} exemplifies the cases where the algorithm fails to return the ideal planted partition. Such cases correspond to a scenario where the random connections (colored in red in Figure \ref{fig:ICMatrixFail_PreSparse}) generate cliques comparable in size to the original cliques that the nodes belong to, thus altering the clique structure that was originally planted. When a node belongs to multiple cliques of approximately the same size, the connections will all remain so long as none overlap with a clique that is substantially (50\% or greater) larger in size, and the connections will all be pruned otherwise. Both of these cases can be observed in Figure \ref{fig:ICMatrixFail_PostSparse}. As the intention of the algorithm is to recover the largest disjoint cliques an egocentric node belongs to, neither of these outcomes is actually undesirable, but they make determining the accuracy of the algorithm problematic.
\begin{figure}[h!]
\subfigure [Pre-sparsification] { \label{fig:ICMatrixWin_PreSparse} \includegraphics [width=0.25\textwidth] {Fig9a.png} }%
\subfigure [Post-sparsification] { \label{fig:ICMatrixWin_PostSparse} \includegraphics [width=0.25\textwidth] {Fig9b.png} }%
\caption{An example of the resulting matrix after a successful sparsification. a) Matrix before a successful sparsification. b) Matrix after a successful sparsification. See text for discussion.}
\label{fig:ICMatrixWin}
\end{figure}
\begin{figure}[h!]
\subfigure [ Pre-sparsification] { \label{fig:ICMatrixFail_PreSparse} \includegraphics [width=0.25\textwidth] {Fig10a.jpg} }%
\subfigure [Post-sparsification] { \label{fig:ICMatrixFail_PostSparse} \includegraphics [width=0.25\textwidth] {Fig10b.jpg} }%
\caption{An example of the resulting matrix after a failed sparsification. a) Matrix before an "unsuccessful" sparsification. b) Matrix after an "unsuccessful" sparsification. See text for discussion.}
\label{fig:ICMatrixFail}
\end{figure}
To sidestep these difficulties, we tested the performance of the algorithm using cliques of fixed size (10 members) and increased the expected number of outlinks to other cliques from 0 to 30 (three times the number of inlinks). We measured the accuracy of the sparsification algorithm as the percentage of the random connections created during the perturbation process that were properly removed. As can be seen in Figure \ref{fig:CMatrixPostSparse}, the algorithm performs remarkably well, maintaining 100\% accuracy well past the point where there are more outlinks than inlinks in either case.
\begin{figure}[h]
\subfigure [ Five Groups of Ten] {
\begin{minipage}[c][0.75\width]{
0.5\textwidth}
\centering%
\label{fig:FiveGroupsTenMembers}
\includegraphics [width=1\textwidth] {Fig11a.png}
\end{minipage}}
\subfigure [ Ten Groups of Ten] {
\begin{minipage}[c][0.75\width]{
0.5\textwidth}
\centering%
\label{fig:TenGroupsTenMembers}
\includegraphics [width=1\textwidth] {Fig11b.png}
\end{minipage}}
\caption{The connection removal accuracy of the algorithm, as a function of the expected ratio of the number of inlinks between a given node and the members of its planted clique and the number of outlinks between a given node and members of other planted cliques. a) This test is carried out on an ICM-matrix composed from five disjoint cliques with ten members each. b) The same test carried out in (a), but with the number of cliques doubled.}
\label{fig:CMatrixPostSparse}
\end{figure}
\subsection{Community Scale Features of Community Structure}
\label{sec:CommunityLevelPerspective}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig12.png}
\end{center}
\caption{At the community level, the goal is to find collections of edge descriptor sets that form a community. For the black and gray communities, we need to differentiate the sets of gray and black edges from the set of white edges. }
\label{fig:AlgorithmExplainedCommunityLevel}
\end{figure}
Once the edge descriptor sets have all been extracted, we need a way to stitch these localized features together to form communities. To do this, we use quantitative features relevant to the scale of communities to agglomerate these edge sets.
As is the case for the network depicted in Figure \ref{fig:AlgorithmExplainedCommunityLevel}, a reasonably general qualitative feature to expect communities to possess is that they have an edge density substantially higher than that of the network as a whole, and we note that this is not a property which is accurately represented at the level of individual nodes' edge descriptor sets. To account for this property, we use link density in order to agglomerate edge descriptor sets to form communities.
Let $C$ denote a set of nodes, and $E(C)$ denote the set of edges between all members of $C$. Given a set of $n$ vertices, the maximum number of edges is $n(n-1)/2$ (for a clique). We define the edge density as,
\begin{equation}
\rho (C) = \frac{|E(C)|}{\frac{1}{2} |C| (|C| - 1)}.
\label{eq:LinkDensity}
\end{equation}
Our method for community formation is based on satisfying $\rho (C) = D$ where $0\leq D \leq 1$ is a user supplied inlink density and the number of nodes involved, $|C|$, is as large as possible. The reason $D$ is taken as an input is because the appropriate value is dependent on the scale of the structures one is trying to extract from the network; lower threshold densities correspond to looser notions of what it means to be a community and higher threshold densities correspond to tighter ones.
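A minimal helper for the inlink density $\rho(C)$ of Equation \eqref{eq:LinkDensity} might look as follows; the edge-list representation and function name are our own:

```python
def rho(edges, C):
    """Inlink density rho(C): fraction of the n(n-1)/2 possible edges
    among the node set C that are present.  `edges` is an iterable of
    undirected (u, v) pairs; names are illustrative."""
    C = set(C)
    internal = sum(1 for u, v in edges if u in C and v in C)
    n = len(C)
    return internal / (n * (n - 1) / 2)
```

For a 4-clique with one pendant node attached, the clique itself has $\rho = 1$, while including the pendant node drops the density to $7/10 = 0.7$.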
We take a greedy algorithmic approach to expanding communities, where the nodes involved in a new edge descriptor set are added to the community if they decrease $\rho$ by the minimal amount while still staying above the user supplied threshold. Although this is not the most principled optimization approach from a mathematical standpoint, it preserves the intuitive notion of communities forming as a diffusive process of individual perspectives on the community structure of the network, similar to percolation of edge descriptor sets \citep{Pal05,FriendshipGroup10} or variations of label passing \citep{Raghavan07, GraphSwarm12}. An outline of the community expansion algorithm is given in Figure \ref{alg:CommunityFormation}.
\begin{figure}[h]
\makebox{\bfseries \sffamily Community Formation Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item All edge descriptor sets detected on the network.
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{enumerate}
\item Start with largest edge descriptor set that remains unclustered as a community base.
\item Find the edge descriptor set that would cause the minimum reduction in inlink density for the community being formed.
\item If the inlink density would remain above the user supplied density threshold, add the descriptor set to the community being formed.
\item Repeat Steps 2 and 3 until no edge descriptor set satisfies the density constraint.
\item Repeat from Step 1 until no edge descriptor sets remain unclustered.
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Initial set of communities.
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Community formation algorithm
\label{alg:CommunityFormation}}
\end{figure}
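The greedy expansion in Figure \ref{alg:CommunityFormation} can be sketched as follows; the edge-list representation and all names are illustrative assumptions rather than the authors' code:

```python
def form_communities(edges, descriptor_sets, D):
    """Greedy sketch of the community formation algorithm: grow each
    community from the largest unclustered edge descriptor set, adding
    the set whose inclusion keeps rho(C) highest, while rho(C) >= D."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def rho(C):
        # Inlink density of the node set C.
        n = len(C)
        if n < 2:
            return 1.0
        m = sum(len(adj.get(u, set()) & C) for u in C) / 2
        return m / (n * (n - 1) / 2)

    pending = sorted((set(s) for s in descriptor_sets),
                     key=len, reverse=True)
    communities = []
    while pending:
        C = pending.pop(0)  # largest unclustered set seeds a community
        while True:
            # Candidate causing the minimum reduction in inlink density.
            best = max(pending, key=lambda s: rho(C | s), default=None)
            if best is None or rho(C | best) < D:
                break
            C |= best
            pending.remove(best)
        communities.append(C)
    return communities
```

On two 4-cliques joined by a single bridge edge, with density threshold $D = 0.8$, the overlapping descriptor sets within each clique merge while the bridge is not enough to merge the two cliques.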
We conclude this section with a brief cost analysis of the community formation process. Let $S$ be the set of nodes involved in a potential edge descriptor set to add to a forming community, $C$. Let $N_C$ represent the total number of nodes that either belong to $C$ or are connected to one of the nodes in $C$ at any given stage of the community formation process. Each update to $\rho(C)$ involves a sweep over all edge descriptor sets for each of the $N_C$ nodes. Adding a potential edge descriptor set involves checking the density of the original adjacency matrix for the original network restricted to the indices representing nodes involved in the potential updated version of $C$. Because this resulting density can be written in terms of a sum of the link density before the addition and the link density of the addition, the only substantial computational cost to check a potential update comes from calculating the density of the addition, a calculation that has a computational cost of $O(N_C)$ flops so long as $|S| \ll N_C$. As this must be carried out for each of the $N_C$ nodes involved in $C$, the cost of each update is proportional to $N_C^2$. This implies that the computational cost to extract the entire community is bounded above by a constant multiple of $N_{com}^2$, where $N_{com}$ is the total number of nodes involved in that community. Since this must be carried out for each community in the network, the total computational cost for the community formation process is proportional to $O(\overline{N_{com}^2})$, the average of the square of community sizes.
\subsection{Network Scale Features of Community Structure}
\label{sec:NetworkLevelPerspective}
Lastly, we must address the question of what properties a collection of communities should have at the level of the entire network. The answer to this question depends on the expectations of the researcher applying the algorithm, but as general heuristics we will require that there are no unclustered nodes in the network and that only the minimal number of communities required to provide a cover for the set of nodes in the network will be returned.
\begin{figure}[h]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig14.png}
\end{center}
\caption{ At the network level, the goal is to make sure that the collection of communities returned by the algorithm satisfy criteria expected by the researchers using it. For the network depicted above, a natural question is what to do with the nodes that do not decisively belong to any communities detected.}
\label{fig:AlgorithmExplainedNetworkLevel}
\end{figure}
Given the initial set of communities formed using the method described in the preceding section, we force all nodes to belong to at least one community by taking any nodes that remain unclustered after the community formation process and assigning them to the community they share the most edges with. This process is carried out iteratively if needed. We then build a cover for the network out of the communities detected, starting with the largest as an initial element in the set. Communities are then successively added to the set comprising the cover based on having the lowest percentage overlap with the current cover. When multiple communities are tied for the lowest percentage, larger communities are given preference over smaller ones. This process is carried out until every node in the network is represented in at least one of the communities in the set.
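The cover construction described above might be sketched as follows; the function name is ours, and tie-breaking follows the preference for larger communities stated in the text:

```python
def build_cover(communities, all_nodes):
    """Sketch of the network-level cover construction: seed with the
    largest community, then repeatedly add the community with the
    lowest fractional overlap with the current cover (ties broken in
    favour of larger communities) until every node is covered."""
    remaining = sorted((set(c) for c in communities),
                       key=len, reverse=True)
    cover = [remaining.pop(0)]          # largest community seeds the cover
    covered = set(cover[0])
    while covered != set(all_nodes) and remaining:
        # Lowest percentage overlap first; -len(c) prefers larger sets.
        best = min(remaining,
                   key=lambda c: (len(c & covered) / len(c), -len(c)))
        remaining.remove(best)
        cover.append(best)
        covered |= best
    return cover
```

For instance, given communities $\{0,1,2,3\}$, $\{2,3,4\}$, and $\{3,4\}$ on five nodes, the cover starts with $\{0,1,2,3\}$ and then adds $\{3,4\}$ (overlap $1/2$) rather than $\{2,3,4\}$ (overlap $2/3$).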
\section{Experiments}
We validate our approach on four different datasets. We first consider synthetically generated networks, coming from the planted $l$-partition and LFR benchmark tests. The other two datasets are based on graphs from real world social networks, one the famous Zachary Karate Club network \citep{Zachary77} and the other a high school friendship network \citep{Xi13}.
We view the task of community detection as querying a given network for the community structures present; therefore, we use the F-score to assess the quality of our community detection algorithm. The F-score of a result is a common measure used to gauge the quality of information retrieval applications, and is defined by Equations \eqref{eq:Precision} through \eqref{eq:Fscore}.
\begin{equation}
\resizebox{1.0 \hsize}{!} {$Precision = \frac{| \{\text{Gold~Standard~Community} \} \cap \{\text{Detected~Community} \} | }{ | \{\text{Detected~Community} \} | } .$}
\label{eq:Precision}
\end{equation}
\begin{equation}
\resizebox{1.0 \hsize}{!} {$ Recall = \frac{| \{\text{Gold~Standard~Community} \} \cap \{\text{Detected~Community} \} | }{ | \{\text{Gold~Standard~Community} \} | } . $}
\label{eq:Recall}
\end{equation}
\begin{equation}
F = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} .
\label{eq:Fscore}
\end{equation}
The elements of each set are taken as the nodes involved in the community, and the precision and recall values we report are taken as the average precision and recall values when each planted community is paired with the detected community with the highest F-score. The benefit of using this metric to assess our algorithm is that the precision scores reflect the quality of our choice for edge descriptor sets, and the recall scores reflect the quality of our choices for community formation and desired network level properties.
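The matching procedure just described can be sketched in a few lines; community representation and names are our own, and the overall F-score below is computed from the averaged precision and recall:

```python
def f_scores(gold, detected):
    """Average precision and recall when each gold-standard community
    is paired with the detected community of highest F-score;
    communities are sets of node ids (names are illustrative)."""
    def prf(g, d):
        inter = len(g & d)
        p = inter / len(d) if d else 0.0   # precision
        r = inter / len(g) if g else 0.0   # recall
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f
    # Pair each gold community with its best-F detected community.
    matched = [max((prf(set(g), set(d)) for d in detected),
                   key=lambda t: t[2]) for g in gold]
    precision = sum(t[0] for t in matched) / len(matched)
    recall = sum(t[1] for t in matched) / len(matched)
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```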
For the high school and LFR tests, we additionally calculate the extended notion of normalized mutual information (NMI) \citep{La09,La09a} between the two sets of community labels in order to measure the algorithm's performance. The motivation for including this measure for these tests is that this was the performance metric used in \citep{Xi13}, and allows one to roughly compare our algorithm to a host of others. Although the extended notion of normalized mutual information slightly differs from what is standardly called NMI, we will refer to this extended version as just NMI in this paper for the sake of simplicity. An NMI score of 1 implies that the labels correlate perfectly, and an NMI score of zero indicates that the labels have no correlation with each other. We refer the reader to \citep{La09} for a detailed technical description of normalized mutual information.
\subsection{Planted $l$-Partition Benchmark Tests}
Our first set of synthetic network tests are planted $l$-partition tests. These are standard benchmark tests \citep{GN2001} that create randomly generated networks with planted communities, where nodes in the same community have a higher probability to be connected than nodes in differing communities. The test involves fixing the expected degree of a node, and increasing the expected proportion of those links which are outlinks to other communities. Not only does this make the boundary between communities less well defined because there are more links between communities, but it also makes communities less well defined by decreasing their inlink density. Because our algorithm is built to detect communities that can overlap, we again use F-scores to measure the quality of the results instead of the more standard metric of recovering the planted partitioning of nodes. The motivation for using F-scores is that there is not a one-to-one correspondence between sets of correctly partitioned nodes and correctly identified communities when it is assumed that communities can overlap with one another.
We conduct several versions of the planted $l$-partition test: four groups of 32 members, eight groups of 32 members, four groups of 64 members, and eight groups of 64 members. For all of these tests, the expected degree per node is set to be equal to half of the total number of members in each planted community. The first test with four groups of 32 is the most standard, and allows one to roughly compare our algorithm against a host of others \citep{DaEtAl05}. The remaining tests provide a controlled setting to demonstrate how the algorithm's performance substantially improves when there are more communities and/or larger communities, which more accurately reflects the types of networks the algorithm was intended for. The precision, recall, and F-scores for this series of tests are presented in Figures \ref{fig:PL_Precision}-\ref{fig:PL_F}, where each data point represents the averaged result computed over twenty independent realizations of the random network. Each test is denoted by [Number of Groups]g[Number of Members per Group]. Note that the algorithm's performance increases in all respects with either increased community sizes or number of communities, and the F-scores for the test involving eight groups of 64 still remains above 0.95 even in the case where half of the links for any node are expected to be outlinks.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig15.png} %
\end{center}
\caption{ Precision scores for the planted $l$-partition tests.}
\label{fig:PL_Precision}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig16.png} %
\end{center}
\caption{ Recall scores for the planted $l$-partition tests.}
\label{fig:PL_Recall}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig17.png} %
\end{center}
\caption{ F-scores for the planted $l$-partition tests. }
\label{fig:PL_F}
\end{figure}
\subsection{LFR Benchmark Tests}
The second set of synthetic network tests consists of the LFR benchmark tests \citep{La09b} which are designed to construct synthetic networks with built-in community structures. LFR networks are a variation of the planted $l$-partition model, where nodes are no longer required to all have the same expected degree and communities can come in varying sizes. Although more general versions of the test exist in which edge weights and directions are also considered, we only focused on the extended version of the test that allows for nodes to be members of multiple communities. Nodes belonging to multiple communities will be referred to as \textit{overlapping nodes}, where the total number of overlapping nodes in a given network is referred to as $O_n$ and the number of communities an overlapping node belongs to as $O_m$.
In order to be able to compare the results of our algorithm with the experiments conducted in the recent survey \citep{Xi13}, we use the same parameters as \citep{Xi13} for many of our experiments. Node degree and community sizes are respectively drawn from power law distributions with $\tau_1 = 2$ and $\tau_2 = 1$, the average degree per node is set to $k_{ave} = 10$, and the maximum degree a node can have is set to $k_{max} = 50$. The remaining parameters were varied throughout the tests. We use networks with sizes $N \in \{1000,5000\}$, and community sizes in both a small range $s=(10,50)$ and a large range $b=(20,100)$. The fraction of links through which a node connects to members of other communities is denoted by the mixing parameter $\mu$. As with the planted $l$-partition tests, we randomly generate 20 networks for each set of parameter values and report the average performance.
In the first set of experiments, we keep the maximum number of overlapping communities constant, $O_m =2$. We increase the density of edges between communities: $\mu$ is increased from 0.1 to 0.3 by increments of 0.05. All of the combinations of parameter values for $N$ and community sizes are examined, along with setting $O_n$ to either 10\% or 50\% of the nodes in the network. For this set of tests, we examine the average precision, recall, F-score, and NMI for each set of communities returned by the algorithm compared against those planted by the test. The cut-off density for community expansion is set to $(1-\mu)$ multiplied by the average egonet density of the graph.
As we can see from Figures \ref{fig:VaryMu_N1000s}-\ref{fig:VaryMu_N5000b}, the algorithm's performance again improves as the number and the sizes of the communities increase. Somewhat surprisingly, the algorithm's F-scores also tend to increase with increasing values of the mixing parameter for the low overlap cases, where $O_n = 10\%$ of the total nodes. This apparent paradox can be explained by observing that increasing $\mu$ lowers the edge density within communities. A lower edge density threshold can then be used to merge the cliques and recover the communities, which improves the recall score. Also, because $O_n$ is small, communities still remain well separated, and spurious cliques are not created by the increase in outlinks. However, for the high overlap case, we find that increasing the value of the mixing parameter tends to have only minor effects on the recall scores while significantly impairing the accuracy scores.
The second subset of tests varies $O_m$ from 2 to 8, with $N=5000$, $\mu=0.3$, and $O_n = 10\%$. This set of tests was also conducted on a variety of overlapping community detection algorithms in \citep{Xi13}, but only the NMI was examined in that work. The precision, recall, F-scores, and NMI of our algorithm for this set of tests are presented in Figure \ref{fig:VaryOm_N5000}. Although the NMI of our algorithm on this series of tests is about average with respect to all the algorithms analyzed in \citep{Xi13}, our precision scores are excellent.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig18.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the smaller community size range. $O_n = 10\%$ of the total nodes. }
\label{fig:VaryMu_N1000s}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig19.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the smaller community size range. $O_n = 50\%$ of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig20.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the larger community size range. $O_n$ = 10\% of the total nodes. }
\label{fig:VaryMu_N1000b}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig21.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the larger community size range. $O_n$ = 50\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig22.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the smaller community size range. $O_n$ = 10\% of the total nodes. }
\label{fig:VaryMu_N5000s}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig23.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the smaller community size range. $O_n$ = 50\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig24.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the larger community size range. $O_n$ = 10\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig25.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the larger community size range. $O_n$ = 50\% of the total nodes. }
\label{fig:VaryMu_N5000b}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig26.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $O_m$ for networks with 5000 nodes with 10\% of the total nodes belonging to two different communities, using small community size range distribution. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig27.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $O_m$ for networks with 5000 nodes with 10\% of the total nodes belonging to two different communities, using large community size range distribution. }
\label{fig:VaryOm_N5000}
\end{figure}
\subsection{Zachary Karate Club}
\label{sec:Karate}
Zachary's karate club network is a small social network comprised of the interactions amongst members of a university karate club studied by sociologist Wayne Zachary in the 1970's \citep{Zachary77}. During the period of study, a political issue arose regarding the club's fees which eventually caused the club to fissure into two clubs. The social interactions of the clubs' members outside of the official meetings were examined, and edges between members indicate that they interacted socially outside of the club setting. The ground truth for this test is taken to be which specific club the members joined after the fissure.
This example illustrates a fundamental and intrinsic difficulty with the community detection problem: the definition of a community is problem dependent, and one can only design algorithms that are optimal for certain classes of communities. The communities on this network are defined in terms of who leads them, where the leaders can easily be identified as the two nodes with substantially higher degrees than the average of those they share connections with. This suggests that a node's perspective on community should be defined by the leader(s) it is connected to, and that the community scale features are defined by the perspective of its leader node. If one were earnestly interested in solving community detection problems of this type, a very simple approach would be to take the edges involving the two nodes of highest degree as edge descriptor sets, agglomerate these based on which leader node is involved to form communities, and then assign any unclustered nodes to the community they have the most links to. We have implemented this idea, and it yields a perfect recall value for each group, with both F-scores above 0.94 for the given gold standard groupings.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig28.png}
\end{center}
\caption{The Zachary karate club network. The gold standard grouping for a node is given by its shape, and its found grouping is given by its color(s). }
\label{fig:OriginalKarateClubNetwork}
\end{figure}
Although this type of community structure is not at all what is intended for our algorithm to detect, it is a standard enough test to warrant seeing how it performs nonetheless. In order to apply our algorithm to this network, we first need to get an initial estimate of what the community density should be. To this end, we examine the edge density of the egonets for each node to get a local understanding of the average edge density of the network. Finding that the average egonet link density is 78.2\%, we then set the community density to 3/4 of that in order to hold the communities to looser standards. This results in the three clusters of nodes given below, with the precision, recall, and F-scores for these groups presented in Table \ref{tab:Karate}.
\noindent Group 1: ~ 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, \\
\indent \indent 14, 17, 18, 20, 22
\noindent Group 2: ~ 3, 9, 10, 19, 21, 23, 24, 25, 26, 28, 29, \\
\indent \indent 31, 32, 33, 34
\noindent Group 3: ~ 15, 16, 24, 27, 30, 33, 34
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Precision & Recall & F-Score \\ \hline
Group 1 & 0.94 & 1.0 & 0.97 \\ \hline
Group 2 & 0.93 & 0.78 & 0.85 \\ \hline
Group 3 & 1.0 & 0.39 & 0.56 \\ \hline
\end{tabular}
\caption{The precision, recall, and F-scores for the detected communities on the karate club network. }
\label{tab:Karate}
\end{center}
\end{table}
As we can see, the communities produced by the algorithm cause the gold standard grouping, denoted by circles, to be split into two groups. The reason for this splitting is that the network is mainly composed of two subtrees (one for each leader), and therefore the density of connection within each subtree remains low. Our approach, which assumes a more "egalitarian" community structure, is no longer optimal when the network is organized in such a strongly hierarchical way. Splitting the gold standard group allows these new groups to have higher edge densities of 34\% and 63\%, whereas the community given as the gold standard has an edge density of only 27\%. Thus, although the division is not desired for this particular gold standard grouping, it is still a sensible one with respect to the notion of community our algorithm is designed to capture.
\subsection{High School Friendship Network}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig29.png}
\end{center}
\caption{The high school friendship network examined in Xie et al. 2013 \citep{Xi13}. The ground truth for this network is reflected by the color coding of the nodes, and the found grouping for each node is reflected by the color(s) of the square surrounding it.}
\label{fig:HighSchoolFriendshipNetwork}
\end{figure}
We now describe the second real-world network, used as a benchmark in a recent study that evaluated state of the art algorithms for detecting overlapping communities \citep{Xi13}. The dataset is part of the National Longitudinal Study of Adolescent to Adult Health \footnote{http://www.cpc.unc.edu/projects/addhealth/}. The network is composed of high school students, where the links between students come from self-reported connections and the gold standard partitioning of the network is taken as the grades (7 through 12) the students belong to. Although the ground truth is taken as six communities, it is understood that the friendship connections for grade 9 demonstrate that the grade can be split into two distinct subgroups with one group composed of black students and the other white students, as can be inferred from Figure \ref{fig:HighSchoolFriendshipNetwork}.
Our approach to this network is the same as the karate club network discussed in the previous section. We estimate the desired community density by examining the average local edge density coming from each node's egonet, and set the community link density to 3/4 of that density. For this network, the average egonet link density is found to be 67.0\%, so the community density threshold is set to 50.3\%. We then take any nodes that remain unclustered after the community formation process, and assign them to the community they have the most links to. The performance of our algorithm on this network is presented in Table \ref{tab:HighSchool}.
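The threshold used in both of these experiments can be computed directly from the adjacency structure. Below is a minimal sketch, assuming an adjacency-list representation; the function names are ours, and the 3/4 scaling factor is the one used in the text:

```python
def egonet_density(adj, v):
    """Edge density among v's neighbors (the egocentric node excluded)."""
    nbrs = adj[v]
    n = len(nbrs)
    if n < 2:
        return 0.0
    # Each internal edge is seen from both endpoints, hence the /2.
    edges = sum(len(adj[u] & nbrs) for u in nbrs) / 2
    return edges / (n * (n - 1) / 2)

def community_density_threshold(adj, factor=0.75):
    """Average egonet link density over all nodes, scaled by `factor`
    (the text uses 3/4) to hold communities to looser standards."""
    avg = sum(egonet_density(adj, v) for v in adj) / len(adj)
    return factor * avg
```

For the high school network this procedure yields the 67.0\% average egonet density and the 50.3\% threshold quoted above.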
\begin{table}[h]
\begin{center}
\begin{tabular}{ccc}
\hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Number of\\ Communities\end{tabular}} & \multicolumn{1}{c|}{Overlapping Nodes} & \multicolumn{1}{c|}{NMI} \\ \hline
\multicolumn{1}{|c|}{8} & \multicolumn{1}{c|}{20} & \multicolumn{1}{c|}{0.52} \\ \hline
& & \\ \hline
\multicolumn{1}{|c|}{Precision} & \multicolumn{1}{c|}{Recall} & \multicolumn{1}{c|}{F-Score} \\ \hline
\multicolumn{1}{|c|}{0.79} & \multicolumn{1}{c|}{0.82} & \multicolumn{1}{c|}{0.80} \\ \hline
\end{tabular}
\caption{The performance of our algorithm on the high school friendship network. }
\label{tab:HighSchool}
\end{center}
\end{table}
Despite its low NMI score, the groupings found by the algorithm are sensible ones. It accurately detects the sub-division in grade 9, as well as the set of nodes falling in grades 9 and 10 that are densely connected to one another. Additionally, if one examines the nodes which are "incorrectly" labelled (e.g. 0, 42, and 63), it is readily apparent that the gold standard groupings do not accurately represent the nature of how these nodes are connected to the network.
\section{Conclusion and Discussion of Results}
This work has focused on developing a computationally inexpensive algorithm capable of detecting overlapping communities in social networks. A novel feature of how we approach the problem is that we define the community structure we are trying to capture based on how that structure would appear at differing scales. The scales specifically considered in this paper are: the scale of individual nodes, the scale of individual communities, and the scale of the network as a whole. Using the models developed in this work for each of these three scales, we find that applying our algorithm to benchmark tests demonstrates good overall performance. This performance improves as either the number of communities or their sizes increase.
One advantage of our methodology is that it explicitly accounts for multiscale features during the community formation process. This aspect of our approach ensures that the detected communities are always sensible ones with respect to those features. Another distinct advantage of our method is that the way we quantify the features at each scale and tie them together is highly modular. This allows for the mathematical model of the community structure at any specific scale to be swapped out as appropriate based on the nature of a specific community detection problem.
Future work will focus on further developing the methodology used in our algorithm. One facet meriting further attention is to take advantage of the modularity of our algorithm to incorporate models of alternative features of community structure. The potential advantage of this was demonstrated in Section \ref{sec:Karate}, where detecting the leader based communities of the karate club network became trivialized by modeling community features as appropriate to the problem. Another avenue to explore is the possibility of chaining together sequences of node versus community level features, where community scale features are treated as node scale features at each higher link in the chain. This will allow us to incorporate detection of hierarchical community structures into our algorithm, and further increase its flexibility.
\bibliographystyle{plainnat}
\section{Introduction}
A number of real-world systems are mathematically represented by graphs, where nodes represent agents in the system and edges represent connections between agents. A common feature of interest in such systems comes from finding groups of nodes that can be considered as communities within the network. Although community detection is often referred to as though it is a single problem, a more accurate description is that it is a body of related problems, owing to the fact that the notion of what it means to be a "community" involves different concepts in different contexts.
When adopting the view that communities are not simply defined by nodes but the connections between nodes, a sensible approach to detecting communities is to cluster the edges present in the network. The first paper to explicitly acknowledge this point of view was \citep{EvLa09}, where communities are formed by partitioning the line graph of the network. However, edge clustering methods predate this by several years. Although it was not viewed in terms of clustering edges at the time of its publication, clique percolation \citep{Pal05} is perhaps one of the earliest examples of such methods, as cliques can be equivalently described as sets of completely interconnected nodes or edges.
In the clique percolation method, each node is described by the cliques it is a member of, and these cliques serve as the "atoms" from which to build community "molecules". The cliques provide a mechanism for representing a localized feature of a community structure. These localized features are then agglomerated to carve out communities by what the authors of \citep{Pal05} call a "percolation" process. This process consists of repeatedly adding all other cliques to the community that differ by (at most) one node from a clique already present in the community.
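The percolation process can be sketched as follows, assuming the $k$-cliques have already been enumerated. Note that two $k$-cliques differ by at most one node exactly when they share at least $k-1$ members; the merging strategy shown is a straightforward interpretation of \citep{Pal05}, not a reference implementation:

```python
def percolate(cliques, k):
    """Clique percolation: repeatedly merge k-cliques that differ by at
    most one node (i.e. share at least k-1 members) into communities.
    `cliques` is an iterable of k-node sets."""
    comms = []  # each community is a set of frozenset cliques
    for c in cliques:
        c = frozenset(c)
        # Every community containing a clique adjacent to c gets merged.
        hits = [comm for comm in comms
                if any(len(c & q) >= k - 1 for q in comm)]
        merged = {c}
        for comm in hits:
            merged |= comm
            comms.remove(comm)
        comms.append(merged)
    # Report each community as the union of its member cliques.
    return [set().union(*comm) for comm in comms]
```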
A natural generalization of the clique percolation methodology is to derive a node's local affinity for one, or multiple communities from its local perspective of the network itself. In such an approach, it is useful to represent the local neighborhood of each node by an "egonet". An \textit{egonet} is the network restricted to just the set of nodes a given node is connected to, along with all of the edges between the nodes in that set, where the node this net is built around is called the \textit{egocentric node}.
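In code, extracting an egonet from an adjacency-list representation is a one-liner (a sketch; whether the egocentric node itself is included is a matter of convention, and it is excluded here):

```python
def egonet(adj, v):
    """Egonet of v: the subgraph induced on v's neighbors, i.e. the
    neighbor set plus every edge running between those neighbors."""
    nbrs = adj[v]
    return {u: adj[u] & nbrs for u in nbrs}
```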
This generalization serves as the motivation for the collective friendship group inference method \citep{FriendshipGroup10}. With this method, instead of describing each node by the cliques it is a member of, each node is described by what are called friendship circles. A \textit{friendship circle} is any set of edges in an egonet that satisfies the definition of community on the network as a whole when the egocentric node is removed. These friendship circles then play the role of cliques in the aforementioned method, and communities are formed via an equivalent percolation process.
Although our methodology is most closely related to the preceding edge clustering methods, we also incorporate elements based on quality function optimization and spectral clustering \citep{SpectralClusteringOverview, SpectralClusteringComparativeAnalysis}. The interested reader can find a comprehensive overview of these approaches (and many others) in \citep{Fortunato10}, and an analysis of overlapping community detection methods in \citep{Xi13}.
\section{Proposed Approach}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig1.png}
\end{center}
\caption{ Example network with community structure.}
\label{fig:AlgorithmExplainedBaseNetwork}
\end{figure}
Our work focuses on community detection with respect to social networks representable by undirected / unweighted graphs. We assume communities are fundamentally defined in terms of the properties of the edges comprising them, so we approach community detection as an edge clustering problem. As clustering edges naturally allows a node to belong to multiple communities based on how the edges it is connected to are clustered, this conforms well with a common feature of social networks where communities can overlap with one another.
Using the graph depicted in Figure \ref{fig:AlgorithmExplainedBaseNetwork} as a model network, several questions naturally arise on how to describe the community structure present. Firstly, what differentiates edges that connect a node to members of the same community from those that connect it to non-community members; what properties does a community possess at the scale of individual nodes? Secondly, what differentiates the sets of black and gray edges from the white ones; what properties does a community possess at the scale of individual communities? Lastly, what should be done with nodes that do not decisively belong to any particular community; what properties does the set of all communities possess at the scale of the entire network?
These questions confront the multiple scales that intrinsically define communities: the scale of individual nodes, the scale of individual communities, and the scale of the entire network. In this work, we attempt to provide logical answers to these foundational questions in the context of identifying overlapping communities in social networks. Because our approach is highly modular, one can easily modify the specific quantitative model for community structure at each scale as appropriate for a given community detection problem. This opens the door for creating community detection algorithms capable of searching for targeted notions of community that respect the context of the problem.
Let $N$ be the number of nodes in the network, $\overline{N_{com}^2}$ be the average squared community size, and $\overline{k^2}$ be the expected value of a node's degree squared. An overview of our algorithm and the computational cost of each step is presented in Figure \ref{alg:CD}.
\begin{figure}[h]
\makebox{\bfseries \sffamily Community Detection Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item Adjacency matrix for the network
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{enumerate}
\item Detect the sets of edges that will be used to describe each node in the network as described in Section \ref{sec:NodeLevelPerspective}.
\begin{itemize}
\item Cost: $O(N~\overline{k^2})$
\end{itemize}
\item Cluster the edge sets to form communities as described in Section \ref{sec:CommunityLevelPerspective}.
\begin{itemize}
\item Cost: $O(\overline{N_{com}^2})$
\end{itemize}
\item If any nodes in the network remain unclustered, attach them to the community they share the most connections with.
\begin{itemize}
\item Cost: Negligible
\end{itemize}
\item Prune out the smaller communities detected subject to the constraint that all nodes are represented in at least one community, as described in Section \ref{sec:NetworkLevelPerspective}.
\begin{itemize}
\item Cost: Negligible
\end{itemize}
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Sets of nodes comprising communities detected on network.
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Community detection algorithm
\label{alg:CD}}
\end{figure}
\subsection{Node Scale Features of Community Structure: Edge Descriptor Sets}
\label{sec:NodeLevelPerspective}
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig3.png}
\end{center}
\caption{ A depiction of the egonet for the starred node in the network; all of the red nodes/edges are included in the egonet.}
\label{fig:AlgorithmExplainedNodeLevel}
\end{figure}
Because we are interested in identifying communities from the arrangement of edges, we introduce the notion of \textit{edge descriptor sets} as a general term for using sets of edges to describe community structure local to each node. The cliques (or friendship circles) a node belongs to are specific examples of such edge descriptor sets.
In our approach, we assume that a node is more likely to belong to a community if it has many mutual friends within that community. This suggests that edges linking members of the same community should be densely inter-connected. The prototypical example of such a cluster of edges would be a clique. Furthermore, we note that if a node belongs to several relatively large cliques that are mutually disjoint, then this node may be at the intersection of multiple communities. Guided by these simple heuristic principles, we propose to extract sets of edges that are both densely connected and largely disjoint from each other from each node's egonet to serve as the edge descriptor sets for that node. Each set can be thought of as encoding the fine-scale features of community structures present in the network.
We approach the task of extracting edge descriptor sets by first using a simple spectral clustering method to sparsify the local egonet defined around each node. The goal of the sparsification is to reveal the largest disjoint cliques that are present in the egonet. Finally, a more advanced spectral clustering method is used to construct the edge descriptor sets associated with the node in question. We begin our discussion with the latter of these two processes, because it is more involved and provides a natural introduction to the former.
\subsubsection{ICM-Matrices}
\label{sec:ICM}
In this section, we examine the spectral properties of idealized egonets formed by cliques that are only connected through a single (egocentric) vertex. While this situation may appear unrealistic, we explain in the next section how to extract such subgraphs from the original graph. Our present goal is simply to identify and extract the corresponding cliques; we propose a spectral approach to solve this problem.
Let us consider the sub-matrix of the network adjacency matrix that describes the local egonet. A trivial re-indexing of the nodes allows us to represent this submatrix as a block-diagonal matrix, where each block is a clique. The blocks do not overlap, but there is a row (and corresponding column) of ones to describe the connection of the egocentric node to all the cliques. We can assume that the row and corresponding column associated with the egocentric node are the first row and column, respectively. Instead of working directly with this matrix, we propose to make some slight modifications that will boost the spectral approach. The modifications are as follows: we add self connections to all the nodes if not already present, and we scale all connections to the egocentric node by a small parameter, $\delta$. We will call these modified matrices \textit{ideal community member matrices} for the sake of discussion, or \textit{ICM-matrices} for short.
Let $A$ be an ICM-matrix with $m$ blocks (each corresponding to a clique) along its diagonal, where the size of the $i^{th}$ block is $k_i \times k_i$. Without loss of generality, we assume the indices are arranged such that $k_j \le k_i$ whenever $j>i$ so that larger indices correspond to smaller blocks. Also assume that the first index corresponds to the egocentric node. We will represent the set of indices for the $k^{th}$ block by $V_k$. With this notation in place, $A$ is defined by Equation \eqref{eq:ADef}, and an example of the structure of such a matrix is given in Figure \ref{fig:IdealCMmatrix}.
\begin{equation}
A(i,j) =
\begin{cases}
\delta, & \text{if}~ i=1 ~\text{or}~ j=1 , \\
1, & \text{if}~ i,j \in V_k ~\text{for some}~ k=1,\dots,m , \\
0, & \text{otherwise} .
\end{cases}
\label{eq:ADef}
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig4.png}
\end{center}
\caption{The non-zero values of an ICM-matrix for a node belonging to two cliques with 6 members, and three cliques of four members. }
\label{fig:IdealCMmatrix}
\end{figure}
Our focus will be on the eigenvectors of these matrices with corresponding positive eigenvalues, as they will be the ones we will use for identifying the cliques via spectral clustering. Let $P = \{i ~|~ \lambda_i > 0\}$ be the index set of the positive eigenvalues, and let $\{\mathbf{x}_p ~|~ p \in P\}$ be the associated eigenvectors. Assume that the eigenvalues are ordered so that $\lambda_i \le \lambda_j$ whenever $i>j$ (e.g. $\mathbf{x}_1$ is the dominant eigenvector). Let $\mathbf{x}_p(1)$ be the value of the first entry of such an eigenvector. Using a simple symmetry argument, one can show that each $\mathbf{x}_p$ is constant over each clique: $\mathbf{x}_p(l) = r_i$ for every vertex $l$ in the vertex set $V_i$ of block $i$.
We now examine the properties of the entries of $\mathbf{x_p}$ by explicitly looking at the system of equations resulting from the constraint $\lambda_p ~ \mathbf{x}_p = A~\mathbf{x}_p $. For any $j \in V_i$, this constraint takes the following form,
\begin{equation}
\lambda_p~\mathbf{x}_p(j \in V_i) = \delta ~ \mathbf{x}_p(1) + \sum_{l ~ \in ~ V_i} \mathbf{x}_p(l), \label{eq:OtherRowa}
\end{equation}
\noindent and since $\mathbf{x_p}$ is constant and equal to $r_i$ over each clique, we obtain
\begin{equation}
\lambda_p~r_i = \delta ~ \mathbf{x}_p(1) + k_i ~ r_i,
\end{equation}
\noindent or
\begin{equation}
(\lambda_p - k_i) r_i = \delta ~ \mathbf{x}_p(1) . \label{eq:G2}
\end{equation}
As Equation \eqref{eq:G2} holds for every block index, taking the ratio of its instances for blocks $i$ and $j$ leads us to Equation \eqref{eq:G3},
\begin{equation}
r_i = \frac{\lambda_p - k_j }{\lambda_p - k_i } r_j .
\label{eq:G3}
\end{equation}
The importance of Equation \eqref{eq:G3} is that it shows that $|r_i| > |r_j|$ if $|\lambda_p - k_i| < |\lambda_p - k_j| $. In other words, the closer the number of members in the block corresponding to the $i^{th}$ clique is to $\lambda_p$, the larger the magnitude of $r_i$ in $\mathbf{x}_p$ and the easier it will be to pull out members of distinct cliques via spectral clustering. Now note that by setting $\delta$ to a small value, the ICM-matrix is a small perturbation of a block diagonal matrix. As the positive eigenvalues of a block diagonal matrix correspond directly to the sizes of the blocks present within the matrix, this will cause the eigenvalues of the ICM-matrix to be small perturbations of $k_i$. This implies that for each block of size $k_i$ there exists an eigenvalue, $\lambda$, such that $\lambda \approx k_i$. For the results presented in this paper, we use $\delta = 1/N_{ego}$, where $N_{ego}$ is the number of nodes in the egonet.
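These spectral properties are easy to verify numerically. The sketch below (assuming NumPy; the construction follows Equation \eqref{eq:ADef}, and the helper name is ours) builds an ICM-matrix and checks that the positive eigenvalues approximate the clique sizes and that the dominant eigenvector is constant on each clique and largest on the largest clique:

```python
import numpy as np

def icm_matrix(clique_sizes, delta):
    """Build an ICM-matrix per Equation (eq:ADef): one all-ones block per
    clique (self-connections included), with the egocentric node's row
    and column set to the small value delta."""
    n = 1 + sum(clique_sizes)
    A = np.zeros((n, n))
    start = 1
    for k in clique_sizes:
        A[start:start + k, start:start + k] = 1.0
        start += k
    A[0, :] = delta
    A[:, 0] = delta
    return A

sizes = [8, 6, 4]                      # one clique per size, for clarity
A = icm_matrix(sizes, delta=1.0 / (1 + sum(sizes)))
lams, vecs = np.linalg.eigh(A)         # eigenvalues in ascending order
top = lams[::-1][:len(sizes)]          # the positive eigenvalues, near [8, 6, 4]
dom = np.abs(vecs[:, -1])              # dominant eigenvector
```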
For illustrative purposes, the first three dominant eigenvectors of the matrix depicted in Figure \ref{fig:IdealCMmatrix} are shown in Figure \ref{fig:IdealCMmatrixEigVecs}. Note that the components of the eigenvectors clearly reveal each clique. However, this property does not extend to the eigenvectors associated with non-positive eigenvalues. Indeed, the nullspace vectors are quite noisy, and using them for spectral clustering of the egonet is detrimental.
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig5.png}
\end{center}
\caption{First three dominant eigenvectors for the ICM-matrix depicted in Figure \ref{fig:IdealCMmatrix}, where $\delta = 1/N_{ego}$ and $N_{ego}$ is the number of nodes involved in the egonet. }
\label{fig:IdealCMmatrixEigVecs}
\end{figure}
For the case of $\lambda_1$, we can also prove that $r_i \ge r_j$ when $j>i$. This follows from the fact that all of the entries of $A$ are non-negative and the power method will converge to the dominant eigenvector. As the power method will converge regardless of the initial starting vector, we can take a vector with all non-negative entries as our initial guess, and the power method iteration will preserve this property. This implies that the magnitude of $\mathbf{x}_1(i)$ increases with the size of the clique to which the vertex $i$ belongs. This is important to the egonet sparsification algorithm we discuss in the next section.
We are now in a position to define the edge descriptor sets associated with each node in our approach. Each edge descriptor set is composed of densely connected subnetworks of the egonet. Formally, we define the densely connected subnetworks as follows. We begin by extracting the node's egonet from the network, and form its corresponding adjacency matrix. Using the methods described in the next section, we sparsify this adjacency matrix so that the remaining connections closely resemble the structure of an ICM-matrix. Next, we compute the eigenvectors associated with the largest positive eigenvalues of the sparsified matrix. We then use these eigenvectors to embed the egonet into a metric space, treating the value of each eigenvector at a given index as providing a spatial coordinate for the corresponding vertex \citep{Br03}. With each vertex having spatial coordinates, we then apply k-means clustering in order to find clusters of vertices. As each vertex is also connected to the egocentric node, each cluster of vertices represents a cluster of edges to use as a potential edge descriptor set. Finally, before accepting a cluster as an edge descriptor set, we additionally check that the set of nodes involved has 90\% or greater edge density between them to ensure they approximately form a clique. Although these edge descriptor sets will often be referred to as "cliques" for simplicity throughout this paper, we only require them to be very densely connected rather than fully connected.
To determine both the number of eigenvectors to use for the spectral embedding and the number of clusters, we use the following set of heuristics. Presumably, the largest cliques a node belongs to are the most important ones to accurately capture for describing that node. As larger cliques correspond to larger eigenvalues in an ICM-matrix, we estimate the number of coordinates for the spectral embedding and the number of clusters as the number of eigenvalues that are greater than one tenth of the largest eigenvalue. This allows us to recover all the largest cliques an egocentric node belongs to, while guarding against involving near nullspace vectors for the clustering process.
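Putting the pieces of this subsection together, the extraction step might look as follows. This is a sketch, assuming NumPy and an already-sparsified adjacency matrix over the egonet's neighbor set (egocentric node excluded); a bare-bones Lloyd's k-means with farthest-point initialization stands in for a library routine, and the one-tenth eigenvalue cutoff and 90\% density check are the heuristics from the text:

```python
import numpy as np

def edge_descriptor_sets(A_ego, density=0.9):
    """Spectrally embed the nodes of a (sparsified) egonet adjacency
    matrix, cluster them with k-means, and keep clusters that are at
    least `density`-dense as edge descriptor sets."""
    lams, vecs = np.linalg.eigh(A_ego)      # ascending eigenvalues
    keep = lams > 0.1 * lams[-1]            # heuristic from the text
    k, X = int(keep.sum()), vecs[:, keep]   # one coordinate per kept eigenpair
    # Farthest-point initialization, then a few Lloyd iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(20):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    # Accept only clusters that approximately form cliques.
    sets = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        pairs = len(idx) * (len(idx) - 1) / 2
        if pairs and A_ego[np.ix_(idx, idx)].sum() / 2 >= density * pairs:
            sets.append(set(idx.tolist()))
    return sets
```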
\subsubsection{Matrix Sparsification Algorithm}
\begin{figure}[h!]
\begin{center}
\subfigure [ Egonet for node A] { \label{fig:SparsificationAlgorithmExplained} \includegraphics [width=0.23\textwidth] {Fig6a.png} }%
\subfigure [Sub-egonet of A for node B] { \label{fig:SparsificationAlgorithmExplained_SubEgo} \includegraphics [width=0.23\textwidth] {Fig6b.png} }%
\end{center}
\caption{ a) The egonet of the node A is formed by the union of the gray and black nodes and corresponding edges. However, the cliques lying within each community are not completely disjoint, as the red edge connects node B in the black community to node C in the gray community. b) An example of a sub-egonet, where the black nodes and edges represent the sub-egonet of A for sub-egocentric node B.}
\end{figure}
In practice, it is unrealistic to expect to encounter egonets corresponding directly to the ideal community member matrices described in the previous section. A more realistic assumption is that the cliques falling in different communities have some random edges connecting them, as in Figure \ref{fig:SparsificationAlgorithmExplained}. However, adding such random connections to ICM-matrices can drastically alter their eigenspace properties, so this problem must be addressed.
To this end, we develop a method capable of removing such connections with high accuracy. This is accomplished by considering each node present in the egonet, and removing any edges that are not components of the largest clique(s) the node belongs to. To achieve this goal we propose a spectral technique that exploits the important observation made in the previous section: the magnitude of each entry of the dominant eigenvectors increases with the size of the clique that the corresponding node belongs to.
We will call a node that is not the original egocentric node a sub-egocentric node, and define a sub-egonet to be the egonet for a sub-egocentric node restricted to the set of nodes and edges represented in the egonet under consideration; see Figure \ref{fig:SparsificationAlgorithmExplained_SubEgo} for an example. To determine the largest clique(s) each sub-egocentric node belongs to, we extract its sub-egonet and approximate the dominant eigenvector of the corresponding adjacency matrix. As larger values in this eigenvector correspond to nodes that are members of larger cliques, we remove all edges connecting the sub-egocentric node to any node whose entry in the dominant eigenvector is below half of the eigenvector's maximum. Figure \ref{alg:SparsifyEgo} describes the egonet sparsification algorithm.
\begin{figure}[h]
\makebox{\bfseries \sffamily Egonet Sparsification Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item node $v$
\item EGO($v$), the egonet of $v$
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{itemize}
\item Repeat steps 1 to 4 until the egonet's adjacency matrix no longer changes or a maximum number of iterations has been reached.
\item Repeat steps 1 to 3 for each node $u$ in EGO($v$).
\end{itemize}
\begin{enumerate}
\item For each node, $u$, in EGO($v$) such that $u \neq v$, extract its sub-egonet, SEGO($u$).
\item Scale the entries of the adjacency matrix for SEGO($u$) by $\delta = 1/N_{sego}$, where $N_{sego}$ is the number of nodes in SEGO($u$). Call this matrix $A(u)$.
\item Approximate the dominant eigenvector of $A(u)$ using a small number of iterations, $O(10)$, of the power method. If a node's entry in this eigenvector has magnitude below $1/2$ of the maximum, remove the edge connecting it to $u$ from EGO($v$).
\item Symmetrize EGO($v$) by removing any edges that became directed through the sparsification process.
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Sparsified EGO($v$).
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Sparsification of egonet links
\label{alg:SparsifyEgo}}
\end{figure}
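A compact rendition of the algorithm in Figure \ref{alg:SparsifyEgo} is sketched below, assuming NumPy and a dense 0/1 adjacency matrix for the egonet with the egocentric node at index 0. Since multiplying a matrix by the constant $\delta$ leaves its eigenvectors unchanged, the scaling step is cosmetic here; the symmetrization keeps only edges that survive in both directions:

```python
import numpy as np

def sparsify_egonet(E, max_sweeps=10):
    """Sketch of the egonet sparsification algorithm. E is the symmetric
    0/1 adjacency matrix of EGO(v) with the egocentric node at index 0.
    Returns a sparsified copy of E."""
    E = E.astype(float)
    n = len(E)
    for _ in range(max_sweeps):
        prev = E.copy()
        for u in range(1, n):                    # each sub-egocentric node
            nbrs = np.flatnonzero(E[u])          # nodes of SEGO(u)
            if len(nbrs) < 2:
                continue
            # Adjacency of SEGO(u) with self-connections, scaled by
            # delta = 1/N_sego.
            S = (E[np.ix_(nbrs, nbrs)] + np.eye(len(nbrs))) / len(nbrs)
            x = np.ones(len(nbrs))
            for _ in range(10):                  # power method, O(10) steps
                x = S @ x
                x /= np.linalg.norm(x)
            # Drop u's edges to weakly ranked members of its sub-egonet.
            weak = nbrs[np.abs(x) < 0.5 * np.abs(x).max()]
            E[u, weak] = 0.0
        E = np.minimum(E, E.T)   # symmetrize: drop edges cut in one direction
        if np.array_equal(E, prev):
            break
    return E
```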
Once the matrix has been sparsified, it can be analyzed using the spectral algorithm described in the previous section to generate the edge descriptor sets for the egocentric node. The computational cost of this process is dominated by the cost of implementing the spectral algorithm, which is driven by the cost of calculating the eigenvectors of the sparsified ICM-matrix with positive eigenvalues. Determining these eigenvectors costs $O(k^2)$ for a node of degree $k$ and thus $O(N~\overline{k^2})$ to generate them for an entire network involving $N$ nodes.
\subsubsection{Experimental Validation of the Sparsification Algorithm}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig8.png}
\end{center}
\caption{Example of an ICM-matrix with varying clique sizes and random connections added. Each dot represents non-zero components of the matrix.}
\label{fig:CMatrixPreSparse}
\end{figure}
Our initial set of tests involve ICM-matrices that are perturbed by adding in random connections between nodes with probability $p$. We use cliques composed of 4, 6, and 8 members, and plant two cliques of each size in the ICM matrix. The value of $p$ is set according to Equation \eqref{eq:ChoiceOfP}, so that members of the smallest cliques have as many expected outlinks as they have inlinks.
\begin{equation}
p = \frac{\text{Number~of~Nodes~in~Smallest~Clique}}{\text{Total~Number~of~Nodes~Outside~Smallest~Clique}}
\label{eq:ChoiceOfP}
\end{equation}
\noindent An example of such matrices is shown in Figure \ref{fig:CMatrixPreSparse}.
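A generator for such test matrices might be sketched as follows (assuming NumPy; the choice of $p$ follows Equation \eqref{eq:ChoiceOfP}, the function name is ours, and the egocentric row/column is omitted for simplicity):

```python
import numpy as np

def perturbed_icm(clique_sizes, seed=0):
    """Plant disjoint cliques as diagonal blocks, then add random
    inter-clique edges with the probability p of Equation (eq:ChoiceOfP).
    The egocentric row/column of a full ICM-matrix is omitted here."""
    n = sum(clique_sizes)
    labels = np.repeat(np.arange(len(clique_sizes)), clique_sizes)
    same = labels[:, None] == labels[None, :]
    A = same.astype(int)                 # planted cliques, self-connections kept
    k_min = min(clique_sizes)
    p = k_min / (n - k_min)              # expected outlinks == expected inlinks
    rng = np.random.default_rng(seed)
    noise = np.triu(rng.random((n, n)) < p, 1)
    noise = noise | noise.T              # symmetric random connections
    A[~same & noise] = 1
    return A, p
```

With two planted cliques of each of the sizes 4, 6, and 8, this reproduces the kind of perturbed matrix shown in Figure \ref{fig:CMatrixPreSparse}.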
Our findings show that the sparsification algorithm yields either an ICM-matrix or a small perturbation thereof when the random connections do not alter the planted disjoint clique structure. Even when the planted clique structure is altered, through relatively large cliques developing via the random connections, we find that the algorithm still reliably recovers the largest planted cliques.
Figure \ref{fig:ICMatrixWin} shows a typical example of a successful result: the algorithm correctly removes all of the random connections. Figure \ref{fig:ICMatrixFail} exemplifies the cases where the algorithm fails to return the ideal planted partition. Such cases correspond to a scenario where the random connections (colored in red in Figure \ref{fig:ICMatrixFail_PreSparse}) generate cliques comparable in size to the original cliques that the nodes belong to, thus altering the clique structure that was originally planted. When a node belongs to multiple cliques of approximately the same size, the connections all remain so long as none overlap with a clique that is substantially (50\% or more) larger, and the connections are all pruned otherwise. Both of these cases can be observed in Figure \ref{fig:ICMatrixFail_PostSparse}. As the intention of the algorithm is to recover the largest disjoint cliques an egocentric node belongs to, neither of these outcomes is actually undesirable, but they do make determining the accuracy of the algorithm problematic.
\begin{figure}[h!]
\subfigure [Pre-sparsification] { \label{fig:ICMatrixWin_PreSparse} \includegraphics [width=0.25\textwidth] {Fig9a.png} }%
\subfigure [Post-sparsification] { \label{fig:ICMatrixWin_PostSparse} \includegraphics [width=0.25\textwidth] {Fig9b.png} }%
\caption{An example of the resulting matrix after a successful sparsification. a) Matrix before a successful sparsification. b) Matrix after a successful sparsification. See text for discussion.}
\label{fig:ICMatrixWin}
\end{figure}
\begin{figure}[h!]
\subfigure [ Pre-sparsification] { \label{fig:ICMatrixFail_PreSparse} \includegraphics [width=0.25\textwidth] {Fig10a.jpg} }%
\subfigure [Post-sparsification] { \label{fig:ICMatrixFail_PostSparse} \includegraphics [width=0.25\textwidth] {Fig10b.jpg} }%
\caption{An example of the resulting matrix after a failed sparsification. a) Matrix before an ``unsuccessful'' sparsification. b) Matrix after an ``unsuccessful'' sparsification. See text for discussion.}
\label{fig:ICMatrixFail}
\end{figure}
To sidestep these difficulties, we tested the performance of the algorithm using cliques of fixed size (10 members) and increased the expected number of outlinks to other cliques from 0 to 30 (three times the number of inlinks). We measured the accuracy of the sparsification algorithm by counting the percentage of properly removed random connections that were created during the perturbation process. As can be seen in Figure \ref{fig:CMatrixPostSparse}, the algorithm performs remarkably well, maintaining 100\% accuracy well past the point where there are more outlinks than inlinks in either case.
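This accuracy measure can be computed directly from the list of perturbation edges recorded when the test matrix is built; the following minimal sketch uses illustrative names of our own choosing:

```python
def removal_accuracy(added_edges, sparsified_adj):
    """Fraction of the randomly added inter-clique edges that the
    sparsification step removed; added_edges is a list of (i, j) pairs
    recorded during the perturbation step."""
    if not added_edges:
        return 1.0  # nothing to remove counts as perfect accuracy
    removed = sum(1 for (i, j) in added_edges if sparsified_adj[i][j] == 0)
    return removed / len(added_edges)
```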
\begin{figure}[h]
\subfigure [ Five Groups of Ten] {
\begin{minipage}[c][0.75\width]{
0.5\textwidth}
\centering%
\label{fig:FiveGroupsTenMembers}
\includegraphics [width=1\textwidth] {Fig11a.png}
\end{minipage}}
\subfigure [ Ten Groups of Ten] {
\begin{minipage}[c][0.75\width]{
0.5\textwidth}
\centering%
\label{fig:TenGroupsTenMembers}
\includegraphics [width=1\textwidth] {Fig11b.png}
\end{minipage}}
\caption{The connection removal accuracy of the algorithm, as a function of the expected ratio of the number of inlinks between a given node and the members of its planted clique and the number of outlinks between a given node and members of other planted cliques. a) This test is carried out on an ICM-matrix composed from five disjoint cliques with ten members each. b) The same test carried out in (a), but with the number of cliques doubled.}
\label{fig:CMatrixPostSparse}
\end{figure}
\subsection{Community Scale Features of Community Structure}
\label{sec:CommunityLevelPerspective}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig12.png}
\end{center}
\caption{At the community level, the goal is to find collections of edge descriptor sets that form a community. For the black and gray communities, we need to differentiate the sets of gray and black edges from the set of white edges. }
\label{fig:AlgorithmExplainedCommunityLevel}
\end{figure}
Once the edge descriptor sets have all been extracted, we need a way to stitch these localized features together to form communities. To do this, we use quantitative features relevant to the scale of communities to agglomerate these edge sets.
As is the case for the network depicted in Figure \ref{fig:AlgorithmExplainedCommunityLevel}, a reasonably general qualitative feature to expect communities to possess is that they have an edge density substantially higher than that of the network as a whole, and we note that this is not a property which is accurately represented at the level of individual nodes' edge descriptor sets. To account for this property, we use link density in order to agglomerate edge descriptor sets to form communities.
Let $C$ denote a set of nodes, and $E(C)$ denote the set of edges between all members of $C$. Given a set of $n$ vertices, the maximum number of edges is $n(n-1)/2$ (for a clique). We define the edge density as,
\begin{equation}
\rho (C) = \frac{|E(C)|}{\frac{1}{2} |C| (|C| - 1)}.
\label{eq:LinkDensity}
\end{equation}
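The edge density of Equation \eqref{eq:LinkDensity} translates directly to code; the sketch below assumes the network is stored as a plain adjacency matrix (our representation choice, not the paper's):

```python
def edge_density(C, adj):
    """rho(C): number of edges among the members of C, divided by the
    maximum possible number of edges |C|(|C|-1)/2."""
    nodes = list(C)
    m = len(nodes)
    if m < 2:
        return 0.0  # density is undefined for fewer than two nodes
    inlinks = sum(1 for a in range(m) for b in range(a + 1, m)
                  if adj[nodes[a]][nodes[b]])
    return inlinks / (m * (m - 1) / 2)
```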
Our method for community formation is based on satisfying $\rho (C) = D$ where $0\leq D \leq 1$ is a user supplied inlink density and the number of nodes involved, $|C|$, is as large as possible. The reason $D$ is taken as an input is because the appropriate value is dependent on the scale of the structures one is trying to extract from the network; lower threshold densities correspond to looser notions of what it means to be a community and higher threshold densities correspond to tighter ones.
We take a greedy algorithmic approach to expanding communities, where the nodes involved in a new edge descriptor set are added to the community if they decrease $\rho$ by the minimal amount while still staying above the user supplied threshold. Although this is not the most principled optimization approach from a mathematical standpoint, it preserves the intuitive notion of communities forming as a diffusive process of individual perspectives on the community structure of the network, similar to percolation of edge descriptor sets \citep{Pal05,FriendshipGroup10} or variations of label passing \citep{Raghavan07, GraphSwarm12}. An outline of the community expansion algorithm is given in Figure \ref{alg:CommunityFormation}.
\begin{figure}[h]
\makebox{\bfseries \sffamily Community Formation Algorithm} \\
\rule[0mm]{\linewidth}{0.5pt}
\begin{itemize}
\item [] {\bfseries \sffamily Input}:
\begin{itemize}
\item All edge descriptor sets detected on the network.
\end{itemize}
\item []{\bfseries \sffamily Algorithm:}
\begin{enumerate}
\item Start with largest edge descriptor set that remains unclustered as a community base.
\item Find the edge descriptor set that would cause the minimum reduction in inlink density for the community being formed.
\item If the inlink density would remain above the user supplied density threshold, add the descriptor set to the community being formed.
\item Repeat Steps 2 and 3 until no edge descriptor set satisfies the density constraint.
\item Repeat from Step 1 until no edge descriptor sets remain unclustered.
\end{enumerate}
\item [] {\bfseries \sffamily Output:} Initial set of communities.
\end{itemize}
\rule[0mm]{\linewidth}{0.5pt}
\caption{Outline of the community formation algorithm.
\label{alg:CommunityFormation}}
\end{figure}
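The steps in Figure \ref{alg:CommunityFormation} can be sketched as a greedy loop. The code below is our own illustration (function and helper names are assumptions): edge descriptor sets are frozensets of node indices, \texttt{adj} is the network's adjacency matrix, and \texttt{D} is the user-supplied inlink density threshold.

```python
def form_communities(descriptor_sets, adj, D):
    """Greedy sketch of the community formation algorithm: grow each
    community from the largest unclustered descriptor set, always adding
    the candidate set causing the least density reduction, while the
    community's inlink density stays at or above the threshold D."""
    def density(nodes):
        nodes = list(nodes)
        m = len(nodes)
        if m < 2:
            return 1.0
        e = sum(1 for a in range(m) for b in range(a + 1, m)
                if adj[nodes[a]][nodes[b]])
        return e / (m * (m - 1) / 2)

    remaining = sorted(descriptor_sets, key=len, reverse=True)
    communities = []
    while remaining:
        base = remaining.pop(0)            # Step 1: largest unclustered set
        community = set(base)
        grew = True
        while grew:
            grew = False
            # Step 2: candidate causing the minimum density reduction
            best = max(remaining, key=lambda s: density(community | s),
                       default=None)
            if best is not None and density(community | best) >= D:
                community |= best          # Step 3: add it if above threshold
                remaining.remove(best)
                grew = True
        communities.append(community)      # Steps 4-5: repeat until done
    return communities
```

Recomputing the density from scratch at each step keeps the sketch short; the incremental update described in the cost analysis below avoids this rescan in practice.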
We conclude this section with a brief cost analysis of the community formation process. Let $S$ be the set of nodes involved in a potential edge descriptor set to add to a forming community, $C$. Let $N_C$ represent the total number of nodes that either belong to $C$ or are connected to one of the nodes in $C$ at any given stage of the community formation process. Each update to $\rho(C)$ involves a sweep over all edge descriptor sets for each of the $N_C$ nodes. Adding a potential edge descriptor set involves checking the density of the adjacency matrix of the original network restricted to the indices representing nodes involved in the potential updated version of $C$. Because this resulting density can be written in terms of a sum of the link density before the addition and the link density of the addition, the only substantial computational cost to check a potential update comes from calculating the density of the addition, a calculation that has a computational cost of $O(N_C)$ flops so long as $|S| \ll N_C$. As this must be carried out for each of the $N_C$ nodes involved in $C$, the cost of each update is proportional to $N_C^2$. This implies that the computational cost to extract the entire community is bounded above by a constant multiple of $N_{com}^2$, where $N_{com}$ is the total number of nodes involved in that community. Since this must be carried out for each community in the network, the total computational cost for the community formation process is proportional to $O(\overline{N_{com}^2})$, the average of the square of community sizes.
\subsection{Network Scale Features of Community Structure}
\label{sec:NetworkLevelPerspective}
Lastly, we must address the question of what properties a collection of communities should have at the level of the entire network. The answer to this question depends on the expectations of the researcher applying the algorithm, but as general heuristics we require that no node in the network remains unclustered and that the algorithm returns only the minimal number of communities needed to provide a cover for the set of nodes in the network.
\begin{figure}[h]
\begin{center}
\includegraphics [width=0.45\textwidth] {Fig14.png}
\end{center}
\caption{ At the network level, the goal is to make sure that the collection of communities returned by the algorithm satisfy criteria expected by the researchers using it. For the network depicted above, a natural question is what to do with the nodes that do not decisively belong to any communities detected.}
\label{fig:AlgorithmExplainedNetworkLevel}
\end{figure}
Given the initial set of communities formed using the method described in the preceding section, we force all nodes to belong to at least one community by taking any nodes that remain unclustered after the community formation process and assigning them to the community they share the most edges with. This process is carried out iteratively if needed. We then build a cover for the network out of the communities detected, starting with the largest as the initial element of the set. Communities are then successively added to the set comprising the cover based on having the lowest percentage overlap with the current cover. When multiple communities are tied for the lowest percentage, larger communities are given preference over smaller ones. This process is carried out until every node in the network is represented in at least one of the communities in the set.
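The cover construction just described reduces to a short greedy loop; the function below is our own illustration (names are assumptions), selecting communities by lowest fractional overlap with the running cover and breaking ties toward larger communities:

```python
def build_cover(communities, n_nodes):
    """Greedy sketch of the network-level cover construction: start from
    the largest community, then repeatedly add the community with the
    lowest fractional overlap with the current cover (larger communities
    win ties), until every node is covered."""
    pool = sorted((set(c) for c in communities), key=len, reverse=True)
    cover = [pool.pop(0)]
    covered = set(cover[0])
    while len(covered) < n_nodes and pool:
        # lowest overlap fraction first; ties broken toward larger size
        pool.sort(key=lambda c: (len(c & covered) / len(c), -len(c)))
        nxt = pool.pop(0)
        cover.append(nxt)
        covered |= nxt
    return cover
```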
\section{Experiments}
We validate our approach on four different datasets. We first consider synthetically generated networks, coming from the planted $l$-partition and LFR benchmark tests. The other two datasets are based on graphs from real world social networks, one the famous Zachary Karate Club network \citep{Zachary77} and the other a high school friendship network \citep{Xi13}.
We view the task of community detection as querying a given network for the community structures present; we therefore use the F-score to assess the quality of our community detection algorithm. The F-score is a common measure for gauging the quality of information retrieval applications, and is defined by Equations \eqref{eq:Precision} through \eqref{eq:Fscore}.
\begin{equation}
\resizebox{1.0 \hsize}{!} {$Precision = \frac{| \{\text{Gold~Standard~Community} \} \cap \{\text{Detected~Community} \} | }{ | \{\text{Detected~Community} \} | } .$}
\label{eq:Precision}
\end{equation}
\begin{equation}
\resizebox{1.0 \hsize}{!} {$ Recall = \frac{| \{\text{Gold~Standard~Community} \} \cap \{\text{Detected~Community} \} | }{ | \{\text{Gold~Standard~Community} \} | } . $}
\label{eq:Recall}
\end{equation}
\begin{equation}
F = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} .
\label{eq:Fscore}
\end{equation}
The elements of each set are taken as the nodes involved in the community, and the precision and recall values we report are taken as the average precision and recall values when each planted community is paired with the detected community with the highest F-score. The benefit of using this metric to assess our algorithm is that the precision scores reflect the quality of our choice for edge descriptor sets, and the recall scores reflect the quality of our choices for community formation and desired network level properties.
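The matching procedure described above, pairing each planted community with the detected community of highest F-score and averaging the resulting precision and recall, can be sketched as follows (function names are our own):

```python
def f_score(gold, detected):
    """Precision, recall, and F for one (gold, detected) community pair,
    per Eqs. (Precision)-(Fscore); communities are sets of node ids."""
    inter = len(gold & detected)
    if inter == 0:
        return 0.0, 0.0, 0.0
    precision = inter / len(detected)
    recall = inter / len(gold)
    return precision, recall, 2 * precision * recall / (precision + recall)

def best_match_scores(gold_communities, detected_communities):
    """Average precision/recall when each planted community is paired
    with the detected community of highest F-score."""
    ps, rs = [], []
    for g in gold_communities:
        p, r, _ = max((f_score(g, d) for d in detected_communities),
                      key=lambda t: t[2])
        ps.append(p)
        rs.append(r)
    return sum(ps) / len(ps), sum(rs) / len(rs)
```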
For the high school and LFR tests, we additionally calculate the extended notion of normalized mutual information (NMI) \citep{La09,La09a} between the two sets of community labels in order to measure the algorithm's performance. We include this measure for these tests because it was the performance metric used in \citep{Xi13}, which allows a rough comparison of our algorithm against a host of others. Although the extended notion of normalized mutual information differs slightly from what is standardly called NMI, we will refer to this extended version as simply NMI in this paper for the sake of simplicity. An NMI score of 1 implies that the labels correlate perfectly, and an NMI score of zero indicates that the labels have no correlation with each other. We refer the reader to \citep{La09} for a detailed technical description of normalized mutual information.
\subsection{Planted $l$-Partition Benchmark Tests}
Our first set of synthetic network tests are planted $l$-partition tests. These are standard benchmark tests \citep{GN2001} that create randomly generated networks with planted communities, where nodes in the same community have a higher probability to be connected than nodes in differing communities. The test involves fixing the expected degree of a node, and increasing the expected proportion of those links which are outlinks to other communities. Not only does this make the boundary between communities less well defined because there are more links between communities, but it also makes communities less well defined by decreasing their inlink density. Because our algorithm is built to detect communities that can overlap, we again use F-scores to measure the quality of the results instead of the more standard metric of recovering the planted partitioning of nodes. The motivation for using F-scores is that there is not a one-to-one correspondence between sets of correctly partitioned nodes and correctly identified communities when it is assumed that communities can overlap with one another.
We conduct several versions of the planted $l$-partition test: four groups of 32 members, eight groups of 32 members, four groups of 64 members, and eight groups of 64 members. For all of these tests, the expected degree per node is set to be equal to half of the total number of members in each planted community. The first test with four groups of 32 is the most standard, and allows one to roughly compare our algorithm against a host of others \citep{DaEtAl05}. The remaining tests provide a controlled setting to demonstrate how the algorithm's performance substantially improves when there are more communities and/or larger communities, which more accurately reflects the types of networks the algorithm was intended for. The precision, recall, and F-scores for this series of tests are presented in Figures \ref{fig:PL_Precision}-\ref{fig:PL_F}, where each data point represents the averaged result computed over twenty independent realizations of the random network. Each test is denoted by [Number of Groups]g[Number of Members per Group]. Note that the algorithm's performance increases in all respects with either increased community sizes or number of communities, and the F-scores for the test involving eight groups of 64 still remain above 0.95 even in the case where half of the links for any node are expected to be outlinks.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig15.png} %
\end{center}
\caption{ Precision scores for the planted $l$-partition tests.}
\label{fig:PL_Precision}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig16.png} %
\end{center}
\caption{ Recall scores for the planted $l$-partition tests.}
\label{fig:PL_Recall}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig17.png} %
\end{center}
\caption{ F-scores for the planted $l$-partition tests. }
\label{fig:PL_F}
\end{figure}
\subsection{LFR Benchmark Tests}
The second set of synthetic network tests consists of the LFR benchmark tests \citep{La09b} which are designed to construct synthetic networks with built-in community structures. LFR networks are a variation of the planted $l$-partition model, where nodes are no longer required to all have the same expected degree and communities can come in varying sizes. Although more general versions of the test exist in which edge weights and directions are also considered, we only focused on the extended version of the test that allows for nodes to be members of multiple communities. Nodes belonging to multiple communities will be referred to as \textit{overlapping nodes}, where the total number of overlapping nodes in a given network is referred to as $O_n$ and the number of communities an overlapping node belongs to as $O_m$.
In order to be able to compare the results of our algorithm with the experiments conducted in the recent survey \citep{Xi13}, we use the same parameters as \citep{Xi13} for many of our experiments. Node degree and community sizes are respectively drawn from power law distributions with $\tau_1 = 2$ and $\tau_2 = 1$, the average degree per node is set to $k_{ave} = 10$, and the maximum degree a node can have is set to $k_{max} = 50$. The remaining parameters were varied throughout the tests. We use networks with sizes $N \in \{1000,5000\}$, and community sizes in both a small range $s=(10,50)$ and a large range $b=(20,100)$. The fraction of links through which a node connects to members of other communities is denoted by the mixing parameter $\mu$. As with the planted $l$-partition tests, we randomly generate 20 networks for each set of parameter values and report the average performance.
In the first set of experiments, we keep the maximum number of overlapping communities constant, $O_m =2$. We increase the density of edges between communities: $\mu$ is increased from 0.1 to 0.3 by increments of 0.05. All of the combinations of parameter values for $N$ and community sizes are examined, along with setting $O_n$ to either 10\% or 50\% of the nodes in the network. For this set of tests, we examine the average precision, recall, F-score, and NMI for each set of communities returned by the algorithm compared against those planted by the test. The cut-off density for community expansion is set to $(1-\mu)$ multiplied by the average egonet density of the graph.
As we can see from Figures \ref{fig:VaryMu_N1000s}-\ref{fig:VaryMu_N5000b}, the algorithm's performance again improves as the number and the sizes of the communities increase. Somewhat surprisingly, the algorithm's F-scores also tend to increase with increasing values of the mixing parameter for the low overlap cases, where $O_n = 10\%$ of the total nodes. This apparent paradox can be explained by observing that increasing $\mu$ lowers the edge density within communities; a lower edge density threshold can then be used to merge the cliques and recover the communities, improving the recall score. Also, because $O_n$ is small, communities remain well separated, and spurious cliques are not created by the increase in outlinks. However, for the high overlap case, we find that increasing the value of the mixing parameter tends to have only minor effects on the recall scores while significantly impairing the accuracy scores.
The second subset of tests varies $O_m$ from 2 to 8, with $N=5000$, $\mu=0.3$, and $O_n = 10\%$. This set of tests was also conducted on a variety of overlapping community detection algorithms in \citep{Xi13}, but only the NMI was examined in that work. The precision, recall, F-scores, and NMI of our algorithm for this set of tests are presented in Figure \ref{fig:VaryOm_N5000}. Although the NMI of our algorithm on this series of tests is about average with respect to all the algorithms analyzed in \citep{Xi13}, our precision scores are excellent.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig18.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the smaller community size range. $O_n = 10\%$ of the total nodes. }
\label{fig:VaryMu_N1000s}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig19.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the smaller community size range. $O_n = 50\%$ of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig20.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the larger community size range. $O_n$ = 10\% of the total nodes. }
\label{fig:VaryMu_N1000b}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig21.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 1000 nodes and using the larger community size range. $O_n$ = 50\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig22.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the smaller community size range. $O_n$ = 10\% of the total nodes. }
\label{fig:VaryMu_N5000s}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig23.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the smaller community size range. $O_n$ = 50\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig24.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the larger community size range. $O_n$ = 10\% of the total nodes. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig25.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $\mu$ for a network with 5000 nodes and using the larger community size range. $O_n$ = 50\% of the total nodes. }
\label{fig:VaryMu_N5000b}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig26.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $O_m$ for networks with 5000 nodes with 10\% of the total nodes belonging to two different communities, using small community size range distribution. }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.5\textwidth] {Fig27.png}
\end{center}
\caption{ The precision, recall, F-score, and NMI for the LFR tests varying $O_m$ for networks with 5000 nodes with 10\% of the total nodes belonging to two different communities, using large community size range distribution. }
\label{fig:VaryOm_N5000}
\end{figure}
\subsection{Zachary Karate Club}
\label{sec:Karate}
Zachary's karate club network is a small social network comprised of the interactions amongst members of a university karate club studied by sociologist Wayne Zachary in the 1970s \citep{Zachary77}. During the period of study, a political issue arose regarding the club's fees which eventually caused the club to fissure into two clubs. The social interactions of the clubs' members outside of the official meetings were examined, and edges between members indicate that they interacted socially outside of the club setting. The ground truth for this test is taken to be which specific club the members joined after the fissure.
This example illustrates a fundamental and intrinsic difficulty with the community detection problem: the definition of a community is problem dependent, and one can only design algorithms that are optimal for certain classes of communities. The communities on this network are defined in terms of who leads them, where the leaders can easily be identified as the two nodes with substantially higher degrees than the average of those they share connections with. This suggests that a node's perspective on community should be defined by the leader(s) it is connected to, and that the community scale features are defined by the perspective of its leader node. If one were earnestly interested in solving community detection problems of this type, a very simple approach would be to take the edges involving the two nodes of highest degree as edge descriptor sets, agglomerate these based on which leader node is involved to form communities, and then assign any unclustered nodes to the community they have the most links to. We have implemented this idea, and it yields a perfect recall value for each group, with both F-scores exceeding 0.94 for the given gold standard groupings.
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig28.png}
\end{center}
\caption{The Zachary karate club network. The gold standard grouping for a node is given by its shape, and its found grouping is given by its color(s). }
\label{fig:OriginalKarateClubNetwork}
\end{figure}
Although this type of community structure is not at all what our algorithm is intended to detect, it is a standard enough test to warrant seeing how it performs nonetheless. In order to apply our algorithm to this network, we first need an initial estimate of what the community density should be. To this end, we examine the edge density of the egonets for each node to get a local understanding of the average edge density of the network. Finding that the average egonet link density is 78.2\%, we then set the community density to 3/4 of that in order to hold the communities to looser standards. This results in the three clusters of nodes given below, with the precision, recall, and F-scores for these groups presented in Table \ref{tab:Karate}.
\noindent Group 1: ~ 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, \\
\indent \indent 14, 17, 18, 20, 22
\noindent Group 2: ~ 3, 9, 10, 19, 21, 23, 24, 25, 26, 28, 29, \\
\indent \indent 31, 32, 33, 34
\noindent Group 3: ~ 15, 16, 24, 27, 30, 33, 34
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Precision & Recall & F-Score \\ \hline
Group 1 & 0.94 & 1.0 & 0.97 \\ \hline
Group 2 & 0.93 & 0.78 & 0.85 \\ \hline
Group 3 & 1.0 & 0.39 & 0.56 \\ \hline
\end{tabular}
\caption{The precision, recall, and F-scores for the detected communities on the karate club network. }
\label{tab:Karate}
\end{center}
\end{table}
As we can see, the communities produced by the algorithm cause the gold standard grouping, denoted by circles, to be split into two groups. The reason for this splitting is that the network is mainly composed of two subtrees (one for each leader), and therefore the density of connections within each subtree remains low. Our approach, which assumes a more ``egalitarian'' community structure, is no longer optimal when the network is organized in such a strongly hierarchical way. Splitting the gold standard group allows the two new groups to have higher edge densities of 34\% and 63\%, whereas the community given as the gold standard has an edge density of only 27\%. Thus, although the division is not desired for this particular gold standard grouping, it is still a sensible one with respect to the notion of community our algorithm is designed to capture.
\subsection{High School Friendship Network}
\begin{figure}[h!]
\begin{center}
\includegraphics [width=0.50\textwidth] {Fig29.png}
\end{center}
\caption{The high school friendship network examined in Xie et al. 2013 \citep{Xi13}. The ground truth for this network is reflected by the color coding of the nodes, and the found grouping for each node is reflected by the color(s) of the square surrounding it.}
\label{fig:HighSchoolFriendshipNetwork}
\end{figure}
We now describe the second real world network, used as a benchmark in a recent study that evaluated state-of-the-art algorithms for detecting overlapping communities \citep{Xi13}. The dataset is part of the National Longitudinal Study of Adolescent to Adult Health \footnote{http://www.cpc.unc.edu/projects/addhealth/}. The network is composed of high school students, where the links between students come from self-reported connections and the gold standard partitioning of the network is taken as the grades (7 through 12) the students belong to. Although the ground truth is taken as six communities, it is understood that the friendship connections for grade 9 demonstrate that the grade can be split into two distinct subgroups, with one group composed of black students and the other white students, as can be inferred from Figure \ref{fig:HighSchoolFriendshipNetwork}.
Our approach to this network is the same as the karate club network discussed in the previous section. We estimate the desired community density by examining the average local edge density coming from each node's egonet, and set the community link density to 3/4 of that density. For this network, the average egonet link density is found to be 67.0\%, so the community density threshold is set to 50.3\%. We then take any nodes that remain unclustered after the community formation process, and assign them to the community they have the most links to. The performance of our algorithm on this network is presented in Table \ref{tab:HighSchool}.
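The threshold-setting procedure used for both real-world networks, taking 3/4 of the average egonet edge density, can be sketched as follows (the function name and plain adjacency-matrix representation are our own assumptions):

```python
def community_density_threshold(adj, factor=0.75):
    """Average the edge density of every node's egonet (the node plus
    its neighbors), then scale by the chosen factor; both case studies
    in the paper use factor = 3/4."""
    n = len(adj)
    densities = []
    for v in range(n):
        ego = [v] + [u for u in range(n) if adj[v][u] and u != v]
        m = len(ego)
        if m < 2:
            continue  # isolated nodes contribute no egonet density
        e = sum(1 for a in range(m) for b in range(a + 1, m)
                if adj[ego[a]][ego[b]])
        densities.append(e / (m * (m - 1) / 2))
    return factor * sum(densities) / len(densities)
```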
\begin{table}[h]
\begin{center}
\begin{tabular}{ccc}
\hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Number of\\ Communities\end{tabular}} & \multicolumn{1}{c|}{Overlapping Nodes} & \multicolumn{1}{c|}{NMI} \\ \hline
\multicolumn{1}{|c|}{8} & \multicolumn{1}{c|}{20} & \multicolumn{1}{c|}{0.52} \\ \hline
& & \\ \hline
\multicolumn{1}{|c|}{Precision} & \multicolumn{1}{c|}{Recall} & \multicolumn{1}{c|}{F-Score} \\ \hline
\multicolumn{1}{|c|}{0.79} & \multicolumn{1}{c|}{0.82} & \multicolumn{1}{c|}{0.80} \\ \hline
\end{tabular}
\caption{The performance of our algorithm on the high school friendship network. }
\label{tab:HighSchool}
\end{center}
\end{table}
Despite its low NMI score, the groupings found by the algorithm are sensible ones. It accurately detects the sub-division in grade 9, as well as the set of nodes falling in grades 9 and 10 that are densely connected to one another. Additionally, if one examines the nodes that are ``incorrectly'' labelled (e.g. 0, 42, and 63), it is readily apparent that the gold standard groupings do not accurately represent the nature of how these nodes are connected to the network.
\section{Conclusion and Discussion of Results}
This work has focused on developing a computationally inexpensive algorithm capable of detecting overlapping communities in social networks. A novel feature of how we approach the problem is that we define the community structure we are trying to capture based on how that structure would appear at differing scales. The scales specifically considered in this paper are: the scale of individual nodes, the scale of individual communities, and the scale of the network as a whole. Using the models developed in this work for each of these three scales, we find that applying our algorithm to benchmark tests demonstrates good overall performance. This performance improves with increasing either the number of communities or the sizes of the communities to be detected.
One advantage of our methodology is that it explicitly accounts for multiscale features during the community formation process. This aspect of our approach ensures that the detected communities are always sensible ones with respect to those features. Another distinct advantage of our method is that the way we quantify the features at each scale and tie them together is highly modular. This allows for the mathematical model of the community structure at any specific scale to be swapped out as appropriate based on the nature of a specific community detection problem.
Future work will focus on further developing the methodology used in our algorithm. One facet meriting further attention is to take advantage of the modularity of our algorithm to incorporate models of alternative features of community structure. The potential advantage of this was demonstrated in Section \ref{sec:Karate}, where detecting the leader based communities of the karate club network became trivialized by modeling community features as appropriate to the problem. Another avenue to explore is the possibility of chaining together sequences of node versus community level features, where community scale features are treated as node scale features at each higher link in the chain. This will allow us to incorporate detection of hierarchical community structures into our algorithm, and further increase its flexibility.
\bibliographystyle{plainnat}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 7,764
|
{"url":"https:\/\/bugs.webkit.org\/show_bug.cgi?id=93073","text":"Bug\u00a093073 - REGRESSION (r124512): Failures in MathML Presentation tests on GTK and EFL\nREGRESSION (r124512): Failures in MathML Presentation tests on GTK and EFL\n Status: RESOLVED FIXED None WebKit Unclassified MathML (show other bugs) 528+ (Nightly build) Unspecified Unspecified P2 Normal Chris Dumez\n\n Reported: 2012-08-03 00:18 PDT by Zan Dobersek 2012-08-28 09:29 PDT (History) 14 users (show) cdumez cgarcia davidc dbarton eric fred.wang gyuyoung.kim hyatt kenneth mrobinson naginenis rakuco simon.fraser webkit.review.bot\n\nAttachments\nResults on EFL port (709.00 KB, application\/x-gzip)\n2012-08-26 22:59 PDT, Chris Dumez\nno flags Details\nPatch (deleted)\n2012-08-28 02:25 PDT, Chris Dumez\nno flags Details | Formatted Diff | Diff\n\n Note You need to log in before you can comment on or make changes to this bug.\n Zan Dobersek 2012-08-03 00:18:03 PDT After r124512[1] multiple mathml tests have begun to fail on both Mac and GTK. I counted 10 failures on Mac and 20 on GTK. For the latter port, I can confirm that out of those 20 tests 15 also have pixel mismatches which show severe regression. Dashboard: http:\/\/test-results.appspot.com\/dashboards\/flakiness_dashboard.html#group=%40ToT%20-%20webkit.org&tests=mathml Failures on builders: http:\/\/build.webkit.org\/results\/Apple%20Lion%20Release%20WK1%20(Tests)\/r124564%20(1904)\/results.html http:\/\/build.webkit.org\/results\/Apple%20Lion%20Debug%20WK1%20(Tests)\/r124560%20(1541)\/results.html http:\/\/build.webkit.org\/results\/GTK%20Linux%2064-bit%20Release\/r124564%20(27129)\/results.html [1] - http:\/\/trac.webkit.org\/changeset\/124512 Sudarsana Nagineni (babu) 2012-08-03 04:56:57 PDT 16 tests are failing on EFL bot too. http:\/\/build.webkit.org\/results\/EFL%20Linux%2064-bit%20Debug\/r124517%20%283783%29\/results.html Dave Barton 2012-08-03 10:32:38 PDT I am looking into this. I warned the GTK(?) 
folks that this was coming (I'm looking for the e-mail or bug note), and would be happy to warn EFL folks in the future also. I don't know why mac port tests would fail though. Thanks for all the info. Dave Barton 2012-08-03 10:56:46 PDT Sorry, it was Christophe Dumez of EFL that I warned in bug 89282. Please see that bug report, and I'd be grateful for any advice from experts now. Are the current mac failures probably due to the STIXGeneral font not being installed on some test bots? Can it be easily installed? Is this urgent? Sorry for my ignorance, I am just an outside volunteer. Thanks! Dave Barton 2012-08-03 16:29:09 PDT I've looked at the 10 failing Mac tests and they are all due to small differences (improvements) in STIX font metrics\/handling between Snow Leopard and Lion. I've submitted bug 93163 with a patch to deal with this, splitting out *-expected.txt files between the two OS versions. Zan, the pixel regressions you're seeing should actually be progressions. If any look really bad please let me know. Someday I'd like to convert most of these old pixel tests to reference tests, but that will require some other MathML changes first. In the meantime, thanks for everyone's patience and help with MathML on GTK and EFL. Dave Barton 2012-08-25 11:35:40 PDT The Mac bots now pass all the MathML tests, after the patch for bug 94393 upgraded the Lion bots to OS X 10.7.4. I don't know if you want to rebaseline the mathml tests on gtk and efl, or keep skipping them while MathML layout is still being improved. Chris Dumez 2012-08-26 22:59:10 PDT Created attachment 160642 [details] Results on EFL port On EFL port, the new results are *much* worse than the previous ones. I cannot do a simple rebaseline here. All the formulas are broken. Attaching the results on EFL port. Dave Barton 2012-08-27 10:47:55 PDT Ouch. I think that -webkit-linebox-contain maybe doesn't work on the EFL port? Maybe the functions it calls to get glyph metrics don't work on that port? 
Or maybe the STIX fonts have problems on EFL? Note that the old results on EFL may have been much worse than you see in the *-expected.png results here. People using MathML in general (e.g. in Firefox) are expected to download the STIX fonts so they can see various mathematical symbols. Apparently this was Alex Milowski's intention for WebKit MathML as well. Anyway, the STIX fonts didn't used to be enabled in the layout tests. So the old test results were showing better formatting than people were actually seeing, because the STIX fonts have large ascents and descents that were screwing up the old formatting in real use, even in the Apple\/Mac port. Thanks for uploading your attached test results. I will look into this further. Dave Barton 2012-08-27 13:21:32 PDT In the metrics files you uploaded, indeed -webkit-line-box-contain isn't working right. I believe it's worth looking at the 5 files LayoutTests\/platform\/*\/fast\/block\/lineboxcontain\/block-glyphs-replaced-expected.png. Of these 5, the 2 that show significant red are on efl and gtk. Also there are 3 lineboxcontain tests listed in efl\/Skipped. Is -webkit-line-box-contain supposed to work on efl and gtk? To work well, MathML needs either this CSS property or some equivalent. Will this be implemented on efl and gtk? Eric Seidel (no email) 2012-08-27 13:32:04 PDT I would expect that non-optional (#define guarded) CSS features work the same among all ports. More likely font differences are to blame here? Dave Barton 2012-08-27 13:42:33 PDT But the lineboxcontain tests use the Ahem font. Also the posted efl mathml metrics give inline-block heights of often just 1 pixel, even though the element contains text. I think the problem is probably with the linux versions of the routines to measure individual glyphs sizes. Chris Dumez 2012-08-27 22:47:00 PDT I think the likely cause is that SimpleFontData::platformBoundsForGlyph() is not implemented for freetype backend (used by both EFL and GTK). 
I'm currently looking into it. Chris Dumez 2012-08-28 02:25:15 PDT Created attachment 160935 [details] Patch Fix and rebaseline for both EFL and GTK ports. The output looks good for MathML presentation tests now. The output for the few fast\/block\/lineboxcontain tests that Dave Barton mentioned is also improved. WebKit Review Bot 2012-08-28 03:37:42 PDT Comment on attachment 160935 [details] Patch Clearing flags on attachment: 160935 Committed r126862: WebKit Review Bot 2012-08-28 03:37:47 PDT All reviewed patches have been landed. Closing bug. Dave Barton 2012-08-28 09:29:46 PDT This is wonderful, thanks! In the past, MathML implementations like in Firefox and MathJax and MathPlayer I believe, as well as TeX before MathML, have had big tables of font data for the very few fonts they allowed in mathematical expressions. Getting metrics dynamically from the OS \/ graphics layer for arbitrary fonts, without huge static tables, is a huge win!","date":"2020-06-01 08:42:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.34199151396751404, \"perplexity\": 10883.045492912019}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590347415315.43\/warc\/CC-MAIN-20200601071242-20200601101242-00238.warc.gz\"}"}
| null | null |
I once talked about salmon fishing in the Ashburton River with an older bloke who was doing some work on my 4WD Toyota. He told me that as a youngster he could remember a day when as many as 200 salmon were caught at the mouth of the Ashburton River. I'm guessing that had to be 50 to 60 years ago! In those days the Ashburton River had salmon runs very similar to those of the Rangitata River further to the south.
Alas nowadays the Ashburton is a shadow of its former self. Abstraction for irrigation has all but reduced this once mighty salmon river to a trickle. Some sea-run brown trout still run up through the river mouth when conditions permit – which isn't very often nowadays. Mostly the lower river seeps through the shingle bank into the sea through the closed river mouth making it impossible for trout and salmon to enter from the sea.
As a result of low water flow the mouth of the Ashburton River is often closed off by shingle.
Ashburton mouth. Just not enough water for salmon or sea run trout to run up the beach!
In Jack Byrne's excellent book Salmon Country there is a full-page photograph of a huge salmon caught at the Ashburton River mouth in 1978. It weighed 43 lb (19.5 kg). That is now almost four decades ago but shows just how good the trout and salmon fishing were in those days.
If planning to visit the mouth check with a sports store in Ashburton first to determine its condition.
The Ashburton River divides into the North and South Branch on the western outskirts of Ashburton. Generally, the north branch dries out in summer with the remaining flow being below ground. Hence the better fishing will be in the south branch. Mostly there are smallish brown trout in the lower and upper Ashburton River. These smallish trout, which weigh up to a kilogram or so, are best targeted on the fly rod with dry flies and nymphs. The water can get very low throughout the Ashburton River by late summer.
Looking downstream of the Ashburton River South Branch.
There are some interesting small fishing lakes in the mountains of the Ashburton River catchment that are attractive to trout anglers. One in particular is Lake Emily, which contains brook char up to 2 kg in weight.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 9,334
|
package com.hp.hpl.jena.shared.uuid;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.security.SecureRandom;
import java.util.Random;

import com.hp.hpl.jena.JenaRuntime;

class LibUUID
{
    static Random makeRandom()
    {
        SecureRandom sRandom = new SecureRandom(); // SecureRandom.getInstance("SHA1PRNG");

        // ---- Seeding.
        // If no setSeed() call is made before a nextBytes call, the
        // generator "self seeds". If a setSeed() is called before
        // any nextBytes call, no self seeding is done. We use the
        // self seeding and our own bytes.

        // Access the internal seed generator.
        byte[] seed1 = sRandom.generateSeed(16);
        byte[] seed2 = LibUUID.makeSeed();
        // Seeds are cumulative.
        sRandom.setSeed(seed1);
        sRandom.setSeed(seed2);
        return sRandom;
    }

    static byte[] makeSeed()
    {
        // Make a random number seed from various pieces of information.
        // One thing that is missing is something related to the identity
        // of this OS process (so two identical programs, starting at
        // exactly the same time, might get the same seed).
        StringBuffer seedInput = new StringBuffer(200);
        try { seedInput.append(InetAddress.getLocalHost().getHostAddress()); }
        // Not every machine has an IP address.
        catch (UnknownHostException ex) { }
        seedInput.append(JenaRuntime.getSystemProperty("os.version"));
        seedInput.append(JenaRuntime.getSystemProperty("user.name"));
        seedInput.append(JenaRuntime.getSystemProperty("java.version"));
        seedInput.append(Integer.toString(Thread.activeCount()));
        seedInput.append(Long.toString(Runtime.getRuntime().freeMemory()));
        seedInput.append(Long.toString(Runtime.getRuntime().totalMemory()));
        seedInput.append(Long.toString(System.currentTimeMillis()));
        // Some heap variance. Maybe.
        seedInput.append(Long.toString(new Object().hashCode()));
        return seedInput.toString().getBytes();
    }
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,795
|
<?xml version="1.0" encoding="UTF-8" ?>
<ServiceGroupRegistrationParameters xmlns:sgc="http://mds.globus.org/servicegroup/client"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/03/addressing"
xmlns:agg="http://mds.globus.org/aggregator/types"
xmlns="http://mds.globus.org/servicegroup/client">
<!-- The ServiceGroupEPR defines the servicegroup to which registrations will be made -->
<ServiceGroupEPR>
<wsa:Address>http://localhost:8080/wsrf/services/DefaultIndexService</wsa:Address>
</ServiceGroupEPR>
<RegistrantEPR>
<wsa:Address>http://localhost:8080/wsrf/services/cagrid/RegistrationTest</wsa:Address>
</RegistrantEPR>
<!-- Specifies that the registration will be renewed every 10 minutes -->
<RefreshIntervalSecs>600</RefreshIntervalSecs>
<Content xsi:type="agg:AggregatorContent" xmlns:agg="http://mds.globus.org/aggregator/types">
<agg:AggregatorConfig xsi:type="agg:AggregatorConfig">
<agg:GetMultipleResourcePropertiesPollType
xmlns:ns2="gme://caGrid.caBIG/1.0/gov.nih.nci.cagrid.metadata"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:ns0="http://registrationtest.cagrid.nci.nih.gov/RegistrationTest/types"
xmlns:ns1="gme://caGrid.caBIG/1.0/gov.nih.nci.cagrid.metadata.security">
<!-- Specifies that the index should refresh information
every 300000 milliseconds (once every 5 minutes) -->
<agg:PollIntervalMillis>300000</agg:PollIntervalMillis>
<!-- specifies all Resource Properties that should be retrieved from the service -->
<agg:ResourcePropertyNames>ns2:ServiceMetadata</agg:ResourcePropertyNames>
</agg:GetMultipleResourcePropertiesPollType>
</agg:AggregatorConfig>
<agg:AggregatorData/>
</Content>
</ServiceGroupRegistrationParameters>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,094
|
\section{Introduction}
For mitigating large-scale crises such as armed conflicts, pandemics and natural disasters, incorporation of data in decision-making is becoming indispensable \cite{beck_improving_2000,sornette_endogenous_2006,weidmann_predicting_2010,obrien_crisis_2010, falck_measuring_2020}. However, insights from large amounts of data remain untapped if they are not detected and communicated by means of intuitive, accurate and preferably interactive scientific visualisation \cite{piburn_world_2015, kim_explaining_2017}. Particularly, the development of interactive visualisation dashboards requires a broad skill set, ranging from statistical, design and programming knowledge to domain expertise \cite{lam_empirical_2012}. Academic environments, non-governmental and humanitarian aid organisations often lack the required resources which hinders urgently needed contributions. The demand for quick crisis responses stands in stark contrast to time-consuming, expensive development stages. \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} is an actively maintained, open-source library for the visualisation of temporal-spatial crisis data that combines plug-and-play visualisations with versatile functionality.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/overview.pdf}
\caption{Technical overview of \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}}
\label{fig:overview}
\end{figure}
\section{Exemplary Use Cases}
\href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} is designed for data analysts to identify patterns in rapid prototyping. Due to its run time and memory-efficiency, it can also be deployed as a permanent visualisation tool for use by decision-makers. To motivate and demonstrate \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}, we sketch out two practical use cases that are inspired by real-world visualisation needs \cite{weidmann_predicting_2010,obrien_crisis_2010, hegre_predicting_2013, stephany_corisk-index_2020, dong_interactive_2020}.
\paragraph{\textbf{Conflict Monitoring.}}
With the help of \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}, we visualise a huge dataset comprising 20 years of conflict data on 141 countries, constructed from \href{https://acleddata.com/#/dashboard}{\textit{ACLED}} \cite{raleigh_introducing_2010} and \href{https://ucdp.uu.se}{\textit{UCDP GED}} \cite{sundberg_introducing_2013} data. Per country and month, our dataset features 60 socio-economic and political indicators, which are all displayed in our \href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}}.
\paragraph{\textbf{Pandemic Monitoring.}}
Our second demonstration case is the \href{https://conflict-ai.github.io/seismographAPI/covid-map.html}{\textit{Pandemic Monitoring Map}}, a visualisation of \textit{COVID-19} infection numbers. The \href{https://github.com/CSSEGISandData/COVID-19}{data} is borrowed from Johns Hopkins University \cite{dong_interactive_2020}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/SeismographAPI.pdf}
\caption{\href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}} and \href{https://conflict-ai.github.io/seismographAPI/covid-map.html}{\textit{Pandemic Monitoring Map}}: two exemplary use cases of the \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}}
\label{fig:demo}
\end{figure*}
\section{Main Functionality}
\paragraph{\textbf{World Map (center).}}
The SVG Choropleth map represents the core part of \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}. It allows visualising data at the country- and subcountry-level (political subdivisions) based on the \href{https://www.iso.org/iso-3166-country-codes.html}{ISO-3166} and \href{https://www.iso.org/iso-3166-country-codes.html}{ISO-3166-2} norm. Additional information, such as country-level infection numbers, can be easily displayed on click and hover as exemplified in the \href{https://conflict-ai.github.io/seismographAPI/covid-map.html}{\textit{Pandemic Monitoring Map}}.
\paragraph{\textbf{Time Series Chart (bottom).}}
The time series chart not only visualises, but allows navigating the temporal dimension. For instance, the \href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}} even features two time lines, one showing the prediction and another showing the ground truth conflict intensity. When hovering or clicking a point in time, all other panels synchronise. With the help of the ``play'' controls, users can watch all data panels as they change over time in a time-machine manner.
\paragraph{\textbf{Auxiliary Information Panel (right).}}
At the top of the auxiliary information panel, our library provides a menu allowing to interactively customise the dashboard. Users can hide information and panels, such as country names and the country list on the left hand side, zoom-in, choose a night mode and open a ``help" window. To simplify the interface between analysis, report and decision-making, the library has built-in functionality for screen recording. Due to tight integration with \href{https://www.chartjs.org/}{Chart.js}, any chart visualisation can be selected and displayed in the right-hand panel based on data suitability and information needs. For instance, the \href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}} displays the most important data features considered for conflict prediction as a horizontal bar chart. The \href{https://conflict-ai.github.io/seismographAPI/covid-map.html}{\textit{Pandemic Monitoring Map}} relies on stacked line charts to map out infection numbers.
\section{Technical Background}
\paragraph{\textbf{Run time and Memory.}}
\href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} builds upon two fast, open-source libraries, \href{https://www.chartjs.org/}{Chart.js} and \href{https://github.com/raphaellepuschitz/SVG-World-Map}{SVG World Map JS}. The time required for data loading is mainly determined by the size of the central SVG world map: \textasciitilde1,3 MB for {ISO-3166-2} country-level and \textasciitilde3,8 MB including all subdivision data. Depending on the chosen map, rendering starts between 300ms and 800ms, document completion is done between 400ms and 2.6s and the full loading time varies from \textasciitilde3s to \textasciitilde10s. To optimise loading and usability, \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} can also be initialised asynchronously with the JavaScript \href{https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await}{async/await/resolve} method. After the first initialisation of the map, this enables loading data chunks on demand, which increases smoothness. This is demonstrated in the \href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}}, where all global conflict data (\textasciitilde1,1MB) is loaded at startup, but the large amount of detailed conflict data (\textasciitilde80KB per country, \textasciitilde21MB in total) is loaded asynchronously on request. 
Thus, \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} is able to visualise more than $N = \num{170000}$ data points in the \href{https://conflict-ai.github.io/seismographAPI/conflict-map.html}{\textit{Conflict Monitoring Map}} in \href{https://www.webpagetest.org/result/210509_AiDc81_73869360943cdc9a6a40f9dc250a20b8/}{about 3 seconds} or nearly $N = \num{400000}$ data points in the \href{https://conflict-ai.github.io/seismographAPI/covid-map.html}{\textit{Pandemic Monitoring Map}} in \href{https://www.webpagetest.org/result/210509_BiDc1S_4760793a346f09615cae9fa5ac5124e6/}{about 10 seconds}.
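The on-demand loading strategy described above can be sketched in a few lines. This is a minimal illustration only: the function name, cache shape and per-country file layout are assumptions, not SeismographAPI's actual API.

```javascript
// Sketch of asynchronous, on-demand chunk loading: the global summary is
// loaded at startup, while the large per-country detail files are fetched
// only when first requested and then served from an in-memory cache.
const detailCache = new Map();

async function loadCountryDetail(iso) {
  if (!detailCache.has(iso)) {
    // First request for this country: fetch its chunk lazily.
    const response = await fetch(`data/conflict/${iso}.json`);
    detailCache.set(iso, await response.json());
  }
  // Subsequent requests hit the cache, keeping interaction smooth.
  return detailCache.get(iso);
}
```

Because the fetch happens only on the first request per country, the initial page load stays bounded by the global summary (~1.1 MB in the Conflict Monitoring Map) rather than the full ~21 MB of detail data.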
\paragraph{\textbf{Ease of Use.}}
With an intuitive interface and simple data connectors, \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}} is designed for ease of use in common visualisation tasks and workflows. Data can be loaded directly via JSON, CSV or as an HTML table. We even offer a \href{https://pandas.pydata.org}{Pandas} extension to load \href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html}{Pandas Dataframes} (as JSON) and \href{https://en.wikipedia.org/wiki/Help:Table}{Wikipedia tables}. The library features clear readme instructions and rich documentation.
\section{Conclusion}
We presented \href{https://github.com/conflict-AI/seismographAPI}{\textit{SeismographAPI}}, an open-source library aimed at reducing resource constraints and easing swift data visualisation, thereby improving data-driven decision-making for humanitarian purposes. Future versions will include more data connectors, default charts, more detailed guidelines for deployment and options for switching between different datasets within one map.
\bibliographystyle{ACM-Reference-Format}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 1,886
|
Q: Why is this configurable property not deletable? Configurable properties seem to be deletable:
var o = {};
Object.defineProperty(o, 'prop', {
configurable: true,
value: 'val'
});
delete o.prop; // true
o.prop; // undefined
But it doesn't work in the following case, at least on Firefox and Chrome:
var form = document.createElement('form'),
input = document.createElement('input');
form.appendChild(input);
var elems = form.elements;
Object.getOwnPropertyDescriptor(form, 0)
.configurable; // true <────────────────────── !!!
delete elems[0]; // false │
elems[0]; // input │
(function(){ 'use strict'; // V
delete elems[0]; // TypeError: property 0 is non-configurable
})(); // and can't be deleted
But this seems to contradict the spec.
The delete operator is defined like this:
11.4.1 - The delete Operator
The production UnaryExpression : delete UnaryExpression is evaluated as follows:

- Let ref be the result of evaluating UnaryExpression.
- [...]
- If IsPropertyReference(ref) is true, then
  - Return the result of calling the [[Delete]] internal method on ToObject(GetBase(ref)) providing GetReferencedName(ref) and IsStrictReference(ref) as the arguments.
So the result of using delete depends on [[Delete]]. Now let's see what [[Delete]] does:
8.12.7 - [[Delete]] (P, Throw)
When the [[Delete]] internal method of O is called with property name P and the Boolean flag Throw, the following steps are taken:

- Let desc be the result of calling the [[GetOwnProperty]] internal method of O with property name P.
- If desc is undefined, then return true.
- If desc.[[Configurable]] is true, then
  - Remove the own property with name P from O.
  - Return true.
- Else if Throw, then throw a TypeError exception.
- Return false.
Therefore, if the property is configurable, it should be deletable.
But wait, maybe Object.getOwnPropertyDescriptor is a troll and says that a property is configurable, but [[Configurable]] is false. Let's see:
15.2.3.3 - Object.getOwnPropertyDescriptor ( O, P )
When the getOwnPropertyDescriptor function is called, the following steps are taken:

- If Type(O) is not Object throw a TypeError exception.
- Let name be ToString(P).
- Let desc be the result of calling the [[GetOwnProperty]] internal method of O with argument name.
- Return the result of calling FromPropertyDescriptor(desc).
So it also uses [[GetOwnProperty]], like [[Delete]]. Maybe the troll is FromPropertyDescriptor?
8.10.4 FromPropertyDescriptor ( Desc )
When the abstract operation FromPropertyDescriptor is called with property descriptor Desc, the following steps are taken:

- If Desc is undefined, then return undefined.
- Let obj be the result of creating a new object as if by the expression new Object() where Object is the standard built-in constructor with that name.
- ...
- Call the [[DefineOwnProperty]] internal method of obj with arguments "configurable", Property Descriptor {[[Value]]: Desc.[[Configurable]], [[Writable]]: true, [[Enumerable]]: true, [[Configurable]]: true}, and false.
- Return obj.
So no, it is not a troll either. The configurable property of the property descriptor is set to the [[Configurable]] value.
How is it possible, then, that a configurable property can't be deleted?
A: Effectively, configurable properties are deletable.
But there is a big problem: that only applies to native objects, but not to host objects.
As explained in 8.6.2 - Object Internal Properties and Methods,
Host objects may support these internal properties with any
implementation-dependent behaviour as long as it is consistent with
the specific host object restrictions stated in this document.
For those, [[GetOwnProperty]] must behave differently:
If a property is described as a data property and it may return
different values over time, then either or both of the [[Writable]]
and [[Configurable]] attributes must be true even if no mechanism
to change the value is exposed via the other internal methods.
In your example, form.elements is a HTMLFormControlsCollection instance defined by the HTML spec, so it's a host object.
Therefore, the situation is:

- It has a custom [[GetOwnProperty]] which says that the property '0' is configurable because its value may change.
- It also has a custom [[Delete]] which doesn't delete the property, even if [[GetOwnProperty]] says it's configurable.
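This host-object behaviour can be reproduced with an ES6 Proxy (a later language addition, used here purely as an illustration, not what the browser does internally): the deleteProperty trap is allowed to refuse deletion even though the property is still reported as configurable.

```javascript
const target = { 0: 'input' };            // plain data property, configurable by default
const stubborn = new Proxy(target, {
  // Custom [[Delete]]: always refuse, mirroring the host collection.
  deleteProperty() { return false; }
});

// [[GetOwnProperty]] still reports the property as configurable...
console.log(Object.getOwnPropertyDescriptor(stubborn, 0).configurable); // true
// ...yet [[Delete]] refuses. In sloppy mode `delete stubborn[0]` evaluates
// to false; in strict mode it throws a TypeError, exactly as with elems[0].
console.log(Reflect.deleteProperty(stubborn, 0)); // false
console.log(0 in stubborn); // true — the property is still there
```

Note that the Proxy invariants permit this combination: a deleteProperty trap may always return false, it just may not claim success for a non-configurable target property.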
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 416
|
Q: Should I carry on porting FPDF to .NET? I've started to port FPDF to .NET, however I was wondering if its worth the effort.
Has it already been done?
I'm aware of ASP FPDF, but I'm talking about creating a native .NET dll so that any .NET language can use it. I plan to make it public for any one to use.
Further I'm not familiar with PHP, what tips/advice can you give (in terms of porting)?
A: pdfforge (the makers of PDFCreator) wrote fly2pdf, which is a free (for non-commercial use) ActiveX library for creating PDF files.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,773
|
{"url":"https:\/\/chemistry.stackexchange.com\/questions\/88022\/what-are-the-following-equations-called","text":"# What are the following equations called?\n\nOur teacher was writing out the following ways of expressing redox and she used a particular term for what these equations are called. When we write out the following what is this called? Its a way to show what atoms lose and gain electrons, but I forgot what these are called:\n\n\\begin{align} \\ce{Li &-> Li+ + e-}\\\\ \\ce{F + e- &-> F-} \\end{align}","date":"2019-04-26 05:51:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 1, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8487221598625183, \"perplexity\": 1835.8350746702447}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578760477.95\/warc\/CC-MAIN-20190426053538-20190426075538-00209.warc.gz\"}"}
| null | null |
"I think really Asia is peaking in terms of a powerhouse in the world,"
Chef May Chow is the owner and operator of two enormously successful restaurants in Hong Kong, and this year she's serving as a judge for Forbes' 30 Under 30 Asia list in The Arts category.
The 33-year-old, who was voted Asia's Best Female Chef in 2017 by a panel of more than 300 experts, has firsthand experience of what it's like to be a young entrepreneur. She says that one of the biggest challenges she faced as she started out on her own journey was giving herself credibility. The restaurateur remembers that when she was starting out, people didn't want to work for her because they weren't 100% certain about her chances of success.
Now Chow's "Little Bao" restaurant, which opened in 2013 and occupies a small space in Hong Kong's trendy Soho district, sees customers queuing out the door most evenings. According to Chow, the best way to describe her baos (steamed buns filled with a choice of fillings including braised pork belly, fried chicken or fish tempura) is to call them Chinese burgers.
Last year, the ambitious entrepreneur saw an opportunity to expand, and opened a larger restaurant, the more sophisticated "Happy Paradise," described as a modern Chinese bistro. The neon-soaked venue serves up Chow's own take on Cantonese classics like chicken poached in wine. Even the cocktails have a Chinese twist with ingredients including white tea, green sichuan pepper and Chinese almonds.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 2,335
|
Celebrating Clara Schumann's 200th at South Bay
Clara Schumann in 1857, photographed by Franz Hanfstaengl.
The Thies Consort, South Bay Chamber Music Society, Pacific Unitarian Church
There's a considerable group of fine musicians who are siblings or spouses of the acknowledged "great composers." To mind immediately comes Mendelssohn's sister Fanny, who wrote almost as much music (much of it unpublished) as her older brother, while in the 20th century Alma Mahler's small but distinguished output of songs bears witness to what she might have gone on to achieve compositionally had her career not become subservient to Gustav's.
Clara in 1832, aged 13.
But the most significant such figure is surely that of Clara Schumann, linked to not one but two of the all-time "greats"—her husband Robert, and Johannes Brahms, from his first arrival at the Schumann household in September 1853 as a brilliant and aspiring young composer and pianist, to Clara's death nearly 43 years later.
She was born on 13 September 1819. Piano lessons began when she was only four years old, and under intensive keyboard and other musical tutelage from her father she proved to be a child prodigy. Piano works of hers from age 11 onwards were published and continue to be performed, as well as later songs and short choral pieces. Her total output was relatively small, however, and she ceased original composition altogether after Robert Schumann died in 1856.
Robert Thies.
The South Bay Chamber Music Society's first concert of 2020 was thus "A Tribute to Clara Schumann on her Bicentenary" that devoted the majority of its program to some of her most enduring works. Robert Thies, however—doubling here as leader of The Thies Consort as well as SBCMS Artistic Director—began the tribute with an introductory talk that concentrated, not so much on Clara's works per se, as on her career following Robert Schumann's death.
Alongside her tireless curatorship of his compositional legacy and ongoing support for Brahms' work—as well as the small matter of raising seven children (an eighth died in infancy)—she embarked on and sustained for decades a career as a renowned touring virtuoso, establishing, as Mr. Thies noted, the practice of playing from memory (and thus necessitating much additional preparation time, down to this day, for recital pianists!). In addition, from 1878 she taught the piano at a conservatory in Frankfurt, thereby influencing playing technique for subsequent generations of pianists.
One of her last works was the Three Romances for Violin and Piano Op. 22, written in 1853, and the concert opened with these, played with warmth and grace by Jessica Guideri and Robert Thies. Though not hugely differentiated in mood and pace, they are melodically and texturally distinct enough—particularly the Leidenschaftlich schnell (Passionately fast) third, with its teeming arpeggiated accompaniment—to make one regret that she never composed a full-scale violin sonata, a reaction confirmed by the next work, her Piano Trio in G minor Op. 17 from 1846, in which Ms. Guideri and Mr. Thies were joined by John Walz, cello.
Notwithstanding the remarkably youthful Piano Concerto in A minor Op. 7, which patrons of the Long Beach Symphony Orchestra will have the not-to-be-missed chance to hear as part of its 2019-2020 season finale on May 30, the Piano Trio is Clara Schumann's most ambitious, and most successful, large-scale composition.
The Thies Consort: l-r Jessica Guideri, Robert Thies, John Walz.
It follows the familiar four-movement pattern of sonata-design first movement, scherzo-and-trio, slow movement, and fast finale, but without any laboriousness or sense of stretching material to fill a large-scale form. A long-breathed first subject shared between all three instruments, punctuated by a peremptory fortissimo figure, leads to a well-contrasted, rhythmically unpredictable second theme. The exposition is marked to be repeated, and this was blessedly observed in The Thies Consort's fine performance.
The first movement development immediately embarks on a contrapuntal pile-up of overlapping figures, truly exciting in this performance, before leading back to a full recapitulation. The delightfully tripping Scherzo that follows (also marked tempo di menuetto—Clara hedging her expressive bets?), is in ländler-ish contrast to the serious first movement, and gives way to a wistfully lingering Trio, before the Scherzo's return.
The Andante third movement is again in ideal contrast, its songful main theme introduced on the piano and then taken up successively by the violin and the cello. Quickly, however, the movement passes into a dramatically dotted central section, equally concise, and then the main theme returns, led this time by the cello.
Phoebe Jevtovic Rosquist.
There's no let-up in grip and memorability in the Allegretto finale, whose muscular main theme adapts well to fugal treatment in the development, and she maintains interest through to the movement's dramatic end. All-in-all, Clara Schumann's Piano Trio can hold its head alongside any work in the genre.
After the interval, her range was further demonstrated by six of her lieder, affectionately sung by Phoebe Jevtovic Rosquist, who in the absence of printed texts introduced each one before she sang it. Particularly notable were the gentle Liebst du um Schönheit (Do you love beauty?), Op. 12 no. 4; the dramatic Lorelei, where Robert Thies gave full expressive intensity to the driven, Erlkönig-like piano part; and the grandly valedictory Ich stand in dunkeln Träumen (I stood in dark dreams), Op. 13 no. 1.
Jessica Guideri and John Walz rejoined Robert Thies for the final item—not by Clara Schumann, but instead her illustrious mentee Johannes Brahms. This was his Piano Trio No. 3 in C minor, Op. 101, composed in 1886. From an opening as convulsively triumphant as that of the Third Symphony, the four movements of the Trio are as varied in mood as they are concise. Following the grand-scaled but nonetheless terse first movement, the Presto non assai Scherzo scurries in uneasy spasms. The Andante grazioso slow movement for much of its length keeps the piano and strings separate in their expression of its beauties, after which the Finale returns to shadows and truculence before it gathers itself to an energetic but withal uneasy conclusion.
Brahms and Clara Schumann in their latter years.
The Thies Consort gave the work a splendidly committed and coherent performance that crowned a memorable concert. Clara Schumann described the Third Piano Trio as "wonderfully gripping… no previous work of Johannes has so completely carried me away. What a work it is, inspired throughout in its passion, its power of thought, its gracefulness, its poetry." Yes, indeed.
South Bay Chamber Music Society, Pacific Unitarian Church, Montemalaga Drive, Rancho Palos Verdes, 3pm, Sunday, January 19, 2020.
Images: Clara Schumann: (1857) Wikimedia Commons, (1832) Robert-Schumann-Haus; Robert Thies: artist website; Robert Thies Consort: Elaine Lim; Phoebe Jevtovic Rosquist: artist website; Brahms and Clara Schumann: KALW.
Posted by David J Brown
Labels: Brahms, Clara Schumann, David, David Brown, David J Brown, Pacific Unitarian Church, Robert Schumann, Robert Thies, South Bay Chamber Music Society, The Thies Consort
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 9,021
|
Q: VisualSVN - set up repository and set access to use Basic Windows Authentication - login results in 403? I've set up my repository using VisualSVN Server, imported an existing repository, and set the user access rights to use Windows Authentication (Basic). I then try to access the URL of the created repository; after entering my username and password, I am immediately greeted with a 403 Forbidden notice (even though I have granted myself both Read and Write privileges using the VisualSVN Server Manager console).
I've tried using VisualSVN's own username/password combo, and this works successfully. It's only when authenticating via Active Directory, and entering the username and password correctly, that I get this 403 (of course, if I enter it incorrectly, I am prompted for a username and password once more).
Can anyone point me in the right direction?
Cheers
A: I guess that when you installed VisualSVN Server you selected the Subversion authentication and authorization type and later switched to Basic Windows authentication. In that case VisualSVN Server does not automatically create the "VisualSVN-WinAuthz.ini" file that is required in each repository's /conf directory. This is currently by design, for security reasons, but the behavior is going to be improved in future releases.
So I suggest you check VisualSVN Server's event log:
Start eventvwr.msc | Applications and Services Logs | VisualSVN Server log.
What error event do you see there?
If the error is
Failed to load the AuthzVisualSVNReposRelativeAccessFile: Can't open
file 'C:\Repositories\MyRepo\conf\VisualSVN-WinAuthz.ini': The system
cannot find the file specified. (OS 2)
Then you have to create the file which contains the list of authorization rules. To do this you can go to Security properties of a repository in VisualSVN Server Manager console and add / remove any account from the list. This will force VisualSVN Server to create an empty authorization file and global permissions will start working properly.
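For reference, VisualSVN's per-repository access file uses the standard Subversion path-based authorization syntax. A minimal hand-written sketch might look like the following (the Windows account names are placeholders, not taken from the original question, and the exact account-name format may differ in your domain):

```ini
; Hypothetical minimal VisualSVN-WinAuthz.ini
; (standard Subversion authz syntax).
; Grant read/write on the whole repository to one Windows account
; and read-only access to everyone else.
[/]
DOMAIN\jsmith = rw
* = r
```

In practice, letting the VisualSVN Server Manager console generate this file (as described above) is safer than creating it by hand, since the server then controls its encoding and location.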
A: I faced the same issue but resolved it. The error I saw was:
Failed to load the AuthzVisualSVNSubversionReposRelativeAccessFile: An
authz rule refers to group '@ReleaseTeam', which is undefined
(220003) APR does not understand this error code
192.168.10.131
Error meaning:
This error shows that no group named ReleaseTeam exists on the new server. The backup taken from the existing server refers to users and groups that were created there earlier.
So I suggest you check VisualSVN Server's event log:
Start eventvwr.msc | Applications and Services Logs | VisualSVN Server log.
What error event do you see there?
If the error is:
Failed to load the AuthzVisualSVNSubversionReposRelativeAccessFile: An
authz rule refers to group '@ReleaseTeam' (a user or group name from the
existing VisualSVN Server), which is undefined
Steps to follow:
Find the user or group name and create the same group name or users in the new VisualSVN Server. This should resolve the error.
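The undefined-group error above can also be fixed directly in the authorization file, by defining the group before any rule references it. A sketch in standard Subversion authz syntax (the member account names are placeholders, not from the original answer):

```ini
; Define the group that the migrated rules refer to.
[groups]
ReleaseTeam = DOMAIN\alice, DOMAIN\bob

; Existing rule that previously failed because @ReleaseTeam was undefined.
[/branches/release]
@ReleaseTeam = rw
```

Subversion resolves `@ReleaseTeam` against the `[groups]` section, so every group mentioned in a path rule must be declared there, or the whole file fails to load — which is exactly the event-log error quoted above.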
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,304
|
From https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-5-section-5-1-polynomial-functions-and-models-5-1-assess-your-understanding-page-338/21

College Algebra (10th Edition), published by Pearson
Chapter 5 - Section 5.1 - Polynomial Functions and Models - 5.1 Assess Your Understanding - Page 338: 21

Answer: It is not a polynomial.

Work Step by Step: By definition, a polynomial is a function containing only constant terms and terms in which x is raised to a nonnegative integer power. $\frac{1}{x}$ is the same as $x^{-1}$. Because x is raised to a negative power, the expression is not a polynomial.
| null | null |
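The rule in the record above — every exponent of x must be a nonnegative integer — can be sketched as a small check. This is an illustrative sketch, not from the original text; terms are represented as a hypothetical mapping from exponent to coefficient.

```python
# Hypothetical sketch of the textbook's rule: an expression is a
# polynomial only if every term raises x to a nonnegative integer power.
# Terms are represented as {exponent: coefficient}.

def is_polynomial(terms):
    """Return True if all exponents are nonnegative integers."""
    return all(isinstance(exp, int) and exp >= 0 for exp in terms)

# x^2 + 3x + 1 -> exponents 2, 1, 0: a polynomial
print(is_polynomial({2: 1, 1: 3, 0: 1}))  # True
# 1/x = x^(-1) -> negative exponent: not a polynomial
print(is_polynomial({-1: 1}))  # False
```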
Martti Välikangas (a Fennicized form of the surname Buddén), born 1 August 1893 in the rural municipality of Kuopio, died 9 May 1973 in Helsinki, was a Finnish architect. He graduated as an architect from the Helsinki University of Technology. In 1921–25 he made study trips to Italy, Spain, the countries of North Africa, France, Germany, and Scandinavia. Välikangas founded his own architectural practice in 1920. He was chief architect of the National Board of Building from 1937 to 1940, and in 1942–44 he led the state-run reconstruction work in the city of Viborg.
He is buried at Sandudd Cemetery (Hietaniemi) in Helsinki.
See also
Olympiabyn (the Olympic Village, Helsinki)
Sources
Finnish architects of the 20th century
Men
People from Kuopio
Born 1893
Died 1973
Buried at Sandudd Cemetery
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 318
|
Art In Conversation
"If I'm working on a quilt that was presumably made by a group of women 100 years ago, there's something to grapple with in that conversation."
Portrait of Sanford Biggers. Pencil on Paper by Phong H. Bui
Marianne Boesky Gallery
Soft Truths
Bronx Museum Of Art
Sanford Biggers: Codeswitch,
September 9, 2020 – April 5, 2021
For so long, I waited for the conversation that follows. I started in Columbia University's MFA Visual Arts program in 2016. It happened to be when Sanford Biggers was beginning to leave his position as an educator. The celebrated artist—and the 2017–18 recipient of the prestigious Rome Prize—was taken by the demands of his ever-expanding practice. I remember sitting in an administrative office in that school, mystified by a small silk screen, Lotus (125th) from "The Floating World" (2013). It carried the familiar themes of Biggers's unique practice: the lotus, abstraction and geometry, history and erasure, the power of materials, the language of quilts, the multitude that is the African American experience. I remember sitting there, as an anxious graduate art student wondering how to deal with and care after my own historical marginality. I found myself under the spell of this work. In this snippet of his work, I encountered what I longed for and what had begun to feel impossible. His work carried deep wisdom without ever negotiating its true urgency. It stared back at the history of oppression and claimed space to echo all that had been silenced. I needed to talk to this artist. Our paths never crossed at Columbia.
We finally met on the occasion of two major exhibitions of his work: Soft Truths at Marianne Boesky Gallery which focused on his most recent body of work, and Codeswitch at the Bronx Museum which is a survey of more than a decade of his quilts.
Sanford Biggers, whence/wince, 2020 Antique quilts, charcoal 149 7/8 x 91 3/8 x 98 in. Courtesy the artist and Marianne Boesky Gallery, New York.
Yasi Alipour (Rail): Before turning our attention to your current exhibitions, I want to ask you about what makes your work incredibly significant to me: In your practice, you delve into materials and forms, and through this work, you engage with histories of marginality. It's rare; your work is as "formal" as it is "conceptual"—though both words feel insufficient here. The concerns of the work are always urgent, and yet, they demand serious attention. I keep thinking about the "John Cage meets Sun Ra" concert (1986). That album never fails to amaze me and not just because it's a historic meeting of iconic figures. I'm drawn to the way the improvisation of the two leads them to silence. I know they both cared deeply about silence (and all that was inaudible to the Eurocentric definition of music). But I am also interested in how silence was different for each of them: Cage's commitment to Zen Buddhism and Sun Ra's fascination with Egyptology and Afro-mysticism.
I like thinking about your work in relation. Your work is an extraordinary meeting of meditation and improvisation. There are the obvious facts, that you closely studied meditation in Japan and that you have continuously practiced music and live performance, especially with your conceptual band, Moon Medicin. But it goes deeper. Both elements feel key to all aspects of your practice as a multidisciplinary artist.
Sanford Biggers: Yeah, I see them both, partners, a duet, in some ways, because a part of improvisation has to do with being very mindful, present, and extremely open. And to trust your instinct on when to collaborate with the other. Like with Moon Medicin, it's often a lot of listening before I figure out what I can add, put in, or interject. And sometimes the silence is just as important as the sound, which I think is actually very poignant in that Cage and Sun Ra collaboration. They both really work in tandem. I don't think there is a clear distinction between improvisation and meditation. I think they really feed off of each other.
Rail: Right, there has been a false distinction created between the two, especially if we approach them through the gaze of white America or Western hegemony. In reading through some early writings on your work, I found these ignorant responses basically asking, "how can your work be both Buddhist and political?" The racist problem was twofold: whitewashing Buddhism, meditation, and mindfulness by taking them out of context and exotifying them as Eastern, apolitical, and "passive." And the other was the violent politicization of the Black body through the white gaze. Your work is far beyond these violent divides.
Still, there is so much attention and care you put in the treatment of materials in your work—from painterly and sculptural decisions, to your treatment of music, and even concepts. There's something "meditative" in that sense. As a viewer, it felt like the work demanded that I remain in the "now." And then there are aspects of your work—from Moon Medicin to the way you deal with history—that makes me think about the intensity of conversations that happen in real improvisation when suddenly there is urgency in our different understandings of the past and the need to desire a future. Your work seems so deeply in conversation with the long history of Black radical aesthetics, hip-hop, and Afrofuturism.
Sanford Biggers, Chorus for Paul Mooney, 2017. Antique quilt, fabric, spray paint, acrylic, fabric treated paint. 75 x 77 inches. Courtesy the artist and Marianne Boesky, New York.
Biggers: I think it's about the blurry lines in between all of those. I like to think that we're somewhere in the midst of a simultaneity. Past, present, and future are not in vacuums. They're all in relation to each other. I think about what you said about the early writings about my work and how they were stuck in a binary—of being Buddhist or hip-hop, or political or passive. And it's problematic because things are not that cut and dry. I did a solo exhibition around 2004–2005, and the title of that show was both/and not either/or. I was really addressing all of this stuff very directly in the past, but oddly enough, I don't think people could hear it because they were expecting a specific agenda and projecting their preconceptions onto the work. And I see that in a lot of the early writings as well. Critics that compared me to Kara Walker and David Hammons because that's the tone with which they want to get their information from a Black artist. And I think there's obviously subtlety, nuance, or silence that was overlooked at that time because somehow that was not attributable to the vernacular of an African American artist. I think it was very short-sighted. But finally, the language is starting to catch up. It's interesting you mentioned Afrofuturism, as it has become a buzzword of the moment. But Afrofuturism, as a concept goes back to the early '90s in critical writing with Mark Dery, but as a creative trajectory it goes back decades before that. So, I see Afrofuturism as part of a continuum that goes from John and Alice Coltrane, Sun Ra, Sam Delany, Octavia Butler, P-Funk, and Haile Gerima through Lovecraft Country and Watchmen with Regina King, and countless visual artists since at least the '50s and '60s. For me, that simultaneity of past, present, and future is always involved in my work. I even consider myself a collaborator with history, making work in the present to be unpacked somewhere in the future. 
And there's a rarely used phrase, in Japanese Onko chisin, which means—similar to the Ghanaian idea of Sankofa—learning from the past to inform the present, to then change the future.
Rail: So important. So to begin discussing the current exhibitions, can we talk about the idea of a "power object." It has remained consistent in the multitude of your practice. In '98, while you were still in undergrad, you got a gig assisting this exhibition at the American Museum of Natural History called Sacred Art of Haitian Vodou. Part of your job was to help the visiting priests from Haiti find the needed material to activate the altars in the exhibition. To do so, they would need to meet with the local New York Vodou priests. I am interested in the notion that the activation of the altar requires meetings; relations to be built among these communities. I found it interesting that you then became witness to these dialogues between the two Afro-diasporic communities. The idea of the "power object" and the relations required for "activation" still feels important in the way you deal with material.
Biggers: The "power object" for me is the notion that objects—and even images at this point—can be imbued with a psychic power—whether that be negative, positive, or myriad—that evokes specific charged cultural and social references. The interesting thing about power objects, however, is that they're not fixed. Symbols, archetypes, imagery that have been around for millennia can be changed by events, usage, and propaganda; the ancient Indian swastika which symbolized prosperity and well-being before being appropriated by the Nazi Party in the 1930s comes to mind. My project is really to conflate what we perceive to be the original function and meaning of historical objects with new significance from/for contemporary culture, world, and society; how do we perceive these things and the disputed notions of the history of those objects? In particular, in Soft Truths, you have these marble pieces where I'm combining aspects of Greco-Roman sculptures with African sculptures from various parts of the continent. It's really a total mashup. We have all heard the stories and are conditioned to believe that monochromatic marbles are the pinnacle of Western aesthetics, ingenuity, and representation. But the reality is, a lot of those monochromatic sculptures were at some point painted and adorned. Our understanding of the "white" sculptures—and the use of that as propaganda for Western Europe and its expansionist project—is false. The same happens when you look at modernism and its inspiration from the images of African objects that were imported to Europe around the turn of the 20th century via books and images. They were denuded to monochromatic brown and black wooden sculptures. But in reality, those objects were also adorned with raffia, beads, colors, and pigments. You have a whitewashed version of history in the classics and a Blackwashed version of history in modernism. The basis of that education and history is a falsehood. 
You can see how that extrapolates and becomes a metaphor for larger societal conditions. We're living in a moment of soft truth if there's truth at all. I want to explore some of those concepts in this show, but doing it not only through art historical reference but also material. But beyond these conceptual aspects I still try to allow for some valence, play, and improvisation so that the end result is less calculated than it is intuitive.
Rail: In some of your earlier work using African sculptures, you often emphasized your interest in the "dubious origins"—versus authenticity. That still feels true here. To refuse the monopoly of the hegemonic museum (and art history) in authenticating. I appreciate how here you stay with the dubious origin of all of them.
Installation view: Soft Truths, Marianne Boesky Gallery, NY, 2020. Courtesy Marianne Boesky Gallery, New York.
Biggers: I think about it like this: the objects themselves, if they still serve a function within a culture or society or family, does that make it less or more "authentic"? And provenance often in the Western usage of it ultimately ends up: who collected that, who ransacked it, who put it in what wing of what museum, and thus it is named the collection of that person and they are the authority. When in fact, something that could have been made three weeks ago may actually hold more magnitude, power, or real connection to the culture it came from than that object from 100 years ago. Who's to really say? When I say dubious origins, I also say that because the Greco-Roman and neoclassical sculptures that I use are usually knockoffs in the first place. When we go to a museum, like the Louvre or the Met, are we seeing the original piece? We're usually seeing a copy, and that copy was usually inspired by yet another totally unique sculpture. The pose might have been lifted from one sculpture, and then applied and is then called "Athena." And then 200 years later, it's remixed by somebody else and called "Slave Girl." This happens all the time. You see the same practice in music. That's why I often liken this to the sampling and chopping and screwing of recorded material. I'm taking full advantage of those dubious origins by concocting and creating these pieces that are in Soft Truths. For example, many of the masks that are on the figures in this exhibition are not from one singular group. They're usually combinations of multiple forms from different cultures. Thus, all are from dubious or unknown origins. And the contrary effect is that I'm actually creating a unique object because there's no other object that has all of those attributes. For me, these works are objects from a future ethnography. The point being that 100 years from now, when you decipher the elements of one of these pieces, you can't find one true origin and that complicates the conversation.
Rail: As someone who has spent too much time thinking about the violent history of ethnography and the bitter limits of autoethnography, your idea of future ethnography seems so important to me. In Soft Truths, there's this play that messes with notions of the sanctity of the "Greco-Roman" and the "African" past. You take it further, though. There are also some quilts in Soft Truths, which to me implicate Americana in the same way. The Bronx Museum's exhibition, Codeswitch, really engages with that. You commit to a narrative that is as historic as it is mythical: the idea that there were quilts imbued with codes to assist in the Underground Railroad.
Sanford Biggers, Orpheus, 2020. Antique quilt, assorted textiles, wood 82 x 83 1/2 in. Courtesy the artist and Marianne Boesky Gallery, New York.
Biggers: The narrative of them supposedly being used on the Underground Railroad was the jumping-off point. And through that thought process, I started to imagine Harriet Tubman as an astronaut, basically navigating the stars to take people to freedom. That's just sort of how my mind works. That's how I initially got into working with quilts and seeing them as a material that had a lot of potential and power. They are fecund with meaning and history, but beyond that, they begin to engage with other formal, gender, labor, usage, representation, and appropriation issues. Just think about quilts being seen as craft and rarely with the vaunted status of high art. When the Whitney had the exhibition The Quilts of Gee's Bend (2002), it really moved me. In my opinion, they were on par if not superior to several paintings in the collection. And what it meant, having an all-women's art show—disenfranchised women from Alabama. What are the political and the museological ramifications there? All of that was interesting. In the Bronx Museum's Codeswitch, I am showing 50-plus works. I'm working out all of these ideas in those pieces. It's a diaristic view of how I've been working through the materials and references. By the time we see the quilts in Soft Truths, I'm taking a very different approach. It's less additive, more subtractive. It's really mining information, extracting it as opposed to charging them with it. And that's because I have a trust for those materials. They are operating on a certain plane; I feel like I am a kind of interlocutor with them at this point.
Rail: In Codeswitch, I was very aware of the main material being fabric. It felt like each piece could hold, protect, and warm the body. In Soft Truths, I kept thinking of the coldness of the dominant material, the marble.
Biggers: To decipher most of my work, you've just got to read the titles. [Laughter] Soft Truths is really a play of opposites. It's the hard and the soft, it's the sort of detached coldness of marble, so to speak, and the warmth and inviting sense of the quilt, it's about volume and about voids. All of that is at play there. And the titles in the show each speak to their own sort of narratives. For example, Lady Interbellum (2020). Any music or pop fans out there know about the group Lady Antebellum, which for some reason, after 12 years in a hit career, didn't realize that the word "antebellum" has very negative connotations for many people. So, they changed their name to Lady A. And then they got into a big dispute because there was also already a Black female performer in Las Vegas named Lady A, who had been going by that name for 20 years, and a whole legal suit ensued, and Lady Antebellum said that they were going to let her keep her name and then they reneged on the whole deal at the end of it. The colonizer took the name or may have made her pay to use it. So, I call this one Lady Interbellum also in the sense that, politically, we could be interbellum at the moment as we speak. And the other conceit behind Soft Truths and the "Chimera" works featured, is that I did a lot of the thinking for the show while I was at the American Academy in Rome. I went there with the intention of studying the phenomenon of fallen empires; looking at the past to make sense of the present and understand the future. There's also the double-height quilt called whence/wince (2020). Depicted are four of the corner columns of the Parthenon ruins, draped down, flaccid, lying on the floor splayed out of red, white, and blue quilts. It's very basic, looking at the pillars of democracy and what they stand for—if they stand for anything—at the moment. 
Other works in the show have mythological, biblical, and cinematic references, but the interplay between the works and titles feels more like a cohesive installation than a group of various objects.
Sanford Biggers, Lady Interbellum, 2020. White marble on custom cedar plinth. Marble: 62 1/2 x 45 1/2 x 41 3/4 in, Plinth Stack: 12 1/2 x 51 1/4 x 51 1/2 in, Overall: 75 x 51 1/4 x 51 1/2 in, 190.5. Courtesy the artist and Marianne Boesky Gallery, New York.
Rail: Since we are discussing the role of language in your work, it feels like a good moment to move towards the Bronx Museum. It's an important show. In Codeswitch, I kept thinking about the hand, not only your gestures but all the traces that could easily belong to those who had worked on these quilts before you. There is a lot in Codeswitch that feels beyond language: the hand, hapticity, or even touch. In your engagement with materials and the intensity of the histories they carry, there is something unique in what you achieve; I kept walking around the show, thinking, "you trust the hand."
Biggers: Yes, I trust the hand, but I also trust the material. I consider myself basically a late collaborator with groups of, typically women, quilt-makers who have worked on them over a century ago. A lot of these quilts were on their way to the dustbin. Some I purchased, some were gifted to me, and all of them were basically living in the shadows. Sometimes I sit with them for years at a time before ever making a mark on them. Part of that is to learn and figure out how to navigate each particular quilt. And in that way, there is a lot of trust. I have to sit and consider each of these very intensely before making any marks on them. Sometimes I'm totally guided by the pattern or the color combination of the quilt itself. Sometimes I have to mine and go a little bit deeper and try to drag something out. Sometimes if I squint my eyes, I see the painting already happen. Other times, I have to put it away for months at a time before I come back to it. Some of the more successful ones have taken me years to do because they just kick my ass for three or four years at a time, and then I'd have to return to it. And then some of them, you wake up, and it's like, "Oh, this has to happen right now." This work is less heady in some ways than some of my projects. It started from a very heady conceptual place, but eventually, it became a deeply formal, hand and material-based place.
In the main gallery of the Bronx Museum, on the ground, is the video Mandala of the B-Bodhisattva II (2000). This is the oldest piece in the show. It is a dance floor that I made out of hand-cutting pieces of rubber tile to create, basically a mandala. And above that mandala, we installed a video camera. I was working with David Ellis (aka Squirm) on this particular one that you see here and it was the official dance floor for the Battle of the Boroughs Breakdance Competition that happened at the Bronx Community College in 2000. And we videotaped the event, and I was later asked to be in a show at the Bronx Museum by Lydia Yee and Franklin Sirmans, called One Planet Under A Groove (2001). And that was one of the pieces that was in the show. The 20-foot square dance floor was exhibited as well as a very small monitor that had the video, which showed the dancing. In Codeswitch, however, we've projected that video documentation down onto a platform on the ground. Now it looks like the dancefloor mandala itself. But you also see the dancing take place on it. That's in there not only because of the Bronx connection but the patterns that I was exploring in the early mandala works from the late '90s to early aughts really was the gateway to going deeper into pattern. The quilt then became the next vessel of exploring pattern. I see them as totally related.
Installation view: Sanford Biggers: Codeswitch, The Bronx Museum of the Arts, 2020. Photo: Argenis Apolinario. Courtesy Bronx Museum of the Arts.
Rail: Definitely! Also, the geometric forms used in the mandalas and in the quilts, are in both cases dealing with the body in movement—from the dancers to the secret guides for the fugitives. In an old interview, you once mentioned Buddhist monks who knew the mandala so well that they no longer needed to draw them; they would create the mandala by the movement of their body. I imagine a group, a ritual, a dance, a form of knowledge that is beyond writing, or even drawing.
In Codeswitch, what stood out to me besides the abstract forms was the tree. My mind went straight to your iconic work Blossom (2007)—which returned to view at Brooklyn Museum this summer. I'm thinking about that piece, the tree running through the piano, and your composition of Billie Holiday's "Strange Fruit." In discussing that work, you used to talk about how the tree is both where Buddha found enlightenment and also points to the brutal history of lynching. The tree persists in your work.
Biggers: All the works in Codeswitch really are like sketches of many of my sculptural installations. There are references to pianos, kimonos, trees, and silhouetted figures and forms. This is basically the palimpsest for reading several other works. That's why I call it the "Codex" series. You recounted the story that I was telling about monks often not even drawing or representing a mandala but remembering movements and dancing with the circle. I was really struck by the image. It put in my head this idea of sound pieces with no sound, kinetic pieces with no movement, performative works that don't actually "perform" in the literal sense. But with that being said, I consider these quilts to rest somewhere on that precipice because they are performative objects. Your body relates to them immediately because they are made for your body. But once they're on the wall with that removed, they start speaking a different language: that of drawings, tapestries, and paintings. So, they're vacillating between multiple spaces.
Rail: This also makes me think of your relationship with early hip-hop. A movement born from the breaks of older records stitched together. That seems to be very related to what you were saying: sound pieces with no sound, kinetic pieces with no movement.
Biggers: Well, the break is like a meditation in the way the rhythms become repetitive. And that's where you go into that transcendent moment. Musicologists talk about that all the time. I often refer to Harriet Tubman as an astronaut, navigating the stars. Similarly, I consider hip-hop DJs to be time travelers because they literally are splicing, parsing, and reversing time. And through that they're causing the audience and the crowd to go into breakdancing, improvisatory fits.
Rail: That kind of blew my mind. This feels like the perfect moment to discuss Moon Medicin.
Biggers: The band itself is composed of mainly five people: Mark Hines, Martin Luther, Jahi Sundance, André Cymone (Prince's original bassist on his first few albums), and myself on keyboards. Every performance is a bit of a different playlist. The common attributes per performance are a large video presence and audience participation. And we usually have masks on. And we usually have guests that join us, and they range anywhere from musicians to poets, actors, and comedians, including Rich Medina, Imani Uzuri, Swiss Chris, among others. Meshell Ndegeocello joined us when we were last at the Kennedy Center.
Rail: For me, it felt significant that Codeswitch began with the Moon Medicin video, made in collaboration with Terence Nance, the mastermind behind Random Acts of Flyness (2018). It showed your back, in the midst of the woods—an image that has persisted since some of your earliest works. But here, the nude Black body is covered by your quilt. The Moon Medicin sound piece spilled into the exhibition space. In thinking about Moon Medicin, I keep returning to this incredible interview you did years ago with Terry Adkins in BOMB Magazine, who even collaborated in one of your earliest costumed performances. You two just seem to have so much care and respect for each other. There's a moment when you say:
Ishmael Reed! Saying that all black people, especially black men, are always under the spotlight—whether it is on a stage, in the spotlight of society, a police car spotlight, or an interrogation light. By being entities in this society we are always performing. I'm sure it extends beyond African Americans or black men, but this was the specific quote and, with that in mind, I felt there was no real difference between my doing a performative act in a gallery or a museum and my standing up and speaking my opinion in an all-white academic environment.
In many ways, Moon Medicin echoes questions that you ask throughout your practice, but somehow it also feels vastly different.
Biggers: I think Moon Medicin offers a different voice, and it operates in a slightly different way. Music seems to have an ability to speak to people differently than visual art. On some levels, it doesn't need as much translation. And it can reach more people because of how its modes of distribution and communal events affect the way we understand and receive that information. Through Moon Medicin, I'm able to very spontaneously tackle a lot of issues that I don't necessarily address in my visual practice. Some I do, but many I don't. It's really influenced by Dada, black humor (in the Surrealist sense), and the Theater of the Absurd. At times, I think my work can have different tonalities, even in my visual practice, but when it comes to the performative realm with Moon Medicin, it can go very dark, very dank, and very dirty. But at the end of the day, it's also trying to find a sense of liberation from the confines that society and industry project onto artists of color, and liberation through the nonsensical, the nonlinear, and the non-narrative. I think that's the space that it operates in, and even as a group, it's got a loose formation. We do collaborations with all kinds of people. It's an open-source collaboration tool. I think of it as the third pillar of my practice.
Rail: That brings me to the beginning of our conversation. What is meditation if we refuse to whitewash, individuate, and depoliticize it? What is improvisation if, rather than consuming it as entertainment, we instead focus on the deep listening it requires?
Biggers: It's sort of funny. When you get into deep meditation, there are obviously many forms. There's eating meditation; there's walking meditation, sitting meditation. And I think existence as a marginalized person in the US, every day when you're walking under oppression, that's walking oppression, that's eating oppression, sleeping oppression, it's a constant. So maybe there is a correlation.
I'm thinking about that in relation to what we were talking about with Moon Medicin—and even abstraction to a degree—it is really finding ways to express that are not based on the binary or in opposition to something else but expression that is autonomous and of its own. I often go back to Glissant, creolization, and his ideas on opacity.
Rail: That's so interesting. Your work is honestly a rare experience where I feel like I am witnessing something of the essence of Glissant's creolization. How to create space in the language of the oppressor to talk about oppression, to talk beyond it, to talk to each other? I also think about the way your work refuses to conform to mediums, disciplines, or even expectations. It feels like what Glissant emphasized as the marginalized people's right to opacity. You have this quilt in Codeswitch that I can't stop thinking about, Tyranny of Mirrors (2017). As abstract as it is, it does something rare. It feels like the conversation that happened between John Cage and Sun Ra through improvisation, through silence. I'm also thinking about a room packed with people, listening and dancing in the break.
Biggers: I think that is one of the things that makes it very difficult to document what Moon Medicin does because it's so much about the experience. We usually don't just exist on the stage; we're in the audience, the audience comes up on the stage, there's a porous boundary. I think what is important about mirroring and performance is that I am not so interested in a separation between the performer and the audience; I'm actually more interested in seeing how everyone is projecting onto each other and how everyone is performing—it's a very complicated dance. And I guess that somehow addresses what you're saying with improvisation in my mind. For me, as an improviser, it is focused on the external data coming at you and responding to that, as opposed to in solidarity—creating my own soundscape that's going outwards.
Rail: There's so much trust there and throughout your work, Sanford, in the audience, in the work, even in time. To go back to your conversation with Terry Adkins, I want to ask you about the generational conversation. It's been something that has occupied my mind in thinking about contemporary art that deals with marginalized histories. You have this older work The Cartographer's Conundrum (2012), where you made work inspired by, in conversation with, and in response to the practice of your cousin, John Biggers, a scholar, an Afrofuturist, a master painter, and a muralist. Generational conversations are present throughout your work, from the "power objects" to the resampled quilts, to your dialogues with history itself. I feel like I really long for such conversations. But often, these conversations feel to be surrounded, to be interrupted by structures that are violently eager to simplify, erase, and consume.
Biggers: Well, let's consider one of the templates for writing about artists: finding which great white male artist their work descended from or is akin to. I've always taken issue with that gesture, knowing there have always been so many other fantastic artists from myriad backgrounds to refer to or be influenced by. And to be fair, I'm happy to find inspiration from all types of artists, but the legacy of art made by women, people of color, and those that did not get their due, even when they were in the same room with the guys we always hear about, is crucial. Before graduate school, I was very fortunate to have artist mentors who took me under their wing and introduced me to other artists, curators, and art professionals. There was a community and network that spanned the country, and we went to each other's shows and supported each other long before the recent attention that non-white male artists are receiving today. We knew we had to do that because we weren't necessarily going to get that attention or even acknowledgment from the more mainstream art-industrial complex. I think my generation really picked up the torch in that respect, and we regularly refer to artists ranging from Edmonia Lewis and Elizabeth Catlett to William T. Williams, David Driskell, and Robert Colescott. So, by the time my peer group and I started receiving attention, we were able to drop names and bring up that history. In the MASS MoCA show, there obviously was a bit of that involved. I was also taking some inspiration from my cousin John Biggers's work, but since he was also doing work related to pattern and sacred geometry, I've been continuing a similar line of inquiry across generations.
Sanford Biggers, Incidental Geometry, 2017. Antique quilt, birch plywood, gold leaf. 45 1/2 x 37 1/2 x 16 inches. Courtesy the artist and Marianne Boesky Gallery, New York.
Rail: I think about generational conversations as physically running to your elder, yelling, "why is oppression continuing?" but also being held in a "how have we been surviving?" It's as intimate and vulnerable as it is significant and urgent. I think it's really special, how in your work, you create these conversations with all their intensity and then bring them to places like MASS MoCA and even Codeswitch at the Bronx Museum. I feel like in the current exhibitions, I have witnessed how people were spending time with the work, activating and echoing generational conversations, with all the frustration, with all the contradiction, with all the love and care.
Biggers: I think the intent is very important. I think about this a lot because I do consider myself a conceptual artist. I'm approaching different ideas and projects with different materials all the time. So, what are the things that I can make consistent while I traverse those different materials? One of them is intent. One intent is shining a light on history and historical figures that probably did not get the type of recognition they deserved at the time they were alive. I am also grappling with material and cultural history. If I'm working on a quilt that was presumably made by a group of women 100 years ago, there's something to grapple with in that conversation. So, my approach to the quilt is never about washing over this thing and implementing my plan. It's about observing and listening to what was laid down before. If I chose to cover an entire panel with one color, it may feel like it's a violent, aggressive type of erasure. But the closer you get to it, the stitching that was actually holding the quilt together is revealed. So, though I've gotten rid of part of the pattern, the line drawing, the really intricate work becomes visible. I think that intent is also visibly why people react to them the way they do, because you can't avoid the hand in any of it.
Rail: You know, the early MASS MoCA piece, Cartographer's Conundrum, made me think of your other works as maps. The quilts may appear as abstraction at first glance, but they unravel, become codes, maps, and carriers of stories, not illustrations, though. It's more like objects that accompany the oral traditions of storytelling.
Biggers: Yeah, they're part of the griotic tradition. I think of them as syntactical as opposed to just strictly visual. Each one of them has its own narrative. And then collectively, they create a hyper-narrative.
Rail: It's interesting, on one level, the quilts are storytellers, carrying histories that cannot be contained in history books. On another, the quilts are maps. That desire to map, to understand land—by journeying through it or by owning it. Then on a third level, the quilts as what warms and protects the body.
Biggers: That's what I'm saying, they're so charged because of their relationships with the body. And the body, of course, is our primary repository of history. So, if you have a quilt that held the body, not only does it have its own history, it has that body's history. All that stuff is resonant in the raw material itself. So, by the time I'm intervening on that, I'm having a very fierce dialogue with a lot of different elements. And, ultimately, there's a faith in the power within that. It's about how much or how little you can do to accentuate what's already there.
Yasi Alipour
YASI ALIPOUR (Columbia University, MFA 2018) is an Iranian artist/writer/folder who currently lives in Brooklyn and wonders about paper, politics, and performance. She is a teacher at Columbia University and SVA and is currently a resident at the Sharpe Walentas Studio program. For further information, please visit yasamanalipour.com.