\section{Introduction} Plastic pollution poses an imminent threat to the marine environment, food safety \cite{BARBOZA2018336}, human health, and eco-tourism, and contributes to climate change \cite{schmidt2017export}. Global plastic production has exceeded 500 million tons, and projections indicate that 30\% of all produced plastic will end up discarded in the oceans \cite{nollkaemper1994land, epa2014municipal}. Researchers have documented a five-fold increase in plastic debris within the Central Pacific Gyre and have shown that plastic pieces now outnumber native plankton 6:1 in abundance \cite{clapp2012rising}. A significant amount of marine plastic (about 80\%) originates from land-based sources \cite{Windom_1992}, most commonly in the form of food containers, such as plastic bags and bottles, and packaging materials. The remaining $\sim$20\% stems from shipping vessel discharges and discarded commercial fishing gear \cite{Windom_1992}. Studies have shown that removing plastic from the oceans substantially benefits ecosystems: it prevents the movement of invasive species between regions \cite{carlton2017tsunami}, prevents degradation into micro-plastics \cite{andrady2011microplastics}, and decreases emissions of greenhouse gases, thereby decelerating climate change \cite{royer2018production}. To understand the spatiotemporal distribution of plastic, we require more accurate methods with reliable and low-cost deployment strategies. Various in situ approaches to ocean plastic monitoring have been proposed, including the use of SONAR/LIDAR to map plastic debris \cite{valdenegrotoro2019deep}, human counting via visual methods \cite{van2018methodology}, and debris sampling using fishing nets \cite{rech2014rivers}. However, these methods are labor-intensive, incur high financial costs, and do not cover large surface areas. Furthermore, polymers such as polyethylene and polypropylene develop a biofilm when submerged in water, which influences their sinking behavior \cite{Kaiser_Kowalski_Waniek_2017}. Any polymer whose density is sufficiently increased by biofilm growth sinks beyond the reach of surface sampling devices such as manta trawls. As a result, these surface sampling limitations lead to underestimates of the quantity of floating plastic. Creating an accurate estimate of marine plastic debris therefore requires alternative methods that investigate the distribution of positively buoyant plastic across the entire water column. Recently, several methods using computer vision and modern deep learning technologies to quantify marine plastic debris without physical removal have been suggested \cite{oceancleanup, fulton2018robotic}. A study in Earth and Space Science illustrates a method using the two-stage Faster R-CNN model to actively monitor and identify surface plastic as it floats down a river \cite{oceancleanup}. This approach does not account for the sinking polymer problem but shows that, on average, an automated method detects 34.6\% more plastic than human visual counting does. Remote sensing of plastic litter thus provides a promising new and less labor-intensive tool for the quantification and characterization of ocean plastic pollution \cite{rs13173401}.
A research team at the University of Minnesota developed a computer vision model specialized for marine plastic detection in deep-sea environments \cite{fulton2018robotic}, which demonstrates that quantification across the water column can be achieved and exemplifies, to great success, the pairing of object detection models with AUVs. The AquaVision project \cite{PANWAR2020100026} shows that object detection models can reach high levels of precision utilizing open-source datasets and one-stage approaches such as RetinaNet. Since AquaVision was trained on the land-based TACO dataset, it also indicates that a computer vision model trained on land-based images of plastic can detect similar types of plastic in a marine environment. The Ocean Cleanup group has demonstrated that computer vision models can detect floating plastic debris via cameras attached to above-water vessels \cite{rs13173401}; their results show that macroplastics can be successfully quantified for comparisons across methods. Unlike these recently proposed algorithms, which specialize in monitoring floating marine plastic \cite{oceancleanup}, deep-sea environments \cite{fulton2018robotic}, or plastic in land-based imagery \cite{PANWAR2020100026}, our object detection model (DeepPlastic) utilizes a training set composed exclusively of marine-based plastic images and performs equally well across the entire water column. \begin{figure}[ht] \centering \includegraphics[width=0.90\linewidth]{figures/OceanIllustration_Sharp.jpg} \caption{Concept of real-time plastic detection via AUVs equipped with cameras and DeepTrash vision} \label{fig_concept} \end{figure} In this study, we tested four state-of-the-art deep-learning architectures, Faster R-CNN, Single Shot MultiBox Detector, YOLOv4-Tiny, and YOLOv5-S, and report their performance for inferring marine plastic debris in real-time. The main results are described as follows: 1) the model's precision and accuracy in identifying plastic debris, 2) the method's ability to distinguish marine plastic debris from similar-looking non-plastic objects, and 3) a generalized model capable of detecting marine plastic in most oceanic environments. The results show that deep learning models can identify plastic with high accuracy while operating at a rate that supports real-time applications, such as autonomous underwater vehicles (AUVs), for at-scale marine plastic quantification and monitoring. \section{Related Work} Increasing demand for identifying and removing plastic from the world's waterways has led to a surge of research in computer vision and AUV solutions. A team of researchers at the University of Minnesota robotics lab recently experimented with AUV deployments for identifying deep-ocean marine plastic debris \cite{fulton2018robotic}. Another growing trend has been to utilize deep learning and computer vision to automatically identify floating marine plastic on river and ocean surfaces \cite{PANWAR2020100026}. Additionally, AUVs have been used for environmental surveillance \cite{5509604}, mapping \cite{5603860}, and localization of marine plastic debris [20]. Underwater vision technology has been pushed forward by the work of Ge et al. \cite{Ge_Shi_Mei_Dai_Li_2016}, who used LIDAR to localize and map marine plastic debris on coastal beaches.
Further research into implementing LIDAR in conjunction with forward-facing SONAR image models trained by deep convolutional neural networks was conducted by Howell et al. \cite{Kurz_Buckley_Howell_Schneider_2009} and Valdenegro-Toro et al. \cite{valdenegrotoro2019deep}, which resulted in a model capable of detecting underwater debris with 80\% accuracy. Unfortunately, these methods incur high expenses due to retrofitting sonar and requiring an in-house water tank for evaluation. The University of Minnesota robotics lab \cite{fulton2018robotic} annotated and published a dataset of images collected by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) \cite{JAMSTEC}. JAMSTEC released the J-EDI (JAMSTEC E-Library of Deep-sea Images) dataset, which contains marine plastic debris dating back to 1982 and provides data in the form of images and videos. The work presented in this research paper has benefited from the University of Minnesota team, which released close to 3000 annotated images from the JAMSTEC J-EDI dataset. These datasets were used to train our convolutional neural networks (CNNs) to identify features of plastic debris. Cameras, especially video cameras, have found common application in environmental monitoring systems \cite{mock1995underwater, premkumardeepak2017intelligent}. Underwater cameras provide a globally accessible and low-cost quantification aid. Combining object detection models with underwater cameras mounted on vehicles such as AUVs makes it possible to observe and monitor sub-surface plastics in known hotspots worldwide \cite{fulton2018robotic}. By mounting video cameras to AUVs, buoys, and other submersibles, institutions could feasibly quantify macro-plastics, which constitute 90\% of the total plastic mass in the oceans. \begin{figure}[ht] \centering \subfigure[Ocean]{\label{fig:ocean_plastic}\includegraphics[width=0.24\textwidth]{figures/plastic-1.jpg}} \subfigure[Lake]{\label{fig:lake_plastic}\includegraphics[width=0.24\textwidth]{figures/plastic-2.jpg}} \caption{Example images of marine plastic debris from the DeepTrash dataset in different marine environments} \label{fig_plastic} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.95\textwidth]{figures/dp.pdf} \caption{Methodology for Marine Plastic Detection} \label{fig_met} \end{figure*} \section{Network Architecture} Four state-of-the-art object detection models were selected for this work. Each architecture has different benefits and drawbacks, with the main trade-off being speed versus accuracy. \begin{itemize} \item \textit{Faster R-CNN Inception v2}: Faster R-CNN \cite{10.5555/2969239.2969250} is an improvement on R-CNN \cite{DBLP:journals/corr/GirshickDDM13} that introduces a Region Proposal Network to make the network trainable end to end. The network uses the convolutional feature maps to produce region proposals, which are fed to the fully connected layers (in our case, softmax layers) for detection. Faster R-CNN originally uses VGG-16 \cite{DBLP:journals/corr/SimonyanZ14a} for feature extraction, while we instead use the newer Inception v2 \cite{tensorflowmodelgarden2020} as the feature extractor because of its known ability to enhance object detection. \item \textit{Single Shot MultiBox Detector MobileNet v2}: The Single Shot MultiBox Detector (SSD) \cite{DBLP:journals/corr/abs-1805-09501} is another well-known detection model that performs object localization and classification in a single forward pass of the network.
This architecture adds convolutional layers on top of the base network to improve performance. We use a MobileNet v2 implementation \cite{NAGRATH2021102692} for faster inference. \item \textit{YOLOv5-S} \cite{Jocher_Stoken_Borovec_NanoCode012_ChristopherSTAN_Changyu_Laughing_Tkianai_Hogan_Lorenzomammana_et_al._2020}: Unlike the official release of YOLOv4, YOLOv5 is currently in active development; therefore, all YOLOv5-related code and models may be subject to modification or deletion without notice. YOLOv5-S has 7.5 million parameters and 140 layers, and operates at a lightweight 7 MB (14 MB for weights pre-trained on COCO). This architecture uses the Cross Stage Partial Network (CSP) \cite{wang2019cspnet}, trained on MS COCO, as its processing backbone to extract rich, informative features from an input image. YOLOv5 also uses PANet \cite{liu2018path} as the model neck to generate feature pyramids, together with the computationally friendly LeakyReLU and Sigmoid activation functions. The model uses SGD as the default optimizer, but these tests were performed with the ADAM adaptive learning rate enabled \cite{kingma2017adam}. \item \textit{YOLOv4-Tiny} \cite{bochkovskiy2020yolov4}: Inference speeds on YOLOv4-Tiny can reach upwards of 400 frames per second on a 1080Ti GPU, with accuracy, precision, and recall that meet the demands of a production-ready robotics platform. YOLOv4-Tiny uses a CSPDarknet53-Tiny backbone as opposed to the regular CSPDarknet53 network. To simplify the computation process, the YOLOv4-Tiny model uses LeakyReLU as its activation function. \end{itemize}
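To make these trade-offs concrete, the listing below sketches how a one-stage detector of this kind can be loaded and run on a single frame. It is a minimal sketch using the public Ultralytics \texttt{torch.hub} interface with generic COCO-pretrained YOLOv5-S weights; the custom checkpoint path shown in the comment is hypothetical, not the released DeepPlastic weights.
\begin{verbatim}
import torch

# Load the small YOLOv5 variant with COCO-pretrained weights via torch.hub.
# A custom checkpoint (e.g., trained on DeepTrash) could be swapped in with:
#   model = torch.hub.load('ultralytics/yolov5', 'custom',
#                          path='deepplastic.pt')  # hypothetical path
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.50  # keep detections with confidence >= 0.50

results = model('frame.jpg')  # accepts file paths, URLs, or numpy arrays
for *box, conf, cls in results.xyxy[0].tolist():
    # box = [x1, y1, x2, y2] in image coordinates
    print(f'box={box}, confidence={conf:.2f}')
\end{verbatim}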
\section{Methodology} \subsection{Dataset Construction} The dataset was curated by collecting videos of marine plastic from the field in California (South Lake Tahoe, Bodega Bay, San Francisco Bay). The videos vary significantly in quality, depth, and visibility to better represent the harshness of marine environments. After recording, marine plastic captured in the still images was manually identified, with an emphasis on choosing images containing complex object detection scenarios such as variable illumination, noise, and occlusion. Each image was then annotated to prepare it for object detection using the deep learning models. This curation approach ensured that the dataset closely conforms to real-world conditions. To further increase the representation of marine plastics in different locations, images were also sourced from datasets created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) \cite{JAMSTEC}. Annotations were performed using the free tool Supervisely \cite{drozdov} to create the final dataset, which contains $\sim$3200 images. Ocean environments provide a wide variety of visual challenges, so all plastic instances were consolidated into a single class labeled ``trash\_plastic''. We call our final dataset DeepTrash \cite{tata_gautam_2021_5562940} and have open-sourced it to further help research in this field. \subsection{Enhancements of Custom Dataset} The following procedures were implemented to prepare the data for the deep learning models: \begin{enumerate}[label={\alph*})] \item \textit{Dataset Formatting}: The input data, consisting of images and annotation labels for bounding boxes, were converted into either a TFRecords (Faster R-CNN and SSD), PyTorch (YOLOv5-S), or Darknet (YOLOv4-Tiny) format to suit each respective model. The bounding boxes delimited each image's regions of interest based on 2D coordinates stored in the respective annotation file. \item \textit{Image Pre-processing}: To ensure that learning occurs on the same image properties, auto-orient was applied to strip images of their Exchangeable Image File Format (EXIF) data \cite{9108753} so that the models interpret images consistently regardless of capture orientation. Finally, the input images were resized, and their bounding boxes adjusted, to 416$\times$416 pixels. \item \textit{Data Augmentation}: To mitigate the model generalizing towards undesired features and to replicate underwater conditions such as variable illumination, occlusion, and color, the dataset was further enhanced by randomly changing the brightness and saturation of the images via PyTorch's built-in Transforms augmentation, as sketched below. These modified images were then added back into the dataset, effectively tripling its size. \end{enumerate}
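The brightness and saturation jitter described in item c) can be reproduced with a few lines of torchvision; the jitter ranges and file names below are illustrative assumptions, not the exact values used for DeepTrash.
\begin{verbatim}
from torchvision import transforms
from PIL import Image

# Randomly perturb brightness and saturation to mimic variable underwater
# illumination and color casts; the 0.5 ranges are assumed for illustration.
augment = transforms.ColorJitter(brightness=0.5, saturation=0.5)

image = Image.open('frame_000123.jpg')        # hypothetical file name
augmented = augment(image)                    # a randomly jittered copy
augmented.save('frame_000123_aug.jpg')        # added back into the dataset
\end{verbatim}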
\subsection{Object Detection} We used four state-of-the-art neural network architectures, Faster R-CNN with Inception v2, Single Shot MultiBox Detector with MobileNet v2, YOLOv5-S, and YOLOv4-Tiny, downloaded from their respective repositories. The following software versions were used: TensorFlow 1.5, PyTorch v1.8.1, Darknet, OpenCV 3.2.0, and CUDA 11.2. \subsubsection{Fine Tuning Hyperparameters} The object detection models use ADAM \cite{kingma2017adam} as the adaptive learning-rate optimizer, which applies a decaying learning rate over a set number of epochs. The final layer of each network uses Softmax and reflects the usage of a single class. \subsubsection{GPU Hardware} We tested two state-of-the-art GPUs: the NVIDIA P100 and the NVIDIA Tesla V100 (driver version 460.32.03), chosen for their proven parallel computing capability. \subsubsection{Training} After every 1000 iterations of training, the model was evaluated on the validation dataset to calculate precision, recall, and mean average precision (mAP). Training was paused at these points to check the following: \begin{itemize} \item when accuracy stops increasing, the model no longer needs additional training, which prevents overfitting; \item depending on performance, hyperparameters receive adjustments to optimize the evaluation metrics. \end{itemize} \subsubsection{Evaluation Metrics} After a model has finished training, the testing and validation datasets, which contain images mutually exclusive from the training dataset, are used as input to evaluate the network's performance. The model draws a bounding box around successfully detected objects with a confidence score of 0.50 or higher. The number of true positive bounding boxes drawn around marine plastic debris and the number of true negatives provide the basis of evaluation. The following performance metrics were used to produce results: \begin{itemize} \item \textit{\textbf{True positive and true negative values}}: true positive values represent an outcome in which the model correctly predicts the positive class; conversely, a true negative represents the model correctly predicting the negative class. \item \textit{\textbf{Precision and Recall}} -- represent whether the model successfully detected plastic in an image. \[Recall = \frac{TP}{TP+FN}\] \[Precision = \frac{TP}{TP+FP}\] \item \textit{\textbf{Mean Average Precision}} -- evaluates how often the network can recognize plastic in a group of images. After collecting the values for true and false positives, a precision-recall curve is generated using the Intersection over Union (IoU) formula: \[ \textit{IoU}=\frac{BBox_{predicted} \cap BBox_{groundTruth}}{BBox_{predicted} \cup BBox_{groundTruth}} \] where \(BBox_{predicted}\) and \(BBox_{groundTruth}\) are the areas of the predicted and ground truth bounding boxes, respectively. A high threshold for confidence and IoU must be set to ensure accuracy, with a correct detection represented by the threshold being exceeded. The mAP can then be obtained by integrating the precision-recall curve $p(x)$ \cite{10.1007/978-3-642-40994-3_29}: \[ mAP = \int_{0}^{1} p(x) dx \] \item \textit{\textbf{F1-Score}} -- evaluates the balance between precision and recall values. \item \textit{\textbf{GPU Speed (ms/img)}} -- represents how fast the network can infer marine plastic debris contained within an input image. \end{itemize} \subsubsection{Visualizing results} For each processed image, the network populates arrays containing the predicted boxes (normalized coordinates), their confidence scores, and the total number of detections made per image. The following equation converts the normalized coordinates into image coordinates for rendering bounding boxes on top of images: \begin{equation} imgCoord_k = Box_{i}^{j}\cdot Width \end{equation} where $k\in$~(left, right, top, bottom), $i$ is the index of a box, $j \in(0,1,2,3)$, and $Width$ is the width of the image. These image coordinates were used to visualize the predicted bounding boxes in Figure \ref{fig_results}. \iffalse \begin{itemize} \item \textit{\(left\_coordinate=boxes[index][1] * image\_width\)} \item \textit{\(right\_coordinate=boxes[index][3] * image\_width\)} \item \textit{\(top\_coordinate=boxes[index][0] * image\_width\)} \item \textit{\(low\_coordinate=boxes[index][2] * image\_width\)} \end{itemize} \fi
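This conversion can be written as a small helper. Note that, as in the equation above, the image width scales all four coordinates, which is valid here because the inputs are square (416$\times$416); the \texttt{[top, left, low, right]} ordering follows the formulas above.
\begin{verbatim}
def to_image_coords(box, image_width):
    """Convert one normalized box [top, left, low, right] to pixels.

    Mirrors the paper's formulas: every coordinate is scaled by the
    image width, which assumes square (416x416) input images.
    """
    top, left, low, right = box
    return {'left':  left  * image_width,
            'right': right * image_width,
            'top':   top   * image_width,
            'low':   low   * image_width}

# Example: a detection covering the center of a 416x416 frame.
print(to_image_coords([0.25, 0.25, 0.75, 0.75], image_width=416))
\end{verbatim}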
\begin{figure}[ht] \centering \subfigure[Detection near the water surface]{\label{fig:near_water}\includegraphics[width=0.24\textwidth]{figures/fig-1.jpg}} \subfigure[Detection of partially buried debris]{\label{fig:partial_occlusion}\includegraphics[width=0.24\textwidth]{figures/fig-2.jpg}} \caption{Results generated by the model with bounding boxes and confidence scores rendered over marine plastic debris.} \label{fig_initial_results} \end{figure} \begin{table}[t] \centering \vspace{3mm} \begin{tabular}{l|c|c|c} Network & mAP & F1-Score & Precision \\ \hline YOLOv5-S & 85.0 & 0.89 & \textbf{0.93}\\ Tiny-YOLOv4 & 84.0 & 0.80 & \textbf{0.96}\\ Faster R-CNN & 79.0 & 0.76 & \textbf{0.84}\\ SSD & 76.0 & 0.71 & \textbf{0.83}\\ \end{tabular} \caption{Detection metrics in mAP, F1, and Precision.} \label{tab:detection_one} \end{table} \begin{table} \centering \begin{tabular}{l|c|c} Network & P100 & V100 \\ \hline YOLOv5-S & 2.8 & 1.4 \\ Tiny-YOLOv4 & 1.9 & \textbf{1.2} \\ Faster R-CNN & 2.4 & 1.5 \\ SSD & 2.5 & 2.1 \\ \end{tabular} \caption{Performance metrics for inference (ms/img).} \label{tab:fps} \vspace{1mm} \end{table} \section{Results} All results expressed in Table I were produced from the validation dataset presented in the methodology section. Since the images used in the training dataset were not isolated laboratory creations but real-world images taken directly in the field, the general object detection model represents marine plastic debris more accurately. This approach comes with a set of trade-offs: \begin{itemize} \item The model performs more strongly in real-world deployments, and therefore the evaluation results in Table II do not significantly differ from near-real-time measurements taken in the field. \item Peak performance of the object detection model in a perfectly controlled environment could not be measured, so the highest possible benchmark of a single detection remains unknown. \item These trade-offs indicate that the results of this paper better approximate long-term performance across a wider variety of marine environments, leading to a more substantial evaluation of the object detection model's performance in the field. \end{itemize} \subsection{Quantitative Results} The results captured in Table I demonstrate that near-real-time object detection of marine plastic debris in the epipelagic layer of the ocean is both feasible and close to real-world execution. The tested models demonstrate high average precision, mAP, and F1 scores relative to their inference speed, and repeated testing produced low variance in these results. Usually, evaluation results showcase a clear relationship between models, such as trading off significant inference speed for increased accuracy. However, the results presented in this paper show that both YOLOv4-Tiny and YOLOv5-S produce high debris localization metrics when identifying epipelagic plastic in near-real-time. YOLOv5-S provides a significantly higher F1 score in exchange for a slight dip in inference performance. Reducing the number of classes to one, i.e., ``trash\_plastic'', ensures an even distribution of class examples within the training dataset. The singular nature of this object detection model may reduce the total number of use cases it can be applied to, but guarantees strong performance on use cases within its domain. A single classification also builds upon the pre-trained weights' performance during transfer learning, as it means less skewing towards unrelated classifications. \subsection{Evaluation Results} \subsubsection{Object Detection} The mAP values obtained from the object detection models on the validation dataset are reported in Table I. All models demonstrate high accuracy in plastic localization. Table I also reveals that the YOLOv5-S model has a higher mAP than the YOLOv4-Tiny, Faster R-CNN, and SSD models. \subsubsection{Inference Speed} These speeds were dictated by the GPU (NVIDIA P100 and V100 using a batch size of 32) and include image preprocessing. The YOLOv4-Tiny model provided the highest inference speed-to-mAP ratio for the provided dataset. \subsection{Qualitative Results} This study focused on determining the feasibility of detecting marine plastic debris for near-real-time monitoring and quantification purposes. To that end, the results in Table I demonstrate that general object detection models can fill this much-needed role. Since a relatively high level of performance can be maintained at such fast inference speeds, we believe that models such as the one presented in this paper can be applied to AUVs and other tools for real-world solutions. Equally important, these solutions now have a near-future timeline of implementation and have been proven to be low-cost. \section{Discussion} \label{sec:disc} In this study, we built a computer vision model that detects marine plastic debris with high precision, visualizes the detections with bounding boxes, and operates at near-real-time speeds.
These conditions match the requirements for robotic platforms such as AUVs or buoys. As one of the first object detection models specialized for the epipelagic layer, direct comparisons cannot be readily performed; however, relative performance comparisons between DeepPlastic and object detection models geared towards plastic detection in deep-sea and river environments reveal DeepPlastic's state-of-the-art performance. The article mentioned above in Earth and Space Science \cite{https://doi.org/10.1029/2019EA000960} describes a two-stage reference model, utilizing cameras positioned above water, capable of detecting plastic floating on rivers. It utilized 1272 images in its training set and the Faster R-CNN architecture for its second stage. Across multiple experiments, this model's highest precision was 68\%, achieved when employing image flipping and the ADAM adaptive learning rate. DeepPlastic was trained on a dataset using image flipping, in addition to other data augmentation techniques, and also uses the ADAM learning rate--but achieves a precision of 93\% when detecting marine plastic debris submerged in the ocean via underwater cameras. The University of Minnesota's (UoM) computer vision model \cite{fulton2018robotic}, specialized for marine plastic detection in deep-sea environments, utilized 5720 images in its training set and three classes. The DeepTrash training dataset shares many of the same images, as both include samples from JAMSTEC \cite{JAMSTEC}. The UoM model achieved an mAP of 82.3\% for its plastic class using the YOLOv2 architecture and a high of 83.3\% when using Faster R-CNN. DeepPlastic achieves an mAP of 93\% when using the YOLOv4 architecture on input images of marine plastic debris from the same dataset under similar conditions. AquaVision \cite{PANWAR2020100026} was trained on three datasets totaling $\sim$4400 images, including images of both land-based and marine-based debris, and four classes. AquaVision's highest performance for the plastic class was an average precision of 81.5\% when using the one-stage RetinaNet method. DeepPlastic performs at an mAP of 85\% when using YOLOv5-S. The specific training datasets used by the three models described above are either not public or lie outside the domain of DeepPlastic (i.e., the dataset images are not underwater). Therefore, comparing performance on a common dataset is not possible in this study. \subsection{Points of Improvement} This model can efficiently monitor and quantify marine plastic. Improvements can be made in the following areas: \subsubsection{Data Augmentation Improvements} While grayscale, saturation, and vertical/horizontal flipping are proven data augmentation techniques, emerging techniques such as AutoAugment [35] could be explored in the future to improve the model's robustness to variability. Other methods, such as shear and the cutout regularization technique, would also be worth utilizing once integration support matures. \subsubsection{Dataset Improvements} The dataset used in this study is unique and one of the first of its kind.
For the dataset, we see three main improvements that can be made to enhance the deep learning model: \begin{itemize} \item adding more images from different locations; \item using more images from variable water conditions; \item acquiring a more extensive set of underwater plastic images. \end{itemize} As more plastic images from different locations and oceanic conditions become available, they will increase the representation of marine plastic debris, providing a more comprehensive dataset for model training. We believe this will improve the mAP and overall robustness of the object detection model. \subsubsection{Camera Improvements} Readily available off-the-shelf cameras have come a long way but still suffer from certain limitations. The first and most substantial limitation is that most underwater cameras only work during the daytime; continuing the monitoring process during the nighttime will require better night-vision underwater sensors. The second limitation stems from the common H.265 video compression techniques \cite{Lu_2019_CVPR} that underwater cameras utilize, which induce encoding artifacts. These impede real-time detection by deteriorating image quality. Developments in end-to-end deep learning video compression techniques \cite{Lu_2019_CVPR} could address this limitation once ready for implementation. \section{Code and Dataset Availability} All code, the dataset, and instructions to build and utilize the DeepPlastic object detection model can be found \href{https://zenodo.org/record/5562940#.YWSe39nMI-S}{online via Zenodo -- DeepPlastic}. \section{Conclusion} \label{sec:con} This work's objective was to develop a deep learning vision model capable of consistently identifying and quantifying marine plastic in near-real-time. To attain this objective, a pair of general object detection models were constructed using two state-of-the-art deep learning architectures built for inference speed, to measure which performed best. \\ \indent This study concludes that a marine plastic debris detection system based on the YOLOv5-S model would be fast, accurate, and robust enough to enable real-time marine plastic debris detection. It also shows that effective object detection models can be constructed at reasonable cost using readily available GPUs. \\ \indent Furthermore, the dataset created for and utilized by this general detection model demonstrates that massive, highly curated datasets can be used in conjunction with domain-relevant samples and web scraping to produce promising results. \textit{This computer vision system enables multiple deployment methods to detect and monitor marine plastic and allows researchers to quantify marine plastic debris without physical removal.} \section{Future Work} Improvement of the dataset would have the highest impact on performance, but collecting additional images would require human labor in fieldwork or preprocessing. A technology capable of producing synthetic images containing marine plastic debris in an ocean environment could provide an automated solution to dataset creation; this could be accomplished with a two-stage autoencoder [37]. Object detection models trained to identify jellyfish (or other objects similar to marine plastic debris), paired with our object detection model, could decrease false positives.
Inference speed could be improved through specialized GPU technology or by tailoring models towards more powerful GPUs than those used in this study. An end-to-end video compression technique explicitly developed for near-real-time object detection could lead to a better ratio of true positives to true negatives and an improved detection range. Tailoring this object detection model for vision-equipped AUVs could result in automated identification and plastic removal devices capable of scalable deployment across large bodies of water, as shown in Figure~\ref{fig_concept}. Further optimizations could add support for stationary monitoring devices such as buoys as well. We hope that such a system will facilitate scalable adoption by researchers and civilians to detect and clean up marine plastic. \section{Acknowledgements} We gratefully acknowledge the help and support of Nikhil Deshmudre for his efforts and help with the deployment of this computer vision system. The authors would also like to thank Joseph Nelson, co-founder of Roboflow.com, for providing us with Roboflow Pro free of charge, making it easier to iterate on the deep learning models. Some of the images in this dataset were sourced from the TrashCan dataset, for which the researchers hand-annotated and open-sourced over 5000 images from the JAMSTEC J-EDI dataset. The authors would like to thank the researchers from the University of Minnesota Robotics Lab and the Japan Agency for Marine-Earth Science and Technology for open-sourcing this data to contribute to the advancement of science. Finally, we would like to thank Rae Rose Lowe for her support throughout this process.
\section{Impact of channel access mechanisms} \label{sec:access_channel} This section presents the experiments conducted to assess the impact of channel access mechanisms in a GEO-satellite backhaul system. The PEP mechanisms are not activated. \subsection{Dynamic SATCOM access and uncongested network} The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Same application for all UEs: data transfer in download; \item All UEs start downloading at the same time; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA) in download: 2 Mbps; \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} The reported metric is the rate convergence time, \textit{i.e.}, the time required for a UE to reach the rate of its SLA. This enables the analysis of the impact of the access mechanism on the speed of convergence of congestion control. \begin{table}[h] \caption{Rate convergence time when all users start at the beginning of the connection} \label{table:access_uncongested} \begin{tabularx}{\linewidth}{c|c} Access Method & Rate convergence time (s) \\ \hline CRA = 50 kbps / RBDC = 1000 kbps & 10 \\ \hline CRA = 100 kbps / RBDC = 900 kbps & 9 \\ \hline CRA = 500 kbps / RBDC = 500 kbps & 6 \\ \hline CRA = 1000 kbps / RBDC = 0 kbps & 7 \\ \end{tabularx} \end{table} The results of this experiment are presented in Table \ref{table:access_uncongested}. A low value of CRA increases the end user's rate convergence time. That being said, once CRA reaches 50\% of the return-link capacity, increasing it further does not seem to bring significant gains. \subsection{Dynamic SATCOM access and congested network} The previous test has shown that a low value of CRA, for example 10\% of the capacity, increases the rate convergence time. That being said, an actual system will likely be loaded. The test presented in this section considers 9 UEs whose connections are established before the last UE starts downloading. The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Same application for all UEs: data transfer in download; \item 9 UEs start downloading from the start, and one UE starts the download 10 seconds later; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA) in download: 2 Mbps; \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} The reported metric is the rate convergence time, \textit{i.e.}, the time required for the 10$^{th}$ UE to reach the rate of its SLA. This enables the analysis of the impact of the access mechanism on the speed of convergence of congestion control.
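For reproducibility, the listing below sketches how the rate convergence time can be extracted from a sampled throughput series. The helper and its one-second sampling period are our own illustrative assumptions; they are not part of the OpenBACH tooling.
\begin{verbatim}
def rate_convergence_time(throughput_bps, sla_bps, sample_period_s=1.0):
    """Return the time (s) at which a UE first reaches its SLA rate.

    throughput_bps: throughput samples taken every sample_period_s
    seconds, starting when the UE begins its download.
    Returns None if the SLA rate is never reached.
    """
    for i, rate in enumerate(throughput_bps):
        if rate >= sla_bps:
            return i * sample_period_s
    return None

# Example: a UE with a 2 Mbps SLA that converges after 6 seconds.
samples = [0.2e6, 0.5e6, 0.9e6, 1.3e6, 1.7e6, 1.9e6, 2.0e6, 2.0e6]
print(rate_convergence_time(samples, sla_bps=2e6))  # -> 6.0
\end{verbatim}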
\begin{table}[h] \caption{Rate convergence time for the 10$^{th}$ UE when all other UEs started at the beginning of the connection} \label{table:access_congested} \begin{tabularx}{\linewidth}{c|c} Access Method & Rate convergence time (s) \\ \hline CRA = 50 kbps / RBDC = 1000 kbps & 4 \\ \hline CRA = 100 kbps / RBDC = 900 kbps & 11 \\ \hline CRA = 500 kbps / RBDC = 500 kbps & 10 \\ \hline CRA = 1000 kbps / RBDC = 0 kbps & 10 \\ \end{tabularx} \end{table} The results of this experiment are presented in Table \ref{table:access_congested}. The case ``CRA = 50 kbps / RBDC = 1000 kbps'' shows a very short convergence time. This may be due to the variable link load; the phenomenon could have been averaged out by a larger number of tests, which could not be carried out due to lack of time. Moreover, the comparison of the other cases completes the results of the previous section: when the CRA is set to at least 100 kbps, the convergence performance is the same. Once the network is loaded, it is not necessary to dynamically adapt the use of the resource on the return path. \subsection{Dynamic SATCOM access and mixed upload and download traffic} To complete the results observed in the previous sections, tests with mixed download and upload traffic were carried out; a subset of the results is presented in this section. The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Application: data transfer in download for 8 UEs and in upload for 2 UEs; \begin{itemize} \item Long flows, for download and upload, last 30 seconds; \item Short flows are 1 MB in download and 300 kB in upload; \item 7 UEs start long flows (download) and 1 UE starts long flows (upload) at the beginning of the experiment; \item Short flows (1 UE in download and 1 UE in upload) start 10 seconds later; \end{itemize} \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA): 2 Mbps (download) and 300 kbps (upload); \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} \begin{table}[h] \caption{Short flows download and upload times in a congested environment} \label{table:up-dw-acc} \begin{tabularx}{\linewidth}{c|c|c} Access Method & 1 MB download time (s) & 300 kB upload time (s) \\ \hline CRA = 50 kbps & 10.3 & 7.5 \\ RBDC = 1000 kbps & & \\ \hline CRA = 100 kbps & 10.7 & 7.2 \\ RBDC = 900 kbps & & \\ \hline CRA = 500 kbps & 9.9 & 7.1 \\ RBDC = 500 kbps & & \\ \hline CRA = 1000 kbps & 11.6 & 7.5 \\ RBDC = 0 kbps & & \\ \end{tabularx} \end{table} The results of this experiment are presented in Table~\ref{table:up-dw-acc}. Once the network is loaded, the different access mechanisms do not seem to yield substantial gains for the file sizes considered. \subsection{Discussion} The main conclusions of this activity are: \begin{itemize} \item Once the network is loaded, \begin{itemize} \item the choice of access method does not matter; \item CRA/RBDC or SCPC combinations (\textit{i.e.}, all capacity in CRA) show similar performance. \end{itemize} \item If the network is not loaded, \begin{itemize} \item a CRA value that is too low (lower than 10\% of maximum capacity) can degrade performance; \item a CRA/RBDC approach can reduce costs. \end{itemize} \end{itemize}
\section{Acknowledgments} \label{sec:ackno} This study was funded by the CNES SMILE project. The authors would like to thank all those who participated in making this study possible. \section{Discussion} \label{sec:discussion} The main contributions of all these studies are as follows: \begin{itemize} \item Different proofs of concept for the GEO backhaul service have been implemented; \item If the system is congested, the protocol optimizations offered by a PEP (to improve QoE) or the adaptation of access mechanisms (to reduce costs) do not bring significant gains; \item If the system is not congested: \begin{itemize} \item protocol optimizations bring significant gains for file transfer; \item adaptation of the access mechanisms, \textit{i.e.} the constant reduction of the allocated throughput, enables access costs to be reduced while having a negligible impact on services. \end{itemize} \end{itemize} The tests carried out do not take into account the complexity of the operator's core networks. The introduction of WAN accelerator equipment (\textit{i.e.}, PEP) ensures performance in the part for which the operator is responsible and neglects the impacts of the network conditions between the operator's network and the data servers. These devices isolate error-prone segments, perform local retransmissions, and tune the protocols to the network where they are deployed. They also implement caching mechanisms. Although only negligible gains were measured upon the introduction of these devices, this observation does not allow us to deduce their uselessness, given that many functions were not considered and evaluated. The results nevertheless show the importance of studying the impacts of the different protocol layers for SATCOM systems offering a backhauling service for mobile networks. This involves many multi-layered technical interactions, the understanding of which is necessary to optimally size and implement such systems. \section{Introduction} \label{sec:introduction} Mobile Network Operators (MNOs) regularly need to optimize their communication infrastructure in order to better manage congestion, guarantee the quality of service, and maximize income. The end-user demand may not be located close to the already deployed core network of an MNO, and meeting it may not be economically viable. The deployment of LTE cells and their connection to the core network with a satellite system has proved to be an efficient solution to this issue. The satellite backhauling service represented 36,750 sites served by satellite in 2018, twice the amount served in 2012. As this service is growing strongly and is a source of significant revenue for operators, it is important to present the issues related to this use case and the considerations necessary to define future satellite systems. \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/archi_backhaul_lte.png} \caption{Backhaul services through satellite systems} \label{fig:archi-backhaul} \end{figure} Figure~\ref{fig:archi-backhaul} shows an architecture of an LTE backhauling service through satellite. End users (User Equipment, UE) access the Radio Access Network (RAN) of an MNO using LTE standards. The satellite system connects the RAN to the core network of the MNO. The backhauling service has relatively strong end-to-end Quality of Service (QoS) requirements.
In general, the QoS mechanisms should guarantee variable data rates; voice (from 64~kbps) and video (from 256~kbps) should also be guaranteed. There may also be strong requirements on the round-trip time, jitter, and packet loss rate. \begin{itemize} \item[$\blacktriangleright$] GEO satellite backhauling accesses face a trade-off between the quality of the network service, the quality of the user experience, and the price of the access. \end{itemize} Satellite systems may allocate Constant Rate Assignment (CRA) for backhauling services (as opposed to Rate-Based Dynamic Capacity, RBDC). With CRA, a portion of the satellite resource is dedicated to a user even if the user does not actually use it. Reducing the CRA would let the satellite resource management mechanisms allocate the unused resource to the systems that actually need it. However, decreasing the CRA and increasing the RBDC may reduce the Quality of Experience (QoE) due to the request-allocation loop inherent to RBDC mechanisms. This study measures the relevance of using dynamic resource allocation mechanisms for backhaul services through satellite systems and their impact on the QoE. The satellite system is emulated with OpenSAND~\cite{opensand,opensand-site}, the LTE system with Amarisoft~\cite{amarisoft}, and the experiments are orchestrated by OpenBACH~\cite{openbach}. We compare the relevance of applying PEP~\cite{RFC3135} mechanisms and dynamic resource allocations when the system is loaded by measuring the QoE for Web browsing, data transfer, and VoIP applications. The main conclusions are the following. \begin{itemize} \item[$\blacktriangleright$] When the system is congested, PEP and layer-2 access mechanisms do not provide significant improvements. \item[$\blacktriangleright$] When the system is not congested, data transfer can be greatly improved through TCP optimizations. \item[$\blacktriangleright$] Tuning the Constant Rate Assignment can help reduce the cost of the resource and provide QoE improvements when the network is not loaded. \end{itemize} \section{Platform validation} \label{sec:platform} This section provides details on how the exploited platform has been validated. \subsection{Validation strategy} The experience feedback from previous activities shows that the platform needs to be set up carefully, especially when it integrates so many elements provided by different entities. The following step-by-step procedure has been used: \begin{itemize} \item Prepare the test architecture with the Amarisoft platform and two OpenBACH agents (to emulate the clients); \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis and QoE measurements to validate the Amarisoft component; \item Add the PEPs in ``deactivated'' mode (TCP acceleration disabled) at the ends of the system; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with the ``deactivated'' PEP; \item Add OpenSAND between the eNodeB and the core of Amarisoft; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with OpenSAND and the ``deactivated'' PEP; \item Activate the PEPs; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with OpenSAND and the activated PEP. \end{itemize}
\subsection{Validation results} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rate_fwd.png} \caption{Forward link throughput (b/s)} \label{fig:forward_qos_data-rate} \end{figure} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rate_rtn.png} \caption{Return link throughput (b/s)} \label{fig:return_qos_data-rate} \end{figure} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rtt.png} \caption{Round Trip Time} \label{fig:rtt_qos} \end{figure} The results related to the QoS measurements are presented in Figures~\ref{fig:forward_qos_data-rate}, \ref{fig:return_qos_data-rate}, and~\ref{fig:rtt_qos}. The traffic has been generated by iperf3 (TCP), nuttcp (TCP/UDP), fping (ICMP ping), and hping (TCP/IP ping). The forward channel rate is limited to 20~Mbps and the return channel to 10~Mbps. Changes in TCP throughput already illustrate the impact of congestion losses on TCP throughput and its ability to use all of the available capacity. Moreover, the use of OpenBACH makes it possible to obtain these curves with less effort and greater control, thus assuring the consistency of the results. The delay measured by hping is $0$ because the traffic is intercepted by the PEP, which gives an illusion of low latency. \section{Platform details} \label{sec:platform_details} This section provides details on the exploited platform. \subsection{On the need for a controlled emulation} The exploitation of an emulated platform lets us consider mechanisms and algorithms~\cite{when-emulation} that are close to those implemented in deployed systems. When it comes to QoE measurements, simulations may not reflect actual protocol performance~\cite{vtc-trustable}. However, using different proprietary pieces of equipment may result in outputs that are difficult to understand and analyse. To address this issue, we propose the exploitation of as much open-source software as possible, enabling reproducible and controlled tests while considering systems as close as possible to real ones. \subsection{End-to-end emulated platform} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/archi_platform.png} \caption{Platform architecture} \label{fig:archi-platform} \end{figure} Figure~\ref{fig:archi-platform} presents the different pieces of equipment that compose the platform: \begin{itemize} \item proprietary Performance Enhancing Proxies (PEP); \item the cellular network, emulated with Amarisoft~\cite{amarisoft}; \item the satellite system, emulated with OpenSAND~\cite{opensand,opensand-site}; \item the test orchestration, provided by OpenBACH~\cite{openbach}. \end{itemize} The PEPs are not deployed within the LTE emulation network since our equipment could not deal with packets encapsulated within GTP-U tunnels. \section{Impact of transport protocol mechanisms} \label{sec:transport} This section presents the experiments conducted to assess the impact of transport protocol mechanisms in a GEO-satellite backhaul system. \subsection{Experiment set up} This section presents the results for a web access or for a short file transfer.
The characteristics of the scenarios presented in this section are the following: \begin{itemize} \item Number of connected UEs: 10; \item Test duration: 30 seconds; \item Data transfer is started for 9 UEs to load the link: 7 in download and 2 in upload; \item The 10th UE consumes a given type of service (VoIP, video, Web, or file transfer); \item The 9 UEs that load the link start their activity at the same time; \item The 10th UE, which consumes the service, starts a few seconds later, once the link is loaded; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item OpenSAND access: CRA = 100 kbps / RBDC = 900 kbps or CRA = 500 kbps / RBDC = 500 kbps; \item Flow limitation per UE (SLA) of 2 Mbps in download and 100 kbps in upload. \end{itemize} The results presented in this section do not take into account the diversity of web pages and protocols used. For example, a page using the HTTP/1 protocol and multiple objects has very different characteristics, and probably different performance, than a page using HTTP/2. The tests using application traffic representative of voice over IP or video transmission did not show different results, whether the WAN accelerators were activated or not and whatever the configuration of the access (CRA = 100 kbps / RBDC = 900 kbps, or CRA = 500 kbps / RBDC = 500 kbps). \subsection{Focus on web transfer} In order to limit the complexity of the analysis, it was decided to test a single web page with the following characteristics: a 6.5 MB page size using the HTTP protocol. On a 2 Mbps capacity link, the optimal transfer time would be 26 seconds.
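This lower bound follows directly from the page size and the per-UE SLA rate:
\[
T_{\mathrm{opt}} = \frac{6.5~\mathrm{MB} \times 8~\mathrm{bit/byte}}{2~\mathrm{Mbps}} = \frac{52~\mathrm{Mbit}}{2~\mathrm{Mbps}} = 26~\mathrm{s}.
\]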
\begin{table}[h] \centering \caption{Simple web page downloading time} \label{table:pep_web} \begin{tabular}{c|c|c|c|c} Access Method & \multicolumn{2}{c|}{CRA = 100 kbps} & \multicolumn{2}{c}{CRA = 500 kbps} \\ & \multicolumn{2}{c|}{RBDC = 900 kbps} & \multicolumn{2}{c}{RBDC = 500 kbps} \\ \cline{2-5} & No PEP & PEP & No PEP & PEP \\ \hline Average (s) & 33.9 & 36.1 & 34.5 & 34.8 \\ Max (s) & 35.9 & 40.8 & 41.1 & 38.0 \\ Min (s) & 31.0 & 32.2 & 31.6 & 32.4 \\ \end{tabular} \end{table} A statistical analysis based on 10 experiments is presented in Table~\ref{table:pep_web}. The transfer times of the page vary between 31 and 41 seconds, all configurations combined. The introduction of a PEP does not bring significant gains for web browsing in a congested context. It is worth pointing out that these experiments do not consider the exploitation of PEP equipment where it should provide benefits, \textit{i.e.}, within the SATCOM system; this was not possible due to the lack of GTP-U-capable PEPs. \subsection{Focus on file transfer} The client performs two different downloads during the experiment. The first one (fetch 1) occurs while the network is loaded with cross-traffic. The second download (fetch 2) occurs when no other UEs are using the satellite resource. Each configuration is tested five times. \begin{table}[h] \centering \caption{File download time with CRA = 100 kbps, RBDC = 900 kbps} \label{table:pep_file_100} \begin{tabular}{c|c|c|c|c} & \multicolumn{2}{c|}{No PEP} & \multicolumn{2}{c}{PEP} \\ \cline{2-5} & Fetch 1 & Fetch 2 & Fetch 1 & Fetch 2 \\ Average download time (s) & 11.67 & 9.44 & 11.34 & 8.16 \\ \end{tabular} \end{table} \begin{table}[h] \centering \caption{File download time with CRA = 500 kbps, RBDC = 500 kbps} \label{table:pep_file_500} \begin{tabular}{c|c|c|c|c} & \multicolumn{2}{c|}{No PEP} & \multicolumn{2}{c}{PEP} \\ \cline{2-5} & Fetch 1 & Fetch 2 & Fetch 1 & Fetch 2 \\ Average download time (s) & 11.91 & 9.32 & 11.49 & 6.88 \\ \end{tabular} \end{table} Tables~\ref{table:pep_file_100} and~\ref{table:pep_file_500} present the results of this experiment. In general, all fetch 1 runs show the same values: whatever the channel access characteristics and whatever the transport layer, when the network is loaded, the results are the same. However, when the network is not loaded (fetch 2), including a PEP results in 3\% to 26\% performance improvements. The gains are larger when the CRA is high. \subsection{Discussion} Regarding file transfer, the gains brought by WAN acceleration are significant without congestion and amplify the conclusions on the adaptation of the access method: the higher the CRA, the higher the gain brought by the PEP. Regarding web browsing, for a simple web page and under congestion, the WAN accelerator does not bring significant gains.
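For concreteness, the 3\%--26\% range quoted above can be recovered from the average download times in Tables~\ref{table:pep_file_100} and~\ref{table:pep_file_500} as the relative reduction brought by the PEP, e.g.
\[
\frac{11.67 - 11.34}{11.67} \approx 3\,\% \ \text{(fetch 1, CRA = 100 kbps)}, \qquad
\frac{9.32 - 6.88}{9.32} \approx 26\,\% \ \text{(fetch 2, CRA = 500 kbps)}.
\]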
\section{Introduction} \label{sec-intro} Turbulent multiphase flows are ubiquitous in nature and technology. Examples are raindrops \citep{rain, zaleski}, ocean waves \cite{spray}, fuel sprays \cite{fuel}, and the transmission of virus-laden droplets during respiratory events \cite{covid,covid-steven,covid-cs}, just to name a few. In order to gain deeper insights into their complex and rich behavior, efficient, high-fidelity computations are crucial. For turbulent multiphase flows, direct numerical simulations (DNSs) present far greater challenges than for single-phase flows \cite{rev-tmf}. The reasons are the much finer length scales and faster time scales induced by the existence of the second phase, especially when the deformable interfaces between the fluids break up or coalesce. To date, many numerical methods have been developed, such as phase field methods (also known as diffuse interface methods) \cite{soldati1, soldati2, breakup}, volume of fluid methods \cite{pop16, luka19jfm}, level set methods \cite{ls-tmf}, front tracking \cite{ft-tmf}, Lattice-Boltzmann \cite{LB19JFM}, and immersed boundary \cite{roberto-jcp, cs20} methods. Among them, the phase-field method is an approach in which a scalar (the volume fraction of one fluid) is tracked by the Cahn-Hilliard equation and the sharp fluid-fluid interface is replaced by a narrow mixed layer \cite{jacqmin99}. In the past decade, application of the phase-field method has become increasingly appealing because of its versatility. For example, the method has been applied to the simulation of turbulent flows \cite{soldati1, soldati2, soldati4, breakup}, flows with moving contact lines \cite{liu15,sui14,ding07jfm,zy}, fluid-structure interaction \cite{chen1,liu17,chen,liu20}, melting flows \cite{melt1,melt2,melt3}, ternary flows \cite{liu18}, and even brittle fracture simulation \cite{brittle}. In the phase-field method, two immiscible phases are represented by their volume fractions $C$ and $1-C$, respectively. The spatial distribution of $C$ is determined by the Cahn-Hilliard equation \cite{pfm96,jacqmin99,ding07jcp}: \begin{equation} \frac {\partial C} {\partial t} + \nabla \cdot ({\bf u} C) = M \nabla^2 \left[a_1\nabla^2 C+a_2\psi'(C)\right]. \label{eq-ch-d} \end{equation} The quantity in square brackets is the chemical potential, defined by the variation of the free energy with respect to $C$. It includes an excess free energy term (the first term) and a bulk energy term (the second term), with $\psi=1/4\,C^2(C-1)^2$ being the simplest non-singular form that has two equal energy minima, namely at $C=0$ and $1$ \cite{pfm96,jacqmin99,ding07jcp}. Physically, $\psi$ represents the bulk energy density due to the inhomogeneous distribution of volume fraction in the interfacial region. We will give more technical details in Section \ref{sec-ch}.
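For later reference, differentiating this quartic potential gives the cubic bulk term that appears in the dimensionless form of the Cahn-Hilliard equation used in Section \ref{sec-ch}:
\[
\psi'(C) = \frac{1}{4}\left[2C(C-1)^2 + 2C^2(C-1)\right] = C(C-1)\left(C-\tfrac{1}{2}\right) = C^{3} - \tfrac{3}{2} C^{2} + \tfrac{1}{2} C.
\]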
In Refs.~\cite{jcp12} and \cite{jcp14}, the FFT-based approach is extended to multiphase flows by employing a split method, meaning that the variable-coefficient pressure-gradient term is split into an implicit constant term and an explicit variable term. As a result, the Poisson equation can be solved up to $40$ times faster than without the split method \cite{jcp14}. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig0} \hspace{0.05\linewidth} \includegraphics[width=0.45\linewidth]{fig0-2} \caption{\label{fig-dis} (a) $25$ points used to discretize the biharmonic terms in the Cahn-Hilliard equation (\ref{eq-ch}) in the scheme of Eq.~(\ref{eq-disbi}). Symbols in different colors represent the points at different $z$ planes. In the new discretization scheme of Eq.~(\ref{eq-new}), the spherical points are replaced by the cubic ones. (b) Two-dimensional version of the discretization of the biharmonic terms. The circular points are replaced by the square ones.} \end{figure} However, with the application of FFTs in multiphase flows, the computational cost of the biharmonic term becomes the new bottleneck for the phase-field method. The reason for this is that the common solution technique for the biharmonic term in the phase-field method involves an implicit solution that requires $25$ grid points for a second-order spatial discretization, see Fig.~\ref{fig-dis}(a) (details in Section \ref{sec-bi}). Therefore, in this study, we will in particular focus on an optimal discretization of the biharmonic term. We propose a novel discretization scheme for the biharmonic term in the phase-field method to couple with the approximate-factorization method, which is an efficient way to implicitly solve hyperbolic systems \cite{axayaz} and is easy to parallelize. We will implement the phase-field method \cite{ding07jcp} with this novel scheme into our open-source DNS package AFiD (\href{https://github.com/PhysicsofFluids/AFiD}{www.afid.eu}) \cite{jcp96,cf15}, which is a second-order finite difference solver that has been well-validated in many studies of turbulent flows \cite{zhu-tc,shan,qi3}. AFiD is highly parallelized with a pencil distributed strategy \cite{cf15,gpu}, and includes an FFT-based Poisson solver \cite{jcp96}. In addition, we will apply a split method \cite{jcp12,jcp14} to the pressure solver to deal with large density differences between the two phases. To validate the present approach, we simulate cases of drop deformation in a shear flow and of a rising buoyant bubble. Our results are compared to previous studies and are further assessed using a grid convergence study. Finally, we simulate the breakup of one big drop as well as the coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection, and show the good performance of the present approach for large-scale computations. The paper is organized as follows. The governing equations are introduced in Section \ref{sec-ge}. Then we address the numerical methodology in Section \ref{sec-num}. In Section \ref{sec-case}, we simulate several test cases to validate our approach and show its ability to deal with turbulent multiphase flows in large-scale computation. We conclude our study in Section \ref{sec-con}. \section{Governing Equations} \label{sec-ge} \subsection{Cahn-Hilliard (CH) equation} \label{sec-ch} Turbulent flows with two incompressible immiscible fluids are investigated here. We use the phase-field method \cite{jacqmin00,ding07jcp} to capture the interface between the two fluids.
Here, the sharp interface is modeled by a diffuse one with finite thickness, represented by contours of the volume fraction $C$ of fluid $1$; the volume fraction of fluid $2$ is thus $1-C$. The evolution of the volume fraction $C$ is governed by the Cahn-Hilliard equation, \begin{equation} \begin{array}{ll} \displaystyle \frac {\partial C} {\partial t} + \nabla \cdot ({\bf u} C)&\displaystyle=\frac{1}{\Pe}\left[-\Cn ^{2} \nabla^4 C+\nabla^2 \left(C^{3} - 1.5 C^{2}+ 0.5 C\right)\right], \end{array} \label{eq-ch} \end{equation} where $\bf u$ is the flow velocity. We choose the P\'eclet number (the ratio of advection and diffusion) and the Cahn number (a dimensionless measure of the thickness of the diffuse interface) the same as in Ref.~\cite{liu15}, i.e. $\Pe=0.9/\Cn$ and $\Cn=0.75h/L$, with $h$ and $L$ the uniform mesh size and the characteristic length, respectively. To enforce mass conservation, the correction method proposed in Ref.~\cite{shu} is used. This correction method resembles that of Ref.~\cite{soldati3} and exhibits good performance (see Section \ref{sec-rb}). \subsection{Navier-Stokes (NS) equations} \label{sec-ns} The fluid motion is governed by the momentum and continuity equations, \begin{equation} \rho\left(\frac {\partial {\bf u}} {\partial t} + {\bf u} \cdot \nabla {\bf u}\right)= - \nabla P + \frac{1}{\Re} \nabla \cdot \mu (\nabla {\bf u}+ \nabla {\bf u}^{T}) + \frac{\bf F_{st}}{\We}+ {\bf G}, \label{eq-ns} \end{equation} \begin{equation} \nabla \cdot {\bf u}= 0, \label{eq-con} \end{equation} which have been made dimensionless using the properties of fluid $1$. Here, $\bf u$ is the velocity and $P$ the pressure. $\rho$ and $\mu$ are the density and the dynamic viscosity, respectively, which are both functions of $C$ defined as, \begin{equation} \rho =C + \lambda_\rho(1-C), \label{eq-rho} \end{equation} \begin{equation} \mu =C + \lambda_\mu(1-C), \label{eq-mu} \end{equation} where $\lambda_\rho=\rho_2/\rho_1$ and $\lambda_\mu=\mu_2/\mu_1$ are the ratios of the densities and viscosities of the two phases (denoted by the subscripts), respectively. The surface tension force ${\bf F}_{st}$ is computed as in \cite{ding07jcp}, \begin{equation} {\bf F}_{st} =6\sqrt{2}\phi \nabla C / \Cn, \label{eq-fst} \end{equation} where $\phi$ is the (dimensionless) chemical potential. In Eq.~(\ref{eq-ns}), the gravity force is ${\bf G}=-\rho/\Fr \, {\bf j}$ with $\bf j$ being the vertical direction. The dimensionless numbers controlling the problem are thus the Reynolds number $\Re=\rho_1UL/\mu_1$, the Weber number $\We=\rho_1 U^2 L/\sigma$, and the Froude number $\Fr=U^2/(gL)$, where $\sigma$ is the surface tension coefficient, $g$ the gravitational acceleration, and $U$ the characteristic velocity. \section{Numerical method} \label{sec-num} We use staggered meshes and solve the CH equation on a uniform mesh with size $h$ in all three directions and the NS equations on a stretched mesh: the procedure for the coupling of the two meshes (uniform and stretched) is based on that reported in \cite{rodolfo} and is described in Section \ref{sec-mesh}. A low-storage third-order Runge-Kutta method \cite{rk} is used to temporally advance all equations. The biharmonic term in Eq.~(\ref{eq-ch}), the viscosity term in Eq.~(\ref{eq-ns}), and the diffusion term in Eq.~(\ref{eq-t}) are implicitly solved by the Crank-Nicolson scheme, while the other terms are solved explicitly.
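Before detailing the spatial discretization, the following is a minimal sketch (ours, not the AFiD implementation; it assumes a doubly periodic 2D unit square) of how the explicit ingredients above can be evaluated: the mixture properties of Eqs.~(\ref{eq-rho}) and (\ref{eq-mu}) and the bulk term $\nabla^2(C^3-1.5C^2+0.5C)$ of Eq.~(\ref{eq-ch}); note that $C^3-1.5C^2+0.5C=\psi'(C)$ for $\psi=\frac14 C^2(C-1)^2$. The values of $\Cn$ and $\Pe$ follow the choices quoted above.
\begin{verbatim}
import numpy as np

# Minimal sketch (2D, periodic) of the explicit pieces of the CH/NS system.
N = 256
L = 1.0
h = L / N                      # uniform mesh size
Cn = 0.75 * h / L              # Cahn number, as in the text
Pe = 0.9 / Cn                  # Peclet number, as in the text

def laplacian(f):
    # second-order central Laplacian with periodic wrap-around
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

def bulk_term(C):
    # (1/Pe) * Laplacian of psi'(C), with psi'(C) = C^3 - 1.5 C^2 + 0.5 C
    return laplacian(C**3 - 1.5 * C**2 + 0.5 * C) / Pe

def mixture_properties(C, lam_rho, lam_mu):
    # linear mixture rules of Eqs. (eq-rho) and (eq-mu)
    rho = C + lam_rho * (1.0 - C)
    mu = C + lam_mu * (1.0 - C)
    return rho, mu

# example: a circular "drop" given by a tanh profile of thickness ~ Cn
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2)
C = 0.5 * (1.0 - np.tanh((r - 0.2) / (2.0 * np.sqrt(2.0) * Cn)))
rho, mu = mixture_properties(C, lam_rho=0.001, lam_mu=0.01)
print(bulk_term(C).max())
\end{verbatim}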
In the spatial discretization, central second-order accurate finite-difference schemes are used for all terms (details can be found in \cite{jcp96,ding07jcp}), except for two: one is the advection term of the volume fraction $C$ in the CH equation (\ref{eq-ch}), which is solved by a fifth-order WENO scheme \cite{ding07jcp}, and the other is the biharmonic term, which is solved by the novel scheme proposed in Section \ref{sec-bi}. \subsection{Discretization of the biharmonic term in the CH equation} \label{sec-bi} To accurately advance the CH equation (\ref{eq-ch}) with a large time step, we must implicitly solve the biharmonic term $\Cn^2\nabla^4 C$ on the right-hand side of Eq.~(\ref{eq-ch}). At the same time, its discretization scheme should retain the same order of error as the term $\nabla^2(C^3-1.5C^2+0.5C)$, which is also on the right-hand side of Eq.~(\ref{eq-ch}) and discretized by central second-order finite-difference schemes of $O(h^2/L^2)$. Typically, the biharmonic term is discretized according to Fig.~\ref{fig-dis}(b) (we restrict the expression to the 2D case for ease of representation), \begin{equation} \begin{array}{rl} (\Cn^2\nabla^4 C)_{i,j}=&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{\partial^4 C}{\partial x^4}+\frac{\partial^4 C}{\partial y^4}+2\frac{\partial^4 C}{\partial x^2 \partial y^2}\right)_{i,j}\\ \\ =&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(C_{i-2,j} -8 C_{i-1,j} +20 C_{i,j} -8 C_{i+1,j} + C_{i+2,j} \\ & +C_{i,j+2}-8 C_{i,j+1}-8C_{i,j-1}+C_{i,j-2} \\ & +2C_{i-1,j+1} +2 C_{i+1,j+1}+2C_{i-1,j-1}+2C_{i+1,j-1})\\ \\ & +O(h^4/L^4). \end{array} \label{eq-disbi} \end{equation} When we implicitly solve this expression, the presence of mixed partial derivatives poses challenges for computational cost and code parallelization. To circumvent the use of mixed partial derivatives when solving Eq.~(\ref{eq-disbi}), we propose a new discretization scheme, shown in Eqs.~(\ref{eq-new}), (\ref{eq-a2d}) and (\ref{eq-a3d}). With it, we can split the discretization into two one-dimensional parts, $A_x C$ involving $C_{i+m,j}$ and $A_y C$ involving $C_{i,j+n}$, \begin{equation} \Cn^2\nabla^4 C=(A_x+A_y)C, \label{eq-divbi} \end{equation} which means that only the points on the axes remain (Fig.~\ref{fig-dis}b). Then, we can use the approximate-factorization method (described at the end of this section) to efficiently solve $\Cn^2\nabla^4 C$ implicitly. Our main idea is to replace $C_{i\pm1,j\pm1}$ in Eq.~(\ref{eq-disbi}) with $C_{i+m,j}$ and $C_{i,j+n}$ (Fig.~\ref{fig-dis}b), where $m$ and $n=-2,-1,0,1,2$. The replacement is justified by the Taylor series expansions, \begin{equation} \left\{\begin{array}{lr} C_{i+m,j}= C_{i,j} \quad + m (h/L) C'_x \quad+ m^2 (h/L)^2 C''_x/2 & +\,m^3 (h/L)^3 C'''_x/6\quad+O(h^4/L^4),\\ & m=-2,-1,0,1,2;\\ \\ C_{i,j+n}=\ C_{i,j}\quad + n (h/L) C'_y\quad +n^2 (h/L)^2 C''_y/2 & +\,n^3 (h/L)^3 C'''_y/6\quad+O(h^4/L^4),\\ & n=-2,-1,0,1,2;\\ \\ C_{i+m,j+n}=C_{i,j}\ +\sqrt{2}m (h/L) C'_s\ +m^2(h/L)^2 C''_s & +\sqrt{2}m^3 (h/L)^3 C'''_s/3\ +O(h^4/L^4),\\ & (m,n)=(-1,1), (1,-1);\\ \\ C_{i+m,j+n}=C_{i,j}\ +\sqrt{2}m (h/L) C'_\tau\ +m^2(h/L)^2 C''_\tau & +\sqrt{2}m^3 (h/L)^3 C'''_\tau/3\ +O(h^4/L^4),\\ & (m,n)=(1,1), (-1,-1);\\ \end{array}\right. \label{eq-taylor} \end{equation} where we define $C'_e=(\partial C/\partial e)_{i,j}$ with $e=x$, $y$, $s$ and $\tau$, and similarly for $C''_e$ and $C'''_e$.
The directions $x$ and $y$ are the perpendicular axis directions in Cartesian coordinates, and the directions $s$ and $\tau$ are obtained by rotating $x$ and $y$ by $45^\circ$. Since the Laplacian operator is rotationally invariant, we have \begin{equation} \nabla^2 C = C''_s+C''_\tau=C''_x+C''_y, \label{eq-c2} \end{equation} so we have the relations, \begin{equation} \begin{array}{rl} &C_{i+1,j+1}+C_{i+1,j-1}+C_{i-1,j+1}+C_{i-1,j-1}\\ =&4C_{i,j}+2(h/L)^2(C''_s+C''_\tau)+O(h^4/L^4)\\ =&4C_{i,j}+2(h/L)^2(C''_x+C''_y)+O(h^4/L^4)\\ =&2C_{i,j}+0.5\{[C_{i,j}+2^2 (h/L)^2 C''_x/2+O(h^4/L^4)]+[C_{i,j}+(-2)^2 (h/L)^2 C''_x/2\\&+O(h^4/L^4)]\}\\&+0.5\{[C_{i,j}+2^2 (h/L)^2 C''_y/2+O(h^4/L^4)]+[C_{i,j}+(-2)^2 (h/L)^2 C''_y/2\\&+O(h^4/L^4)]\}\\ =&0.5(2C_{i,j}+C_{i+2,j}+C_{i-2,j})+0.5(2C_{i,j}+C_{i,j+2}+C_{i,j-2})+O(h^4/L^4), \end{array} \label{eq-trans1} \end{equation} where the first- and third-order derivatives are eliminated since the points are symmetrical about $(i,j)$. Thus, $C_{i\pm1,j\pm1}$ can be replaced by $C_{i+m,j}$ and $C_{i,j+n}$ as shown in Fig.~\ref{fig-dis}(b). Substituting Eq.~(\ref{eq-trans1}) into Eq.~(\ref{eq-disbi}), we get the new discretization scheme, \begin{equation} \begin{array}{rl} (\Cn^2\nabla^4 C)_{i,j}=&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(C_{i-2,j} -8 C_{i-1,j} +20 C_{i,j} -8 C_{i+1,j} + C_{i+2,j} \\ & +C_{i,j+2}-8 C_{i,j+1} -8C_{i,j-1}+C_{i,j-2}\\ & +2C_{i,j}+C_{i+2,j}+C_{i-2,j}+2C_{i,j}+C_{i,j+2}+C_{i,j-2})\\ \\ & +O(h^2/L^2)\\ \\ =&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(2C_{i-2,j} -8 C_{i-1,j} +12 C_{i,j} -8 C_{i+1,j} + 2C_{i+2,j} \\ & +2C_{i,j+2}-8 C_{i,j+1}+12 C_{i,j} -8C_{i,j-1}+2C_{i,j-2}) \\ \\ & +O(h^2/L^2), \end{array} \label{eq-new} \end{equation} where the error $O(h^2/L^2)$ is of the same order as that of the term $\nabla^2(C^3-1.5C^2+0.5C)$ on the right-hand side of Eq.~(\ref{eq-ch}). Comparing Eq.~(\ref{eq-new}) and Eq.~(\ref{eq-divbi}), we get the following pentadiagonal matrix, \begin{equation} A_x=A_y=\displaystyle \left(\frac{0.75L}{h}\right)^2 \left[\begin{array}{cccccccc} \cdots &&&\cdots&&&&\cdots\\ 2&-8&12&-8&2 &&&0\\ 0&\ddots&\ddots&\ddots&\ddots&\ddots &&\vdots\\ \vdots&&\ddots&\ddots&\ddots&\ddots&\ddots &0\\ 0&& &2&-8&12&-8&2 \\ \cdots &&&&\cdots&&&\cdots\\ \end{array}\right], \label{eq-a2d} \end{equation} for 2D, where the values in the first and last rows are determined by the boundary conditions. Now, with the convenient form of Eq.~(\ref{eq-a2d}), the approximate-factorization method can be employed to solve the biharmonic term implicitly. The same idea can be directly extended to three dimensions, where the points used in the mixed partial derivatives are replaced as shown in Fig.~\ref{fig-dis}(a). Thus, we get the operators, \begin{equation} A_x=A_y=A_z=\displaystyle \left(\frac{0.75L}{h}\right)^2 \left[\begin{array}{cccccccc} \cdots &&&\cdots&&&&\cdots\\ 4&-16&24&-16&4&&&0\\ 0&\ddots&\ddots&\ddots&\ddots&\ddots &&\vdots\\ \vdots&&\ddots&\ddots&\ddots&\ddots&\ddots &0\\ 0&& &4&-16&24&-16&4 \\ \cdots &&&&\cdots&&&\cdots\\ \end{array}\right], \label{eq-a3d} \end{equation} for 3D.
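Before turning to the factorization itself, we illustrate the scheme with a minimal sketch (ours, not part of AFiD) that applies the one-dimensional operators of Eq.~(\ref{eq-a2d}) on a doubly periodic 2D grid and performs one factorized implicit solve of the type described next. For periodic boundaries each operator is circulant and can be diagonalized by FFTs; the wall-bounded solver instead uses banded matrix solves.
\begin{verbatim}
import numpy as np

# Sketch of the split biharmonic discretization of Eqs. (eq-new)/(eq-a2d):
# per direction the stencil is [2, -8, 12, -8, 2] with prefactor
# (0.75 L/h)^2.  Periodic BCs are assumed purely for simplicity here.
N, L = 128, 1.0
h = L / N
pref = (0.75 * L / h) ** 2

def A_dir(C, axis):
    # one-dimensional operator A_x (axis=0) or A_y (axis=1)
    s = lambda k: np.roll(C, -k, axis=axis)
    return pref * (2*s(-2) - 8*s(-1) + 12*C - 8*s(1) + 2*s(2))

# consistency check on the Fourier mode C = sin(2 pi x) sin(2 pi y), for
# which Cn^2 * biharmonic(C) = Cn^2 * 4 (2 pi)^4 * C exactly
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
C = np.sin(2*np.pi*X) * np.sin(2*np.pi*Y)
exact = (0.75*h/L)**2 * 4.0 * (2*np.pi)**4 * C
approx = A_dir(C, 0) + A_dir(C, 1)
print("rel. error:", np.abs(approx - exact).max() / np.abs(exact).max())

# one factorized implicit step (1 + c A_x)(1 + c A_y) dq = rhs: the
# biharmonic term enters the CH update with a negative sign, so c > 0.
# For periodic BCs each 1D operator is circulant with non-negative
# symbol pref * 8 (1 - cos th)^2, hence safely invertible by FFTs.
theta = 2*np.pi*np.fft.fftfreq(N)
lam = pref * (12 - 16*np.cos(theta) + 4*np.cos(2*theta))

def solve_dir(rhs, c, axis):
    shape = (N, 1) if axis == 0 else (1, N)
    qhat = np.fft.fft(rhs, axis=axis) / (1.0 + c * lam.reshape(shape))
    return np.real(np.fft.ifft(qhat, axis=axis))

c = 0.5e-3                      # plays the role of dt/(2 Pe)
rhs = np.random.default_rng(0).standard_normal((N, N))
dq = solve_dir(solve_dir(rhs, c, axis=0), c, axis=1)
\end{verbatim}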
With $A_x$, $A_y$ and $A_z$, we can use the approximate-factorization method \cite{axayaz,jcp96} to efficiently solve the following equation, with $q^l$ known from the previous time step and $q^{l+1}$ unknown at the next time step, \begin{equation}\label{eq-example} \frac{q^{l+1}-q^l}{\delta t} = E+\beta(A_x +A_y +A_z)\frac{q^{l+1}+q^l}{2}, \end{equation} where $E$ represents the terms calculated explicitly, $\beta$ is a constant coefficient, $A_x$, $A_y$ and $A_z$ are the discretization operators, and $(q^{l+1}+q^l)/2$ originates from the Crank-Nicolson scheme. Eq.~(\ref{eq-example}) can be rewritten as, \begin{equation}\label{eq-re} \left[1-\frac{\delta t \beta}{2}(A_x +A_y +A_z)\right](q^{l+1}-q^l) = \delta t E+ \delta t \beta(A_x +A_y +A_z) q^l. \end{equation} Then we factorize the operators on the left, \begin{equation}\label{eq-fac} \left[1-\frac{\delta t \beta}{2}(A_x +A_y +A_z)\right] = \left(1-\frac{\delta t \beta}{2}A_x\right) \left(1-\frac{\delta t \beta}{2}A_y\right) \left(1-\frac{\delta t \beta}{2}A_z\right)+O(\delta t^2 \beta^2). \end{equation} After factorization, the computation only requires inversions of separate banded (tri- or pentadiagonal) matrices rather than the inversion of one large sparse matrix, which leads to a significant reduction in computational cost and memory \cite{axayaz,jcp96}. Then, Eq.~(\ref{eq-re}) can be solved by the following steps, \begin{equation} \label{eq-fac-1} \left(1- \frac{\delta t \beta}{2}A_x\right)\delta q^* = \delta t E+ \delta t \beta(A_x +A_y +A_z) q^l, \end{equation} \begin{equation} \label{eq-fac-2} \left(1- \frac{\delta t \beta}{2}A_y\right)\delta q^{**} = \delta q^*, \end{equation} \begin{equation} \label{eq-fac-3} \left(1- \frac{\delta t \beta}{2}A_z\right)(q^{l+1}-q^l) = \delta q^{**}, \end{equation} where the superscripts $*$ and $**$ denote intermediate quantities. In Eqs.~(\ref{eq-fac-1}), (\ref{eq-fac-2}) and (\ref{eq-fac-3}), the matrix inversions are extremely cheap provided $A_x$, $A_y$ and $A_z$ are chosen such that each involves only points along one dimension. \subsection{FFT-based solver with a split method for the Poisson equation with large density contrast} \label{sec-fft} The NS equation (\ref{eq-ns}) is solved here by a projection method, \begin{equation} \frac{{\bf u}^{l+1}-{\bf u}^{*}}{\delta t}=-\frac{1}{\rho^{l+1}}\nabla P^{l+1}, \label{eq-pro} \end{equation} where ${\bf u}^*$ is an intermediate velocity field calculated from Eq.~(\ref{eq-ns}) without the pressure term. Considering $\nabla \cdot {\bf u}^{l+1}=0$, we have, \begin{equation} \nabla \cdot \left(\frac{1}{\rho^{l+1}}\nabla P^{l+1}\right)=\frac{1}{\delta t}\nabla \cdot {\bf u}^{*}. \label{eq-poi} \end{equation} To solve this Poisson equation with large density variations, we use the split method proposed in \cite{jcp14} to apply a fast Poisson solver to Eq.~(\ref{eq-poi}). In the split method \cite{jcp14}, the Poisson equation (\ref{eq-poi}) with the variable coefficient $1/\rho^{l+1}$ is split into an implicit constant-density part and an explicit variable part, \begin{equation} \frac{1}{\rho^{l+1}}\nabla P^{l+1}=\frac{1}{\rho_2}\nabla P^{l+1}+\left(\frac{1}{\rho^{l+1}}-\frac{1}{\rho_2}\right)\nabla (2P^l-P^{l-1}), \label{eq-split} \end{equation} where we define $\rho_2 \le \rho_1$. Substituting Eq.~(\ref{eq-split}) into Eq.~(\ref{eq-poi}) gives, \begin{equation} \nabla^2 P^{l+1}=\nabla \cdot \left[\left(1-\frac{\rho_2}{\rho^{l+1}}\right)\nabla (2P^l-P^{l-1})\right]+\frac{\rho_2}{\delta t}\nabla \cdot {\bf u}^{*}.
\label{eq-newpoi} \end{equation} Then, a standard fast Poisson solver can be used. After obtaining $P^{l+1}$, the velocity field is updated as, \begin{equation} {\bf u}^{l+1}={\bf u}^{*}-\delta t\left[\frac{1}{\rho_2}\nabla P^{l+1}+\left(\frac{1}{\rho^{l+1}}-\frac{1}{\rho_2}\right)\nabla (2P^l-P^{l-1})\right]. \label{eq-u} \end{equation} \subsection{Pencil distributed parallel strategy} \label{sec-para} The parallelization of the present approach follows a pencil distributed strategy (details in \cite{cf15}). Here, the computational domain is split in two dimensions, which allows us to use more CPU cores for large-scale computations, such as $70$ billion points with $64K$ cores as reported in \cite{cf15}. The other advantage is that this strategy couples well with the approximate-factorization method used to implicitly solve the equations. The high performance of this parallel method has been extensively validated in \cite{cf15} and \cite{gpu}. Moreover, it has already been used in many large-scale simulations of turbulent flows \cite{richard,zhu-tc,shan,zhu}. \subsection{Multi-resolution meshes for $C$ and $\bf u$} \label{sec-mesh} One feature of our method is that the volume fraction field $C$ can be integrated on a refined uniform mesh, even if the momentum field $\bf u$ is integrated on a non-uniform mesh. For the $C$ field, a uniform mesh is the recommended choice, for the following reasons: The computation of the surface tension force is key to simulating multiphase flows. To ensure that the spatial truncation error of the surface tension is of the same order in all directions, uniform mesh spacing in each direction is necessary near the interface. Furthermore, since drops in turbulent flows are likely to break up into smaller drops that disperse throughout the domain, a uniform mesh can easily handle the spatially dispersed drops. Therefore, the uniform mesh is a good choice for the $C$ field. On the other hand, in wall-bounded turbulence, the resolution requirements for the $\bf u$ field are more restrictive at the walls, where very thin kinematic boundary layers need to be resolved. The same strict requirements apply for Rayleigh--B\'{e}nard convection, where a large number of near-wall nodes are required to resolve thin thermal boundary layers \cite{olga}. Therefore, a stretched non-uniform mesh is a good choice for resolving $\bf u$ or the temperature field. This multi-resolution treatment of the mesh allows for large computational savings \cite{rodolfo, liu-dual}, since the momentum field, whose integration requires an elliptic pressure solve, is advanced on the coarser mesh, while only the single scalar Cahn-Hilliard equation, which involves no elliptic equation, is advanced on the fine mesh. The multi-resolution method that decouples $\bf u$ and $C$ works as follows. $\bf u$ is projected from a base mesh, which is non-uniform, to a refined uniform mesh on which $C$ resides. The projection employs a tri-cubic Hermite spline interpolation, with a stencil of four points in each direction, for a total of sixty-four points in three dimensions. Here, Hermite interpolation is preferred since its accuracy has proven sufficient for turbulent flows, and it is considerably cheaper than other methods such as B-splines \cite{rodolfo}. This stencil is generated only once at the start of the simulation and is reused throughout.
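As an illustration of this interpolation step, the following is a minimal sketch (assuming uniform spacing and a generic scalar field; the actual solver works on stretched grids and, as described next, applies the interpolation to velocity gradients) of a four-point cubic Hermite (Catmull-Rom) interpolant and its tri-cubic tensor-product extension over a $4\times4\times4$ stencil:
\begin{verbatim}
import numpy as np

def cubic_hermite(p, t):
    # four-point cubic Hermite (Catmull-Rom) interpolation on a uniform
    # grid: p = (p0, p1, p2, p3) are values at consecutive nodes and
    # t in [0, 1] is the offset between the two middle nodes
    p0, p1, p2, p3 = p
    return p1 + 0.5 * t * (p2 - p0 + t * (2*p0 - 5*p1 + 4*p2 - p3
                                          + t * (3*(p1 - p2) + p3 - p0)))

def tricubic(block, tx, ty, tz):
    # tensor-product tri-cubic interpolation on a 4x4x4 block of values:
    # interpolate along z, then y, then x (64 points in total)
    az = np.array([[cubic_hermite(block[i, j, :], tz) for j in range(4)]
                   for i in range(4)])
    ay = np.array([cubic_hermite(az[i, :], ty) for i in range(4)])
    return cubic_hermite(ay, tx)

# quick check against a smooth function sampled on a coarse grid
f = lambda x, y, z: np.sin(x) * np.cos(y) + z**2
h = 0.1
xs = lambda i: (i - 1) * h
i0, j0, k0 = 5, 7, 3
block = np.array([[[f(xs(i0 + a), xs(j0 + b), xs(k0 + c))
                    for c in range(4)] for b in range(4)] for a in range(4)])
# target point lies between the two middle nodes of the block
x, y, z = xs(i0 + 1) + 0.3*h, xs(j0 + 1) + 0.6*h, xs(k0 + 1) + 0.5*h
print(tricubic(block, 0.3, 0.6, 0.5), f(x, y, z))
\end{verbatim}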
To preserve the solenoidal property of the momentum field, instead of directly projecting $\bf u$, the normal velocity gradients on the base mesh are first computed, and the projection is then applied to these gradients. Finally, with a refined 2D velocity field interpolated at a reference location (in each direction), the refined velocities are integrated over the entire domain using the interpolated gradients. For the back-coupling of the $C$ field, the refined uniform mesh is directly projected onto the stretched mesh, since there is no solenoidal requirement for $C$. This down-sampling projection step is used to obtain $\mu$, $\rho$ and ${\bf F}_{st}$. The present method is an improvement over the previous method used in \cite{rodolfo}, since here the stretched mesh can contain an arbitrary number of nodes and employ different stretching parameters. \section{Results and discussion} \label{sec-case} In Section \ref{sec-shear}, we test the convergence of the results with mesh refinement and the performance of the new discretization scheme for the biharmonic term. Section \ref{sec-buble} shows the ability of the present approach to deal with large density and viscosity contrasts. In Section \ref{sec-rb0}, a possible application of multiphase turbulence is simulated --- Rayleigh-B\'enard convection with drops, where the performance of the multi-resolution meshes is also tested. \subsection{Drop deformation in shear flow} \label{sec-shear} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig1-1}% \caption{\label{fig-shear1} Configuration for drop deformation in shear flow.} \end{figure} In order to test the mesh-refinement convergence of our approach and the performance of the new discretization scheme for the biharmonic term, we consider the deformation of a drop in a shear flow with matched density and viscosity. A drop of radius $R$ is initially placed at the center of a domain of $8R \times 8R \times 8R$, as shown in Fig.~\ref{fig-shear1}. In the domain, there are two no-slip plates moving at a speed of $U$ in opposite directions, and periodic boundary conditions are used in the other directions. Due to the shear stress exerted by the surrounding fluid, the drop elongates until the surface tension counteracts the resulting load. We define the deformation ratio $\Gamma = (L - B)/(L + B)$ as in \cite{dual, shear1, shear2} to quantify the degree of drop deformation, where $B$ and $L$ are the lengths of the minor and major axes of the deformed drop at equilibrium, respectively, see Fig.~\ref{fig-shear1}. The governing dimensionless parameters are the capillary number $\Ca = \mu\dot{\gamma} R/\sigma$, the Reynolds number $\Re= \rho\dot{\gamma} R^2/\mu$, and the Weber number $\We=\rho(\dot{\gamma} R)^2 R/\sigma=\Ca \, \Re$, where $\dot{\gamma} =2U/H$ is the shear rate and $H$ the thickness of the fluid layer. Gravity is not considered here. With $\Ca \ll 1$ and $\Re \ll 1$, $\Gamma$ is expected to depend linearly on $\Ca$ according to $\Gamma\approx (35/32)\Ca$ \cite{shear0}. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{fig1-2a}% \hspace{0.1\linewidth} \includegraphics[width=0.4\linewidth]{fig1-2b}% \caption{\label{fig-shear2} (a) Comparison of the present results ($\nabla$) with the theoretical approach in \cite{shear0} (black line) and the previous numerical results in \cite{dual} ($\Delta$) in terms of the drop deformation ratio $\Gamma$ at various capillary numbers $\Ca$.
(b) Convergence study with mesh refinement at $\Ca=0.1$ in terms of the error $E_h$ of $\Gamma$. The slope of the solid line is $k=1.4$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig1-3a}% \hspace{0pt} \includegraphics[width=0.7\linewidth]{fig1-3b}% \caption{\label{fig-shear3} Breakup of a spherical drop in shear flow at $\Ca=0.39$ and $\Re=1$. The snapshots are at $t=20$ (upper) and $t=29$ (lower).} \end{figure} Fig.~\ref{fig-shear2} shows the variation of the deformation ratio $\Gamma$ as a function of $\Ca$ at $\Re=0.03$, for simulations performed on a grid with $h=0.005$. The comparison with the theoretical prediction \cite{shear0} and the previous numerical results \cite{dual} gives good agreement. With increasing $\Ca$, the deformation ratio $\Gamma$ becomes larger than the theoretical prediction \cite{shear0}, since the assumption $\Ca \ll 1$ underlying this prediction is no longer satisfied. As reported in previous studies \cite{dual, shear1, shear2}, the drop breaks up at $\Ca = 0.39$ and $\Re=1$. We also performed this case in a domain of $12R \times 8R \times 8R$, as shown in Fig.~\ref{fig-shear3}. The drop breaks up into three smaller ones, as expected. Fig.~\ref{fig-shear4} shows the results of the convergence study with different mesh sizes $h=0.0031$, $0.0042$, $0.0050$, $0.0063$ and $0.0100$ at $\Ca=0.1$. The numerical error $E_h$ is calculated by comparing $\Gamma$ to the value obtained with the finest mesh ($h=0.0031$). The convergence rate is $1.4$, which is between $1$ and $2$, as expected, since the phase-field method \cite{jacqmin00, ding07jcp} used here for the interface is first-order accurate while the NS solver is second-order accurate \cite{jcp96}. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig1-4}% \caption{\label{fig-shear4} Convergence study with mesh refinement in terms of $E_{max}$. Red symbols ($\nabla$) represent the data obtained with the explicit scheme of Eq.~(\ref{eq-disbi}) and $\delta t=5 \times 10^{-5}$, and blue symbols ($\Delta$) the data obtained with the new implicit scheme of Eq.~(\ref{eq-new}) and $\delta t=2 \times 10^{-3}$. The slope of the solid line is $k=1.4$.} \end{figure} We have also tested the performance of the explicit discretization scheme of Eq.~(\ref{eq-disbi}) and the new implicit scheme of Eq.~(\ref{eq-a3d}) for the biharmonic term $\Cn^2\nabla^4 C$ described in Section \ref{sec-bi}. Since the explicit scheme requires a small time step, here we consider the maximal deformation ratio $\Gamma_{max}$, which is reached around $t=0.4$, instead of $\Gamma$ at equilibrium, which is attained only around $t=30$. Fig.~\ref{fig-shear4} shows a convergence study with mesh refinement at $\delta t = 5\times 10^{-5}$ (the largest value that maintains numerical stability) for the explicit scheme and at $\delta t = 2\times 10^{-3}$ for the new scheme; the results agree well. This shows that the new implicit scheme of Eq.~(\ref{eq-a3d}) is highly efficient and discretizes the biharmonic term accurately. Thanks to this, we can perform large-scale simulations of turbulent multiphase flows.
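As a concrete illustration of how $\Gamma$ can be extracted from a simulated volume fraction field, the following sketch (a hypothetical post-processing step of ours, not part of the solver) estimates the axes of the deformed drop in a 2D slice from the second moments of the region $C>0.5$: for a filled ellipse with semi-axes $a \ge b$, the covariance eigenvalues are $a^2/4$ and $b^2/4$, so $\Gamma = (\sqrt{\lambda_1}-\sqrt{\lambda_2})/(\sqrt{\lambda_1}+\sqrt{\lambda_2})$.
\begin{verbatim}
import numpy as np

def deformation_ratio(C, h):
    # estimate Gamma = (L - B)/(L + B) from the second moments of the
    # drop region C > 0.5 in a 2D slice with grid spacing h
    pts = np.argwhere(C > 0.5).astype(float) * h
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]     # lam1 >= lam2
    a, b = 2.0*np.sqrt(lam[0]), 2.0*np.sqrt(lam[1])  # semi-axes
    return (a - b) / (a + b)

# sanity check on a synthetic ellipse with a = 0.30, b = 0.20:
# exact Gamma = (0.30 - 0.20)/(0.30 + 0.20) = 0.2
N, h = 512, 1.0/512
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
C = (((X - 0.5)/0.30)**2 + ((Y - 0.5)/0.20)**2 < 1.0).astype(float)
print(deformation_ratio(C, h))   # ~0.2
\end{verbatim}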
\subsection{Rising bubble with buoyancy} \label{sec-buble} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig2-1}% \caption{\label{fig-bubble1} Configuration for a rising bubble with buoyancy.} \end{figure} In this subsection, we test the performance of the present approach by simulating a three-dimensional bubble rising in a liquid, with density and viscosity contrasts of up to $1000$ and $100$, respectively, in the same configuration as previous axisymmetric studies \cite{ding07jcp,rising}. Initially, we place a bubble (fluid 2) of radius $R$ in a domain of $8R \times 8R \times 8R$, with the bubble center at a distance of $1.6R$ from the bottom plate, as shown in Fig.~\ref{fig-bubble1}. No-slip and non-penetration boundary conditions are enforced at all boundaries. The dimensionless parameters controlling this problem are the Reynolds number $\Re=\rho U R/\mu=100$, the Bond number $\Bo=\rho g R^2/\sigma=200$, and the density and viscosity ratios $\lambda_\rho=0.001$ and $\lambda_\mu=0.01$, respectively. Note that $\Fr=1$ and $\We=\Bo$ due to the characteristic velocity $U=\sqrt{gR}$. The mesh used here is $400 \times 400 \times 400$, where the mesh size is the same as in the axisymmetric simulations \cite{ding07jcp,rising}. \begin{figure} \centering \includegraphics[width=\linewidth]{fig2-2}% \caption{\label{fig-bubble2} (a) Interface shape of the rising bubble in the present study. (b) Comparison of the results in the $x-z$ plane of the present study (black) with those of the previous studies \cite{ding07jcp} (right half, green dashed line) and \cite{rising} (left half, red dashed line) at $t=0.8$, $1.6$ and $2.4$, respectively.} \end{figure} Thanks to buoyancy, the bubble rises. For $\Bo \gg 1$, with the surface tension not large enough to counteract buoyancy, the bottom of the bubble rises faster than the top, as shown in Fig.~\ref{fig-bubble2}. Therefore, eventually, the bubble breaks up from the tip and evolves into a toroid. Although a three-dimensional case is performed here to test our code, the flow is indeed axisymmetric, so that we can compare our results with the previous axisymmetric simulations \cite{ding07jcp,rising}. In our numerical simulations, the breakup occurs at $t = 1.61$ and $y = 4.1R$, which agrees well with previous simulations using different numerical approaches: $t = 1.60$ and $y = 4.05R$ with a level set method \cite{rising}, and $t = 1.61$ and $y = 4.09R$ with a diffuse-interface method \cite{ding07jcp}. In addition, Fig.~\ref{fig-bubble2} presents comparisons of the bubble shape at the time instants $t=0.8$, $1.6$ and $2.4$. It shows that the shape of the bubble interface in the present study is also in good agreement with the previous ones \cite{ding07jcp,rising}. \subsection{Multiphase turbulent Rayleigh-B\'enard convection} \label{sec-rb0} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig3-1}% \caption{\label{fig-rb1} Configuration for the breakup of one big drop in turbulent Rayleigh-B\'enard convection in a domain with a hot bottom plate ($\theta=1$) and a cold top plate ($\theta=0$).} \end{figure} Here we consider a possible application of the present approach to multiphase turbulent flows: turbulent Rayleigh-B\'enard convection with drops, as shown in Fig.~\ref{fig-rb1}. Rayleigh-B\'enard convection is the motion of a fluid in a cell heated from below and cooled from above \cite{rev1,rev2,chilla}.
For Rayleigh-B\'enard convection, the temperature advection equation reads \begin{equation} \rho c_p \left(\frac{\partial {\theta}}{\partial t} + {\bf u} \cdot \nabla \theta \right)= \frac{1}{\sqrt {\Pr \Ra }}\nabla \cdot (k_d\nabla \theta), \label{eq-t} \end{equation} where $c_p=k_d/(\kappa \rho)$ is the specific heat capacity. The thermal conductivity $k_d$ is defined as \begin{equation} k_d =C + \lambda_{k_d}(1-C), \label{eq-k} \end{equation} where $\lambda_{k_d}=k_{d2}/k_{d1}$ is the ratio of the thermal conductivities. We choose the distance between the hot and cold plates as the characteristic length, and the free-fall velocity $U=\sqrt{\alpha_1 g L \Delta}$ as the characteristic velocity. The relevant dimensionless groups of the configuration are the Rayleigh number $\Ra=\alpha_1 \rho_1 g L^3 \Delta/(\mu_1 \kappa_1)$ and the Prandtl number $\Pr=\mu_1/(\rho_1 \kappa_1)$, where $\alpha$ is the thermal expansion coefficient, $\Delta$ the temperature difference and $\kappa$ the thermal diffusivity, in addition to the dimensionless numbers controlling the droplets. For this case, the gravity force ${\bf G}$ in Eq.~(\ref{eq-ns}) depends on both $C$ and the dimensionless temperature $\theta$, whose effects on the density are considered within the Boussinesq approximation, \begin{equation} {\bf G}=\left\{[C+\lambda_\alpha \lambda_\rho (1-C)] \, \theta-\frac{\rho}{\Fr}\right\} {\bf j}, \label{eq-g} \end{equation} where $\lambda_\alpha=\alpha_2/\alpha_1$ is the ratio of the thermal expansion coefficients $\alpha$. \subsubsection{Breakup of one big drop in turbulent Rayleigh-B\'enard convection} \label{sec-rb} Initially, a drop of radius $0.23H$ (represented by $C=1$), whose density and viscosity match those of the ambient fluid, is placed at the center of the domain $H \times H \times H$, with a linear temperature profile and zero velocity. The boundary conditions at the top and bottom plates are $C=0$, no-slip velocity, and fixed temperature $\theta=0$ (top) and $\theta=1$ (bottom). Periodic boundary conditions are used in the horizontal directions. The chosen dimensionless parameters are $\Ra=10^8$, $\Pr=1$ and $\We=8000$. Note that $\We$ here is large because it is defined with the system height instead of the droplet size. For the local Weber number, which is defined using the droplet size, we find values of $O(1)$ after the droplet breakup, consistent with the Kolmogorov-Hinze theory \cite{kol,hinze}. The chosen Rayleigh number is large enough for the flow to enter the turbulent regime. The mesh is $500 \times 500 \times 500$, consistent with the grid resolution checks in \cite{zhou}. \begin{figure} \centering \includegraphics[width=\linewidth]{fig3-2}% \caption{\label{fig-rb2} Snapshots of the interface shape of drops at $\Ra=10^8$, $\Pr=1$ and $\We=8000$. The times are (a) $t=9$, (b) $t=12$, (c) $t=15$ and (d) $t=100$. Temperature on the surface is shown in different colors (hot in red and cold in blue).} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig3-3}% \caption{\label{fig-rb3} Probability distribution function (PDF) of the drop size $D/H$ calculated from the drop volume. The solid and dashed lines indicate the scaling laws $-10/3$ \cite{pdf} and $-4/3$ \cite{pdfjfm}, respectively. The red circles denote the results on the single-resolution meshes, and the blue crosses those on the multi-resolution meshes.} \end{figure} Fig.~\ref{fig-rb2} shows snapshots of the drops in Rayleigh-B\'enard convection.
The drop first deforms due to buoyancy (see Fig.~\ref{fig-rb2}a), and then breaks up because of the small surface tension (see Fig.~\ref{fig-rb2}b). As time evolves, hundreds of drops of various sizes are advected in the turbulent field (Figs.~\ref{fig-rb2}c and \ref{fig-rb2}d). The drop size is characterized by an effective diameter $D$, defined by $\frac{4\pi}{3} (D/2)^3=V$, with $V$ being the drop volume. The resulting distribution of the drop sizes is shown in Fig.~\ref{fig-rb3}. We observe that the probability distribution function (PDF) of the large drops follows the scaling $(D/H)^{-10/3}$, while that of the small drops obeys the scaling $(D/H)^{-4/3}$; both originate from theoretical studies of the respective regimes \cite{pdf,pdfjfm}: First, in turbulent flows, the distribution of drop sizes has been studied extensively. The well-known $-10/3$ scaling law for large drops in turbulence was proposed in Ref.~\cite{pdf} and validated by many experimental and numerical studies \cite{nature02,pipe,pop16,LB19JFM}. Second, the derivation of the $-4/3$ scaling law for the relatively small drops originates from a recent study \cite{pdfjfm}; it is based on the energy balance in a regime dominated by surface tension. Fig.~\ref{fig-rb3} shows that the present numerical simulations and the theoretical analyses \cite{pdf,pdfjfm} give consistent results. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig3-4}% \caption{\label{fig-rb4} Time evolution of the mass loss $l_{loss}$ of the drops in turbulent Rayleigh-B\'enard convection. The blue and red curves represent the results on the single- and multi-resolution meshes, respectively.} \end{figure} Mass conservation is also tested in this section. Fig.~\ref{fig-rb4} shows the normalized mass loss $l_{loss}= (m_t-m_0)/m_0$, where $m_t$ is the mass of fluid $1$ (the drops) at time $t$ and $m_0$ is the initial mass of the drop. We see that the maximal mass loss is of the order of $10^{-5}$ and that $l_{loss}$ does not increase in time. This demonstrates the good mass conservation of the present approach, consistent with other studies using phase-field methods \cite{ding07jcp, shu03, soldati3}. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{fig3-5a}% \hspace{0.1\linewidth} \includegraphics[width=0.4\linewidth]{fig3-5b}% \caption{\label{fig-rb5} (a) Wall time and (b) speedup relative to a single core as functions of the number of CPU cores, with $1000^3$ ($\Delta$) and $2000^3$ ($\nabla$) gridpoints. The empty symbols are the present data, and the filled symbols the data for turbulent single-phase flows \cite{gpu}.} \end{figure} We also simulated the case on multi-resolution meshes with otherwise unchanged parameters: a uniform mesh of $500^3$ for the CH equation and a stretched mesh of $250^3$ for the NS equations, i.e. the same resolution for the volume fraction $C$ and a coarser one for the velocity $\bf u$ and temperature $\theta$ compared to the single-resolution grid. The consistent results obtained on the multi- and single-resolution meshes are shown in Figs.~\ref{fig-rb3} and \ref{fig-rb4} in terms of the PDF of $D/H$ and the time evolution of $l_{loss}$. We also test the computational efficiency of the method on the supercomputer MareNostrum at the Barcelona Supercomputing Center (2 sockets Intel Xeon Platinum 8160 CPU with 24 cores each @ 2.10GHz, for a total of 48 cores per node). Two sets of gridpoints are used, i.e.
$1000^3$ and $2000^3$; the multi-resolution option is not used here, to match the setup of the previous study. The wall-clock time per step and the speedup relative to a single core, as functions of the number of CPU cores, are presented in Fig.~\ref{fig-rb5}. Compared to the AFiD code for single-phase flows \cite{gpu}, the computational cost of the present approach for multiphase flows is less than $1.5$ times higher. Moreover, the parallel efficiency remains good up to $3072$ CPU cores. These data show that the computational performance of the present approach for turbulent multiphase flows is nearly as good as that of the solver for turbulent single-phase flows. \subsubsection{Coalescence of $O(10^3)$ drops in Rayleigh-B\'enard convection} \label{sec-1000} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig4-1}% \caption{\label{fig-1000i} Initial configuration for the study of the coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection. The color code represents the temperature.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{fig4-2}% \caption{\label{fig-1000t} Snapshots of the interface shape of the drops at $\Ra=10^8$, $\Pr=1$ and $\We=1000$ with initially (a) $1000$ drops and (b) only one drop. Temperature on the surface is shown with the same color bar as in Fig.~\ref{fig-rb2}. } \end{figure} The topological changes of the interface include the breakup and coalescence of drops. In Section \ref{sec-rb}, we clearly observed the breakup of drops. In this section, we show the coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection. The initial setup is presented in Fig.~\ref{fig-1000i}, where we placed $1014$ drops with a uniform diameter of $0.08H$ in a domain of $2H\times 2H \times H$. The simulation was performed on a mesh of $1000\times 1000\times500$ using $2048$ CPU cores. The Weber number was set to $\We=1000$, which is smaller than that in Section \ref{sec-rb}. The other dimensionless parameters and boundary conditions are the same as in Section \ref{sec-rb}. As seen from the snapshots at $t=10$, $40$ and $150$ in Fig.~\ref{fig-1000t}(a), most of the drops coalesce into larger ones. Since the Weber number here is smaller than that in Section \ref{sec-rb}, the surface tension is stronger and can better resist inertia, leading to larger drop sizes. We also simulated a case with a different initialization, where only one big drop with a diameter of $0.8H$ is placed at the center of the domain. Although different initial conditions are used, similar statistical equilibrium states were obtained after sufficiently long times (see Fig.~\ref{fig-1000t}). \section{Conclusion} \label{sec-con} In this study we have shown how to efficiently implement the phase-field method into the single-phase DNS solver AFiD. A new discretization scheme for the biharmonic term $\Cn^2\nabla^4 C$ of the Cahn-Hilliard equation has been proposed. Together with the approximate-factorization method, the FFT-based Poisson solver, and a pencil distributed parallel strategy, massive DNSs (using up to 8 billion gridpoints and 3072 CPU cores) of turbulent multiphase flows can be performed. The suggested new approach has been validated by comparisons with several numerical experiments. In the case of drop deformation in shear flow, the results agree well with theoretical and previous numerical results, and the convergence study with mesh refinement shows an accuracy between first and second order, as expected.
Also for the case of a rising bubble with buoyancy, good agreement is achieved when comparing our results with previous simulations, even with density and viscosity contrasts of up to $1000$ and $100$, respectively. Furthermore, in the cases of breakup and coalescence of drops in turbulent Rayleigh-B\'enard convection, we observe good performance of our approach in dealing with turbulent multiphase flows, including good mass conservation and high computational efficiency, thus establishing that our scheme can perform reliable large-scale simulations of turbulent multiphase flows. The new scheme and code therefore offer great opportunities to better understand the physics of turbulent two-phase flows with coalescence and breakup of droplets and bubbles. \section*{Acknowledgments} This work was financially supported by the ERC Advanced Grant under project no. 740479. We acknowledge PRACE for awarding us access to MareNostrum in Spain at the Barcelona Supercomputing Center (BSC) under project 2020225335, and to Irene at Tr\`es Grand Centre de calcul du CEA (TGCC) under project 2019215098. This work was also carried out on the national e-infrastructure of SURFsara, a subsidiary of SURF cooperation, the collaborative ICT organization for Dutch education and research. \bibliographystyle{model1-num-names}
{ "timestamp": "2021-05-06T02:09:39", "yymm": "2105", "arxiv_id": "2105.01865", "language": "en", "url": "https://arxiv.org/abs/2105.01865" }
\section{Introduction} Space-times containing plane gravitational waves have seen extensive analytical study over the years, and many closed form solutions, which necessarily assume certain symmetries or wave profiles, now exist and their properties are known (see \cite{griffiths2016colliding} for an excellent overview). While there are a number of analytic solutions for the propagation and collision of waves assuming a vanishing cosmological constant \cite{brinkmann1923riemann, peres1959some, takeno1961mathematical,khan1971scattering, penrose1972geometry, nutku1977colliding}, the non-vanishing cosmological constant analogues pale in number, and there are no closed form solutions for colliding waves in this case. Penrose's cut-and-paste method \cite{penrose1972geometry, penrose1968twistor}, which cuts Minkowski space-time along a null hyperplane, shunts one half along the same surface and then pastes the two halves back together, gives rise to a space-time with one impulsive gravitational wave (i.e. with a Dirac delta function wave profile). This has been generalized to non-zero, constant curvature backgrounds \cite{podolsky1999nonexpanding, podolsky1999expanding, griffiths2000exact, podolsky2000collision, podolsky2002exact, podolsky2019cut}, where the wave fronts are topologically spherical for $\lambda>0$ and hyperboloidal for $\lambda<0$. There do not exist, however, closed form solutions to the full non-linear Einstein equations with $\lambda\neq0$ that contain gravitational waves with \emph{plane symmetric} wave fronts. De Sitter space-time, the maximally symmetric solution to the Einstein vacuum equations with positive cosmological constant, can be thought of as a model of a universe which is expanding at an accelerated rate due to the positive $\lambda$ contribution. Quantum gravitational back-reaction on inflation \cite{tsamis1997quantum} allows for the creation of cosmic-scale gravitational radiation, which, if one does not account for its creation, can be modelled completely classically through gravitational perturbations of de Sitter space-time \cite{tsamis2013pure,tsamis2014classical}. It is theorized that such a background of radiation may weaken the expansion and even halt it completely. Analytical calculations have been done to explore this hypothesis by studying how an expansion parameter and its time derivative could be manipulated through such a field at an initial instant of time. The question of what happens away from this surface remains unanswered, and attempting to answer it in the full non-linear regime analytically would be very difficult, if not impossible. In this paper, we numerically evolve the Einstein vacuum equations with positive cosmological constant in plane symmetry with the goal of shedding light on the above topics. To do so, we implement an initial boundary value problem (IBVP) following Friedrich and Nagy \cite{friedrich1999initial}, which is well-posed and allows us to generate gravitational perturbations through boundary conditions rather than by solving the constraints. This framework has already been implemented and numerically validated in previous work \cite{frauendiener2014numerical} for $\lambda=0$. We generalize this to an arbitrary cosmological constant, as well as to the inclusion of matter terms through components of $\Phi_{ab} = -(1/2)R_{ab} + (1/8)Rg_{ab}$ and the scalar curvature $\Lambda = (1/24)R$ for completeness. We follow the conventions of Penrose and Rindler \cite{penrose1986spinors,penrose1988spinors} throughout.
\section{Review of plane gravitational waves with $\lambda=0$} Here we briefly present the space-times of a single impulsive gravitational plane wave and of the collision of two colinearly polarized ones, with $\lambda=0$. This can be accomplished by summarizing the Khan-Penrose solution \cite{khan1971scattering}, which describes the latter. \begin{figure}[H] \centering \includegraphics[width=0.4\linewidth]{kpsoln.png} \caption{The structure of the Khan-Penrose solution.} \label{fig:kpsoln} \end{figure} Fig.~\ref{fig:kpsoln} showcases the Khan-Penrose solution in null coordinates $u,\,v$, where the two spatial dimensions that span the planes are suppressed, so that each point represents a plane. Null curves are represented by lines with slope $\pm1$ and the impulsive waves are given by $\Psi_0 = \delta(v),\,\Psi_4=\delta(u)$, where $\delta$ is the Dirac delta function, so their paths are given by the dashed lines. These lines split the space-time into four regions. The lower region is Minkowski space-time, the two side regions are space-times containing one propagating wave only, and the top region is the interaction region after scattering. All four regions can be represented by the single line element \begin{eqnarray}\label{eq:kpsoln} \textrm{d}s^2 &= \frac{2(1 - p^2 - q^2)^{3/2}}{\sqrt{1 - p^2}\sqrt{1 - q^2}(pq + \sqrt{1 - p^2}\sqrt{1 - q^2})^2}\textrm{d}u\textrm{d}v \nonumber \\ &\quad -(1 - p^2 - q^2)\Big{(}\frac{\sqrt{1 - p^2} + q}{\sqrt{1 - p^2} - q}\Big{)}\Big{(}\frac{\sqrt{1 - q^2} + p}{\sqrt{1 - q^2} - p}\Big{)}\textrm{d}x^2 \nonumber \\ &\quad -(1 - p^2 - q^2)\Big{(}\frac{\sqrt{1 - p^2} - q}{\sqrt{1 - p^2} + q}\Big{)}\Big{(}\frac{\sqrt{1 - q^2} - p}{\sqrt{1 - q^2} + p}\Big{)}\textrm{d}y^2, \end{eqnarray} where $p := u\,\Theta(u)$ and $q := v\,\Theta(v)$. The interaction region contains a spacelike curvature singularity on the surface $u^2 + v^2 = 1$, which can be seen as such due to the divergence of, for example, the Weyl invariant $I$. The region containing only the $\Psi_0$ wave is where $u<0$ and $v\geq0$; there, the line element Eq.~\eref{eq:kpsoln} (with $p=0$ and $q=v$) reduces to \begin{equation}\label{eq:OneImpulsiveWave} \textrm{d}s^2 = 2\textrm{d}u\textrm{d}v - (1 + q)^2\textrm{d}x^2 - (1 - q)^2\textrm{d}y^2. \end{equation} This region and its $\Psi_4$ counterpart contain a \emph{fold singularity} along $v=1$ resp. $u=1$. As Eq.~\eref{eq:OneImpulsiveWave} can be transformed to Minkowski space-time by a coordinate transformation, one would think this is merely a coordinate singularity. However, looking more closely, one sees that this is not the case, as there does not exist a $C^1$ extension from this region to $v=1$ resp. $u=1$ \cite{matzner1984metaphysics}. Further, it is found that a certain projection of the $u=\;$constant, $v=\;$constant surfaces into Minkowski space in standard null coordinates converges at $v=1$. This has the consequence, discussed in more detail in Sec.~\ref{sec:AnalysisOfSingleWave}, that the spin-coefficients $\rho$ and $\rho'$, which, when positive, represent the convergence of null geodesic congruences along $l^a$ and $n^a$ respectively, diverge to positive infinity, showing an ever-strengthening contraction of null rays in both null directions. \section{The equations} \subsection{General setup}\label{sec:general-setup} We write the Einstein equations in the form of an IBVP following Friedrich and Nagy \cite{friedrich1999initial}, with the additional imposition of a pair of commuting space-like Killing vectors that represent our plane symmetry.
Further, we include matter coming from an energy-momentum tensor $T_{ab}$. A detailed explanation of this process in vacuum with vanishing cosmological constant has been laid out in \cite{frauendiener2014numerical}. We only give a brief summary here, emphasising the differences when including a non-vanishing cosmological constant and matter. The Einstein equations take the form \begin{equation} \Phi_{ab} + (3\Lambda - \frac12\lambda)g_{ab} = 4\pi T_{ab}, \end{equation} where \begin{equation} R_{ab} = 6\Lambda g_{ab} - 2\Phi_{ab}, \end{equation} and $\Lambda$ and $\Phi_{ab}$ correspond to the trace and trace-free parts of the Ricci tensor $R_{ab}$, and $\lambda$ is the cosmological constant. To start setting up our gauge, we first assume that our space-time can be foliated by planes. We then define the coordinates $t,z$ for time and the direction of wave propagation respectively, both being constant within the planes. Using the holonomic basis we define the null tetrad \begin{eqnarray} l^a &= \frac{1}{\sqrt{2}}\Big{(}(1+B)(\partial_t)^a + A(\partial_z)^a\Big{)},\\ n^a &= \frac{1}{\sqrt{2}}\Big{(}(1-B)(\partial_t)^a - A(\partial_z)^a\Big{)},\\[4pt] m^a &= \xi(\partial_x)^a+\eta(\partial_y)^a, \end{eqnarray} where $A,B,\xi,\eta$ are functions of $(t,z)$ only. This leads to the metric \begin{equation} g = \mathrm{d} t^2 - 2 \frac{B}{A}\, \mathrm{d} t\mathrm{d} z - \frac{1-B^2}{A^2}\, \mathrm{d} z^2 + \frac2{(\xi\bar\eta - \bar\xi\eta)^2} \left(\eta\,\mathrm{d} x - \xi\,\mathrm{d} y\right)\left(\bar\eta\,\mathrm{d} x - \bar\xi\,\mathrm{d} y\right). \end{equation} To obtain equations for the metric functions and to find algebraic relations for the spin-coefficients (due to the plane symmetry assumption), we apply the commutator equations (see \cite{penrose1986spinors} Eq. (4.11.11)) to the coordinates. To obtain equations for the spin-coefficients we use the curvature equations (see \cite{penrose1986spinors} Eq. (4.11.12)). To obtain equations and algebraic relations for the components of the Weyl tensor $C_{abcd}$, $\Phi_{ab}$ and $\Lambda$, we use the equations coming from the Bianchi identity (see \cite{penrose1986spinors} Eqs (4.12.36-4.12.41)). The algebraic conditions are found to be \begin{eqnarray} \rho = \bar\rho,\quad \rho' = \bar\rho',\quad \kappa = \kappa' = \alpha = \beta = \tau = \tau' = 0, \\[4pt] \Psi_1 = \Psi_3 = 0,\quad \Psi_2 = \sigma\sigma' - \rho\rho' + \Lambda + \Phi_{11}, \\[4pt] \Phi_{01} = \Phi_{10} = \Phi_{12} = \Phi_{21} = 0. \end{eqnarray} Following Friedrich and Nagy, we set \begin{equation} \epsilon = \frac12(\rho - \rho' + F - \mu),\qquad \gamma = \frac12(\rho - \rho' + F + \mu), \end{equation} where the free function $F = \chi + i f$ is a freely specifiable gauge source function and $\mu$ is taken as a system variable. $\chi$ is the mean extrinsic curvature of the $z=$ constant hypersurfaces and $f$ determines the rotation of the $m^a$ frame vector along $(\partial_t)^a$. The geometrical interpretation of the new variable $\mu$ can be explained in the gauge $F = \rho' - \rho$, which is the gauge used for most of our results and turns out to be the Gau\ss\; gauge. Although this gauge is predisposed to developing caustics, the expanding universe, which we consider here, acts to counter this.
The fact that we are in the Gau\ss\; gauge can be seen immediately by noticing that the only non-vanishing component of the acceleration of the unit time-like vector $(\partial_t)^a$ along itself is proportional to \begin{equation} \gamma + \bar{\gamma} + \epsilon + \bar{\epsilon} = F + \bar{F} + 2(\rho - \rho') = 0 \end{equation} for this choice of $F$. The ``acceleration'' $z^a\nabla_az^b$ of the space-like unit vector $z^a := A(\partial_z)^a$ along itself is proportional to $\mu + \bar{\mu}$, which gives an interpretation for the real part of $\mu$. The imaginary part just corresponds to a phase change of $m^a$. It is found that the equations for $\eta,\xi$ decouple from the others, and, as they are superfluous to the results subsequently presented, we do not include them in the system. The evolution equations are \numparts \begin{eqnarray} \sqrt2 \partial_t A &= (\mu + \bar\mu)\,A, \label{ee:1}\\ \sqrt2 \partial_t B &= (2\rho - 2\rho' + F + \bar F) + (\mu + \bar\mu) B,\label{ee:2}\\ \sqrt2 \partial_t \rho &= 3\rho^2 + \sigma \bar\sigma + \rho(F + \bar F) + \Phi_{00} - \Phi_{11} - 3\Lambda,\label{ee:3}\\ \sqrt2 \partial_t \rho' &= 3\rho^{\prime2} + \sigma' \bar\sigma' - \rho'(F + \bar F) - \Phi_{11} + \Phi_{22} - 3\Lambda,\label{ee:4}\\ \sqrt2 \partial_t \sigma &= 4\rho\sigma - \rho'\sigma + \rho\bar\sigma' + \sigma(3F - \bar F) + \Psi_0,\label{ee:5}\\ \sqrt2 \partial_t \sigma' &= 4\rho'\sigma' - \rho\sigma' + \rho'\bar\sigma - \sigma'(3 F - \bar F) + \Psi_4,\label{ee:6}\\ \sqrt2 \partial_t \mu &= \mu^2 + \mu\bar \mu - 3 (\rho - \rho')^2 + (\mu + \bar \mu) (\rho + \rho') - \sigma \bar\sigma - \sigma'\bar\sigma' + 2 \sigma\sigma'\nonumber\\ & - (\rho - \rho')(\bar F + 3F) - F^2 - F \bar F - \sqrt2 A\partial_z F - \sqrt2 B\partial_t F \nonumber \\ & - \Phi_{00} + 2\Phi_{11} - \Phi_{22} - 6\Lambda, \label{ee:7}\\[5pt] &\hspace{-3.9cm}(1-B) \partial_t \Psi_0 - A \partial_z \Psi_0 = \sqrt2 \left((2\rho - \rho' + 2 F + 2\mu)\Psi_0 + \sigma(3\Psi_2 + 2\Phi_{11}) + \bar\sigma'\Phi_{00}\right),\label{ee:8}\\ &\hspace{-3.9cm}(1+B) \partial_t \Psi_4 + A \partial_z \Psi_4 = \sqrt2 \left((2\rho' - \rho - 2 F + 2\mu)\Psi_4 + \sigma'(3\Psi_2 + 2\Phi_{11}) + \bar\sigma\Phi_{22}\right),\label{ee:9} \end{eqnarray} \endnumparts while the constraints take the form \numparts \begin{eqnarray} 0=C_1 &:= \sqrt2 A\partial_z\rho - (1 - 3 B) \rho^2 - (1 - B) \sigma\bar\sigma + \rho (\mu + \bar\mu + 2\rho')\nonumber \\ &\quad + \rho B (F + \bar F) -(1-B)\Phi_{00} - (1+B)\Phi_{11} - 3(1+B)\Lambda,\label{ce:1}\\[4pt] 0=C_2 &:= \sqrt2 A\partial_z\rho' + (1 + 3 B) {\rho'}^2 + (1 + B) \sigma'\bar\sigma' - \rho' ( \mu + \bar\mu + 2\rho) \nonumber \\ &\quad - \rho' B (F+\bar F) + (1-B)\Phi_{11} + (1+B)\Phi_{22} + 3(1-B)\Lambda,\label{ce:2}\\[4pt] 0=C_3&:=\sqrt2 A\partial_z\sigma + (1+B) \rho\bar\sigma' - 2 (1-2B)\rho\sigma + (1-B) \rho'\sigma \nonumber\\ &\hskip8em + \sigma(3\mu - \bar\mu) + B\sigma(3F - \bar F) - (1-B) \Psi_0 ,\label{ce:3}\\ 0=C_4&:= \sqrt2 A\partial_z\sigma' - (1-B) \rho'\bar\sigma + 2 (1+2B)\rho'\sigma' - (1+B) \rho\sigma' \nonumber\\ &\hskip8em - \sigma'(3\mu - \bar\mu) - B\sigma'(3F - \bar F) + (1+B) \Psi_4. \label{ce:4} \end{eqnarray} \endnumparts To supplement the above, the divergence-free condition on the energy-momentum tensor (equivalently the Bianchi identity, given in \cite{penrose1986spinors}, see Eq.
4.12.40) gives \numparts \begin{eqnarray} &(1-B)\partial_t\Phi_{00} + (1+B)(\partial_t\Phi_{11} + 3\partial_t\Lambda) \nonumber \\ &= \sqrt{2}(2\rho + \mu + \bar\mu + F + \bar F)\Phi_{00} + 4\sqrt{2}\rho\Phi_{11} \nonumber \\ &\quad+ A(\partial_z\Phi_{00} - \partial_z\Phi_{11} - 3\partial_z\Lambda),\label{dfe:1}\\ &(1+B)\partial_t\Phi_{22} + (1-B)(\partial_t\Phi_{11} + 3\partial_t\Lambda) \nonumber \\ &= \sqrt{2}(2\rho' + \mu + \bar\mu - F - \bar F)\Phi_{22} + 4\sqrt{2}\rho'\Phi_{11}\nonumber \\ &\quad+ A(\partial_z\Phi_{11} - \partial_z\Phi_{22} + 3\partial_z\Lambda).\label{dfe:2} \end{eqnarray} \endnumparts Considering only the vacuum equations with cosmological constant, i.e. $\Phi_{ab}=0$, Eqs~\eref{dfe:1}--\eref{dfe:2} are identically satisfied, and Eqs~\eref{ee:1}--\eref{ee:9} and Eqs~\eref{ce:1}--\eref{ce:4} comprise a closed system of equations, where the evolution equations are symmetric hyperbolic and the constraints propagate. When matter terms are present and one takes into account Eqs~\eref{dfe:1}--\eref{dfe:2}, it is still found that the above system is symmetric hyperbolic and the constraints propagate. The resulting subsidiary system is \numparts \begin{eqnarray} \sqrt{2}\partial_tC_1 &= (6\rho + F + \bar F)C_1 + \bar\sigma C_3 + \sigma \overline{C_3}, \\ \sqrt{2}\partial_tC_2 &= (6\rho' - F - \bar F)C_2 + \bar\sigma' C_4 + \sigma' \overline{C_4}, \\ \sqrt{2}\partial_tC_3 &= (4\sigma + \bar\sigma')C_1 - \sigma C_2 + (4\rho - \rho' + 3F - \bar F)C_3 + \rho\overline{C_4}, \\ \sqrt{2}\partial_tC_4 &= (4\sigma' + \bar\sigma)C_2 - \sigma' C_1 + (4\rho' - \rho - 3F + \bar F)C_4 + \rho'\overline{C_3}. \end{eqnarray} \endnumparts In order to close the system, one must in general couple it to equations describing the evolution of matter. There is a lot of freedom in this choice, and it depends very much on the physical situation one wants to model. In general this choice will alter the principal part and, as a consequence, symmetric hyperbolicity and constraint propagation could be lost. Two useful quantities are now introduced for monitoring the behaviour of the evolved space-time. First, we note that the extrinsic curvature of our $t=\;$constant surfaces is $K_{ab}=-h_a^ch_b^d\nabla_ct_d$, where $h_{ab} = g_{ab} - t_at_b$ is the induced 3-metric on the surfaces and $t_a = (1-B^2)^{-1/2}(\textrm{d}t)_a$ is the unit conormal. We then define a local expansion parameter proportional to the mean extrinsic curvature $K_a{}^a$ as \begin{equation} \mathcal{H} := -\frac13K_a{}^a = \frac{\sqrt{2}(B^2-1)(B(F + \bar{F}) + \mu + \bar{\mu} + 2(\rho + \rho')) - 2 A \partial_zB}{6(1-B^2)^{3/2}}, \end{equation} which is used to monitor the expansion rate of the space-time along the time coordinate vector field. The Weyl scalar curvature invariants are useful tools for identifying whether a singularity is a curvature singularity. In the absence of matter and with our plane symmetry assumptions, the real part of $C_{abcd}C^{abcd}$ is the Weyl scalar curvature invariant \begin{equation}\label{eq:KretschmannScalar} I := 2\Psi_0\Psi_4 + 6\Psi_2^2. \end{equation} We define the wave profile \begin{equation} p(x) = \cases { 32a\sin(bx)^8 & $\displaystyle0<x<\frac{\pi}{b}$ \cr 0 & otherwise }, \end{equation} where $b=35\pi/4$, so that the area of the profile is $a=\int_0^{\pi/b} p(x)\,\textrm{d}x$ and the amplitude is $32a$. We take $a$ as a measure of the strength of the wave. The boundary conditions for $\Psi_0$ and $\Psi_4$ will make use of $p(x)$ and are chosen in the subsequent sections.
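For concreteness, here is a short sketch of the profile and a numerical check of its normalization (the helper name is ours; it simply mirrors the definition above):
\begin{verbatim}
import numpy as np

def wave_profile(x, a):
    # p(x) = 32 a sin(b x)^8 on 0 < x < pi/b, zero otherwise, b = 35 pi/4;
    # normalized so that the area under the profile equals a
    b = 35.0 * np.pi / 4.0
    return np.where((x > 0) & (x < np.pi / b),
                    32.0 * a * np.sin(b * x) ** 8, 0.0)

# check: the integral of 32 sin^8(u) over one arch is 32*(35 pi/128),
# so dividing by b = 35 pi/4 gives exactly a
a = 1.6769106                      # one of the areas used later
b = 35.0 * np.pi / 4.0
x = np.linspace(0.0, np.pi / b, 20001)
print(np.trapz(wave_profile(x, a), x))   # ~1.6769106
\end{verbatim}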
\subsection{De Sitter space-time} We investigate a variety of cases of plane gravitational waves propagating in de Sitter space-time (dS). The unperturbed metric in inflationary coordinates can be written \cite{hawking1973large} \begin{equation} \mathrm{d} s^2 = \mathrm{d} t^2 - A_0^{-2}e^{2Ht}(\mathrm{d} x^2 + \mathrm{d} y^2 + \mathrm{d} z^2), \qquad H^2=\lambda/3,\label{eq:dSLineElement} \end{equation} which covers half of the space-time and matches our setup for plane symmetry. This represents an expanding universe of the FLRW type. The quantity $A_0 := A(0,z)$ scales the spatial directions and will be useful later. It is useful to write dS in terms of null coordinates as \begin{eqnarray} \mathrm{d} s^2 &= e^{2Ht}\Big{(}2\mathrm{d} u\,\mathrm{d} v - (\mathrm{d} x^2 + \mathrm{d} y^2)\Big{)} \\ &= 2\Big{(}\sqrt{2} - H(u + v + \sqrt{2})\Big{)}^{-2}\Big{(}2\mathrm{d} u\,\mathrm{d} v - (\mathrm{d} x^2 + \mathrm{d} y^2)\Big{)}, \end{eqnarray} with transformations \begin{eqnarray} u &= \frac{1}{\sqrt{2}}[H^{-1}(1 - e^{-Ht}) - A_0^{-1}(1+z)],\label{id:u}\quad\\ v &= \frac{1}{\sqrt{2}}[H^{-1}(1 - e^{-Ht}) - A_0^{-1}(1-z)].\label{id:v} \end{eqnarray} The Minkowskian analogue of the above can be found in the limit $H\rightarrow0$. In our formalism Eq.~\eref{eq:dSLineElement} gives the initial data \begin{eqnarray}\label{eq:dSID} A = A_0,\quad \rho = \rho' = \mu = \pm\sqrt{\lambda/6}, \end{eqnarray} with the remaining system variables, gauge quantities and matter terms vanishing. We will use the negative non-vanishing initial data, corresponding to a \emph{future expanding} universe, and set $\Phi_{ab}=0$. We incorporate into the system null coordinates $u(t,z),v(t,z)$ which satisfy $l^a\nabla_au=0$ and $n^a\nabla_av=0$ respectively. Their initial and boundary data are fixed by Eqs~\eref{id:u}, \eref{id:v}, so that when no wave is present and $F$ is chosen appropriately we reproduce the null coordinates of Eq.~\eref{eq:dSLineElement}. The above expressions for $u,v$ were chosen so that initially $u(0,-1) = 0 = v(0,1)$. Having $u,v$ available allows us to define the semi-invariant coordinates $(T,Z)$ by $T:=\sqrt{2}(v+u)$ and $Z:=\sqrt{2}(v-u)$, with which we can produce Penrose-Carter diagrams, i.e. diagrams where null curves are lines with slope $\pm1$. When in exact dS, as $t\rightarrow\infty$ we obtain $T\rightarrow2(H^{-1} - A_0^{-1})$ and $Z\rightarrow2zA_0^{-1}$. \section{Numerical setup} We utilize the Python package COFFEE \cite{doulis2019coffee}, which contains all the necessary functionality to perform a numerical evolution using the method of lines. We discretize the $z$-direction into equi-distant points in the interval $[-1,1]$ and approximate the $z$-derivative using Strand's finite difference stencil \cite{strand1994summation}, which is fourth order in the interior, third order on the boundary and has the summation-by-parts property \cite{gustafsson1995time}. We march in time using the explicit fourth order Runge-Kutta scheme with a timestep determined by $\Delta t= c\,\Delta z$, where $\Delta z$ is the step size in the $z$-direction and $c$ is the CFL constant. Unless otherwise stated we take $c=0.5$. Boundary conditions are imposed using the Simultaneous Approximation Term (SAT) method \cite{carpenter1999stable} with $\tau=1$. This particular selection of numerical methods within COFFEE has proven to be numerically sound for a variety of different systems (see for example \cite{frauendiener2014numerical,beyer2017numerical}).
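To make the discretization concrete, the fragment below sketches the same method-of-lines ingredients on a toy one-dimensional advection equation standing in for a single characteristic mode. It is a hedged illustration only: it is not COFFEE's interface, the one-sided edge stencils are low-order placeholders for Strand's summation-by-parts closures, and the penalty term shows the SAT idea in its simplest form.
\begin{verbatim}
import numpy as np

N = 401
z = np.linspace(-1.0, 1.0, N)
dz = z[1] - z[0]
dt = 0.5 * dz                     # CFL constant c = 0.5, as in the text

def d_z(f):
    # Fourth-order central first derivative in the interior; simple
    # one-sided differences at the edges (placeholder for an SBP closure).
    df = np.empty_like(f)
    df[2:-2] = (f[:-4] - 8.0*f[1:-3] + 8.0*f[3:-1] - f[4:]) / (12.0*dz)
    df[:2] = (f[1:3] - f[:2]) / dz
    df[-2:] = (f[-2:] - f[-3:-1]) / dz
    return df

def rhs(t, psi, bc):
    # Toy model d(psi)/dt = d(psi)/dz: a left-moving mode, so data
    # enters through the right boundary z = 1.
    dpsi = d_z(psi)
    tau = 1.0                     # SAT penalty strength
    dpsi[-1] += tau * (bc(t) - psi[-1]) / dz
    return dpsi

def rk4_step(t, psi, bc):
    # Classic explicit fourth-order Runge-Kutta step.
    k1 = rhs(t, psi, bc)
    k2 = rhs(t + 0.5*dt, psi + 0.5*dt*k1, bc)
    k3 = rhs(t + 0.5*dt, psi + 0.5*dt*k2, bc)
    k4 = rhs(t + dt, psi + dt*k3, bc)
    return psi + dt/6.0 * (k1 + 2.0*k2 + 2.0*k3 + k4)
\end{verbatim}
A full evolution replaces the toy right-hand side with Eqs~\eref{ee:1}--\eref{ee:9} and uses the proper summation-by-parts norm weight in the penalty term.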
In the subsequent situations, all constraints are verified to converge at the expected order everywhere. \section{A single wave}\label{sec:SingleWave} \subsection{An analytical view}\label{sec:AnalysisOfSingleWave} Before analyzing the numerical results, it is worthwhile to perform a small analytic study of the propagation of one wave when either Minkowski or de Sitter initial data are taken. Firstly, the evolution equations for $\rho$ and $\sigma$ (Eqs~\eref{ee:3} and \eref{ee:5}), which have a close relationship to Sachs' optical equations, give with vanishing $\Phi_{ab}$ \begin{eqnarray} \sqrt{2}\partial_t\rho &= \rho(F + \bar{F}) + 3\rho^2 + \sigma\bar{\sigma} - 3\Lambda,\qquad \\ \sqrt{2}\partial_t\sigma &= \sigma(3F - \bar{F} + 4\rho - \rho') + \rho\bar{\sigma}' + \Psi_0, \end{eqnarray} where the NP scalar $\Lambda$ is proportional to the cosmological constant $\lambda$. For the case of Minkowski initial data, which is obtained by setting $\lambda=0$ in the de Sitter initial data, and where we choose $F(t,z)=0$ to extend the exact gauge of dS to the whole space-time, we find the following: The introduction of a non-zero $\Psi_0$ on the right boundary causes $\sigma$ to become non-zero there. This in turn causes $\rho$ to become non-zero. Since $\sqrt{2}\partial_t\rho = 3\rho^2 + \sigma\bar{\sigma} \geq 0$, once $\rho$ becomes positive the Riccati-type inequality $\sqrt{2}\partial_t\rho \geq 3\rho^2$ forces $\rho$ to diverge in finite time. Further, by looking at the evolution equations for the primed spin-coefficients one sees that they all stay zero throughout the space-time, due to the identically vanishing $\Psi_4$. This implies that $\Psi_2=0$ everywhere, and thus the Weyl invariant $I$ given by Eq.~\eref{eq:KretschmannScalar} also remains zero everywhere. These are well-known results for the propagation of a single plane gravitational wave in Minkowski space-time; see \cite{griffiths2016colliding} for an overview (in a different gauge). The case of expanding de Sitter initial data, with non-vanishing $\lambda$ and again choosing $F(t,z)=0$, is quite different. A non-zero $\Psi_0$ leads to a non-zero $\sigma$ as before, but now a non-zero $\sigma$ leads to a non-zero $\sigma'$ as well as a non-zero $\rho$. This non-zero $\sigma'$ then makes $\rho'$ and even $\Psi_4$ non-zero, meaning the non-linear back-reaction effect is realized. This in turn leads to a non-zero Weyl invariant. The added complexity of the non-zero $\lambda$, which couples all system variables together in a complicated, non-linear way, prevents us from drawing conclusions analogous to the Minkowski case above, emphasising the need for numerics. \subsection{Numerical analysis} We now fix $\lambda=3$ and choose the boundary conditions to be \begin{equation} \Psi_4(t,-1) = 0,\qquad \Psi_0(t,1) = p(v(t)), \end{equation} where $p(v)$ has the area of the wave packet as a parameter, and a change of area is realized by a change in amplitude. We perform evolutions with wave areas $a$ taking the values $1.67,\,1.6765,\,1.6769105,\,1.6769106,\,1.676912$, $1.67695,\,1.68$ for reasons that will become apparent shortly. In all these cases, once the wave has entered and subsequently left the computational domain, the space-time is fully excited in that all system variables have evolved away from their original values. For the three smallest values of $a$ we find that the space-time asymptotes back to the de Sitter space-time everywhere. This indicates that the wave has been wiped out by the accelerated expansion, already in stark contrast to the Minkowskian analogue where a future singularity is guaranteed.
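This relaxation back to de Sitter space-time can be made plausible with a drastically truncated toy model. Keeping only the homogeneous $\rho$ equation with $\sigma = F = 0$, Eq.~\eref{ee:3} reduces to $\sqrt{2}\,\partial_t\rho = 3\rho^2 - 3\Lambda$, for which the expanding de Sitter value $\rho=-\sqrt{\Lambda}$ is an attractor and $\rho=+\sqrt{\Lambda}$ a repeller. The Python sketch below integrates this single ODE; the identification $\Lambda=\lambda/6$ is the standard Newman-Penrose normalisation and is assumed here.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam = 3.0         # cosmological constant, as in the runs above
Lam = lam / 6.0   # assumed NP normalisation Lambda = lambda/6

def rhs(t, rho):
    # Truncated Eq. (ee:3): sqrt(2) d(rho)/dt = 3 rho^2 - 3 Lambda.
    return (3.0 * rho**2 - 3.0 * Lam) / np.sqrt(2.0)

for rho0 in (-np.sqrt(Lam) + 0.3, -np.sqrt(Lam) - 0.3):
    sol = solve_ivp(rhs, (0.0, 10.0), [rho0], rtol=1e-10)
    print(rho0, "->", sol.y[0, -1])  # both relax back to -sqrt(Lam)

# A perturbation pushing rho beyond +sqrt(Lam) instead blows up in
# finite time, a caricature of the strong-wave future singularity.
\end{verbatim}
The full system of course couples $\rho$ to $\sigma,\sigma',\rho'$ and the curvature, so this toy model motivates, but cannot replace, the numerical evolutions.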
Fig.~\ref{fig:Psi0andHOneWaveNoBlowup} shows contour plots of $\Psi_0$ and $\mathcal{H}$ over the entire space-time. It is clear that $\mathcal{H}$ decreases due to the addition of the gravitational wave, but then settles back down to its original value of one. The only remaining effect after the wave has passed is the time delay between different regions of space-time, such as the left and right boundaries. \begin{figure}[H] \centering \subfloat[\centering $\Psi_0$] {{\includegraphics[width=0.5\linewidth]{Psi0ContourSingleWaveNoBlowup.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourSingleWaveNoBlowup.png}}} \caption{Contour plots of $\Psi_0$ and $\mathcal{H}$ plotted with respect to the semi-invariant $T$ and $Z$ coordinates where $a=1.67$. The dashed line represents the last timeslice and the crossed lines are $u=0$ and $v=0$.} \label{fig:Psi0andHOneWaveNoBlowup} \end{figure} To see how the representation of the null directions $l^a$ and $n^a$ in the coordinate basis changes during the simulation, we look at the metric functions $A$ and $B$. It is seen that $A\rightarrow0$ as in the exact de Sitter case, representing the exponential expansion, and although $B$ initially increases to some value less than one, it asymptotes back to zero. Notably, the rates at which $A\rightarrow0$ and $B\rightarrow0$ cause the $\textrm{d}t\textrm{d}z$ metric coefficient to asymptote to a constant non-zero value and the $\textrm{d}z^2$ metric coefficient to diverge to positive infinity. The fact that $A$ never actually reaches zero (and $B$ never reaches one) implies that our gauge remains regular. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{NullVec1.png} \caption{Two diagrams showcasing how the vectors $l^a$, $n^a, \partial^a_t$ and $\partial^a_z$ behave on the right boundary when a future singularity occurs.} \label{fig:RightBoundaryDiagram} \end{figure} For the four largest values of $a$ we find that the simulation crashes after some time due to $A\rightarrow0$ and $B\rightarrow1$ in finite time on the right boundary, the same as in the Minkowskian case. Fig.~\ref{fig:RightBoundaryDiagram} shows how this affects the relevant frame vectors there, where we note the relationships \begin{equation} t^a := \partial_t^a = \frac{1}{\sqrt{2}}\Big{(}l^a + n^a\Big{)},\qquad z^a := A \partial_z^a = \frac{1}{\sqrt{2}}\Big{(}(1-B)l^a - (1+B)n^a\Big{)}, \end{equation} where $t^a$ and $z^a$ are normalised. The left diagram is with respect to the $\{l^a,n^a\}$ null basis defined in the tangent space and exemplifies the fact that $t^a = \partial_t^a$ is always normalised to one. It also shows that the evolution of $z^a$ can cause trouble: as $B\rightarrow1$ the $z=\;$constant surfaces become characteristic. The right diagram looks at another potential issue, this time in our $(T,Z)$-coordinates. In this case $t^a$ is no longer given by a vertical line, but $l^a$ and $n^a$ remain lines with slope $\pm1$ from the definition of $u$ and $v$. The ``shrinking'' of $n^a$ and the ``growing'' of $l^a$ is due to both coefficients of $n^a$ in the coordinate basis approaching zero, and reflects the facts that $t^a$ is proportional to the sum of the two and that their normalisation conditions are maintained. This behaviour affects the expansion rate $\mathcal{H}$, and we find it decreases and in fact diverges to $-\infty$ on the right boundary, as shown in Fig.~\ref{fig:IandHOneWaveBlowup}.
\begin{figure}[H] \centering \subfloat[\centering $I$] {{\includegraphics[width=0.5\linewidth]{IContourSingleWaveBlowup.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourSingleWaveBlowup.png}}} \caption{Contour plots of $I$ and $\mathcal{H}$ plotted with respect to the semi-invariant $T$ and $Z$ coordinates where $a=3$. The dashed line represents the last timeslice.} \label{fig:IandHOneWaveBlowup} \end{figure} The features discussed above indicate that for these larger wave areas the expansion of the space-time is not strong enough to overcome the contractivity of the wave, and a future singularity is formed. In the Minkowski case the analogue is a \emph{fold singularity}, as discussed in Sec.~\ref{sec:AnalysisOfSingleWave}. In our de Sitter case, Fig.~\ref{fig:IandHOneWaveBlowup} and Fig.~\ref{fig:IAlongRightBoundary} show that the Weyl invariant $I$ diverges on (and close to) the right boundary, unlike in the Minkowski case, supporting the classification as a curvature singularity. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{I_AlongBoundary_a3p0.png} \caption{The Weyl invariant $I$ along the right boundary with $a=3$ for multiple $z$-resolutions, which all fall within the same drawn curve.} \label{fig:IAlongRightBoundary} \end{figure} A final note is that changing the polarization of the wave, implemented by replacing $p(x)$ with $e^{i\phi}p(x)$ for some real constant $\phi$, does not affect the expansion rate or the Weyl invariant shown in Fig.~\ref{fig:Psi0andHOneWaveNoBlowup}, Fig.~\ref{fig:IandHOneWaveBlowup} and Fig.~\ref{fig:IAlongRightBoundary}. \subsection{Critical behaviour}\label{sec:criticalbehaviour} An obvious question arises: What is the critical behaviour when the ingoing wave has the critical wave area $a_c$ that separates these two distinct futures? One can obtain $a_c$ by a simple binary search. For $\lambda=3$ it is found that $1.67691055 < a_c < 1.67691056$. Fig.~\ref{fig:rhosigma_AlongRightBoundary} shows $\rho$ and $\sigma$ along the right boundary for various wave areas close to $a_c$. It is clear that as $a\rightarrow a_c$ an interval appears where $\rho$ and $\sigma$ are constant in time, and that this interval becomes longer the closer $a$ is to $a_c$. This indicates that a special critical behaviour may exist. \begin{figure}[H] \centering \subfloat[\centering $\rho$] {{\includegraphics[width=0.5\linewidth]{rhoAlongRightBoundary.png}}} \qquad \subfloat[\centering $\sigma$] {{\includegraphics[width=0.5\linewidth]{sigmaAlongRightBoundary.png}}} \caption{Plots of $\rho$ and $\sigma$ along the right boundary for different wave areas close to $a_c$. The curves corresponding to the three smallest values of $a$, ordered from smallest to largest, are those asymptoting back to their initial values from left to right. The curves corresponding to the four largest values of $a$, ordered from smallest to largest, are those which diverge from right to left.} \label{fig:rhosigma_AlongRightBoundary} \end{figure} All system variables except $A$ become constant in a finite $t$-interval on the boundary, which becomes larger the closer the wave area is taken to $a_c$, and $\mu,\rho,\rho',\sigma,\sigma'$ take on values different from their initial ones. This implies a steady state solution, different from the de Sitter space-time.
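Before examining this steady state further, we note that the bracketing of $a_c$ quoted above (and of the bifurcation values in later sections) is a straightforward bisection. Schematically, in Python, with \texttt{evolve} a stand-in for a complete run of the evolution code:
\begin{verbatim}
def bracket_critical_area(evolve, a_lo, a_hi, tol=1e-8):
    # evolve(a) is a placeholder: it runs a full evolution with wave
    # area a and returns True if a future singularity forms (A -> 0,
    # B -> 1 on the right boundary) and False if the space-time
    # relaxes back to de Sitter.
    assert not evolve(a_lo) and evolve(a_hi)
    while a_hi - a_lo > tol:
        a_mid = 0.5 * (a_lo + a_hi)
        if evolve(a_mid):
            a_hi = a_mid   # singular run: a_c lies below a_mid
        else:
            a_lo = a_mid   # run relaxes to dS: a_c lies above a_mid
    return a_lo, a_hi

# e.g. bracket_critical_area(evolve, 1.6, 1.7) for lambda = 3
\end{verbatim}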
It turns out that we can solve algebraically for unconstrained steady state solutions (but with $A$ a function of time) by setting all time derivatives except that of $A$ to zero in our evolution system, as well as taking $\Psi_0 = \Psi_4 = F = 0$. One of these solutions is found to match the values we see numerically. However, this exact solution \emph{does not} satisfy the constraint equations, and is thus a ``false'' steady state. This can be seen explicitly during our evolution by noticing that the constraints do not converge and are wildly violated during this steady state period, see Fig.~\ref{fig:BifurcationConstraintViolation}. This is a consequence of our free evolution scheme, which, by definition, is ``free'' from enforcing the constraints to be satisfied. It is found that the only free steady state solution (with $A$ varying in time) that also satisfies the constraints in the case of a positive cosmological constant with $\Psi_0 = \Psi_4 = F = 0$ is the de Sitter space-time. \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{ConstraintViolationOneWaveBifurcation} \caption{A convergence test for the constraint $C_1$ along the right boundary for the case of a single wave where $a=1.6769105$ and $\lambda=3$.} \label{fig:BifurcationConstraintViolation} \end{figure} The fact that no critical behaviour exists for this wave profile ansatz will be important when attempting to find a solution where the expansion is halted by gravitational radiation, and will be discussed in detail in Sec.~\ref{sec:SupressingExpansion}. \subsection{An impulsive wave}\label{sec:OneImpulsiveWave} Many analytical solutions describing gravitational waves in the literature have an impulsive wave profile, i.e. $\Psi_0 = \delta(v)$ where $v$ is a null coordinate, which is a consequence of the cut-and-paste method of Penrose \cite{penrose1972geometry}. An example is the propagation of a single impulsive gravitational plane wave with $\lambda=0$ given by Eq.~\eref{eq:OneImpulsiveWave}, where $\Psi_2 = 0 = \Psi_4$. To date, an exact solution for a single propagating plane gravitational wave with $\lambda>0$ has not been found. One cannot use Penrose's cut-and-paste method to find such a solution, because this leads to wavefronts that are spherical or hyperboloidal when $\lambda>0$ or $\lambda<0$ respectively \cite{podolsky2019cut}. Thus, to try to shed some light on a possible analytic solution, we numerically evolve our system with $\lambda>0$ and with one ingoing wave whose profile approximates the Dirac delta function. We set $\Psi_0(t,1) = q(v(t))$, where \begin{equation}\label{eq:qBC} q(x) := \cases { a\sin(bx)^8 & $\displaystyle0<x<\frac{\pi}{b}$ \cr 0 & otherwise }, \end{equation} with $b = 35\pi a / 128$, so that $q(x)$ has unit area and the property that $\displaystyle\lim_{a\rightarrow\infty}q(x)=\delta(x)$. We also change our gauge and fix $F$ by the condition that $\partial_t B=0$. This matches the gauge of the exact solution given by Eq.~\eref{eq:OneImpulsiveWave} and yields $F = \rho' - \rho$. We choose $a=128,\,256,\,512,\,1024$, populate our spatial interval $z\in[-1,1]$ with $6401$ equi-distant points to accurately resolve these steep wave profiles, and choose $\lambda=0.6$ and $\lambda=1.2$ to exemplify futures that do and do not have a singularity respectively. For $\lambda=1.2$, to see the effect of the limit $a\rightarrow\infty$, Fig.~\ref{fig:ImpulsiveWaveAlongz0Lambda0p2} shows the Weyl components along $z=0$. These seem to indicate that in this limit they all vanish for $v>0$ along $z=0$.
By inspection it is clear that this happens along any $z=\;$constant curve once the wave has passed, and thus in the whole region $v>0$. Further, all system variables asymptote back to dS after the wave has passed, and no singularity is formed. Note the numerical error in Fig.~\ref{fig:ImpulsiveWaveAlongz0Lambda0p2}~(a) just before $t=1$. This is due to the steep wave profile and its interaction with the left boundary propagating back into the computational domain. This phenomenon is discussed in detail in Sec.~\ref{sec:CollidingImpulsiveWaves}. \begin{figure}[H] \centering \subfloat[\centering $\Psi_0$] {{\includegraphics[width=0.33\linewidth]{psi0_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \qquad \subfloat[\centering $\Psi_2$] {{\includegraphics[width=0.33\linewidth]{psi2_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \qquad \subfloat[\centering $\Psi_4$] {{\includegraphics[width=0.33\linewidth]{psi4_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \caption{Plots along $z=0$ of $\Psi_0, \Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=1.2$.} \label{fig:ImpulsiveWaveAlongz0Lambda0p2} \end{figure} Fig.~\ref{fig:ImpulsiveWaveAlongRightBoundaryLambda0p10p2} shows the $\Psi_2$ and $\Psi_4$ components along the right boundary for $\lambda=0.6$ and $\lambda=1.2$, where a future singularity is formed when $\lambda=0.6$. \begin{figure}[H] \centering \subfloat[\centering $\Psi_2$ for $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{psi2_OneImpulsiveWave_AlongRightBoundary_Lambda0p1.png}}} \qquad \subfloat[\centering $\Psi_2$ for $\lambda=1.2$] {{\includegraphics[width=0.5\linewidth]{psi2_OneImpulsiveWave_AlongRightBoundary_Lambda0p2.png}}} \\ \subfloat[\centering $\Psi_4$ for $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{psi4_OneImpulsiveWave_AlongRightBoundary_Lambda0p1.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=1.2$] {{\includegraphics[width=0.5\linewidth]{psi4_OneImpulsiveWave_AlongRightBoundary_Lambda0p2.png}}} \caption{Plots along $z=1$ of $\Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=0.6$ and $\lambda=1.2$.} \label{fig:ImpulsiveWaveAlongRightBoundaryLambda0p10p2} \end{figure} As in the $\lambda=0$ case, this is a curvature singularity. It is much easier to see this in the $\lambda > 0$ case, as the Weyl invariant $I$ diverges to positive infinity. \section{Two waves} We now present results pertaining to the scattering of two collinearly polarized gravitational waves. The setup is analogous to the single wave case of Sec.~\ref{sec:SingleWave} with the exception of the boundary condition for $\Psi_4$, which is now taken to be $\Psi_4(t,-1) = p(u(t))$. We continue to use the gauge $F=\rho'-\rho$, which corresponds to the gauge used in the Khan-Penrose solution for colliding collinearly polarized impulsive gravitational plane waves with $\lambda=0$ \cite{khan1971scattering}. It is found that many features are similar to the case of one wave. \subsection{Comparison against $\lambda=0$} The general behaviour can be explained by looking at the contour plots of $I$ in Fig.~\ref{fig:CollidingWavesI} for varying $\lambda$ (so that we can see how $\lambda>0$ differs from $\lambda=0$) and fixing $a=1$ in the wave profiles. If $\lambda$ is small enough ($\lambda=0$ or $\lambda=0.06$), we obtain a future curvature singularity. As $\lambda$ gets larger ($\lambda=0.6$), the expansion increases the time before this singularity occurs.
If we increase $\lambda$ further ($\lambda=6$), we reach the situation where the expansion has wiped out the waves and the effect of their scattering on the curvature, and we asymptote back to dS again. \begin{figure}[H] \centering \subfloat[\centering $\lambda=0$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0.png}}} \qquad \subfloat[\centering $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0p01.png}}} \\ \subfloat[\centering $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0p1.png}}} \qquad \subfloat[\centering $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{I_Lambda1.png}}} \caption{Penrose-Carter contour plots of the Weyl invariant $I$ for the case of colliding waves with varying $\lambda$.} \label{fig:CollidingWavesI} \end{figure} Fig.~\ref{fig:CollidingWavesH} shows the expansion rate $\mathcal{H}$ decreasing the most in the centre of the collision, $u=v$, where the Weyl invariant $I$ attains a local maximum (in time and space). \begin{figure}[H] \centering \subfloat[\centering $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{ContourPlotHTwoCollidingWavesLambda0p01}}} \qquad \subfloat[\centering $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{ContourPlotHTwoCollidingWavesLambda1}}} \caption{Penrose-Carter contour plots of the expansion rate $\mathcal{H}$ for the case of colliding waves with varying $\lambda$.} \label{fig:CollidingWavesH} \end{figure} \subsection{Critical behaviour}\label{sec:criticalbehaviourtwowaves} Now we fix $\lambda=0.6$ and examine how varying the area $a$ of the wave profile affects the evolution. We find the following three scenarios, where $a_1$ and $a_2$ are given later in the section and are found with binary search: \begin{itemize} \item[1] $a\lessapprox a_1$: asymptote back to dS. \item[2] $a_1\lessapprox a \lessapprox a_2$: $\mu\rightarrow\infty$ but $I\rightarrow0$. \item[3] $a\gtrapprox a_2$: $\mu\rightarrow\infty$ and $I\rightarrow\infty$. \end{itemize} Only in case 3 do $\rho,\,\rho',\,\sigma,\,\sigma'$ diverge; in the other two they asymptote back to their initial values. Due to the evolution equation $\sqrt{2}\partial_tA = (\mu + \bar{\mu})A$, in cases 2 and 3 we have that $A\rightarrow\infty$ also, causing the $t,z$ portion of the line element to approach $\textrm{d}t^2$, i.e. an infinite contraction in the $z$-direction. This is represented in $l^a$ and $n^a$ as shown in Fig.~\ref{fig:AToInfinityDiagram}, where the $t=\;$constant surfaces approach being null. Further, as we found in Sec.~\ref{sec:general-setup}, the real part of $\mu$ is essentially the acceleration of the unit normal to the $z=\;$constant surfaces, and the fact that this acceleration diverges to negative infinity agrees with the contraction in this direction. We are in the Gau\ss{} gauge, and along spatially constant curves, which are in this case geodesics, the proper time and the coordinate time $t$ coincide. Our gauge can then be thought of as adapted to free-falling observers, which lends a physical interpretation to the caustic singularity of case 2. The three possible futures occurring after the interaction of the gravitational waves with these observers can then be described as follows: \begin{itemize} \item Case 1: The gravitational contraction is not strong enough to cause the timelike geodesics to converge or the curvature to diverge. \item Case 2: The gravitational contraction is strong enough to cause the timelike geodesics to converge and create a coordinate singularity.
However, it is not strong enough to cause the curvature to diverge, and the curvature returns to zero. \item Case 3: The gravitational contraction is strong enough to cause both the timelike geodesics to converge and the curvature to diverge, resulting in a physical curvature singularity. \end{itemize} \begin{figure}[H] \centering \includegraphics[width=0.3\linewidth]{NullVec2.png} \caption{The effect of $A\rightarrow\infty$ as $t\rightarrow\infty$ on the null vectors along a $z=\;$constant curve.} \label{fig:AToInfinityDiagram} \end{figure} It is noted that in the gauge $B=0$ the characteristic speeds of the waves are $\pm A$. In the cases where $A\rightarrow\infty$ we decrease the CFL constant $c$ dynamically to avoid instabilities and settle for smaller timesteps instead. As we now have \emph{two} bifurcations, which we call $a_1$ and $a_2$, it remains to be seen whether these exhibit critical behaviour. We find, using binary search, that $a_1 \approx 0.852548$. Unlike in Sec.~\ref{sec:criticalbehaviour}, the constraints do not diverge as the wave area $a$ is taken closer to $a_1$. Fig.~\ref{fig:HIMuCrit} and Fig.~\ref{fig:AHAlongz0MuCrit} show that the expansion rate drops, at its minimum, to around $25\%$ of its original value for a long time, before asymptoting back to dS again. This implies that, with just two colliding waves, we can decrease the expansion rate locally and substantially for a certain period without causing a future singularity. Further, it is noticed that although $\mu$ differs substantially between the above cases, $\rho, \rho', \sigma$ and $\sigma'$ change very little; if drawn, their curves would differ by less than the line width. \begin{figure}[H] \centering \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourCollidingWavesMuCrit.png}}} \qquad \subfloat[\centering $I$] {{\includegraphics[width=0.5\linewidth]{IContourCollidingWavesMuCrit.png}}} \caption{The expansion rate $\mathcal{H}$ and Weyl invariant $I$ with $a\approx a_1$ and $\lambda=0.6$.} \label{fig:HIMuCrit} \end{figure} \begin{figure}[H] \centering \subfloat[\centering $A$] {{\includegraphics[width=0.4\linewidth]{A_CollidingWavesAlongz0MuCrit.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.4\linewidth]{H_CollidingWavesAlongz0MuCrit.png}}} \caption{The metric function $A$ and the expansion rate $\mathcal{H}$ with $\lambda=0.6$ along $u=v$ (equivalently $z=0$) for multiple values of $a$ close to $a_1$.} \label{fig:AHAlongz0MuCrit} \end{figure} We find, again using binary search, that $a_2\approx0.9595$, and taking $a$ close to this value results in the constraints remaining well behaved. Fig.~\ref{fig:IAlongz0ICrit} shows that the Weyl invariant $I$ diverges for $a>a_2$, goes to zero for $a<a_2$ and tends to some other value when $a\approx a_2$. In all these cases $\mu$ diverges to infinity and thus so does $A$. This implies that to maintain a stable evolution our timestep must decrease to compensate, and the simulations shown in Fig.~\ref{fig:IAlongz0ICrit} stop when the timestep becomes smaller than $10^{-8}$. It is likely that the simulation with $a=0.9595$ does not actually converge to some constant value other than zero; rather, we cannot march far enough in time to see it either diverge to infinity or approach zero.
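The dynamic CFL reduction mentioned above amounts to rescaling the timestep by the largest characteristic speed on the grid. A minimal sketch of this control, with the termination threshold quoted above (all names are illustrative):
\begin{verbatim}
import numpy as np

def timestep(A, dz, c=0.5, dt_min=1e-8):
    # In the B = 0 gauge the coordinate characteristic speeds are +/- A,
    # so dt must shrink as A grows for the explicit scheme to remain
    # stable.
    dt = c * dz / max(np.max(np.abs(A)), 1.0)
    if dt < dt_min:
        raise RuntimeError("timestep below threshold; terminating run")
    return dt
\end{verbatim}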
\begin{figure}[H] \centering \subfloat[] {{\includegraphics[width=0.4\linewidth]{I_CollidingWavesAlongz0ICrit.png}}} \qquad \subfloat[] {{\includegraphics[width=0.4\linewidth]{Crit_I_a0p9595.png}}} \caption{(a) The Weyl invariant $I$ with $\lambda=0.6$ along $u=v$, i.e. $z=0$, for multiple values of $a$ close to $a_2$ and (b) a contour plot of $I$ for $a=0.9595$.} \label{fig:IAlongz0ICrit} \end{figure} \subsection{Impulsive waves}\label{sec:CollidingImpulsiveWaves} As in Sec.~\ref{sec:OneImpulsiveWave} we mimic the Dirac delta function wave profiles of the $\lambda=0$ solutions. For colliding waves, this is when $\Psi_0 = \delta(v)$ and $\Psi_4 = \delta(u)$. We thus choose our wave profiles as $\Psi_0(t,1) = q(v(t))$, $\Psi_4(t,-1) = q(u(t))$, where $q(x)$ is given in Eq.~\eref{eq:qBC} and approximates the Dirac delta function. Our results in Sec.~\ref{sec:criticalbehaviourtwowaves} indicate that we should explore three possible regions, namely those where we asymptote back to dS, where $\mu$ diverges but not $I$, and where $I$ diverges. These still exist for the approximately impulsive wave profiles and are exemplified by choosing $\lambda=6,\,0.72$ and $0.06$ respectively. \begin{figure}[H] \centering \subfloat[\centering $\Psi_2$ for $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda0p01.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda0p01.png}}} \\ \subfloat[\centering $\Psi_2$ for $\lambda=0.72$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda0p12.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=0.72$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda0p12.png}}} \\ \subfloat[\centering $\Psi_2$ for $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda1.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda1.png}}} \caption{Plots along $u=v$ (equivalently $z=0$) of $\Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=0.06,\,0.72$ and $6$.} \label{fig:CollidingImpulsiveWavesAlongz0} \end{figure} Fig.~\ref{fig:CollidingImpulsiveWavesAlongz0} shows $\Psi_2$ and $\Psi_4$ over time along $u=v$ for the different values of $\lambda$. In particular, we see that they do not converge to zero for $u,v>0$ as $a\rightarrow\infty$, in contrast to the case of one wave. This is to be expected from comparison with the Khan-Penrose solution, which already has non-vanishing $\Psi_0,\,\Psi_2$ and $\Psi_4$ in the region after scattering, as well as from a theorem by Szekeres \cite{szekeres1965gravitational}. Of particular note is the abrupt change in sign of the first time derivative of $\Psi_4$ for $\lambda=0.72$. This sharp turn, which is smooth with a small enough timestep, does not appear this distinctly in any other system variable, except for $\Psi_0$ due to symmetry. Fig.~\ref{fig:Psi4ContourCollidingWaves} shows that this turning point occurs not only at some point along $u=v$ but along an entire null surface, which follows the characteristic of $\Psi_4$ from the point where the surface $v=0$ meets the left boundary. This is the result of the vanishing boundary condition for $\Psi_4$ on the left boundary being in disagreement with the non-vanishing $\Psi_4$ tail generated by $\Psi_0$ as it passes through the boundary.
While at first sight it makes sense to impose a no-ingoing-radiation condition, this is blatantly unphysical when the evolution itself creates ingoing modes. Between the boundary condition and the evolution equation, it is the latter which is fundamental. The boundary condition is nearly completely free to choose and is put in ``by hand''. A common question in a non-linear regime with boundaries that contain both ingoing and outgoing modes is then: How does one make the ``corner condition'' consistent, i.e. achieve physical compatibility between the data on a timeslice induced via evolution and the boundary data, so as to yield a physically meaningful result? The answer is simply that there is no clear way to prescribe boundary conditions that match the values in the interior unless one already has an exact solution. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{Psi4ContourLambda0p12a128.png} \caption{A contour plot of $\Psi_4$ for $a=128$ and $\lambda=0.72$ in the case of colliding impulsive waves.} \label{fig:Psi4ContourCollidingWaves} \end{figure} \section{Suppressing the expansion with a train of waves}\label{sec:SupressingExpansion} In \cite{tsamis2014classical}, the rate of change of an expansion rate parameter $H_{TW}$ with respect to time on a space-like initial value surface (IVS) was calculated to be $N(H^2 - (1/3)K^{ab}K_{ab})$, where $N$ is the lapse in their coordinate system, $K_{ab}$ is the extrinsic curvature of the IVS and $H$ is as per our definition of dS in inflationary coordinates. They hypothesize that there should be no reason why initial data cannot be chosen to satisfy $K^{ab}K_{ab} > 3H^2$, so that the expansion is slowed down and even completely halted\footnote{$N$ is a lapse and should always be positive.}. We can investigate this numerically, without solving the constraints, by simply choosing dS initial data together with a variety of boundary conditions and seeing how the space-time evolves. We thus explore how a train of waves, generated by choosing the boundary conditions for $\Psi_0$ and $\Psi_4$ appropriately, might accomplish this. To do so, we fix $\lambda=0.6$ and define a new profile \begin{equation}\label{eq:streambc} p_{stream}(x) = \cases { 32a\cos(c\,x^2)^8\sin(b\,x)^8 & $\displaystyle0<x<\sqrt{\frac{\pi}{2c}}$ \cr 0 & otherwise }, \end{equation} where $a=0.894,\,b=3129\pi/128000,\,c=1/3$, and we choose $\Psi_0(t,1) = p_{stream}(v(t)),\,\Psi_4(t,-1) = p_{stream}(u(t))$. These constants were chosen through trial and error to give the largest decrease in the expansion while maximizing the interval of time over which this occurred, before either a singularity is formed or the space-time starts to approach dS again. The cosine factor has the effect of decreasing the amplitude of the wave until it vanishes completely at $c\,x^2=\pi/2$. This holds off the formation of a future singularity while still decreasing the expansion rate $\mathcal{H}$. \begin{figure}[H] \centering \subfloat[$\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourStream.png}}} \qquad \subfloat[$I$] {{\includegraphics[width=0.5\linewidth]{IContourStream.png}}} \caption{Our expansion rate $\mathcal{H}$ and the Weyl invariant $I$ where boundary conditions were chosen using Eq.~\eref{eq:streambc} and with $\lambda=0.6$.} \label{fig:HIContourStream} \end{figure} Fig.~\ref{fig:HIContourStream} shows the expansion rate $\mathcal{H}$ and the Weyl invariant $I$ as contour plots.
We see that the expansion rate slowly declines across the entire spatial domain and that, after a long time ($t\approx14$), a coordinate singularity forms as in case 2. The Weyl invariant clearly shows where the collision regions are, and after a time the waves begin to drag more and more curvature along with them as a tail. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{HAlongz0.png} \caption{Our expansion rate $\mathcal{H}$ along $z=0$ where boundary conditions were chosen using Eq.~\eref{eq:streambc} and with $\lambda=0.6$.} \label{fig:HStreamAlongZ0} \end{figure} Fig.~\ref{fig:HStreamAlongZ0} shows just how long we can decrease the expansion for while holding off the formation of a singularity. Note that along a spatially constant curve the simulation time $t$ is the proper time of a free-falling observer along this curve, so when we talk about trying to maximize the length of time before a singularity occurs, the question is inherently physical. In previous sections we found that if a singularity were to form after a collision of two waves (with our wave profile), it did so within a few units of $t$. Here, however, we can decrease the expansion considerably, without forming a singularity, for up to $t=13$. We cannot, however, find boundary conditions that lower the expansion rate to zero without very quickly forming a singularity. It is certainly possible that such finely tuned boundary conditions exist, but our studies suggest that they would be very special. \section{Summary and discussion} \label{sec:summary} In this paper we have put forward the full non-linear Einstein equations with cosmological constant and non-vanishing energy-momentum tensor under the assumption of plane symmetry. These equations were realized through the Newman-Penrose formalism and the imposition of the Friedrich-Nagy gauge, leading to a well-posed initial boundary value problem with timelike boundaries. We specialized to vacuum with $\lambda>0$ and chose the initial data to be that induced by the de Sitter space-time in inflationary coordinates. This allowed the exploration of how this space-time is affected by gravitational perturbations, which we generated through appropriate boundary conditions for $\Psi_0$ and $\Psi_4$. It was found that when only one of the waves was non-vanishing, the space-time either wiped out the wave via expansion, or the wave was too strong and a future singularity was produced. The bifurcation was studied and did not produce any critical behaviour. The wave profile was also taken to approximate the Dirac delta function, to draw an analogy with a known exact solution for $\lambda=0$. With both waves non-vanishing, and in the physically motivated Gau\ss{} gauge, we found three distinct situations: either the waves were not strong enough for the contraction of our timelike curves to create a singularity, or a coordinate singularity formed while the curvature remained finite, or a curvature singularity formed. The second case shows that we can create a singularity where the Weyl invariant $I$ does not diverge, but our expansion parameter diverges to negative infinity along, and close to, the surface $u=v$. The critical behaviour of the two bifurcations separating these futures was explored. Impulsive wave profiles were approximated, and it was shown that two bifurcations occur in this case as well. We encountered two numerical pitfalls during our exploration. Firstly, our free evolution resulted in a false steady state solution close to the bifurcation of the single wave case.
As we chose our wave area closer to the bifurcation value, our free evolution approached a steady state (while $A$ was still evolving in time) that did not satisfy the constraints. This happened even though the constraints were satisfied and converged above and below this critical value, showing how careful one must be in monitoring the constraints during a free evolution. Secondly, we found that the combination of the evolution system and our non-radiating boundary conditions became unphysical in the colliding wave case after the waves left the computational domain through the boundaries. This was due to the back-reaction of the waves creating tails of ingoing radiation, at odds with the boundary conditions. This is, however, independent of the fact that our system is well-posed and numerically stable. The question as to how one could ``guess'' the right boundary conditions is delicate, and poses a problem that all non-linear simulations, in particular in numerical relativity, face. We presented how the above situations affected the local expansion rate $\mathcal{H}$, which was taken to be the mean extrinsic curvature of our timeslices up to a constant factor. It was shown that for the case of two colliding waves we could decrease $\mathcal{H}$ substantially for a long period of time, where the cut-off was determined by numerical limitations, before the space-time asymptoted back to dS. We could do a similar thing with a continuous stream of waves, making the expansion rate drop more uniformly over the computational domain. This showcased the potential to lower the expansion rate over a wider spatial interval. Although we were not able to find boundary conditions that completely halted the expansion for a period before either asymptoting back to de Sitter space-time or forming a singularity, we could still lower it substantially for a long time. Even so, our results do not contradict the hypothesis of Woodard and Tsamis, namely that our universe may be in an unstable gravitationally bound state. It would be interesting to see whether, with further testing, boundary conditions can be found that do completely halt the expansion for a period. Now that the exploration of the behaviour of plane gravitational waves with $\lambda>0$ has begun and details have been uncovered, it would also be interesting to see whether one can use the results as hints toward an exact solution for impulsive waves. For the case of one propagating impulsive wave, knowing that the Weyl components vanish in the region after the wave has passed should already be a good start. \section*{References} \bibliographystyle{elsarticle-num}
{ "timestamp": "2021-05-06T02:12:00", "yymm": "2105", "arxiv_id": "2105.01906", "language": "en", "url": "https://arxiv.org/abs/2105.01906" }
"\\section{Introduction}\n\n29P/Schwassmann-Wachmann 1 (SW1) is a continuously active Centaur at the(...TRUNCATED)
{"timestamp":"2021-05-19T02:20:21","yymm":"2105","arxiv_id":"2105.01789","language":"en","url":"http(...TRUNCATED)
"\\subsection{Similarity join under $\\ell_2$ metric}\n\\label{sec:l2}\n\nIn this section, we consid(...TRUNCATED)
{"timestamp":"2021-05-06T02:07:38","yymm":"2105","arxiv_id":"2105.01818","language":"en","url":"http(...TRUNCATED)
"\\section{\\textbf{Introduction}} Fractional calculus (FC) and fractal geometry (FG) have become ra(...TRUNCATED)
{"timestamp":"2021-05-06T02:10:35","yymm":"2105","arxiv_id":"2105.01885","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\n\n\n\n\nCold atom systems play a key role in both fundamental and applie(...TRUNCATED)
{"timestamp":"2021-05-06T02:12:04","yymm":"2105","arxiv_id":"2105.01907","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nACM's consolidated article template, introduced in 2017, provides a\nconsi(...TRUNCATED)
{"timestamp":"2021-05-06T02:07:44","yymm":"2105","arxiv_id":"2105.01823","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{introduction}\n\n\\indent A \\textit{starter} in an additive abelia(...TRUNCATED)
{"timestamp":"2021-05-06T02:11:23","yymm":"2105","arxiv_id":"2105.01895","language":"en","url":"http(...TRUNCATED)
End of preview. Expand in Data Studio

No dataset card yet

Downloads last month
8