Highlights

Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks

Christopher W. F. Parsonson, Zacharaya Shabka, Alessandro Ottino, Georgios Zervas

• Demonstrate that deciding how much to partition distributed jobs is a key factor in determining overall system throughput.
• Demonstrate that optimising for only the job completion time leads to high blocking rates and poor throughput in dynamic job arrival scenarios.
• Introduce a new partitioning algorithm which leverages reinforcement learning, a graph neural network, and a novel formulation of the user-defined job completion time specification to automatically learn to partition jobs such that the blocking rate is minimised and user requirements are met.
• Demonstrate the proposed algorithm out-performing baselines on a state-of-the-art optical network architecture running five real deep learning computation graphs.
arXiv:2301.13799v1 [cs.LG] 31 Jan 2023

Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks

Christopher W. F. Parsonson¹,∗, Zacharaya Shabka¹, Alessandro Ottino¹, Georgios Zervas¹

∗Corresponding author: zciccwf@ucl.ac.uk
¹UCL

Abstract

From natural language processing to genome sequencing, large-scale machine learning models are bringing advances to a broad range of fields.
Many of these models are too large to be trained on a single machine, and instead must be distributed across multiple devices. This has motivated the research of new compute and network systems capable of handling such tasks. In particular, recent work has focused on developing management schemes which decide how to allocate distributed resources such that some overall objective, such as minimising the job completion time (JCT), is optimised. However, such studies omit explicit consideration of how much a job should be distributed, usually assuming that maximum distribution is desirable. In this work, we show that maximum parallelisation is sub-optimal in relation to user-critical metrics such as throughput and blocking rate. To address this, we propose PAC-ML (partitioning for asynchronous computing with machine learning). PAC-ML leverages a graph neural network and reinforcement learning to learn how much to partition computation graphs such that the number of jobs which meet arbitrary user-defined JCT requirements is maximised.
In experiments with five real deep learning computation graphs on a recently proposed optical architecture across four user-defined JCT requirement distributions, we demonstrate PAC-ML achieving up to 56.2% lower blocking rates in dynamic job arrival settings than the canonical maximum parallelisation strategy used by most prior works.

Keywords: Deep Learning, Reinforcement Learning, Graph Neural Networks, Distributed Asynchronous Computing, Job Partitioning, Optical Networks

Preprint submitted to Journal of Parallel and Distributed Computing, February 1, 2023

1. Introduction

The last decade has seen an exponential increase in the amount of compute demanded by big data jobs such as artificial intelligence (AI) and genome processing, with resource requirements doubling every 3.4 months since 2012; 50× faster than Moore's Law (OpenAI, 2018).
This trend is showing no sign of slowing down. The fundamental relationship between neural network accuracy and scale (Kaplan et al., 2020) provides a strong incentive for practitioners seeking performance improvement to further increase their resource requirements. Moreover, brain-scale AI will require at least as many parameters as the ≈1 000 trillion synapses present in the human brain (Furber, 2016); several orders of magnitude more than the largest models used today. The compute time and memory requirements of state-of-the-art big data applications already far outstrip the capabilities of any single hardware device. For example, one of the current largest deep neural networks (DNNs), Megatron-Turing natural language generation (MT-NLG) (Smith et al., 2022), contains 530 billion parameters.
These parameters alone occupy ≈1 000 GB, exceeding the capacity of the largest A100 GPU by over an order of magnitude, and the parameter loss gradients tracked during training occupy several times more. Even if the model could be fitted onto a single device, the training time would be ≈900 years². To address these compute time and memory demands, rather than using a single device, big data jobs must be distributed and parallelised across a cluster of machines. For example, the Selene supercomputing cluster (NVIDIA, 2020) consists of 358 400 A100 GPU tensor cores, bringing the MT-NLG training time from 900 years down to the order of days³. However, parallelising jobs across ever-more machines brings its own challenges. With any parallelisation strategy, at some point the output of each 'worker' (a single device processing at least part of a job) must be collected and synchronised to get the overall result of the parallelised computation. This synchronisation requires communication between the workers.
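The scale of these numbers is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming 2-byte (fp16) parameters and, for the training-state multiple, Adam-style fp32 master weights, gradients, and optimiser moments (the precise training configuration is an assumption, not something stated above):

```python
# Back-of-the-envelope memory footprint for MT-NLG's 530 billion parameters.
N_PARAMS = 530e9

# Parameters alone, stored in fp16 (2 bytes each).
param_gb = N_PARAMS * 2 / 1e9          # ~1 000 GB
print(f"parameters: {param_gb:.0f} GB")

# The largest A100 has 80 GB of memory, so parameters alone exceed a
# single device by over an order of magnitude.
print(f"x{param_gb / 80:.0f} the capacity of an 80 GB A100")

# Training state (assumed mixed-precision Adam: fp32 master weights,
# fp32 gradients, and two fp32 optimiser moments = 16 extra bytes/param).
train_gb = N_PARAMS * (2 + 16) / 1e9
print(f"with training state: {train_gb:.0f} GB")
```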
²Assuming it takes 8 V100 GPUs 36 years to train a 175 billion parameter model (NVIDIA, 2022) and extrapolating.
³Assuming a linear parallelisation speedup and zero communication overhead.

Figure 1: How the network overhead of six distributed deep learning jobs (encompassing object tracking, recommendation, natural language processing, and image recognition) increases with the number of workers used in Meta's GPU cluster (Wang et al., 2022).

As the number of workers used to execute a job is increased, the per-worker computation demands decrease, but the overall communication overhead between workers grows (see Figure 1). This shifts the performance bottleneck away from the workers themselves and into the network connecting them, and brings additional challenges with managing varying traffic characteristics for different job types and parallelisation strategies (Wang et al., 2022; Parsonson et al.
, 2022a; Benjamin et al., 2021, 2022). To address the communication bottleneck in distributed computing, recent works have sought to develop optical clusters (Benjamin et al., 2020; Ballani et al., 2020; Khani et al., 2021; Wang et al., 2022; Ottino et al.
, 2022); machines interconnected by optical switches (Parsonson et al., 2020; Gerard et al., 2020, 2021). Compared to their electronic counterparts, optically switched networks offer orders of magnitude improvements in scalability, bandwidth, latency, and power consumption (Ballani et al., 2020; Zervas et al., 2018; Mishra et al., 2021) (see Section 3).
Optical clusters are typically operated under the optical circuit switched (OCS) paradigm due to its non-blocking circuit configurations with high capacity and scalability (Raja et al., 2021). OCS networks are fundamentally different from the electronic packet switched (EPS) architectures used by most current clusters, resulting in entirely new communication patterns and resource demand characteristics. Consequently, new compute and network resource management schemes are needed in order to optimally allocate jobs and maximise performance.

Of the many resource management tasks which must be performed in a compute cluster, job partitioning (how to split a job up across how many devices) is key to overall performance. More partitioning can lead to lower compute times. However, it may also increase network overhead and occupancy of cluster resources, possibly leading to future jobs being blocked upon arrival and consequently lower overall cluster throughput.
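This trade-off can be made concrete with a toy admission model; a hedged sketch under strong simplifying assumptions (perfect linear speedup, every job split across the same number of workers, a job blocked outright if too few devices are free — none of which is the simulator used in this paper):

```python
def blocking_rate(arrivals, n_devices, n_workers, base_runtime):
    """Fraction of arriving jobs blocked when every job is partitioned
    across n_workers devices (illustrative linear-speedup model)."""
    runtime = base_runtime / n_workers  # more partitioning -> shorter jobs...
    busy_until = []                     # release times of currently held devices
    blocked = 0
    for t in arrivals:
        busy_until = [r for r in busy_until if r > t]  # free finished devices
        if n_devices - len(busy_until) >= n_workers:
            busy_until += [t + runtime] * n_workers    # ...but more devices held at once
        else:
            blocked += 1
    return blocked / len(arrivals)

arrivals = [float(t) for t in range(100)]  # one job per time unit
# Maximum parallelisation finishes each job fastest yet blocks the most:
print(blocking_rate(arrivals, n_devices=16, n_workers=16, base_runtime=24.0))  # 0.5
print(blocking_rate(arrivals, n_devices=16, n_workers=2, base_runtime=24.0))   # 0.32
```

Under this model both settings consume the same total device-time per job, but demanding all 16 devices simultaneously wastes capacity whenever arrivals are misaligned with releases, which is exactly why lower JCT need not mean higher throughput.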
Prior works such as SiP-ML (Khani et al., 2021) have introduced simple partitioning heuristics for optical networks which have notably improved cluster performance. However, they have not been designed under the more realistic setting of dynamic and stochastic job arrivals, have not considered the state of the cluster in a 'network-aware' manner when making partitioning decisions, and have been crafted to optimise for the sub-optimal objective of minimising job completion time (JCT). In this work, we first argue that simply minimising the JCT is a naive objective because it brazenly encourages more parallelisation of a job request without considering the effect this has on the ability of a cluster to service subsequent jobs. We then introduce a new, more subtle formulation of the optimisation metric, the user-defined blocking rate, which more aptly encompasses the desires of cluster users. Next, we propose a simple modification of the quantised SiP-ML partitioner which, rather than maximally parallelising all jobs, minimally parallelises them such that they meet the user-defined maximum acceptable completion time.
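In spirit, this modified partitioner is a one-line search: pick the smallest worker count whose estimated completion time satisfies the user's requirement. A hedged sketch with a toy JCT model (linear compute speedup plus per-worker communication overhead; the quantised SiP-ML estimate itself is more involved):

```python
def min_partition(job_flops, per_worker_flops, comms_overhead, max_jct,
                  worker_counts=(1, 2, 4, 8, 16, 32)):
    """Smallest worker count whose estimated JCT meets the user-defined
    maximum acceptable completion time; None if no count does (job blocked).

    Toy JCT model: compute time shrinks linearly with workers, while
    communication overhead grows linearly with each extra worker.
    """
    for n in sorted(worker_counts):
        est_jct = job_flops / (n * per_worker_flops) + comms_overhead * (n - 1)
        if est_jct <= max_jct:
            return n
    return None

# A 1e12-FLOP job on 1e10 FLOP/s workers with 0.5 s overhead per extra worker:
print(min_partition(1e12, 1e10, 0.5, max_jct=30.0))  # -> 4 (26.5 s), not 32
```

Searching worker counts in ascending order guarantees the returned value is the minimal feasible partition, holding back capacity for subsequent arrivals.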
Then, we propose a novel network-aware partitioning strategy (see Figure 5 and Section 5) called PAC-ML (partitioning for asynchronous computing with machine learning) which utilises reinforcement learning (RL) and a graph neural network (GNN) to flexibly meet the demands of the user in an arbitrary manner given the current state of the cluster network. Finally, we demonstrate our method in simulation on the recently proposed RAMP optical architecture (Ottino et al., 2022), achieving up to 56.2% lower blocking rates than the best heuristic baseline. We show that different user-defined demand environments require different partitioning strategies for optimal results, and that a key advantage of PAC-ML is that it is able to discover performant strategies automatically without the need for handcrafted heuristics or environment-specific tuning.
2. Related Work

Recent years have seen a surge of interest in developing methods to distribute machine learning (ML) tasks across multiple devices (Ben-Nun and Hoefler, 2019; Mayer and Jacobsen, 2020). One approach has been to optimise the physical plane of the distributed cluster, such as its compute and network devices and architectures (Parsonson et al., 2020; Khani et al., 2021; Wang et al., 2022; Ottino et al., 2022).
In this work, we instead focus on optimising the virtual plane, which determines how physical layer resources are allocated to execute a job. We divide the virtual plane into three sub-components: job (1) partitioning (how many devices to use); (2) placement (which devices to use); and (3) scheduling (in which order to use the devices). Many prior virtual plane works have considered (2) and (3) (how to distribute), whereas we focus on (1) (how much to distribute). However, in this section we comment on recent progress across all these fields, since we leverage this progress throughout our work.

ML for discrete optimisation. Many combinatorial optimisation (CO) problems turn out to be NP-hard, rendering exhaustive search techniques intractable for practical application (Bengio et al.
, 2021). Consequently, practitioners rely on either approximate algorithms, which give restricted performance guarantees and poor scalability (Williamson and Shmoys, 2011), or heuristics, which have limited solution efficacy (Halim and Ismail, 2019). Since the first application of neural networks to CO by Hopfield and Tank (1985), the last decade has seen a resurgence in ML-for-CO (Bello* et al., 2017; Dai et al., 2017; Barrett et al., 2019; Gasse et al., 2019; Barrett et al.
, 2022; Parsonson et al., 2022b). The advantages of ML-for-CO over approximation algorithms and heuristics include handling complex problems at scale, learning either without external input (achieving super-human performance) or by imitating strong but computationally expensive solvers, and (after training) leveraging the fast inference time of a DNN forward pass to rapidly generate solutions. Since almost all cluster resource management tasks can be reduced to canonical CO problems (Bengio et al., 2021), many state-of-the-art resource management methods utilise recent advances in ML-for-CO.

Job placement. Mirhoseini et al. (2017) were the first to apply ML to the task of deciding which operations in a computation graph to place on which devices in a cluster.
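The policy-gradient recipe that the placement works below build on can be sketched in a few lines: a softmax policy over candidate actions is updated with the score-function (REINFORCE) gradient using reward = −JCT. The bandit-style setting and all numbers here are illustrative stand-ins, not Mirhoseini et al.'s actual sequence-to-sequence model:

```python
import math, random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(theta, jct_of_action, lr=0.1, baseline=0.0):
    """One REINFORCE update: sample a placement, observe reward = -JCT,
    then ascend grad log pi(a) * (reward - baseline)."""
    probs = softmax(theta)
    a = random.choices(range(len(theta)), weights=probs)[0]
    reward = -jct_of_action(a)
    for i in range(len(theta)):
        grad_log_pi = (1.0 if i == a else 0.0) - probs[i]  # d log softmax / d theta_i
        theta[i] += lr * grad_log_pi * (reward - baseline)

# Toy problem: three candidate placements with fixed JCTs; the policy
# should learn to favour action 2 (lowest JCT).
random.seed(0)
jcts = [5.0, 4.0, 1.0]
theta = [0.0, 0.0, 0.0]
for _ in range(500):
    reinforce_step(theta, lambda a: jcts[a], baseline=-sum(jcts) / 3)
print(softmax(theta))  # probability mass concentrates on action 2
```

Subtracting a baseline from the reward, as above, is the standard variance-reduction trick; PPO (discussed next) goes further by clipping the policy update.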
They used a sequence-to-sequence model consisting of an LSTM DNN with an attention mechanism, trained with the simple REINFORCE policy gradient RL algorithm (Williams, 1992) such that the JCT of a deep learning job was minimised, outperforming handcrafted heuristics when training the Inception-V3 computer vision and LSTM natural language processing models. Gao et al. (2018) furthered this work by replacing REINFORCE with the more advanced proximal policy optimisation (PPO) RL algorithm (Schulman et al., 2017) with lower variance and reduced training hardware demands. They demonstrated their method beating Mirhoseini et al. (2017) on the CIFAR-10 image recognition benchmark in terms of JCT. Mirhoseini et al.
(2018a) proposed a novel hierarchical model which decomposed the job placement task into a joint group-and-place problem, reducing the JCT of Inception-V3, ResNet, LSTM, and NMT models by up to 60% relative to the state-of-the-art. All works up to this point used DNN architectures restricted to Euclidean-structured input data. Consequently, in order to handle non-Euclidean graph-structured data such as computation graphs and cluster networks, they had to be re-trained each time a new graph structure was considered. Addanki et al. (2019a) were the first to instead leverage a GNN, as well as the grouping scheme of Mirhoseini et al. (2018a), to learn to generalise across different job types with varying computation graph structures, demonstrating device placement schemes which were on par with or better than prior approaches on Inception-V4, NASNet, and NMT after 6.1× fewer training steps. Khadka et al.
(2021) furthered the use of GNNs for job placement by combining GNNs, RL, and population-based evolutionary search with the hierarchical group-and-place scheme of Mirhoseini et al. (2018a). Concretely, they replaced the manually-designed operation grouping heuristic with a learned policy capable of superior scaling and JCT performance.

Job scheduling. Bao et al. (2018) addressed the job scheduling problem (the order in which to execute operations placed across a set of devices) using a primal-dual framework for online job scheduling. They represented the problem as an integer linear programme (ILP) which their proposed algorithm could solve in polynomial time in an online fashion such that the cluster resources were maximally utilised and the JCT minimised. Li et al.
(2021) proposed a placement-aware scheme which leveraged the pre-determined device placement allocation to decide on a job schedule which could reduce the average JCT by up to 25% relative to other scheduling methods. Paliwal et al. (2020) went further by utilising an RL-trained GNN and a genetic algorithm to jointly optimise both job placement and scheduling, demonstrating both lower JCT and peak memory usage than other strategies when distributing TensorFlow computation graphs across a cluster.

Job partitioning. To the best of our knowledge, Khani et al. (2021) are the only ones to have explicitly considered the question of how much to distribute a computation graph in the context of an optical network. Like other works, they assumed that a maximum parallelisation strategy (i.e.
partition the job across as many workers as possible) is a desirable objective, and then focused on how best to design the physical layer such that the JCT could be minimised.

All works discussed in this section have assumed that the JCT is the key objective to minimise. Consequently, where the question of partitioning is considered, prior works have assumed that more parallelisation is desirable. However, we posit that user-critical metrics such as throughput and blocking rate are compromised by prioritising optimisation of the JCT in a cluster setting with dynamic job arrivals. To address this shortcoming, we propose a new ML-based resource management scheme which explicitly addresses the partitioning question. Concretely, our work leverages the emergent trend from these other virtual plane fields, namely utilising an RL-trained GNN, to decide how much to partition different jobs in a dynamic setting with arbitrary user-defined completion time requirements.
3. Background

3.1. Parallelisation

Types of parallelism. Parallelisation is the process of distributing a computational job across multiple devices. This is done in order to reduce the time and/or physical memory needed to complete the job. There are three main types of deep learning parallelism: data parallelism, model parallelism, and hybrid parallelism (see Appendix 9.1.1 for extended background information on these methods).
Although today the most common method for DNN training parallelisation is data parallelism for its simplicity and limited network overhead, we focus on the less common but more desirable model parallelism paradigm for its strong scaling capabilities (Khani et al., 2021). Our proposed partitioning methods are applicable to hybrid and pipeline parallelism, but these require additional simulation complexity and are therefore beyond the scope of this manuscript.

Computational jobs. A computational job is a directed acyclic graph (DAG) whose nodes are operations and edges are dependencies. Operations are computational tasks (e.g. some mathematical reduction, a database query, etc.).
Dependencies are either control dependencies, where the child operation can only begin once the parent operation has been completed, or data dependencies, where at least one tensor is output from the parent operation and required as input to the child operation. In the context of DNNs, a job DAG is a sequence of forward pass, backward pass, and parameter update operations which need to be performed on data exchanged between operations. Whether or not this data passes through a communication network is determined by how the operations are partitioned, placed across a cluster of workers, and parallelised.

Figure 2: Diagram showing a DNN job DAG being partitioned. Top: A forward pass DAG where each node has an associated partition degree (how many times it will be divided when partitioned). Bottom: A partitioned DAG with forward and backward passes handled consecutively. Green edges in the graph represent data flow (i.e. output to input) between consecutive operations in the forward pass. Orange edges represent gradient exchanges processed in the backward pass (backpropagation). Blue edges represent full connectivity collective operations to synchronise weight updates across partitioned components of an operation. Note that, for brevity, the top unpartitioned DAG only shows the forward pass (since, before partitioning, the graph structure is identical to the backward pass), whereas the bottom partitioned DAG shows both the forward and backward passes (since, after partitioning, the graph structures are different).

Job partitioning.
Job partitioning refers to the process of splitting the operations of a job DAG into u (the partition degree) smaller sub-operations which can in turn be placed across u workers, thus reducing their run time and memory requirements. Partitioning is used in the model, hybrid, and pipeline parallelism paradigms. More partitioning can decrease compute time and memory requirements, but requires more inter-worker communication, complex intra-worker operation scheduling, and greater resource utilisation, therefore potentially increasing overall completion time, cluster complexity, and subsequent job blocking rates. Figure 2 visualises how an initial DAG for some arbitrary neural network architecture, where each operation has a partitioning degree, can be re-represented in terms of its partitioned form. Both forward and backward passes are explicitly represented since inter-operation information dependencies (i.e. the edges in the graph) are not the same in each pass.
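To make the partitioning operation concrete, the sketch below (our own illustration with hypothetical operation names and compute costs, not the paper's implementation) splits each operation of a linear forward-pass DAG into u sub-operations and wires consecutive partitioned layers all-to-all, mirroring the data-flow edges of Figure 2:

```python
# Minimal sketch of partitioning a linear job DAG (hypothetical names/costs).
# Each operation 'op' with partition degree u becomes u sub-operations
# ('op.0' ... 'op.{u-1}'), and consecutive partitioned operations are
# connected all-to-all, mirroring the forward-pass data-flow edges.

def partition_dag(ops, partition_degrees):
    """ops: list of (name, compute_cost) in forward-pass order.
    partition_degrees: dict mapping op name -> partition degree u."""
    nodes, edges = [], []
    prev_layer = []
    for name, cost in ops:
        u = partition_degrees[name]
        # Splitting an op into u sub-ops divides its per-worker compute cost.
        layer = [(f"{name}.{i}", cost / u) for i in range(u)]
        nodes.extend(layer)
        # All-to-all data-flow edges between consecutive partitioned layers.
        edges.extend((p, c) for p, _ in prev_layer for c, _ in layer)
        prev_layer = layer
    return nodes, edges

ops = [("conv1", 8.0), ("conv2", 8.0), ("fc", 4.0)]
degrees = {"conv1": 1, "conv2": 4, "fc": 2}
nodes, edges = partition_dag(ops, degrees)
print(len(nodes))  # 1 + 4 + 2 = 7 sub-operations
print(len(edges))  # 1*4 + 4*2 = 12 data-flow edges
```

In this toy setting, the per-worker compute cost of each conv2 sub-operation drops from 8.0 to 2.0 while the number of communication edges grows multiplicatively, reflecting the compute-communication trade-off described above.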
3.2. Optical Networking

Most current cluster networks use optic fibre communication links, but the switch devices which interconnect the network are usually electronic.

Limitations of electronic networking. Electronic networks have poor scalability, bandwidth, latency, and power consumption. Concretely, since the per-port bandwidth is limited and the power consumption required to cool active electronic devices is expensive, the bisection bandwidth achievable in an electronic network is restricted, thus hampering scalability. Consequently, although the compute power of DCN server nodes, as measured by FLOP/s, has increased by a factor of 65 over the last 18 years, the bandwidth of the DCN network facilitating communication between these nodes has only increased by a factor of 4.8, resulting in an 8-factor decrease in bytes communicated per FLOP (Bergman, 2018).
This has created a performance bottleneck not in the server nodes themselves, but rather in the network connecting them. This issue is especially compounded when striving for strong scaling via model parallelism with distributed computing, and with the trend towards larger models with ever more parameters as described in Section 1.

Figure 3: The mean network overhead of the 6 distributed deep learning jobs reported by Wang et al. (2022) in Meta's GPU cluster compared to that of RAMP as reported by Ottino et al. (2022) on the 5 jobs considered in our work. Note that this is an approximate comparison, and that the important takeaway is that RAMP retains low network overheads as jobs become increasingly distributed.

Optical circuit switched networks. Cluster networks with optical switches have the potential to offer significant improvements in performance (due to larger bandwidth and lower switching latency) and energy efficiency (due to the lack of optical-electronic-optical conversion overhead), as well as the capability to scale to next-generation large-scale distributed compute jobs with exascale bandwidth and compute (Ottino et al., 2022). OCS networks in particular offer a promising avenue with which to realise commercial optical networks due to their non-blocking circuit configurations with high capacity and scalability and low deterministic switching latency. In contrast to optical packet switched networks, OCS networks are simpler to implement and they eliminate the need for in-switch buffering or queuing and addressing.

RAMP. RAMP is a state-of-the-art OCS architecture designed specifically for cloud data centres and distributed deep learning systems (Ottino et al., 2022).
RAMP networks are parameterised by NC communication groups, NR racks per communication group, and NS servers per rack, resulting in an NW = NC × NR × NS worker cluster with a colloquially termed ‘RAMP shape’ defined by tuple ⟨NC, NR, NS⟩. At its core, RAMP proposes a novel set of message passing interfaces (MPIs) for performing the synchronisation steps (AllReduce, AllGather, etc.) required by distributed DNN training jobs. These will be referred to as collective operations. These MPIs are designed to take full advantage of the high bandwidth provided by optical network architectures. Consequently, as shown in Figure 3, the network overhead of RAMP remains remarkably low as the number of workers used to execute a job increases (see Section 6 for experimental details). The RAMP authors showed that this low network overhead enables unprecedented scalability with up to 65,536 worker nodes capable of training O(trillion) parameter DNN models.

RAMP placement rules. As detailed in Ottino et al. (2022), a group of workers in a RAMP shape can only undergo collective operations if they are selected with respect to certain rules, loosely termed here ‘symmetry’ rules. For shape ⟨NC, NR, NS⟩, these rules are as follows: (1) NS workers per rack spread over NR racks requires that the set of workers on each rack span NR distinct communication groups. These NR distinct communication groups do not have to be the same set across racks.
(2) NS workers on NR = 1 rack must span NS communication groups. (3) NS workers spread over NR racks (NS = 1 worker per rack) must span NS distinct communication groups. In our simulations, we use a simple first-fit operation placement heuristic which conforms to these rules (refer to Appendix 9.4.4 for further details).

3.3. Reinforcement Learning

RL is the study of optimal decision making in natural and artificial systems (Sutton and Barto, 2018). In the general RL setting shown in Figure 5, an agent interacts with an environment at each sequential time step t.
The environment can be described by tuple ⟨T, R⟩, where T is a state transition probability matrix defining the transition probabilities from all states s to all successor states s′ taking action u, where T^u_{ss′} = P(S_{t+1} = s′ | S_t = s, U_t = u), and R is a scalar reward function giving the expected immediate (next state) reward given current state s and chosen action u, where R^u_s = E(R_{t+1} | S_t = s, U_t = u).

Markov decision process. The environment is usually assumed to have the Markov property, whereby P(s_{t+1} | s_t) = P(s_{t+1} | h_t); that is to say that the probability of the next state being s_{t+1} given the current state s_t is the same as the equivalent probability given all previous states in history h_t = {s_1, ..., s_t}.
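To make the notation concrete, the tuple ⟨T, R⟩ can be written down directly as lookup tables; the two-state environment below is a hypothetical toy for illustration only, not part of the paper:

```python
import random

# Toy two-state environment illustrating T and R (hypothetical numbers).
# T[s][u][s'] = P(S_{t+1}=s' | S_t=s, U_t=u);  R[s][u] = E[R_{t+1} | S_t=s, U_t=u].
T = {
    "idle": {"run": {"busy": 0.9, "idle": 0.1}},
    "busy": {"run": {"busy": 0.5, "idle": 0.5}},
}
R = {"idle": {"run": 1.0}, "busy": {"run": -0.5}}

def step(state, action, rng=random):
    """Sample the next state and return the expected immediate reward.
    The Markov property means the transition depends only on the current
    state and action, not on the full history."""
    probs = T[state][action]
    next_state = rng.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, R[state][action]

# Each row of T must be a valid probability distribution over successor states.
assert all(abs(sum(dist.values()) - 1.0) < 1e-9
           for actions in T.values() for dist in actions.values())
```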
As such, this RL setting is usually assumed to be a Markov decision process (MDP) described by tuple ⟨S, U, T, R, γ⟩, where S is a finite set of possible environment states, U is either a discrete (finite) or continuous (infinite) set of possible actions, and γ ∈ [0, 1] is a discount factor specifying the factor by which to multiply future expected rewards to discount their present value. Since Markov states are stochastic, future rewards are never fully certain and are therefore expressed as an expectation.

Agent goal. The agent's goal is to learn to maximise its expected total discounted future reward, termed the ‘value’ or ‘return’ G_t = Σ_{k=0}^∞ γ^k R_{t+k+1}, over the course of an episode (a sequence of decision steps which may or may not terminate at some point). To do so, the agent can use model-free RL to avoid explicitly modelling the environment by only using its policy function and/or its value function to make decisions. The policy function π maps an observed state s_t to a corresponding action u_t such that some estimated score objective is maximised.
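For a finite episode, the return defined above can be evaluated with a simple backward recursion, G_t = R_{t+1} + γ G_{t+1}; a minimal sketch:

```python
def discounted_return(rewards, gamma):
    """Compute G_t = sum_{k=0}^inf gamma^k * R_{t+k+1} for t = 0, given the
    finite sequence of rewards observed over an episode, via the backward
    recursion G_t = R_{t+1} + gamma * G_{t+1}."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

With γ = 1 future rewards are undiscounted (the return is the plain sum), while γ = 0 makes the agent fully myopic, valuing only the immediate reward.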
The value function estimates the expected return G_t from being in state s_t and following policy π (the state value function v), or from being in state s_t, taking action u_t, and following policy π (the action value function q). Crucially, value and policy functions can be approximated and learned with DNNs, enabling RL to be scaled to large problem instances (see Appendix 9.1.2 for extended background information on DNNs).

Advantages of RL. Using traditional RL has several advantages over heuristics and other ML paradigms such as supervised learning. First, no external data from human-designed or computationally expensive heuristics is required, enabling an agent to learn super-human policies without potentially sub-optimal initial biases towards a certain strategy or a costly expert example collection-and-labelling phase (Silver et al., 2016).
Second, a DNN with a finite number of layers and neurons has constrained expressivity (Dong et al., 2020), restricting the complexity of the set of functions it is capable of approximating. Because the objective of an RL agent is to maximise its expected future return, which, assuming a suitable reward function has been crafted, is equivalent to maximising performance on a given task, RL agents are able to maximise task performance within these DNN expressivity constraints. Third, since RL agents maximise future return, they are capable of learning sophisticated non-myopic policies which sacrifice short-term reward in exchange for higher long-term return (Sutton and Barto, 2018).

4. User-Defined Blocking Rate

To motivate our work, we first explore the key metrics to consider when evaluating a job partitioning strategy with the help of an experiment on 32 GPU workers, and then introduce a new formulation of the user-defined blocking rate. All experimental details are given in Section 6.
The inadequacy of optimising the job completion time. As discussed in Section 2, most prior works researching management schemes for distributed computing aim to minimise the JCT: the time taken to complete a given job. If a job $j$ begins running at wall clock time $t^{start}_{wc,j}$ and is completed at time $t^{end}_{wc,j}$, researchers usually record the completion time as $JCT_j = t^{end}_{wc,j} - t^{start}_{wc,j}$. Consequently, most systems maximise the degree to which they parallelise jobs in order to minimise the JCT. While end users undoubtedly want this JCT metric to be minimised, it fails to quantify when a job was blocked, which occurs when no cluster resources were available to service it. While more parallelism will often lead to a lower JCT for a given job, it will also use up more of the cluster's compute and network resources, potentially blocking future job arrivals (see Figure 4). In practice, therefore, end users wish to minimise both the JCT and the overall blocking rate (the fraction of jobs blocked over a given time period). While maximum parallelisation minimises the JCT, we posit that a balance between these two extreme parallelisation strategies can more aptly optimise for both the JCT and the blocking rate.

Figure 4: (a-b) Demonstration of how more partitioning can lead to a lower JCT than no partitioning (i.e. sequentially running the job on a single device), but possibly at the cost of a higher blocking rate, since more cluster resources are occupied when subsequent jobs arrive. (c-d) Demonstration of how optimising for cluster throughput leads to an unfair bias towards more partitioning: more parallelism creates more work for the cluster and therefore artificially increases cluster throughput even though, from the perspective of the user, the original offered throughput may be lower.
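To make the tradeoff concrete, consider a deliberately simple toy cost model (our own illustration, not the paper's model): compute time shrinks linearly with the number of workers, a fixed communication overhead is paid per additional worker, and every extra worker is one more resource unavailable to subsequent jobs.

```python
def toy_jct(work, n_workers, comm_overhead=0.1):
    """Toy model: JCT = work / n_workers + comm_overhead * (n_workers - 1)."""
    return work / n_workers + comm_overhead * (n_workers - 1)

# Partitioning across more workers lowers the JCT (with diminishing
# returns), but occupies more of the cluster for the job's duration.
jcts = {n: toy_jct(10.0, n) for n in (1, 2, 4, 8)}
```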
Alternative optimisation objectives. One metric which encapsulates both the JCT and the blocking rate is throughput: the information processed per unit time. There are two issues with using throughput as an optimisation objective. (1) Operators must be careful how they measure the throughput to be optimised. If they measure the cluster throughput (the total cluster information processed per unit time), they will be biased towards more parallelisation, because when a job is partitioned and parallelised, the edge dependencies coming in to and out of the partitioned operation node(s) must be replicated (see Figure 2). This artificially creates more information for the cluster to process even though, from the end users' perspective, the total information processed of their original demand is the same. Therefore, the offered throughput (the total original demand information, i.e. before partitioning was applied, processed per unit time) is a more suitable throughput metric to optimise. Figure 4 shows an example of how a 'maximum partitioning' strategy, such as that used by SiP-ML (Khani et al., 2021), can have superior cluster throughput when compared to a 'no partitioning' strategy (sequentially running the job on a single device) despite having lower offered throughput. However, offered throughput is still an inadequate optimisation metric, because (2) in practice, different jobs being serviced by the cluster, originating from different client users, have different priorities and job completion time requirements. For example, two identical machine learning training jobs might be submitted to the cluster, but one from a user who intends to deploy the model commercially and requires it to be completed overnight, and the other from a user who is employing the model for research and has less stringent completion time requirements. Ideally, operators would direct their clusters to meet flexible user-defined per-job completion time requirements.
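The difference between the two measures can be sketched as follows (job sizes and replication factors are hypothetical):

```python
def cluster_and_offered_throughput(jobs, elapsed_time):
    """jobs: list of (original_info, replication_factor) per completed job.

    Cluster throughput counts the replicated (post-partitioning) information;
    offered throughput counts only the users' original demands.
    """
    cluster = sum(info * rep for info, rep in jobs) / elapsed_time
    offered = sum(info for info, _ in jobs) / elapsed_time
    return cluster, offered

# A heavily partitioned workload inflates cluster throughput while the
# offered (user-perceived) throughput is unchanged.
```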
The user-defined blocking rate. To enable users to dynamically determine the completion time on a per-job basis whilst also maximising the number of job demands satisfied, we introduce a new formulation of the user-defined blocking rate objective for the partitioning algorithm to optimise. Given a job which, if executed sequentially on one device, would be completed in $JCT^{seq}_j$, we define the maximum acceptable JCT as $JCT^{acc}_j = \beta \cdot JCT^{seq}_j$, where $\{\beta \in \mathbb{R} : 0 < \beta \leq 1\}$. Here, $\beta$ is a parameter chosen by the user which determines how quickly the job must be completed. If $JCT_j > \beta \cdot JCT^{seq}_j$, then the cluster will have failed to complete the job within the required time, and the job will be recorded as having been blocked. The user-defined blocking rate is therefore the fraction of jobs which failed to meet the $JCT_j \leq \beta \cdot JCT^{seq}_j$ requirement over a given period of time.
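A minimal sketch of this bookkeeping (the job records below are hypothetical):

```python
def is_blocked(jct, jct_seq, beta):
    """A job is blocked if it missed its user-defined deadline beta * JCT_seq."""
    assert 0.0 < beta <= 1.0
    return jct > beta * jct_seq

def user_defined_blocking_rate(jobs):
    """jobs: iterable of (achieved_jct, sequential_jct, beta) tuples."""
    jobs = list(jobs)
    blocked = sum(is_blocked(*job) for job in jobs)
    return blocked / len(jobs)
```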
Note that rather than brazenly optimising for either the JCT or the blocking rate alone, the user-defined blocking rate enables the cluster operator to instead dynamically specify their desired completion time on a per-job basis, and the performance of the cluster is evaluated according to how well it was able to meet the requirements of the user. Furthermore, the $\beta$ parameter corresponds to the speed-up factor being requested by the user and, since $\{\beta \in \mathbb{R} : 0 < \beta \leq 1\}$, can be given directly as input to a DNN.

Figure 5: An overview of our PAC-ML approach transitioning from step $t \rightarrow t+1$. At each time step $t$ when there is a new job to be placed on the cluster, we: (i) use a GNN to generate an embedded representation of the node and edge features in the job's computation graph, and a standard feedforward DNN to do the same for the global job and cluster features; (ii) concatenate the outputs of (i) and use another feedforward DNN to generate a logit for each action $u_t \in U_t$; (iii) pass the chosen action $u_t$ to the environment and partition the job accordingly; (iv) apply any internal environment allocation heuristics (operation and dependency placement and scheduling, etc.) to attempt to host the job on the cluster; (v) if accepted onto the cluster, perform a lookahead to evaluate the job's completion time; (vi) fast-forward the environment's wall clock time $t_{wc}$ to when the next job arrives, and return the corresponding reward $r_{t+1}$ and updated state $s_{t+1}$.

5. PAC-ML Partitioning Methodology

RL agents can learn general policies without the need for human guidance. An RL job partitioner therefore has the potential to take an arbitrary maximum acceptable JCT provided by the user and automatically decide how much to distribute the job such that, over a period of time, the number of jobs which meet the JCT requirements specified by the user is maximised. Such an agent would therefore be able to minimise the blocking rate whilst also accounting for the flexible and dynamic JCT specifications of the user. Following this logic, we now describe our PAC-ML (partitioning for asynchronous computing with machine learning) approach for learning to partition computation jobs with RL and a GNN.

5.1. Markov Decision Process Formulation

Since allocating cluster resources for jobs arriving dynamically in time is a sequential decision making process, formulating problems such as job partitioning as an MDP is a natural approach and facilitates the application of many traditional and state-of-the-art RL algorithms (Mao et al., 2016; Addanki et al., 2019b; Paliwal et al., 2020).

States. A new job $j$ arriving at time step $t$ is comprised of a DAG $G(O, D, g_j)$ with node operations $O$, edge dependencies $D$, and any other recorded job statistics $g_j$. Similarly, the state of the cluster at time $t$ is made up of the number of workers available, the jobs currently running on the cluster, and so on.
To compress the state of the cluster and of the job requesting to be placed into a representation suitable as input for a neural network at time step $t$, we encode this information into five feature vectors:

1. Per-operation features $o_i \, \forall i \in \{1, \ldots, |O|\}$ (5 features): (i) The compute cost (run time in seconds on an A100 GPU); (ii) a binary variable indicating whether the operation has the greatest compute cost in the job; (iii) the memory cost (byte occupancy); (iv) a binary variable indicating whether the operation has the greatest memory cost in the job; and (v) the node depth with respect to the source node. The compute and memory costs are normalised by the highest compute and memory cost operations in the job, and the node depth is normalised by the depth of the deepest node.
2. Per-dependency features $d_i \, \forall i \in \{1, \ldots, |D|\}$ (2 features): (i) The size (in bytes) of the edge dependency, normalised by the largest dependency in the job; and (ii) a binary indicator of whether the dependency is the largest in the job.

3. Global job features $g_j$ (15 features): (i) The number of operations; (ii) the number of dependencies; (iii) the sequential job completion time; (iv) the maximum acceptable job completion time; the maximum acceptable job completion time fraction $\beta$, both (v) raw and (vi) normalised; (vii) the total memory cost of all of the operations; (viii) the total size of all of the dependencies; (ix) the number of training steps which need to be performed; the (x) mean and (xi) median of the operation compute costs; the (xii) mean and (xiii) median of the operation memory costs; and the (xiv) mean and (xv) median of the dependency sizes. Each feature is normalised by the highest respective value of the feature across all job types.

4. Global cluster features $g^t_C$ (2 features): (i) The number of occupied workers; and (ii) the number of jobs running. Both features are normalised by the total number of workers in the cluster, $N_W$.

5. Global action features $g^t_U$ ($N_W/2$ features): A binary vector indicating the validity of each possible partitioning decision given the state of the cluster and the RAMP rules defined by Ottino et al. (2022).
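For instance, the per-operation features of item 1 could be assembled as follows (a sketch; field ordering and names are ours, and the paper's implementation may differ):

```python
def encode_operations(compute_costs, memory_costs, depths):
    """Build the 5 per-operation features with the normalisations described above."""
    max_c, max_m, max_d = max(compute_costs), max(memory_costs), max(depths)
    features = []
    for c, m, d in zip(compute_costs, memory_costs, depths):
        features.append([
            c / max_c,          # (i) compute cost, normalised
            float(c == max_c),  # (ii) is the most compute-expensive op
            m / max_m,          # (iii) memory cost, normalised
            float(m == max_m),  # (iv) is the most memory-hungry op
            d / max_d,          # (v) node depth, normalised
        ])
    return features
```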
Actions. Given the state $s_t$, encapsulating both the job requesting to be placed and the current state of the cluster, the partitioning agent uses a policy $\pi(s_t)$ to select a number of times $u_t$ up to which to partition each operation in the job's computation graph (using a similar minimum operation run time quantum discretisation scheme to Khani et al. (2021)), where $u_t \in \{0, 1, \ldots, N_W/2\}$ (i.e. there are $N_W/2 + 1$ possible discrete actions). Note that $u_t = 0$ enables the agent to reject a job without placing it, $u_t = 1$ places the job onto one worker and runs it sequentially, and $1 < u_t \leq N_W/2$ attempts to distribute the job's operations across up to $u_t$ workers. In our setting, and given the RAMP rules of Ottino et al. (2022), an invalid partitioning action is one which is at least one of: (i) an odd number (except $u_t = 1$); (ii) greater than the number of workers available; or (iii) without a valid RAMP placement shape given the current state of the cluster (see Section 3).

Rewards. As a consequence of the RAMP rules defined by Ottino et al. (2022), which require that the worker and network resources allocated to a given job are reserved exclusively for that job for the duration of its run time, we are able to perform a deterministic lookahead to evaluate what the overall completion time, $JCT_j$, of the job will be as soon as it is placed. Subsequently, when a job $j$ arrives at time step $t$, we can immediately determine whether or not the cluster met the $JCT^{acc}_j$ specified by the user. This enables the use of a simple per-step $+1/-1$ reward scheme,

$$r_{t+1} = \begin{cases} +1, & \text{if } JCT_j \leq \beta \cdot JCT^{seq}_j \\ -1, & \text{otherwise,} \end{cases} \quad (1)$$

which, when aggregated and maximised over the course of an episode, corresponds to maximally meeting the specified per-job completion time requirements and therefore minimising the user-defined blocking rate.
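The validity rules (i)-(ii) and the reward of Eq. (1) can be sketched as below; rule (iii) is omitted since checking RAMP placement shapes requires the full cluster state:

```python
def is_valid_action(u, n_workers_available, n_workers_total):
    """Check partitioning validity rules (i) and (ii) described above."""
    if u == 0:                   # rejecting the job is always permitted
        return True
    if u == 1:                   # sequential placement needs one free worker
        return n_workers_available >= 1
    if u % 2 == 1:               # (i) odd partitioning degrees are invalid
        return False
    if u > n_workers_available:  # (ii) cannot exceed the available workers
        return False
    return u <= n_workers_total // 2

def reward(jct, jct_seq, beta):
    """The per-step +1/-1 reward of Eq. (1)."""
    return 1 if jct <= beta * jct_seq else -1
```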
Transitions. In our hybrid time- and event-driven simulation, when the agent makes a partitioning decision at time step t, the environment transitions to the next step t+1 by fast-forwarding its internal simulated wall clock time, t_wc, to the moment the next job arrives and requests to be placed, updating the states of any running and completed jobs and their corresponding compute and network resources as necessary. The episode terminates when t_wc = T^max_wc.

5.2. PAC-ML Learning Setup

Reinforcement learning algorithm. To find a policy which maximises the expected return when partitioning jobs, we used the state-of-the-art Ape-X DQN (Horgan et al., 2018) RL algorithm; a distributed and highly scalable value-based method (see Appendix 9.7 for algorithm details and hyperparameters).

Neural network architecture. To make the learning of value and policy functions tractable in large state-action spaces, we approximated them with a custom-built message passing GNN implemented using the open-source PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) libraries. Refer to Appendix 9.6 for further architectural details.

6. Experimental Setup

All code for reproducing the experiments and links to the generated data sets are provided at https://github.com/cwfparsonson/ddls.
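As a framework-agnostic illustration of the message passing underlying such a GNN (not the implementation described in Appendix 9.6), one aggregation round over a computation graph might look like:

```python
def message_passing_round(feats, edges):
    """One round of mean-aggregation message passing over a job's
    computation graph.

    feats: dict mapping node id -> feature vector (list of floats).
    edges: list of (src, dst) directed dependencies.
    Each node's new feature is the element-wise mean of its own
    feature and those of its in-neighbours."""
    new_feats = {}
    for node, feat in feats.items():
        incoming = [feats[src] for src, dst in edges if dst == node]
        stacked = [feat] + incoming
        new_feats[node] = [sum(col) / len(stacked) for col in zip(*stacked)]
    return new_feats
```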
Simulation environment. We built an open-source Gym environment (Brockman et al., 2016) to simulate the RAMP OCS system of Ottino et al. (2022) in an RL-compatible manner. We used a hybrid time- and event-driven simulation approach: we kept track of the internal simulation wall clock time t_wc, enabling the measurement of time-based metrics, but only took a partitioning decision when needed (i.e. when a new job demand arrived at the cluster), aiding efficiency since no discrete steps were needlessly simulated. All our experiments used similar cluster parameters to Ottino et al. (2022).
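The hybrid time- and event-driven loop can be sketched in a few lines; `job_arrivals` and the returned decision times are illustrative simplifications of the full environment state:

```python
import heapq

def run_event_driven(job_arrivals, t_max_wc):
    """Skeleton of the hybrid time-/event-driven loop: the simulated
    wall clock jumps directly to each job-arrival event instead of
    ticking in fixed increments. Returns the times at which
    partitioning decisions would be requested."""
    events = list(job_arrivals)
    heapq.heapify(events)
    decision_times = []
    while events:
        t_wc = heapq.heappop(events)   # fast-forward to next arrival
        if t_wc > t_max_wc:            # episode terminates at T^max_wc
            break
        decision_times.append(t_wc)    # agent partitions the new job here
    return decision_times
```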
We used N_W = 32 (N_C = 4, N_R = 4, N_S = 2) A100 GPUs with 80 GB memory capacity, 2 THz memory frequency, and a peak computational power of 130 Tflop/s. We assumed an intra-GPU propagation latency of 50 ns, a negligible OCS circuit reconfiguration latency of 1 ns, a worker input-output latency of 100 ns, and a total worker communication capacity of 1.6 TB/s (resulting in a per-transceiver bandwidth of 1.6×10^12/N_C B/s).

Figure 6: The four β distributions used in our experiments in order to measure the capability of each partitioner to cater to different user-defined maximum acceptable completion time requirement settings. In each βX experiment setting, each new job generated was assigned a β value sampled from βX in order to get the maximum acceptable job completion time, β · JCT^seq (see Section 4).
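Assuming the per-transceiver figure is simply the total worker communication capacity divided over the N_C transceivers of a worker (our reading of the expression above), the arithmetic is:

```python
# Hypothetical reading of the stated figures: total per-worker
# communication capacity split evenly over its N_C transceivers.
total_capacity_Bps = 1.6e12                      # 1.6 TB/s per worker
n_c = 4                                          # transceivers per worker (N_C)
per_transceiver_Bps = total_capacity_Bps / n_c   # 4.0e11 B/s
```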
All experiments were run up to a simulated wall clock time of T^max_wc = 10^6 s (around 12 days) of continuous cluster operation with dynamic job arrivals and were repeated across 3 random seeds, with the subsequent min-max confidence intervals for each measurement metric reported. More details of the simulation environment are provided in Appendix 9.4.

Compute jobs. We used the computation graph time and memory profiles of five real deep learning job types open-accessed with Microsoft's PipeDream research (Narayanan et al., 2019, 2021) (see Appendix 9.5 for details). These jobs encompassed image classification (AlexNet (Krizhevsky et al., 2012), ResNet-18 (He et al., 2016), SqueezeNet-10 (Iandola et al., 2016), and VGG-16 (Simonyan and Zisserman, 2014)) and natural language processing (GNMT (Wu et al., 2016)) tasks, thereby testing the generality of the approaches we considered. All jobs arrived at the cluster dynamically and stochastically throughout the simulation period, with the inter-arrival time fixed at 1000 s to control the load rate. Each job was run for N_iter = 50 training iterations, where one training iteration consists of one forward and backward pass through the neural network.

Partitioning. When partitioning the operations in a job's computation graph, we allowed the partitioning agents to split each operation up to N_W/2 times (the environment's 'maximum partitioning degree'). We followed Khani et al. (2021) by (1) assuming a linear dependency between the total number of operation splits and each split's compute time; and (2) choosing a minimum quantum of computation time, τ, and splitting operations up to a number of times which would result in sub-operations with a compute time no smaller than τ in order to maximise GPU utilisation. We set τ = 10 ms. As such, a given partitioning action u_t set the maximum partitioning degree of the job, but individual operations within the job could be split fewer times depending on their initial compute time and τ. Note that although this restricts each operation to be distributed across a maximum of u_t servers, the total number of workers used by all operations in the job can still be greater than u_t depending on the operation placement heuristic's choices.

Maximum acceptable job completion times.
In our setting, a partitioner would ideally be able to take an arbitrary job with an arbitrary maximum acceptable job completion time, β · JCT^seq, and partition the job such that the completion time requirement is satisfied for as many dynamically arriving jobs as possible (thereby minimising the user-defined blocking rate; see Section 4). To test each partitioner's ability to do this, we ran experiments using four β distributions (βA, βB, βC, and βD; see Figure 6). For each βX experiment, when one of the five possible jobs was randomly generated to arrive at the cluster, a β value, discretised to two decimal places, was randomly sampled from the experiment's βX distribution and assigned to the job. By sampling a broad range of β values from a selection of βX distributions, we ensured that we could analyse the performance of each partitioning agent under different completion time requirement settings and subsequently measure the capability of each method to cater for different user-defined requirements.
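The two job-setup mechanics described above (the τ-quantised operation splitting and the sampled maximum acceptable completion time) can be sketched as follows, assuming the stated linear split-compute time dependency; names are illustrative:

```python
def max_splits(op_compute_time: float, u_t: int, tau: float = 0.010) -> int:
    """Largest partitioning degree not exceeding the action u_t such
    that each sub-operation still takes at least tau seconds,
    assuming compute time divides linearly across splits."""
    return max(1, min(u_t, int(op_compute_time // tau)))


def max_acceptable_jct(jct_seq: float, beta: float) -> float:
    """User-defined maximum acceptable completion time, beta * JCT^seq."""
    return beta * jct_seq
```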
        Heuristics                                                      RL
        Random               Paramax              Paramin              PAC-ML
βA      0.517 +0.015/−0.015  0.262 +0.002/−0.003  0.309 +0.014/−0.015  0.203 +0.007/−0.009
βB      0.601 +0.007/−0.008  0.263 +0.006/−0.004  0.396 +0.006/−0.003  0.258 +0.007/−0.003
βC      0.505 +0.016/−0.012  0.267 +0.004/−0.006  0.307 +0.015/−0.012  0.117 +0.003/−0.003
βD      0.465 +0.004/−0.006  0.263 +0.006/−0.004  0.142 +0.027/−0.046  0.099 +0.008/−0.007

Table 1: Blocking rate performance of the partitioning agents on the four β distributions (best in bold). Results are given as the mean across 3 seeds, and error bars denote the corresponding min-max confidence intervals.

Partitioner baselines. We considered three heuristic baseline partitioning strategies.
(1) Most prior works partition a given job across as many workers as are available up to a pre-defined environment maximum partition degree (Khani et al., 2021; Wang et al., 2022). We refer to this strategy as 'Paramax'. (2) Given the low network overhead (see Figure 3) and contention-less nature of RAMP, and given the operations' linear split-compute time dependency of our environment, a reasonable estimate for the completion time of a job with sequential run time JCT^seq distributed across u_t workers is JCT ≈ JCT^seq/u_t. Therefore, in light of our objective to minimise the user-defined blocking rate, we introduce a new partitioning strategy, 'Paramin', which partitions the job up to the estimated minimum amount of parallelisation needed to satisfy the job's completion time requirement, u_t = ⌈1/β⌉ (i.e. the estimated speed-up factor needed). (3) For completeness, we also ran a 'Random' partitioning baseline, which selects a partitioning degree randomly from amongst the number of available workers.

Metrics recorded. To measure the performance of our partitioning agents, we recorded the following key metrics. (1) User-defined blocking rate (which we abbreviate to 'blocking rate'): the fraction of arrived jobs which did not have their completion time requirements met by the cluster. (2) Offered throughput: the total 'information size' of the original jobs (i.e. before partitioning was applied) processed per unit time.
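The three baseline decision rules reduce to one-liners; this sketch uses illustrative signatures and omits the RAMP evenness constraint on u_t for brevity:

```python
import math
import random

def paramax(n_workers_free: int, max_degree: int) -> int:
    """Partition across as many workers as available (most prior work)."""
    return min(n_workers_free, max_degree)

def paramin(beta: float) -> int:
    """Minimum estimated parallelisation meeting the deadline, using
    JCT ~ JCT^seq / u_t, i.e. u_t = ceil(1 / beta)."""
    return math.ceil(1.0 / beta)

def random_partition(n_workers_free: int, rng=random) -> int:
    """Uniform-random partitioning degree among available workers."""
    return rng.randint(1, n_workers_free)
```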
Since the open-access PipeDream job profiles used in our experiments did not contain per-operation flop/s (computational load) information, we summed the jobs' operation and dependency sizes (measured in bytes (B)) to get the total 'information size' of each job. The load rate could then be defined as the rate of job information arriving at the cluster per unit time, and the corresponding offered throughput as the rate at which this total job information was processed by the cluster. For a full list of metric definitions, refer to Appendix 9.2.

7. PAC-ML Partitioning Results & Discussion

7.1. Performance of the PAC-ML Partitioner

Comparison to the baseline partitioners.
To test the performance of each partitioning agent under different completion time requirement settings, we ran our experiments across four different β distributions (see Section 6). We visualise the relative blocking rate and throughput performance differences between the agents in Figure 7, where an agent's 'score' is its normalised performance relative to the best-performing agent with respect to a given metric. We evaluate these scores as score_blocking = best_blocking_rate / blocking_rate and score_throughput = throughput / best_throughput for each agent (refer to Appendix 9.8 for all raw metric values). As shown in Table 1 and Figure 7, our PAC-ML agent achieved the best blocking rate across all four β distributions, beating its nearest rival by 22.5%, 1.90%, 56.2%, and 30.3% for βA,B,C,D respectively.

Figure 7: Validation performances (higher is better) of each partitioning agent evaluated across three seeds, normalised with respect to the best-performing partitioner in each βX environment.

Comparison amongst the baseline partitioners. Figure 7 visualises the performance of the best PAC-ML agents on each of the four β distribution environments compared to the baseline heuristic performances. Interestingly, the best baseline in terms of blocking rate for βA,B,C is Paramax, but this switches to Paramin for βD. On βB, PAC-ML achieved roughly equivalent performance to Paramax by learning that, on this β demand distribution, maximum parallelisation led to the lowest blocking rates. This shows that different partitioning strategies have varying relative performances under different cluster settings.
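The normalised scores used in Figure 7, together with the underlying blocking rate, can be computed as, for example:

```python
def blocking_rate(n_blocked: int, n_arrived: int) -> float:
    """Fraction of arrived jobs whose completion time requirement was not met."""
    return n_blocked / n_arrived

def score_blocking(agent_blocking: float, best_blocking: float) -> float:
    """Normalised blocking-rate score (1.0 for the best agent)."""
    return best_blocking / agent_blocking

def score_throughput(agent_throughput: float, best_throughput: float) -> float:
    """Normalised offered-throughput score (1.0 for the best agent)."""
    return agent_throughput / best_throughput
```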
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' A key advantage of PAC-ML is therefore that the question of which partitioning strategy is best for a given environment need not be addressed by sub-optimal hand-crafted heuristics or environment- specific hyperparameter tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Instead, we have demonstrated in Table 1 and Figure 7 that PAC-ML can automatically learn performant partitioning strategies in arbitrary environment settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Analysis of the PAC-ML Partitioner Offered throughput analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' One risk of optimising only for the blocking rate when training the PAC-ML agent is that it maximises the number of jobs accepted by prioritising small low-information jobs at the cost of a sub-optimal offered throughput;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' a key metric when measuring a cluster’s quality of service to users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Figure 7 shows that the offered throughput 22 PAC-ML (Ours) Paramaa Paramin Random ore .' 
improves with the blocking rate, with the PAC-ML agent ultimately achieving the best throughput across all four β distributions.

[Figure 7 plot: blocking rate and offered throughput of PAC-ML (Ours), Paramax, Paramin, and Random across the βA, βB, βC, and βD environments.]

Figure 8: Mean per-job blocking rates of the five job types considered for each partitioning agent under each βX setting plotted against the number of operations (ops.), number of dependencies (deps.), the total job information size, and the sequential run time of the job were it run on a single device (JCTseq).

Bias analysis.
An important question is whether there is any bias in the kinds of jobs the PAC-ML agent learns to prioritise in order to minimise the blocking rate. To investigate this, Figure 8 shows the blocking rate vs. the original characteristics for each of the five jobs considered (see Appendix 9.5 for a summary of these characteristics) for each βX distribution environment. The PAC-ML agent had little to no bias across the jobs relative to the other partitioners, with all jobs attaining approximately the same blocking rate. There was a slight bias towards the larger jobs with greater sequential completion times and more information to process, which is likely due to the fact that larger jobs occupy more resources and therefore inherently become favoured over smaller jobs.

[Figure 8 plot panels: blocking rate vs. # ops., # deps., job size (×10^10), and JCTseq (×10^4) for PAC-ML (Ours), Paramax, Paramin, and Random under each βX setting.]
8. Conclusion & Further Work

In conclusion, we have introduced a new partitioning strategy called PAC-ML. Leveraging RL and a GNN, PAC-ML learns to partition computation jobs dynamically arriving at a cluster of machines such that the number of jobs which meet arbitrary user-defined completion time requirements is maximised, without the need for hand-crafted heuristics or environment-dependent hyperparameter tuning. We tested our partitioner on the recently proposed RAMP optical architecture (Ottino et al., 2022) across four distributions of user-defined completion time requirements, demonstrating up to 56.2% lower blocking rates relative to the canonical maximum parallelisation strategies used by most prior works when partitioning five real deep learning jobs. We hope that our work will spur a new avenue of research into developing partitioning strategies for distributed computing. In this section, we outline potentially interesting areas of further work.
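The acceptance criterion underlying these results, where a job succeeds only if its completion time meets the user-defined requirement, can be sketched minimally. The function names, and the treatment of a job that misses its requirement as blocked, are illustrative simplifications rather than the paper's exact implementation.

```python
def job_reward(jct, max_acceptable_jct):
    """Binary per-job reward: +1 if the job completes within the
    user-defined maximum acceptable completion time, -1 otherwise."""
    return 1 if jct <= max_acceptable_jct else -1

def blocking_rate(jobs):
    """Fraction of jobs failing their completion time requirement,
    treated here as blocked (an illustrative simplification)."""
    blocked = sum(1 for jct, max_jct in jobs if job_reward(jct, max_jct) < 0)
    return blocked / len(jobs)

# Three jobs: (achieved JCT, user-defined maximum acceptable JCT).
jobs = [(80.0, 100.0), (120.0, 100.0), (95.0, 100.0)]
print(blocking_rate(jobs))  # → 0.3333333333333333
```

Minimising this blocking rate, rather than the raw JCT, is what allows the agent to trade parallelisation speed-up against cluster resource occupancy.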
Exceeding completion time expectations. In this work, we rewarded PAC-ML with +1 for completing a job within the user-defined maximum acceptable completion time and −1 for failing to do so. Although minimising the blocking rate is crucial for users, it would also be desirable to minimise the JCT as much as possible. An interesting area of further study would therefore be to incorporate this objective into the reward function, perhaps by combining the JCT speed-up factor or offered throughput with the blocking rate via multi-objective RL (Hayes et al., 2022).

Real-world experiments. Our work has considered real open-access deep learning computation graph profiles, but on a simulated optical architecture. A natural but significant next step would be to implement PAC-ML in a real distributed cluster.
An important question would be whether an agent trained in a simulated environment would be capable of inferring in a real cluster at test time, or if real-world training would be needed.

Generalisation to unseen environments. This study ran PAC-ML in an environment which had the same load rate, β distribution, cluster network size, and job computation graphs at train and test time. An interesting research question would be whether PAC-ML would be able to learn on one set (or a distribution) of these parameters and then generalise to a new set at test time, or if it would need to leverage existing or new state-of-the-art methods in GNN (Knyazev et al., 2019; Garg et al., 2020; Fan et al., 2021) and RL (Cobbe et al.,
2019; Wang et al., 2020; Kirk et al., 2021) generalisation.

Robustness to stochastic inter-arrival times. In our experiments, we fixed the inter-arrival rate in order to fix the load rate. However, real clusters have variable inter-arrival times (Parsonson et al., 2022a). Handling highly stochastic environments is a known challenge for RL (Mao et al., 2019), and therefore presents an interesting future research avenue for PAC-ML.
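The distinction between the fixed inter-arrival times used here and the variable ones found in real clusters can be illustrated with a short sketch. The mean gap, the exponential sampler, and the function names are illustrative assumptions, not details from the paper: both processes offer the same long-run load, but only the stochastic one exposes an agent to bursts and lulls.

```python
import random

def fixed_interarrival_times(mean_gap, n_jobs):
    """Deterministic arrivals: every job arrives exactly mean_gap apart,
    fixing the load rate as in our experiments."""
    return [mean_gap] * n_jobs

def poisson_interarrival_times(mean_gap, n_jobs, seed=0):
    """Stochastic arrivals: exponentially distributed gaps with the same
    mean, so the average load matches but individual gaps vary."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_gap) for _ in range(n_jobs)]

fixed = fixed_interarrival_times(10.0, 1000)
stoch = poisson_interarrival_times(10.0, 1000)
# Same mean gap (≈10), very different variability between the two.
print(sum(fixed) / len(fixed), sum(stoch) / len(stoch))
```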
Combining the virtual plane. In our work, we have considered the job partitioning task in isolation of the job placement and scheduling tasks. However, prior works have found merging these sub-tasks into a single resource management problem beneficial to performance (Paliwal et al., 2020). An interesting area of further work would be to combine PAC-ML into a single algorithm which handles job partitioning, placement, and scheduling via methods such as hierarchical RL (Barto and Mahadevan, 2003; Vezhnevets et al., 2017; Mirhoseini et al., 2018a; Paliwal et al.,
2020; Zhang et al., 2021) or multi-agent RL (Foerster, 2018).

9. Appendix

9.1. Extended Background

9.1.1. Parallelisation

There are three main types of deep learning parallelism: data parallelism, model parallelism, and hybrid parallelism.

Data parallelism.
Data parallelism (Slotnick et al., 1962) is where an identical copy of the DNN model is sent to each worker. The input training data is parallelised by sampling a training batch, splitting it into non-overlapping micro-batches, training each worker on its own micro-batch, and updating the workers' local model parameters using some method to synchronise the gradients of the parameters with respect to the training loss after each training iteration. This synchronisation step is commonly referred to as AllReduce, and can be performed using various techniques. Data parallelism can be applied to any DNN model regardless of its architecture, enables the use of large data sets (which are crucial for scaling model performance (Hoffmann et al., 2022)), and facilitates the use of large training batch sizes which can lead to smoother and faster convergence.
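The synchronous data-parallel update just described can be sketched minimally. This is an illustrative toy, not the paper's workload: a 1-D linear model with a squared-error gradient, where AllReduce is implemented as a simple mean over the workers' local gradients so that every replica applies the same update and stays in lockstep.

```python
def local_gradient(w, micro_batch):
    """Mean gradient of 0.5*(w*x - y)**2 over one worker's micro-batch."""
    return sum((w * x - y) * x for x, y in micro_batch) / len(micro_batch)

def all_reduce_mean(values):
    """AllReduce (mean): every worker ends up with the same averaged value."""
    avg = sum(values) / len(values)
    return [avg] * len(values)

def data_parallel_step(w, batch, n_workers, lr=0.1):
    # Split the batch into non-overlapping micro-batches, one per worker.
    micro = [batch[i::n_workers] for i in range(n_workers)]
    grads = [local_gradient(w, mb) for mb in micro]
    synced = all_reduce_mean(grads)  # identical gradient on every worker
    return w - lr * synced[0]        # so all replicas remain synchronised

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # targets y = 2x
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, batch, n_workers=2)
print(round(w, 3))  # → 2.0
```

Because the averaged gradient equals the full-batch gradient, the parallel run converges to the same parameters as a single-worker run while each worker only processes half the data per step.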
Data parallelism is a form of weak scaling, where the JCT is decreased by reducing the total number of training iterations needed via increasing the amount of data processed per iteration as the number of workers is increased (Khani et al., 2021). However, it scales poorly for large models with many parameters, since all parameters must fit onto a single worker and then be synchronised at the end of each training step, and it has the constraint that the training data must be i.i.d. in order for parameter updates to be computed and summed across workers to attain the updated model parameters.

Model parallelism. Model parallelism (Karakus et al., 2021) is where the DNN model is partitioned (split) and a part of the model is sent to each worker.
In the DNN forward pass, a training batch is sampled, copied, and sent to each worker which holds layer-1 of the DNN. The layer-1 worker(s) then compute the layer-1 output(s) and forward them to the worker(s) which hold layer-2, and so on. In the backward pass, the gradients of the model parameters with respect to the training loss are computed by starting at the worker(s) which hold the final layer and propagating these gradients back to the layer-1 workers, after which the partitioned model will be globally synchronised. Layer outputs, gradients, and activations are exchanged during the training iteration using a synchronisation step commonly referred to as AllGather. Model parallelism facilitates the use of very large models which otherwise would not fit onto a single worker, and caters for time-efficient parallelisation of computational operations where possible. This is a form of strong scaling, where the JCT and per-worker memory utilisation are decreased via increasingly partitioning different parts of the job across more workers as the number of workers is increased (Khani et al.,
2021). However, passing gradients between workers during training can create a large communication overhead (Mirhoseini et al., 2017, 2018b), and expert domain knowledge of the specific model architecture is needed to know how to split the model across multiple workers.

Hybrid parallelism. Hybrid parallelism (Dean et al., 2012) is where a combination of data and model parallelism is used to strive for the benefits of both. This can be extended to include pipeline parallelism (Huang et al., 2019; Narayanan et al.,
2019), where intra-batch parallelism (data and model parallelism) is combined with inter-batch parallelism (pipelining) such that multiple micro-batches are processed simultaneously where possible. Hybrid parallelism can result in higher worker utilisation and the advantages of both model and data parallelism, but it requires complex bidirectional pipelining across different inputs, careful model parameter versioning to ensure correct computation of the gradients during the backward pass, and load balancing of the stages allocated across workers to ensure roughly equivalent computation times between workers in order to maximise peak pipeline throughput.

9.1.2. Neural Networks as Function Approximators

Neural networks. Neural networks are a composition of linear and non-linear (activation) functions connected in a chain to form a DAG.
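This chained composition can be sketched in a few lines. The 1-D "layers" and the weight values below are arbitrary illustrative choices, kept scalar so the composition itself is the only moving part.

```python
# A neural network as a chain of linear maps and non-linear activations.

def linear(w, b):
    """A 1-D linear layer x -> w*x + b (weights chosen for illustration)."""
    return lambda x: w * x + b

def relu(x):
    """A common non-linear activation function."""
    return max(0.0, x)

def chain(*fns):
    """Compose functions left-to-right, like layers chained in a DAG."""
    def composed(x):
        for f in fns:
            x = f(x)
        return x
    return composed

# Alternating linear and activation layers form a small deep network.
net = chain(linear(2.0, -1.0), relu, linear(-0.5, 3.0), relu)
print(net(2.0))  # 2*2-1 = 3 -> relu 3 -> -0.5*3+3 = 1.5 -> relu 1.5
```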
Each function in the chain is a layer parameterised by a set of weights and biases which, given enough parameters, can be trained to act as a universal function approximator (Hornik et al., 1989; Montufar et al., 2014). Neural networks with multiple intermediary (hidden) layers between input and output are referred to as deep neural networks, and have powerful expressivity capabilities when approximating complex non-linear functions (Hornik et al., 1989; Montufar et al., 2014).

Graph neural networks. Whereas standard DNNs are restricted to handling only vector- and grid-structured inputs (e.g.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' sentences, images, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' ), GNNs are generalised DNN architectures which can handle graph-structured data as inputs (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' job DAGs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Most current GNNs use the message passing paradigm by mapping each node and edge onto a vector embedding space before performing additional graph-level embeddings and readouts if desired.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Specifically, each GNN layer usually performs four stages: (i) On each edge in the input graph, use a message function to generate a message (representation) to pass from a source node to a set of destination nodes, where each node stores the message(s) it receives in its mailbox;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' (ii) on each node in the input graph, apply an aggregate function (a vanilla reduce operation such as mean, sum, max, min, etc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', or a trainable function) to the messages in its mailbox to generate an intermediate aggregate representation of its neighbourhood;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' (iii) pass the intermediate aggregate representation through a trainable function to produce a final vector embedding for each node;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' and (optional) (iv) if desired, at the end of the final GNN layer, pass the node embeddings through a trainable function to produce a graph-level representation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Crucially, the parameters of all message, aggregation, and forward pass functions are shared across nodes, enabling GNNs to be inductive in that they can generalise to unseen nodes and graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Reinforcement Learning Algorithm Here we break down the key background components of the RL approach used for PAC-ML.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Q-learning.' 
Q-learning (Watkins, 1989) is the canonical value-based algorithm which can be applied to a sequential decision making process formalised as an MDP. It is an off-policy temporal difference algorithm whose goal is to learn the action value function mapping state-action pairs to their expected discounted future return when following a policy π:

Q^π(s, u) = E_π[ Σ_{t'=t+1}^∞ γ^{t'−1} r(s_{t'}) | s_t = s, u_t = u ].

By definition, an optimal policy π* selects an action which maximises the true Q-value Q*(s, u): π*(s) = arg max_{u'} Q*(s, u'). Concretely, the classical Q-learning algorithm maintains an action value look-up table Q(s, u) mapping all possible state-action pairs to their predicted discounted return, where the return is the sum of future rewards over the remainder of the episode. During training, Q-learning follows an exploration-exploitation policy.
The simplest such policy is ϵ-greedy, where a random action is sampled with probability ϵ ∈ [0, 1] and the best action, according to the current Q table, is sampled with probability 1 − ϵ. At each time step t, the agent in state s_t uses this policy to select an action u_t, which it performs in the environment to transition to the next state s_{t+1} and receive a reward r_{t+1}. Q(s, u) is then updated according to:

Q(s_t, u_t) ← Q(s_t, u_t) + α · [ r_t + γ · max_{u'} Q(s_{t+1}, u') − Q(s_t, u_t) ].   (2)

On the right-hand side of Eq. 2, Q(s_t, u_t) is the agent's estimate of the discounted return of taking action u_t in state s_t, α is the learning rate, γ is the factor by which to discount future rewards to their present value, and max_{u'} Q(s_{t+1}, u') is an estimate of the future value of being in state s_{t+1} and taking an 'optimal' action according to Q. The r_t + γ · max_{u'} Q(s_{t+1}, u') term is called the temporal difference target, and the collective r_t + γ · max_{u'} Q(s_{t+1}, u') − Q(s_t, u_t) term the temporal difference error.
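To ground the update rule, here is a minimal tabular sketch in Python (illustrative only: the dictionary-based Q table, toy state space, and hyperparameter values are our own assumptions, not the implementation used in this work):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """Sample a random action with probability epsilon, else the
    greedy action according to the current Q table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda u: Q[(state, u)])

def q_update(Q, s, u, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update, following Eq. 2."""
    td_target = r + gamma * max(Q[(s_next, u2)] for u2 in actions)
    td_error = td_target - Q[(s, u)]
    Q[(s, u)] += alpha * td_error
    return Q
```

With α = 0.1 and γ = 0.9, a single update from an all-zero table after receiving reward 1.0 moves Q(s, u) from 0 to 0.1, i.e. one step of the temporal difference error towards the target.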
As such, the max_{u'} Q(s_{t+1}, u') term treats Q as an oracle from which optimal actions can be sampled. Although Q is usually randomly initialised and changes at each update step, the general idea is that, with stable learning and sufficient exploration, Q will converge on the true Q* function. As a side note, Q-learning is a temporal difference algorithm because, rather than using the actual returns to update Q in Eq. 2 as done by Monte Carlo methods, it uses a bootstrapped estimate of the future returns, max_{u'} Q(s_{t+1}, u'). Furthermore, it is an off-policy algorithm because the policy used to select the action u_t at the current time step, such as ϵ-greedy sampling of Q, is different to the policy used to select the next-state action u' when evaluating the temporal difference target, such as greedy sampling of Q. This is as opposed to on-policy temporal difference algorithms, such as SARSA, which use the same action selection policy for both the current time step and for future time steps when bootstrapping.

Deep Q-learning.
Many practical problems have an extremely large number of possible state-action combinations. For example, the game of Go has over 10^700 possible sequences; far more than the number of atoms in the universe (Silver et al., 2016). As such, modelling the action value function with a tabular approach is intractable given practical memory constraints. To enable Q-learning to be scaled to complex problems, deep Q-learning (DQN) (Mnih et al., 2013) approximates the true Q-function with a DNN parameterised by θ such that Q_θ(s, u) ≈ Q*(s, u). Concretely, during training at each time step t, Q_θ(s, u) is used with an exploration strategy such as ϵ-greedy to select an action and add the observed transition T = (s_t, u_t, r_{t+1}, γ_{t+1}, s_{t+1}) to a replay memory buffer (Lin, 1992).
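The transition storage just described can be sketched as a simple uniform replay buffer (a minimal sketch, not the implementation used in this work; the `Transition` fields mirror the tuple T above, but the class itself is our own illustration):

```python
import collections
import random

# Fields mirror the transition tuple T = (s_t, u_t, r_{t+1}, gamma_{t+1}, s_{t+1}).
Transition = collections.namedtuple(
    "Transition", ["state", "action", "reward", "discount", "next_state"])

class ReplayBuffer:
    """Fixed-capacity FIFO buffer; old transitions are evicted first,
    and minibatches are drawn uniformly at random."""
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity)

    def add(self, *args):
        self.buffer.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Bounding the buffer with `deque(maxlen=...)` keeps memory constant while the uniform `sample` call breaks the temporal correlation between consecutive transitions.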
The network's parameters θ are then optimised with stochastic gradient descent to minimise the mean squared error loss between the online network's predictions and a bootstrapped estimate of the Q-value,

J_DQN(Q) = ( r_{t+1} + γ_{t+1} · max_{u'} Q_θ̄(s_{t+1}, u') − Q_θ(s_t, u_t) )²,   (3)

where t is a time step sampled uniformly at random from the buffer and Q_θ̄ a target network with parameters θ̄ which are periodically copied from the acting online network. The target network is not directly optimised, but is used to provide the bootstrapped Q-value estimates for the loss function. Only periodically updating the target network, rather than at each learning step, leads to lower variance in the bootstrapped targets at each step. This helps to stabilise learning and leads to better convergence (Mnih et al., 2013).

Double DQN. In the traditional Q-learning update rule of Eq. 2 and the DQN loss of Eq. 3, the Q-function used to select and evaluate an action for the temporal difference target is the same: max_{u'} Q(s_{t+1}, u') for Eq. 2, and max_{u'} Q_θ̄(s_{t+1}, u') for Eq. 3. However, this can lead to an overestimation bias where the chosen action u' is incorrectly over-valued, because the same function which perceives u' as being best is also being asked to evaluate it. This can lead to high variance updates, unstable learning, and convergence on local minima. Double DQN (van Hasselt et al., 2015) reduces overestimation by decomposing the max operation in the temporal difference target into action selection and action evaluation, and performing these two tasks with two separate networks.
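This selection/evaluation split can be sketched as follows (an illustrative helper with our own naming; plain Python lists stand in for the per-action outputs of the online and target networks):

```python
def ddqn_target(reward, discount, q_online_next, q_target_next):
    """Double-DQN bootstrapped target: the online network selects the
    next-state action (argmax), the target network evaluates it."""
    best_action = max(range(len(q_online_next)),
                      key=q_online_next.__getitem__)
    return reward + discount * q_target_next[best_action]

def dqn_target(reward, discount, q_target_next):
    """Vanilla DQN target: the target network both selects and evaluates."""
    return reward + discount * max(q_target_next)
```

When the online and target networks disagree, the double-DQN target is never larger than the vanilla target, which is exactly the overestimation-damping effect described above.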
Concretely, action u' is greedily selected according to the online network Q_θ and evaluated with the separate target network Q_θ̄. The loss term from Eq. 3 then becomes:

J_DDQN(Q) = ( r_{t+1} + γ_{t+1} · Q_θ̄(s_{t+1}, arg max_{u'} Q_θ(s_{t+1}, u')) − Q_θ(s_t, u_t) )².   (4)

Prioritised experience replay. Vanilla DQN replay buffers are sampled uniformly to obtain transitions for network updates. A preferable approach is to more frequently sample transitions from which there is much to learn. Prioritised experience replay (Schaul et al., 2016) deploys this intuition by sampling transitions with probability p_t proportional to the last encountered absolute temporal difference error,

p_t ∝ | r_{t+1} + γ_{t+1} · max_{u'} Q_θ̄(s_{t+1}, u') − Q_θ(s_t, u_t) |^ω,   (5)

where ω is a tuneable hyperparameter for shaping the probability distribution.
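The priority computation of Eq. 5 can be sketched as follows (a toy sketch with our own function names; practical PER implementations use sum-tree data structures for efficient sampling rather than the linear normalisation shown here):

```python
import random

def per_probabilities(td_errors, omega=0.6):
    """Turn absolute TD errors into normalised sampling probabilities,
    shaped by the exponent omega as in Eq. 5."""
    priorities = [abs(e) ** omega for e in td_errors]
    total = sum(priorities)
    return [p / total for p in priorities]

def per_sample(td_errors, omega=0.6):
    """Sample one buffer index proportionally to its priority."""
    probs = per_probabilities(td_errors, omega)
    return random.choices(range(len(td_errors)), weights=probs, k=1)[0]
```

Setting ω = 0 recovers uniform sampling, while larger ω skews sampling ever more strongly towards high-error transitions.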
New transitions are added to the replay buffer with maximum priority to ensure all experiences will be sampled at least once to have their errors evaluated.

n-step Q-learning. Traditional Q-learning uses the target network's greedy action at the next step to bootstrap a Q-value estimate for the temporal difference target. Alternatively, to improve learning speed and help with convergence (Sutton and Barto, 2018; Hessel et al., 2017), forward-view multi-step targets can be used (Sutton and Barto, 2018), where the n-step discounted return from state s is

r_t^(n) = Σ_{k=0}^{n−1} γ_t^(k) · r_{t+k+1},   (6)

resulting in an n-step DQN loss of

J_DQNn(Q) = ( r_t^(n) + γ_t^(n) · max_{u'} Q_θ̄(s_{t+n}, u') − Q_θ(s_t, u_t) )².   (7)

Dueling DQN. Traditional DQN approaches use a DNN architecture which is not specific to RL.
Subsequently, when learning the Q-function, the entire DNN architecture must learn to estimate the state value and the action advantage for each action in order to learn the state-action function Q^π(s, u) of being in state s, taking action u, and following policy π. However, in many problems where bootstrapped Q-learning is applied, the most important objective is to learn to estimate the value of each state rather than the effect of each action in each state. This is especially true in environments and individual states where future transitions are mainly influenced by factors other than the agent's actions. Leveraging the insight that in many states it is unnecessary to estimate the value of each action choice, Wang et al. (2015) developed a new DNN architecture, termed 'dueling DQN', which is better suited to the Q-learning task. Concretely, the dueling architecture uses the same core DNN as standard DQN.
However, rather than following the initial encoding with a single sequence of fully connected layers to get a Q-value for each possible action in the current state, dueling DQN uses two separate streams of fully connected layers. One stream, parameterised by β, estimates the state value function V_{θ,β}(s) (the estimated future discounted return of the current state regardless of future actions taken), and the other stream, parameterised by α, estimates the relative action advantage function A_{θ,α}(s, u) (the relative difference in the future discounted return of each action). The outputs of the two streams are then combined via a special aggregation function to recover the state-action value function Q. Crucially, V(s) and A(s, u) must be combined into Q(s, u) in such a way that they are independently identifiable from the output Q-values alone, in order for backpropagation to be able to calculate the appropriate loss and weight updates for the separate V(s) and A(s, u) streams. As such, a simple Q(s, u) = V(s) + A(s, u) aggregation function to get the Q-values from the two streams does not suffice. Instead, the authors tried two different aggregation schemes.
The first aggregation method subtracts the advantage of the maximum-advantage action from all advantages, making the argmax action's advantage 0 and the rest < 0,

Q_{θ,α,β}(s, u) = V_{θ,β}(s) + ( A_{θ,α}(s, u) − max_{u'} A_{θ,α}(s, u') ),   (8)

thus enabling V(s) to be recovered at the argmax action's Q-value. The second aggregation method subtracts the mean advantage from all action advantages to centre the advantage values around 0 (i.e. to have a mean of 0),

Q_{θ,α,β}(s, u) = V_{θ,β}(s) + ( A_{θ,α}(s, u) − (1/|A|) · Σ_{u'} A_{θ,α}(s, u') ).   (9)

This makes V(s) recoverable from Q(s, u) by estimating the V(s) value which, when subtracted from each A(s, u) value, leads to a set of A(s, u) values which have a mean of 0. In practice, this second approach of using the mean was found to lead to more stable learning, since a mean operation results in lower variance targets between learning steps compared to a max operation.
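The mean-centred aggregation of Eq. 9 can be sketched in a few lines (an illustrative helper with our own naming; a scalar and a plain list stand in for the outputs of the value and advantage streams):

```python
def dueling_q_values(state_value, advantages):
    """Mean-centred dueling aggregation (Eq. 9):
    Q(s, u) = V(s) + (A(s, u) - mean(A))."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + (a - mean_adv) for a in advantages]
```

Because the centred advantages sum to zero, the mean of the resulting Q-values always equals V(s), which is what makes the two streams independently identifiable from the Q-values alone.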
As with standard Q-learning, the output of the dueling network is a set of Q-values (one for each action), therefore no change to the underlying algorithm other than a slight adjustment of the network architecture was required. By decomposing the Q-function approximator in this way, dueling DQN is able to attain superior policy evaluation in the presence of many similar-value actions, and the authors demonstrated their architecture achieving state-of-the-art performance on the Atari 2600 games.

Ape-X DQN. Noting that state-of-the-art ML performance is often achieved with more computation, more powerful models, and larger training data sets, Horgan et al. (2018) proposed Ape-X; a parallelisation approach to off-policy experience replay RL. Concretely, rather than using a single actor-learner setup, Ape-X decouples acting from learning.
It distributes many actors across a set of CPU cores, each with their own instance of the environment. Each actor retains a copy of a DNN shared across actors which it uses for action selection to accumulate experiences in parallel with other actors. These experiences are then communicated to a central shared replay buffer, where a single learner mounted on a GPU uses prioritised experience replay to sample the most important experiences for learning. Learner sampling, gradient computation, and network updates are done asynchronously with one another on separate threads, as are the periodic updates made to the actors' networks with the latest shared learner network. By using multiple actors in parallel, not only can orders of magnitude more transition data be attained for learning, but a broader diversity of experiences can also be collected by allocating a different exploration strategy to each actor, thereby avoiding local optima in difficult exploration and large state-action space settings. For N_actors distributed actors, Horgan et al. (2018) used a per-actor ϵ-greedy exploration strategy whereby each actor i had a fixed exploration probability ϵ_i = ϵ^(1 + (i / (N_actors − 1)) · α), where ϵ = 0.4 and α = 0.7. The authors demonstrated their approach achieving new state-of-the-art results on Atari in a fraction of the training time of prior works.

9.2. Metric Definitions

Table 2 summarises the metric jargon used throughout our manuscript.

- Job completion time: Time between a job arriving and being completed.
- Sequential job completion time: Time it would take to complete a job were its operations run sequentially on a single device.
- Maximum acceptable job completion time: Maximum time allowed to complete a job.
- Speed-up factor: Factor difference between the sequential job completion time and the actual job completion time.
- Network overhead: Fraction of the job completion time spent communicating information between workers while no computation was taking place.
- Blocking rate: Fraction of the arrived jobs which were not successfully serviced (i.e. were blocked) across a given period of time.
- Job information size: Summed sizes (in bytes) of a job's operations and dependencies.
- Cluster throughput: Total partitioned job information processed per unit time by the cluster.
- Offered throughput: Total original job information processed per unit time by the cluster.
- Load rate: Amount of job information arriving at the cluster per unit time.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Job inter-arrival time Time between when two jobs arrived at the cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Table 2: Descriptions of the various metrics referred to throughout this manuscript.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Experimental Hardware All environment simulations were ran on Intel Xeon ES-2660 CPUs, and all learner network training and inference was done on either a V100 or an A100 GPU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 33 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Additional Simulation Details 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Code Structure We built a core RAMP simulation environment which followed a Gym- like interface (Brockman et al.' 
, 2016) but without inheriting from a Gym environment object, to allow additional flexibility. We then built a wrapper 'job partitioning' environment which did conform to the Gym interface but used our core RAMP simulation environment to perform the internal RAMP simulation logic. Our code base is publicly available at https://github.com/cwfparsonson/ddls for further practical implementation details.

9.4.2. Job Allocation Procedure
When a job arrives at the cluster, our environment uses the following ordered sequence of task executions to allocate the job:

1. Op. partitioning: Partition the job DAG's operations to attain a 'partitioned' job DAG.
2. Op. placement: Place the operations in the partitioned job DAG onto a sub-set of cluster workers.

3. Op. scheduling: For each worker, schedule the priority of its placed operations to resolve conflicts where ≥ 2 operations are ready to be executed at the same time.

4. Dep. placement: Given the placed operations and the data dependencies which must be exchanged between operations, place the dependencies onto cluster communication links.

5. Dep. scheduling: For each communication link, schedule the priority of its placed dependencies to resolve conflicts where ≥ 2 dependencies are ready to be communicated at the same time.

9.4.3. Job Allocation Methods
Each of the above allocation procedure tasks can be performed by any algorithm, heuristic, or learning agent. In our work, we use the following methods:

1. Op. partitioning: PAC-ML, Paramax, Paramin, or Random. See the main manuscript for details.

2. Op. placement: A first-fit heuristic customised for the requirements of RAMP. See Section 9.4.4 below for details.

3. Op. scheduling: Shortest remaining processing time (Cai et al., 2016; Alizadeh et al., 2013; Hong et al., 2012). Given a set of operations placed on a worker, the operation with the shortest remaining run time will have the highest priority and therefore be executed first whenever two operations on the same worker request to be executed at the same time.

4. Dep. placement: Shortest path & first-fit. Given a set of operation placements, for any dependencies which need to be transferred through the network (i.e. dependencies with size > 0 and whose parent operation is placed on a separate worker from the child operation), (1) first-fit select a path from the k-shortest paths with available light channel(s), and (2) first-fit select an available channel.

5. Dep. scheduling: Shortest remaining processing time. Given a set of dependencies placed on a communication link channel, the dependency with the shortest remaining processing time (i.e. the lowest amount of information left to be transferred) will have the highest priority and therefore be communicated first whenever two dependencies on the same link channel request to be transported at the same time.

9.4.4. First-Fit Operation Placement in RAMP
The original RAMP paper of Ottino et al. (2022) did not specify an operation placement heuristic which conformed to the RAMP placement rules (see Section 3).
Here, we propose a simple first-fit heuristic which conforms to these rules whilst making the placement problem tractable for large cluster networks. The basic idea behind partitioning and placement in the scenario described in this work is to exploit the network efficiencies of RAMP as much as possible. In particular, this means maximising the use of RAMP's highly efficient collective operations. For a generic partitioned DAG, in the backward pass, collectives happen for each operation when weights/gradients are shared between sub-operations. If both a parent and child operation are placed on the same set of (RAMP symmetry-adherent) workers, then when the parent communicates its output to the child's input in the forward pass, this will also constitute a collective operation. As such, the placement heuristic implemented here seeks to maximise how often these two conditions are met. Given some operation, o, that has been partitioned into N equal sub-operations, oi, and needs to be placed, the placement is handled as:

1. If a parent of o has been partitioned and placed across N servers which adhere to the RAMP symmetry conditions, and if these servers each have enough memory to store oi, then place o across this set of N servers. This ensures collective operations can happen in both the forward and backward passes.

2. Otherwise, check if a set of N workers can be found in the network that adheres to the RAMP symmetry requirements. This is achieved by sliding the various possible symmetric shapes over the topology until a suitable one (or none) is found. This ensures collective operations in the backward pass only.

Allocating in this way ensures that every partitioned operation can exploit RAMP's efficient collective operation process on the backward pass, and where possible can also exploit it on the forward pass when receiving information from (one of) its parents.
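This two-step heuristic can be sketched as follows. This is an illustrative sketch only: the function name and data structures are our own, and RAMP's symmetry conditions are simplified here to "any contiguous group of N workers" (the real conditions are more involved; see the main text).

```python
def first_fit_place(op_partitions, parent_workers, workers, mem_free):
    """Two-step first-fit placement sketch for an operation split into
    N equal sub-operations.

    op_partitions:  list of N per-sub-operation memory requirements.
    parent_workers: workers a parent op was placed on, or None.
    workers:        ordered worker ids in the topology.
    mem_free:       dict mapping worker id -> free memory.

    Symmetry is simplified to 'contiguous window of N workers'.
    """
    n = len(op_partitions)

    # Step 1: co-locate with a parent's N symmetry-adherent workers if
    # each has enough free memory (enables collectives in both the
    # forward and backward passes).
    if parent_workers is not None and len(parent_workers) == n:
        if all(mem_free[w] >= m for w, m in zip(parent_workers, op_partitions)):
            return list(parent_workers)

    # Step 2: otherwise slide a symmetric 'shape' (here: a contiguous
    # window of N workers) over the topology until a feasible group is
    # found (enables collectives in the backward pass only).
    for i in range(len(workers) - n + 1):
        group = workers[i:i + n]
        if all(mem_free[w] >= m for w, m in zip(group, op_partitions)):
            return group

    return None  # no feasible placement; the job may be blocked
```

If neither step succeeds, the cluster lacks the resources to service the job, which is one of the blocking causes discussed in Section 9.4.6.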
9.4.5. Evaluating the Job Completion Time
The time to complete each operation was taken from the real computation job profiles of the DNN jobs considered (see Section 9.5). To calculate the communication time of point-to-point information transfers and of the MPI collectives, we used the equations and code of Ottino et al. (2022).

9.4.6. Possible Causes of a Job Being Blocked
A job is blocked when either JCT > β · JCTseq (i.e.
failing to meet the user's chosen JCT requirement) or when the cluster does not have enough available resources to service the job. The possible causes of this latter form of blocking are: prior jobs using up too many cluster resources when later jobs arrive; the minimum operation run time quantum not being low enough to partition the operations enough times to achieve the desired JCT; operation scheduling conflicts for partitioned operations mounted on the same worker leading to longer run times, since one worker can only execute one operation at a time; and excessive communication overheads incurred from over-partitioning of the job.

Figure 9: Visualisation of the characteristics of the deep learning computation graphs used for our experiments before partitioning. The bottom left sub-figure contains the model colour code scheme for all other sub-figures.
The statistics shown are for the operations and dependencies which need to be executed and satisfied to conduct one training iteration. Therefore, to carry out Niter training steps, the computation graph would need to be executed Niter times. Computation time units are reported in seconds, and memory units in bytes.

9.5. Job Computation Graph Data Sets
All computation graphs used in our experiments were taken from the open-access PipeDream computation graph data set (Narayanan et al., 2019). Figure 9 shows a visualisation of the key computation graph characteristics for each neural network model considered, where the numbers reported are for one training iteration (i.e.
one forward and backward pass through the model). Table 3 reports the same characteristics but in tabular form. Finally, for completeness, Figure 10 shows the actual job DAGs of the models used.

9.6. Neural Network Architecture
As shown in Fig. 11, we used a message passing GNN similar to GraphSAGE with mean pooling (Hamilton et al., 2017) to parameterise the PAC-ML policy. Table 4 summarises the hyperparameters used for the components of this DNN. We note that we did not perform extensive hyperparameter tuning on the GNN architecture.
Below is a detailed explanation of this architecture.

GNN. First, the GNN layer takes in the DAG's node and edge features and generates an embedding for each node and edge in the graph. Then, each local node's nearest neighbour (1-hop away) sends the local node a

Model          # ops.  JCTseq    Max. op. comp. time  Σ op. mem.      Max. op. mem.   Depth  # deps.  Σ dep. size     Max. dep. size
ResNet-18      142     36668.35  473.625              17.25866×10^9   0.8221212×10^9  60     159      18.73329×10^9   0.8220836×10^9
VGG-16         82      34525.35  113.330              30.62530×10^9   1.644315×10^9   80     83       29.46706×10^9   1.644167×10^9
GNMT           96      4470.80   15.88                2.368447×10^9   3.269491×10^8   30     117      1.027801×10^9   0.1944371×10^9
SqueezeNet-10  136     38000.15  474.637              24.96262×10^9   1.168007×10^9   102    153      27.91009×10^9   1.167950×10^9
AlexNet        46      36061.15  635.902              3.046234×10^9   0.1983396×10^9  44     47       2.422161×10^9   0.1982464×10^9

Table 3: Summary of the characteristics of the deep learning computation graphs used for our experiments before partitioning. The statistics shown are for the operations ('ops.') and dependencies ('deps.') which need to be executed and satisfied to conduct one training iteration. Therefore, to carry out Niter training steps, the computation graph would need to be executed Niter times. Computation ('comp.') time units are reported in seconds, and memory ('mem.') units in bytes.
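To make the blocking condition of Section 9.4.6 concrete, the acceptance test and the speed-up factor (Table 2) can be computed as below. This is an illustrative sketch: the function names, the β value, and the achieved JCT are our own assumptions; only the ResNet-18 JCTseq is taken from Table 3.

```python
def meets_jct_requirement(jct, jct_seq, beta):
    """A job is blocked if JCT > beta * JCT_seq, i.e. if the achieved
    completion time exceeds the user's maximum acceptable completion time."""
    return jct <= beta * jct_seq

def speedup_factor(jct_seq, jct):
    """Speed-up factor: sequential completion time over achieved completion time."""
    return jct_seq / jct

# Hypothetical example using ResNet-18's sequential completion time (seconds)
# from Table 3; beta and the achieved JCT are illustrative values only.
jct_seq = 36668.35   # sequential JCT of ResNet-18 (Table 3)
beta = 0.1           # user demands at least a 10x speed-up
jct = 3000.0         # hypothetical achieved completion time
accepted = meets_jct_requirement(jct, jct_seq, beta)
```

Here the maximum acceptable JCT is β · JCTseq ≈ 3666.8 s, so a job completed in 3000 s would be accepted.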
Figure 10: Deep learning computation graphs used for our experiments before partitioning. Each computation graph represents the operations and dependencies which need to be executed and satisfied to conduct one forward and one backward pass through the neural network. Therefore, to carry out Niter training steps, the computation graph would need to be executed Niter times.

Figure 11: Schematic of the DNN architecture with |L| GNN layers used to parameterise the policy of PAC-ML. The GNN is similar to that of GraphSAGE with mean pooling (Hamilton et al., 2017). Each GNN layer l ∈ L contains a node, an edge, and a reduce DNN module, and learns to create an embedded representation of each node in a given job DAG. These per-node embeddings are then passed, along with any global job, cluster, and action features, to a readout module. The readout module generates a score for each possible action, from which an action is selected according to the exploration-exploitation policy being followed. For clarity, this figure only shows the GNN embedding-generation process for node 1. See the accompanying text for a detailed explanation of this architecture.

Each node receives a message ('message passing') consisting of its neighbouring nodes' embeddings concatenated with the embeddings of the connecting edges. These messages are stored in the local node's 'mailbox', which then contains information about the node's neighbourhood. To ensure consistent dimensioning with the received messages, a dummy zero-padded edge embedding is concatenated with the local node's embedding.
Next, the reduce module takes the local and message embeddings and generates a reduced representation of each. Finally, to generate the layer-l output embedding for the local node, the element-wise mean of the reduced embeddings is taken ('mean pooling'). Note that this embedding process is performed for every node in the DAG, but for clarity Fig. 11 only follows node 1. If l < L (i.e. this is not the last GNN layer), these output node embeddings are used as new features for the original DAG's nodes and are passed to the next GNN layer. If l = L, the node embeddings are passed to the readout module. Note that (1) the node, edge, and reduce modules are shared across the aforementioned operations within a given GNN layer when generating node embeddings, but not across different GNN layers, and (2) the lth layer's output node embeddings contain information about each node's neighbourhood from up to l hops away.

Readout. The readout module takes the GNN's node embeddings and the job's and cluster's global features as input. To convert the node-level embeddings of the GNN into a representation of the overall job DAG, their element-wise mean is taken. To generate an embedding capturing the global job, cluster, and action information, a global DNN module is used. The DAG and global embeddings are then concatenated and passed to a logit module, which generates a vector of (optionally masked) scores for each possible action in the environment. Finally, based on these scores and the exploration-exploitation policy being followed, an action is selected.
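The per-node embedding step just described can be sketched in a few lines of numpy. This is a simplified, hypothetical stand-in for the actual PAC-ML modules: the toy DAG, single-layer "modules", weight matrices, and dimensions below are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def module(weights, x):
    """A one-layer linear + ReLU stand-in for a node/edge/reduce DNN module."""
    return np.maximum(weights @ x, 0.0)

# Hypothetical toy job DAG: 3 ops, dependencies 0->1 and 0->2.
node_feats = rng.normal(size=(3, 4))          # 4 raw features per op
edge_feats = {(0, 1): rng.normal(size=2),     # 2 raw features per dep
              (0, 2): rng.normal(size=2)}

d_node, d_edge = 8, 8                         # illustrative embedding dims
W_node = rng.normal(size=(d_node, 4))
W_edge = rng.normal(size=(d_edge, 2))
W_reduce = rng.normal(size=(16, d_node + d_edge))

def embed_node(v):
    # Messages: each neighbour's node embedding concatenated with the
    # connecting edge's embedding, stored in node v's 'mailbox'.
    mailbox = []
    for (u, w), e in edge_feats.items():
        if v in (u, w):
            other = w if u == v else u
            mailbox.append(np.concatenate([module(W_node, node_feats[other]),
                                           module(W_edge, e)]))
    # The local embedding gets a dummy zero-padded edge embedding so its
    # dimensions match the received messages.
    local = np.concatenate([module(W_node, node_feats[v]), np.zeros(d_edge)])
    # Reduce module on each embedding, then element-wise mean ('mean pooling').
    reduced = [module(W_reduce, m) for m in [local] + mailbox]
    return np.mean(reduced, axis=0)

layer_out = np.stack([embed_node(v) for v in range(3)])  # features for layer l+1
dag_embedding = layer_out.mean(axis=0)  # readout: mean over node embeddings
```

Stacking two such layers (L = 2, as in Table 4) would give each node information from neighbours up to two hops away; the final mean over node embeddings is the readout step that summarises the whole DAG.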
9.7. Reinforcement Learning Algorithm

Approach. Given the stochastic nature of our dynamic cluster environment setting, we hypothesised that a value-based RL method would be best suited to our setting (Mao et al., 2019). We did try the PPO actor-critic method (Schulman et al., 2017) but found its performance to be worse, although we leave a full analysis of alternative RL algorithms to future work. As stated in the main manuscript, we used the state-of-the-art value-based Ape-X DQN RL algorithm (Horgan et al., 2018) to attain the PAC-ML policy.
Concretely, we used the Ape-X parallelisation approach with double Q-learning action selection-evaluation (van Hasselt et al., 2015), multi-step bootstrapped learning targets (Sutton and Barto, 2018; Hessel et al., 2017), prioritised experience replay (Schaul et al., 2016), a dueling DQN network architecture (Wang et al., 2015), and a per-actor ϵ-greedy exploration algorithm. For a breakdown of each of these components, refer to Appendix 9.1.3.

Parameter | Value
Message passing # hidden dimensions | 64
Message passing # output dimensions | 32
Reduce module # hidden dimensions | 64
Reduce module # output dimensions | 64 if l < L, else 16
Global module # hidden dimensions | 8
Global module # output dimensions | 8
Logit module RLlib FC net # layers | 1
Logit module RLlib FC net # hidden dimensions | 256
All modules' activation | ReLU
GNN # layers L | 2
Apply action mask | False

Table 4: Hyperparameters used for the PAC-ML Ape-X DQN DNN policy architecture shown in Fig. 11. Note that the 'message passing' dimensions refer to the dimensions of the concatenated node and edge modules' embeddings, so the hidden and output embeddings of these modules will have half the corresponding 'message passing' dimension. Due to the RLlib implementation of Ape-X DQN, we did not apply an action mask; instead, we included the action mask in the global features given to the model and used the reward signal to train the agent to avoid selecting invalid actions.

Hyperparameters.
To select the algorithm hyperparameters, we conducted a Bayesian search across the search space summarised in Table 5, with simulations conducted in a light 32-worker RAMP environment with a maximum simulation run time of 2 × 10^5 seconds to speed up the search. We adopted similar search ranges to those used by Kurach et al. (2019); Hoffman et al. (2020); Parsonson et al. (2022b). For each set of hyperparameters, we ran the algorithm for 100 learner steps (a.k.a. training epochs) and performed a validation across 3 seeds at each learner step (see Figure 12).

Figure 12: Validation performance of the Ape-X DQN hyperparameter sweep. Each agent was trained for 100 learner steps, and at each learner step a validation was performed across 3 seeds; the mean metrics with their min-max interval bands are plotted for each hyperparameter set.

We selected the parameter set with the highest episode return across the 3 seeds (see Table 5). We also report the importance of each parameter with respect to the total episode return. The importance is calculated by training a random forest with all algorithm hyperparameters as inputs and the episode return as the target output, with the per-feature (hyperparameter) importance values predicted by the random forest reported accordingly (fab, 2018; how, 2018). All our experiments used the same per-actor ϵ-greedy exploration as Horgan et al. (2018).
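In that scheme, each of the N parallel actors is assigned its own fixed exploration rate. A minimal sketch of the schedule from Horgan et al. (2018) follows; the constants ε = 0.4 and α = 7 are the defaults published in that paper, which this text does not restate, so treat them as assumptions here.

```python
def per_actor_epsilon(i, num_actors, base_eps=0.4, alpha=7.0):
    """Ape-X per-actor epsilon (Horgan et al., 2018):
    eps_i = base_eps ** (1 + i * alpha / (num_actors - 1)), i = 0..N-1.
    Low-index actors explore heavily; high-index actors act near-greedily."""
    return base_eps ** (1.0 + i * alpha / (num_actors - 1))

# With the 32 workers used in our sweep (Table 5):
epsilons = [per_actor_epsilon(i, 32) for i in range(32)]
# epsilons[0] = 0.4, and epsilons[-1] = 0.4 ** 8, roughly 6.6e-4
```

Fixing a spread of exploration rates across actors lets a single replay buffer receive both very exploratory and near-greedy experience throughout training.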
We note that our RL algorithms were implemented using the open-source RLlib library (Liang et al., 2018), and hyperparameter tuning was done using Weights & Biases (Biewald, 2020).

9.7.1. Final Learning Curves

For completeness, Figure 13 shows the learning curves of the tuned PAC-ML agents in each βX environment superimposed on the baseline agents' performances. At each learner step, the PAC-ML agent was evaluated across three seeds in the validation environment.

9.8. Additional Experimental Results

Figure 14 shows the performance of the agents in terms of raw blocking rate, throughput, JCT, and JCT speed-up.

10. Funding and Acknowledgments

Funding: EPSRC Distributed Quantum Computing and Applications EP/W032643/1; the Innovate UK Project on Quantum Data Centres and the Future 10004793; OptoCloud EP/T026081/1; TRANSNET EP/R035342/1; the Engineering and Physical Sciences Research Council EP/R041792/1 and EP/L015455/1; the Alan Turing Institute; and Horizon Europe Dynamos.
Parameter | Search Range | Best Value | Importance
Discount factor γ | {0.99, 0.993, 0.997, 0.999, 0.9999} | 0.999 | 0.004
Learning rate | log-uniform (1 × 10^-7, 1 × 10^-3) | 4.121 × 10^-7 | 0.045
vmin | {−1, −10, −100, −200, −1000} | −1000 | 0.01
vmax | {1, 10, 100, 200, 1000} | 1000 | 0.004
Target network update frequency | {1 × 10^3, 1 × 10^4, 1 × 10^5} | 1 × 10^5 | 0.001
Prioritised replay α | {0.1, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} | 0.9 | 0.04
Prioritised replay β | {0.1, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} | 0.1 | 0.047
n-step | {1, 3, 5, 10} | 3 | 0.227
# CPU workers | 32 | 32 | −
# GPU workers | 1 | 1 | −
Batch mode | Truncated episodes | Truncated episodes | −
Rollout length | 50 | 50 | −
Train batch size | 512 | 512 | −
Optimiser | Adam | Adam | −
Dueling | True | True | −
# atoms | 1 | 1 | −
Noisy | False | False | −
Double Q | True | True | −
Replay buffer capacity | 100 000 | 100 000 | −
Learning starts | 10 000 | 10 000 | −
Prioritised replay TD-error ϵ | 1 × 10^-6 | 1 × 10^-6 | −

Table 5: Ape-X DQN training parameter sweep search range, best value found, and corresponding parameter importance.

Figure 13: Validation curves of the PAC-ML agent trained in four different β distribution environments. At each learner step (update to the GNN), the agent was evaluated across 3 seeds, with the mean blocking rate, offered throughput, JCT, and JCT speed-up (relative to the jobs' sequential run time JCTseq) performance metrics reported, as well as their min-max confidence intervals. For reference, the performances of the baseline heuristic partitioners are also plotted.

Figure 14: Validation performances of each partitioning agent evaluated across three seeds, with the mean blocking rate, offered throughput, JCT, and JCT speed-up (relative to the jobs' sequential run time JCTseq) performance metrics reported.
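The prioritised replay α and β swept in Table 5 enter the scheme of Schaul et al. (2016) as follows: transition i is sampled with probability P(i) ∝ (|δ_i| + ϵ)^α, where δ_i is its TD error, and its update is weighted by an importance-sampling factor (N · P(i))^{−β}, normalised by the maximum weight. A minimal numpy sketch using the best-found α = 0.9 and β = 0.1 (the TD errors are toy values):

```python
import numpy as np

def per_probs_and_weights(td_errors, alpha, beta, eps=1e-6):
    """Prioritised-replay sampling probabilities and normalised
    importance-sampling weights (Schaul et al., 2016)."""
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()           # P(i)
    weights = (len(td_errors) * probs) ** -beta     # (N * P(i)) ** -beta
    return probs, weights / weights.max()           # normalise by max weight

probs, weights = per_probs_and_weights(
    np.array([0.1, 1.0, 5.0]), alpha=0.9, beta=0.1)
# Transitions with larger TD errors are sampled more often, but their
# gradient updates are down-weighted to correct the sampling bias.
```

α near 1 makes sampling strongly prioritised, while the small best-found β applies only a mild bias correction.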
References

fab, 2018. Intro to machine learning: Lesson 4. URL: https://www.youtube.com/watch?v=0v93qHDqq_g.

how, 2018. Introduction to hyperparameters. URL: https://forums.fast.ai/t/wiki-lesson-thread-lesson-4/7540.

Addanki, R., Venkatakrishnan, S.B., Gupta, S., Mao, H., Alizadeh, M., 2019a. Placeto: Learning Generalizable Device Placement Algorithms for Distributed Machine Learning. Curran Associates Inc., Red Hook, NY, USA.

Addanki, R., Venkatakrishnan, S.B., Gupta, S., Mao, H., Alizadeh, M., 2019b. Placeto: Learning Generalizable Device Placement Algorithms for Distributed Machine Learning. Curran Associates Inc., Red Hook, NY, USA.

Alizadeh, M., Yang, S., Sharif, M., Katti, S., McKeown, N., Prabhakar, B., Shenker, S., 2013.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Pfabric: Minimal near-optimal datacenter transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 46 PAC-ML (Ours) Paramac Paramin Random X107 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='6 - (B/s) Rate 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='4 : 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='0 X104 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='5- 15 h.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='uhh S1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='0- 10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='5 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='0 0 βA βB βc βD βA βc βDSIGCOMM Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 43, 435–446.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/2534169.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2486031, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/2534169.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2486031.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ballani, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Costa, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Behrendt, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cletheroe, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Haller, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Jozwik, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Karinou, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lange, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Shi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Thomsen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Williams, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Sirius: A flat datacenter network with nanosecond optical switching, in: Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Ar- chitectures, and Protocols for Computer Communication, Association for Computing Machinery, New York, NY, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 782–797.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https: //doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/3387514.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3406221, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/3387514.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3406221.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Bao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Peng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Wu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Online job scheduling in dis- tributed machine learning clusters, in: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, IEEE Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 495–503.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1109/INFOCOM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='8486422, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1109/ INFOCOM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='8486422.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Barrett, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Clements, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Foerster, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lvovsky, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Exploratory combinatorial optimization with reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Association for the Advancement of Artificial Intelligence .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Barrett, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Parsonson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Laterre, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Learning to solve com- binatorial graph partitioning problems via efficient exploration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14105 URL: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/abs/2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14105, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='48550/ARXIV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Barto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Mahadevan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2003.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Recent advances in hierarchical reinforce- ment learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Discrete Event Dynamic Systems 13, 341–379.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Bello*, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Pham*, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Le, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Norouzi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bengio, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Neural combinatorial optimization with reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='net/forum?' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='id=rJY3vK9eg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ben-Nun, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Hoefler, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Demystifying parallel and distributed deep learning: An in-depth concurrency analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' ACM Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Surv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/3320060, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/3320060.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Bengio, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lodi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Prouvost, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Machine learning for combinatorial optimization: A methodological tour d’horizon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' European Journal of 47 Operational Research 290, 405–421.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='sciencedirect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' com/science/article/pii/S0377221720306895, doi:https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='ejor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='07.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='063.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Benjamin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gerard, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lavery, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bayvel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Pulse: Optical circuit switched data center architecture operating at nanosecond timescales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Journal of Lightwave Technology 38, 4906–4921.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1109/ JLT.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2997664.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Benjamin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ottino, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Parsonson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Traffic tolerance of nanosecond scheduling on optical circuit switched data center network, in: 2022 Optical Fiber Communications Conference and Exhibition (OFC), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 1–3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Benjamin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Parsonson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Benchmark- ing packet-granular ocs network scheduling for data center traffic traces, in: OSA Advanced Photonics Congress 2021, Optica Publishing Group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' NeW3B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: http://opg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='optica.' 
org/abstract.cfm?URI=Networks-2021-NeW3B.3, doi:10.1364/NETWORKS.2021.NeW3B.3.

Bergman, K., 2018. Empowering Flexible and Scalable High Performance Architectures with Embedded Photonics. IPDPS.

Biewald, L., 2020. Experiment tracking with Weights and Biases. URL: https://www.wandb.com/. Software available from wandb.com.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W., 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540.

Cai, C.X., Saeed, S., Gupta, I., Campbell, R.H., Le, F., 2016. Phurti: Application and network-aware flow scheduling for multi-tenant MapReduce clusters, in: 2016 IEEE International Conference on Cloud Engineering (IC2E), pp. 161–170. doi:10.1109/IC2E.2016.21.

Cobbe, K., Klimov, O., Hesse, C., Kim, T., Schulman, J., 2019. Quantifying generalization in reinforcement learning, in: Chaudhuri, K., Salakhutdinov, R. (Eds.), Proceedings of the 36th International Conference on Machine Learning, PMLR. pp. 1282–1289. URL: https://proceedings.mlr.press/v97/cobbe19a.html.

Dai, H., Khalil, E.B., Zhang, Y., Dilkina, B., Song, L., 2017. Learning combinatorial optimization algorithms over graphs, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA. pp. 6351–6361.

Dean, J., Corrado, G.S., Monga, R., Chen, K., Devin, M., Le, Q.V., Mao, M.Z., Ranzato, M., Senior, A., Tucker, P., Yang, K., Ng, A.Y., 2012. Large scale distributed deep networks, in: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, Curran Associates Inc., Red Hook, NY, USA. pp. 1223–1231.

Dong, K., Luo, Y., Yu, T., Finn, C., Ma, T., 2020. On the expressivity of neural networks for deep reinforcement learning, in: Proceedings of the 37th International Conference on Machine Learning, JMLR.org.

Fan, S., Wang, X., Shi, C., Cui, P., Wang, B., 2021. Generalizing graph neural networks on out-of-distribution graphs. URL: https://arxiv.org/abs/2111.10657, doi:10.48550/ARXIV.2111.10657.

Foerster, J., 2018. Deep multi-agent reinforcement learning. Ph.D. thesis. University of Oxford.

Furber, S., 2016. Large-scale neuromorphic computing systems. Journal of Neural Engineering 13, 051001. URL: https://doi.org/10.1088/1741-2560/13/5/051001, doi:10.1088/1741-2560/13/5/051001.

Gao, Y., Chen, L., Li, B., 2018. Spotlight: Optimizing device placement for training deep neural networks, in: Dy, J., Krause, A. (Eds.), Proceedings of the 35th International Conference on Machine Learning, PMLR. pp. 1676–1684. URL: https://proceedings.mlr.press/v80/gao18a.html.

Garg, V., Jegelka, S., Jaakkola, T., 2020. Generalization and representational limits of graph neural networks, in: III, H.D., Singh, A. (Eds.), Proceedings of the 37th International Conference on Machine Learning, PMLR. pp. 3419–3430. URL: https://proceedings.mlr.press/v119/garg20c.html.

Gasse, M., Chételat, D., Ferroni, N., Charlin, L., Lodi, A., 2019. Exact combinatorial optimization with graph convolutional neural networks, in: Advances in Neural Information Processing Systems 32.

Gerard, T., Parsonson, C., Shabka, Z., Bayvel, P., Lavery, D., Zervas, G., 2020. Swift: Scalable ultra-wideband sub-nanosecond wavelength switching for data centre networks. arXiv. URL: https://arxiv.org/abs/2003.05489, doi:10.48550/ARXIV.2003.05489, arXiv:2003.05489.

Gerard, T., Parsonson, C., Shabka, Z., Thomsen, B., Bayvel, P., Lavery, D., Zervas, G., 2021. AI-optimised tuneable sources for bandwidth-scalable, sub-nanosecond wavelength switching. Opt. Express 29, 11221–11242. URL: http://opg.optica.org/oe/abstract.cfm?URI=oe-29-7-11221, doi:10.1364/OE.417272.

Halim, A.H., Ismail, I., 2019. Combinatorial optimization: Comparison of heuristic algorithms in travelling salesman problem. Archives of Computational Methods in Engineering 26, 367–380.

Hamilton, W.L., Ying, R., Leskovec, J., 2017. Inductive representation learning on large graphs. URL: https://arxiv.org/abs/1706.02216, doi:10.48550/ARXIV.1706.02216.

van Hasselt, H., Guez, A., Silver, D., 2015. Deep reinforcement learning with double Q-learning. arXiv:1509.06461.

Hayes, C.F., Radulescu, R., Bargiacchi, E., Källström, J., Macfarlane, M., Reymond, M., Verstraeten, T., Zintgraf, L.M., Dazeley, R., Heintz, F., Howley, E., Irissappane, A.A., Mannion, P., Nowé, A., de Oliveira Ramos, G., Restelli, M., Vamplew, P., Roijers, D.M., 2022. A practical guide to multi-objective reinforcement learning and planning. Auton. Agents Multi Agent Syst. 36, 26. URL: https://doi.org/10.1007/s10458-022-09552-y, doi:10.1007/s10458-022-09552-y.

He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. doi:10.1109/CVPR.2016.90.

Hessel, M., Modayil, J.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', van Hasselt, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Schaul, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ostrovski, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Dabney, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Horgan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Piot, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Azar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Silver, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Rainbow: Combining improvements in deep reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' arXiv arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='02298.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hoffman, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Shahriari, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Aslanides, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Barth-Maron, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Momchev, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Sinopalnikov, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Stanczyk, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ramos, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Raichuk, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Vincent, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 50 Hussenot, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Dadashi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Dulac-Arnold, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Orsini, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Jacq, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ferret, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Vieillard, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ghasemipour, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Girgin, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Pietquin, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Behbahani, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Norman, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Abdolmaleki, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cassirer, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Yang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Baumli, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Hen- derson, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Friesen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Haroun, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Novikov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Colmenarejo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cabi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gulcehre, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Paine, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Srinivasan, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cowie, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Piot, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', de Freitas, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Acme: A research framework for distributed re- inforcement learning URL: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/abs/2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='00979, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 48550/ARXIV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='00979, arXiv:arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='00979.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hoffmann, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Borgeaud, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Mensch, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Buchatskaya, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cai, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ruther- ford, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Casas, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Hendricks, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Welbl, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Clark, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Henni- gan, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Noland, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Millican, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Driessche, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Damoc, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Guy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Osindero, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Simonyan, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Elsen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Rae, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Vinyals, O.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Sifre, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Training compute-optimal large language models URL: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/abs/2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='15556, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='48550/ARXIV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='15556, arXiv:arXiv:2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='15556.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hong, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Caesar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Godfrey, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Finishing flows quickly with preemptive scheduling, in: Proceedings of the ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, Association for Computing Machinery, New York, NY, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 127–138.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/2342356.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 2342389, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1145/2342356.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2342389.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hopfield, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Tank, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 1985.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' "neural" computation of decisions in optimization problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Biological Cybernetics 52, 141–152.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1007/BF00339943.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Horgan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Quan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Budden, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Barth-Maron, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Hessel, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', van Hasselt, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Silver, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Distributed prioritized experience re- play, in: International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='id=H1Dy---0Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hornik, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Stinchcombe, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', White, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Multilayer feedforward networks are universal approximators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Neural Networks 2, 359–366.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Huang, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bapna, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Firat, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chen, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lee, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ngiam, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Le, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' GPipe: Efficient Training of Giant Neural Networks Using Pipeline Parallelism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Curran Associates Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Red Hook, NY, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 51 Iandola, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Han, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Moskewicz, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ashraf, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Dally, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Keutzer, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2016.' 
Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. arXiv preprint arXiv:1602.07360. URL: http://arxiv.org/abs/1602.07360.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T.B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., Amodei, D., 2020. Scaling laws for neural language models. CoRR abs/2001.08361. URL: https://arxiv.org/abs/2001.08361, arXiv:2001.08361.
Karakus, C., Huilgol, R., Wu, F., Subramanian, A., Daniel, C., Cavdar, D., Xu, T., Chen, H., Rahnama, A., Quintela, L., 2021. Amazon sagemaker model parallelism: A general and flexible framework for large model training. URL: https://arxiv.org/abs/2111.05972, doi:10.48550/ARXIV.2111.05972, arXiv:2111.05972.
Khadka, S., Aflalo, E., Mardar, M., Ben-David, A., Miret, S., Mannor, S., Hazan, T., Tang, H., Majumdar, S., 2021. Optimizing memory placement using evolutionary graph reinforcement learning, in: International Conference on Learning Representations. URL: https://openreview.net/forum?id=-6vS_4Kfz0.
Khani, M., Ghobadi, M., Alizadeh, M., Zhu, Z., Glick, M., Bergman, K., Vahdat, A., Klenk, B., Ebrahimi, E., 2021. Sip-ml: High-bandwidth optical network interconnects for machine learning training, in: Proceedings of the 2021 ACM SIGCOMM 2021 Conference, Association for Computing Machinery, New York, NY, USA. p. 657–675. URL: https://doi.org/10.1145/3452296.3472900, doi:10.1145/3452296.3472900.
Kirk, R., Zhang, A., Grefenstette, E., Rocktäschel, T., 2021. A survey of generalisation in deep reinforcement learning. URL: https://arxiv.org/abs/2111.09794, doi:10.48550/ARXIV.2111.09794.
Knyazev, B., Taylor, G.W., Amer, M.R., 2019. Understanding Attention and Generalization in Graph Neural Networks. Curran Associates Inc., Red Hook, NY, USA.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks, in: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, Curran Associates Inc., Red Hook, NY, USA. p. 1097–1105.
Kurach, K., Raichuk, A., Stanczyk, P., Zając, M., Bachem, O., Espeholt, L., Riquelme, C., Vincent, D., Michalski, M., Bousquet, O., Gelly, S., 2019. Google research football: A novel reinforcement learning environment. URL: https://arxiv.org/abs/1907.11180, doi:10.48550/ARXIV.1907.11180, arXiv:1907.11180.
Li, Q., Xu, J., Cao, C., 2021. Scheduling distributed deep learning jobs in heterogeneous cluster with placement awareness, in: Proceedings of the 12th Asia-Pacific Symposium on Internetware, Association for Computing Machinery, New York, NY, USA. p. 217–228. URL: https://doi.org/10.1145/3457913.3457936, doi:10.1145/3457913.3457936.
Liang, E., Liaw, R., Nishihara, R., Moritz, P., Fox, R., Goldberg, K., Gonzalez, J., Jordan, M., Stoica, I., 2018. RLlib: Abstractions for distributed reinforcement learning, in: Dy, J., Krause, A. (Eds.), Proceedings of the 35th International Conference on Machine Learning, PMLR. pp. 3053–3062. URL: https://proceedings.mlr.press/v80/liang18b.html.
Lin, L.J., 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching. Mach. Learn. 8, 293–321. URL: https://doi.org/10.1007/BF00992699, doi:10.1007/BF00992699.
Mao, H., Alizadeh, M., Menache, I., Kandula, S., 2016. Resource management with deep reinforcement learning, in: Proceedings of the 15th ACM Workshop on Hot Topics in Networks, Association for Computing Machinery, New York, NY, USA. p. 50–56. URL: https://doi.org/10.1145/3005745.3005750, doi:10.1145/3005745.3005750.
Mao, H., Venkatakrishnan, S.B., Schwarzkopf, M., Alizadeh, M., 2019. Variance reduction for reinforcement learning in input-driven environments, in: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, OpenReview.net. URL: https://openreview.net/forum?id=Hyg1G2AqtQ.
Mayer, R., Jacobsen, H.A., 2020. Scalable deep learning on distributed infrastructures: Challenges, techniques, and tools. ACM Comput. Surv. 53. URL: https://doi.org/10.1145/3363554, doi:10.1145/3363554.
Mirhoseini, A., Goldie, A., Pham, H., Steiner, B., Le, Q.V., Dean, J., 2018a. A hierarchical model for device placement, in: International Conference on Learning Representations. URL: https://openreview.net/forum?id=Hkc-TeZ0W.
Mirhoseini, A., Goldie, A., Pham, H., Steiner, B., Le, Q.V., Dean, J., 2018b. A hierarchical model for device placement, in: International Conference on Learning Representations. URL: https://openreview.net/forum?id=Hkc-TeZ0W.
Mirhoseini, A., Pham, H., Le, Q.V., Steiner, B., Larsen, R., Zhou, Y., Kumar, N., Norouzi, M., Bengio, S., Dean, J., 2017. Device placement optimization with reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, JMLR.org. p. 2430–2439.
Mishra, V., Benjamin, J.L., Zervas, G., 2021. Monet: heterogeneous memory over optical network for large-scale data center resource disaggregation. Journal of Optical Communications and Networking 13, 126–139. doi:10.1364/JOCN.419145.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Wierstra, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Riedmiller, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Playing atari with deep reinforcement learning arXiv:arXiv:1312.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='5602.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Montufar, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Pascanu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' On the number of linear regions of deep neural networks, in: Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, MIT Press, Cambridge, MA, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' p.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 2924–2932.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Narayanan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Harlap, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Phanishayee, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Seshadri, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Deva- nur, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Granger, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gibbons, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zaharia, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Pipedream: Generalized pipeline parallelism for dnn training, in: ACM Sym- posium on Operating Systems Principles (SOSP 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://www.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='microsoft.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='com/en-us/research/publication/ pipedream-generalized-pipeline-parallelism-for-dnn-training/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Narayanan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Phanishayee, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Shi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zaharia, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Memory-efficient pipeline-parallel dnn training, in: Inter- national Conference on Machine Learning (ICML 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: 54 https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='microsoft.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='com/en-us/research/publication/ memory-efficient-pipeline-parallel-dnn-training/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' NVIDIA, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Nvidia selene: Leadership-class supercomputing in- frastructure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='nvidia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='com/en-us/on-demand/session/ supercomputing2020-sc2019/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' NVIDIA, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Nvidia ai platform delivers big gains for large language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' https://developer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='nvidia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='com/blog/ nvidia-ai-platform-delivers-big-gains-for-large-language-models/.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' OpenAI, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ai and compute.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' https://openai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='com/blog/ ai-and-compute/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ottino, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Benjamin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ramp: A flat nanosecond optical network and mpi operations for distributed deep learning systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' arXiv URL: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/abs/2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='15226, doi:10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='48550/ARXIV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 15226.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Paliwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gimeno, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Nair, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lubin, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Kohli, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Reinforced genetic algorithm learning for optimizing computation graphs, in: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='id=rkxDoJBYPB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Parsonson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Benjamin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Traffic generation for benchmarking data centre networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Optical Switching and Networking 46, 100695.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Parsonson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Laterre, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Barrett, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2022b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Reinforcement learning for branch-and-bound optimisation using retrospective trajectories URL: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='org/abs/2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14345, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='48550/ARXIV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14345, arXiv:arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='14345.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Parsonson, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Shabka, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chlupka, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Goh, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Zervas, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Optimal control of soas with artificial intelligence for sub-nanosecond optical switching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Journal of Lightwave Technology 38, 5563–5573.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1109/ JLT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='3004645.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Paszke, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gross, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Massa, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lerer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bradbury, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chanan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Killeen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lin, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Gimelshein, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Antiga, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Desmaison, 55 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Kopf, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Yang, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', DeVito, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Raison, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Tejani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chil- amkurthy, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Steiner, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Fang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Bai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Chintala, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Pytorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing Systems 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Curran As- sociates, Inc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 8024–8035.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: http://papers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' pdf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Raja, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lange, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Karpov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Shi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Fu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Behrendt, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Cletheroe, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Lukashchuk, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Haller, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Karinou, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Thomsen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Jozwik, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Costa, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Kippenberg, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Ballani, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Ultrafast optical circuit switching for data centers using integrated soliton microcombs.' 
Nature Communications 12, 5867. URL: https://doi.org/10.1038/s41467-021-25841-8, doi:10.1038/s41467-021-25841-8.

Schaul, T., Quan, J., Antonoglou, I., Silver, D., 2016. Prioritized experience replay. arXiv:1511.05952.

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O., 2017. Proximal policy optimization algorithms. CoRR abs/1707.06347. URL: http://dblp.uni-trier.de/db/journals/corr/corr1707.html#SchulmanWDRK17.

Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D., 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489. doi:10.1038/nature16961.

Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. URL: http://arxiv.org/abs/1409.1556.

Slotnick, D.L., Borck, W.C., McReynolds, R.C., 1962. The SOLOMON computer, in: Proceedings of the December 4-6, 1962, Fall Joint Computer Conference, Association for Computing Machinery, New York, NY, USA. pp. 97–107. URL: https://doi.org/10.1145/1461518.1461528, doi:10.1145/1461518.1461528.

Smith, S., Patwary, M., Norick, B., LeGresley, P., Rajbhandari, S., Casper, J., Liu, Z., Prabhumoye, S., Zerveas, G., Korthikanti, V., Zhang, E., Child, R., Aminabadi, R.Y., Bernauer, J., Song, X., Shoeybi, M., He, Y., Houston, M., Tiwary, S., Catanzaro, B., 2022. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. arXiv. URL: https://arxiv.org/abs/2201.11990, doi:10.48550/ARXIV.2201.11990, arXiv:2201.11990.

Sutton, R.S., Barto, A.G., 2018. Reinforcement Learning: An Introduction. Second ed., The MIT Press. URL: http://incompleteideas.net/book/the-book-2nd.html.

Vezhnevets, A.S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., Kavukcuoglu, K., 2017. FeUdal networks for hierarchical reinforcement learning, in: Precup, D., Teh, Y.W. (Eds.), Proceedings of the 34th International Conference on Machine Learning, PMLR. pp. 3540–3549. URL: https://proceedings.mlr.press/v70/vezhnevets17a.html.

Wang, K., Kang, B., Shao, J., Feng, J., 2020. Improving generalization in reinforcement learning with mixture regularization. URL: https://arxiv.org/abs/2010.10814, doi:10.48550/ARXIV.2010.10814.

Wang, M., Zheng, D., Ye, Z., Gan, Q., Li, M., Song, X., Zhou, J., Ma, C., Yu, L., Gai, Y., Xiao, T., He, T., Karypis, G., Li, J., Zhang, Z., 2019. Deep Graph Library: A graph-centric, highly-performant package for graph neural networks. URL: https://arxiv.org/abs/1909.01315, doi:10.48550/ARXIV.1909.01315.

Wang, W., Khazraee, M., Zhong, Z., Jia, Z., Mudigere, D., Zhang, Y., Kewitsch, A., Ghobadi, M., 2022. TopoOpt: Optimizing the network topology for distributed DNN training. arXiv preprint arXiv:2202.00433.

Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., de Freitas, N., 2015. Dueling network architectures for deep reinforcement learning. URL: http://arxiv.org/abs/1511.06581, arXiv:1511.06581.

Watkins, C.J.C.H., 1989. Learning from Delayed Rewards. Ph.D. thesis. King's College, Cambridge.

Williams, R.J., 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8, 229–256. URL: https://doi.org/10.1007/BF00992696, doi:10.1007/BF00992696.

Williamson, D.P., Shmoys, D.B., 2011. The Design of Approximation Algorithms. Cambridge University Press.

Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, L., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., Dean, J., 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. URL: http://arxiv.org/abs/1609.08144.

Zervas, G., Yuan, H., Saljoghei, A., Chen, Q., Mishra, V.,
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Optically dis- aggregated data centers with minimal remote memory latency: Technologies, architectures, and resource allocation [invited].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Journal of Optical Commu- nications and Networking 10, A270–A285.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='1364/JOCN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='00A270.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Yu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', Xu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=', 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' Hierarchical reinforcement learning by discovering intrinsic options, in: International Conference on Learning Rep- resentations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' URL: https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content='id=r-gPPHEjpmw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'} +page_content=' 58' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9tFST4oBgHgl3EQfbTje/content/2301.13799v1.pdf'}